🚀 A Handy Named Entity Recognition (NER) Application for Turkish
This is an easy-to-use Turkish named entity recognition (NER) application. Built in Python on BERT with transfer learning, it performs NER efficiently.
🚀 Quick Start
Downloading the data
Thanks to @stefan-it for the data. The preprocessed dataset (train, dev, and test splits plus the label list) can be downloaded with the commands below; place the files in the tr-data folder:
cd tr-data
for file in train.txt dev.txt test.txt labels.txt
do
wget https://schweter.eu/storage/turkish-bert-wikiann/$file
done
cd ..
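The downloaded files follow the usual CoNLL-style layout (one "token tag" pair per line, blank lines separating sentences); the exact format should be verified against the files themselves. A minimal reader, assuming that layout, could look like this (the function name and sample data are illustrative, not from the repository):

```python
# Minimal reader for CoNLL-style NER files: one "token tag" pair per line,
# blank lines marking sentence boundaries (assumed WikiANN/CoNLL layout).

def read_conll(lines):
    """Parse CoNLL-style lines into (tokens, tags) pairs, one per sentence."""
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        if not line:  # blank line = sentence boundary
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        parts = line.split()
        tokens.append(parts[0])   # surface token
        tags.append(parts[-1])    # NER tag (last column)
    if tokens:  # flush the final sentence if the file lacks a trailing blank line
        sentences.append((tokens, tags))
    return sentences

sample = ["Mustafa B-PER", "Kemal I-PER", "", "Samsun B-LOC"]
print(read_conll(sample))
# [(['Mustafa', 'Kemal'], ['B-PER', 'I-PER']), (['Samsun'], ['B-LOC'])]
```

Reading `labels.txt` the same way (one tag per line) gives the label set the training script expects.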
Fine-tuning setup
Once the dataset is downloaded, you can fine-tune the pretrained BERT model. First set the following environment variables:
export MAX_LENGTH=128
export BERT_MODEL=dbmdz/bert-base-turkish-cased
export OUTPUT_DIR=tr-new-model
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=625
export SEED=1
Running the fine-tuning
With the environment variables set, run the following command to start fine-tuning:
python3 run_ner_old.py --data_dir ./tr-data \
--model_type bert \
--labels ./tr-data/labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR-$SEED \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_gpu_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict \
--fp16
💻 Usage Example
Basic usage
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("savasy/bert-base-turkish-ner-cased")
tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-ner-cased")
ner = pipeline('ner', model=model, tokenizer=tokenizer)
ner("Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a ayak bastı.")
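The pipeline above returns one prediction dict per (sub)token. To turn those into whole entities, a small helper can merge consecutive B-/I- tags. The sketch below is illustrative (the helper name and the hard-coded sample, shaped like typical pipeline output, are my own, not from the repository):

```python
# Merge token-level BIO predictions into whole (text, label) entities.
# `sample` mimics the list of dicts returned by the ner pipeline.

def group_entities(tokens):
    """Collapse consecutive B-X/I-X predictions into (text, label) spans."""
    entities = []
    current_words, current_label = [], None
    for t in tokens:
        prefix, _, label = t["entity"].partition("-")
        if prefix == "B" or label != current_label:
            if current_words:  # close the previous entity
                entities.append((" ".join(current_words), current_label))
            current_words, current_label = [t["word"]], label
        else:  # I- continuation of the same entity
            current_words.append(t["word"])
    if current_words:
        entities.append((" ".join(current_words), current_label))
    return entities

sample = [
    {"word": "Mustafa", "entity": "B-PER"},
    {"word": "Kemal", "entity": "I-PER"},
    {"word": "Atatürk", "entity": "I-PER"},
    {"word": "Samsun", "entity": "B-LOC"},
]
print(group_entities(sample))
# [('Mustafa Kemal Atatürk', 'PER'), ('Samsun', 'LOC')]
```

Recent versions of transformers can also do this grouping for you via the pipeline's aggregation options; check the Hugging Face documentation for the version you have installed.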
📚 Documentation
Citation
If you use this project in your research, please cite:
@misc{yildirim2024finetuning,
  title={Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks},
  author={Savas Yildirim},
  year={2024},
  eprint={2401.17396},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
@book{yildirim2021mastering,
  title={Mastering Transformers: Build state-of-the-art models from scratch with advanced natural language processing techniques},
  author={Yildirim, Savas and Asgari-Chenaghlu, Meysam},
  year={2021},
  publisher={Packt Publishing Ltd}
}
Selected results
Dataset 1
Evaluation and test results on the dataset above:
- Evaluation results:
  - precision = 0.916400580551524
  - recall = 0.9342309684101502
  - f1 = 0.9252298787412536
  - loss = 0.11335893666411284
- Test results:
  - precision = 0.9192058759362955
  - recall = 0.9303010230367262
  - f1 = 0.9247201697271198
  - loss = 0.11182546521618497
Dataset 2
Data link: https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt
Performance on the data provided by @kemalaraz:
savas@savas-lenova:~/Desktop/trans/tr-new-model-1$ cat eval_results.txt
* precision = 0.9461980692049029
* recall = 0.959309358847465
* f1 = 0.9527086063783312
* loss = 0.037054269206847804
savas@savas-lenova:~/Desktop/trans/tr-new-model-1$ cat test_results.txt
* precision = 0.9458370635631155
* recall = 0.9588201928530913
* f1 = 0.952284378344882
* loss = 0.035431676572445225