🚀 CamemBERT-bio: A Tasty French Language Model, Better for Your Health
CamemBERT-bio is a state-of-the-art French biomedical language model built through continual pre-training of camembert-base. It was trained on a public French biomedical corpus of 413 million words, covering scientific literature, drug leaflets, and clinical cases extracted from theses and articles. Compared with camembert-base, it improves the average F1 score by 2.54 points across 5 different biomedical named entity recognition tasks.
✨ Key Features
- Domain-specialized: designed for the French biomedical domain, with strong performance on biomedical named entity recognition and a clear improvement over the base model.
- Rich training corpus: trained on a large public French biomedical corpus of scientific literature, drug leaflets, and clinical cases, giving broad data coverage.
📦 Model Information

| Attribute | Details |
|---|---|
| Model type | French biomedical language model built through continual pre-training |
| Training data | A public French biomedical corpus of 413 million words, comprising scientific literature, drug leaflets, and clinical cases extracted from theses and articles |
🔧 Technical Details
Training Data
| Corpus | Details | Size (words) |
|---|---|---|
| ISTEX | Diverse scientific literature indexed on ISTEX | 276M |
| CLEAR | Drug leaflets | 73M |
| E3C | Various documents from journals, drug leaflets, and clinical cases | 64M |
| Total | | 413M |
Training Procedure
We continually pre-trained from camembert-base. The model was trained with a masked language modeling (MLM) objective using whole word masking, for 50,000 steps over 39 hours on 2 Tesla V100 GPUs.
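To illustrate the whole-word-masking idea (a minimal pure-Python sketch, not the authors' training code), SentencePiece subword pieces are grouped into words — a new word starts at a piece carrying the "▁" marker — and masking is applied to every piece of a selected word at once, rather than to individual subwords:

```python
import random

def group_words(pieces):
    """Group SentencePiece subword pieces into word spans (lists of indices).
    A new word starts at a piece prefixed with the '▁' word-boundary marker."""
    words, current = [], []
    for i, piece in enumerate(pieces):
        if piece.startswith("\u2581") and current:
            words.append(current)
            current = []
        current.append(i)
    if current:
        words.append(current)
    return words

def whole_word_mask(pieces, mask_ratio=0.15, mask_token="<mask>", seed=0):
    """Mask whole words: every subword piece of a selected word is replaced."""
    rng = random.Random(seed)
    words = group_words(pieces)
    n_to_mask = max(1, round(mask_ratio * len(words)))
    out = list(pieces)
    for span in rng.sample(words, n_to_mask):
        for i in span:
            out[i] = mask_token
    return out

# "le paracétamol soulage la douleur", with one word split into subwords
pieces = ["\u2581le", "\u2581para", "c\u00e9", "tamol", "\u2581soulage",
          "\u2581la", "\u2581douleur"]
print(whole_word_mask(pieces))
```

In standard (subword-level) MLM, only "tamol" might be masked while "▁para" and "cé" stay visible; whole word masking hides all three pieces together, which makes the prediction task harder and more word-aware.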
📚 Evaluation
Fine-tuning
During fine-tuning, we used Optuna to select hyperparameters. The learning rate was set to 5e-5, with a warmup ratio of 0.224 and a batch size of 16. Fine-tuning ran for 2,000 steps. For prediction, a simple linear layer was added on top of the model. Notably, no CamemBERT layers were frozen during fine-tuning.
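As a concrete reading of these hyperparameters, a warmup ratio of 0.224 over 2,000 steps means the learning rate ramps up for the first 448 steps. The sketch below assumes the common linear-warmup-then-linear-decay schedule (the decay shape is our assumption; only the ratio, peak, and step count come from the text):

```python
def lr_at_step(step, total_steps=2000, warmup_ratio=0.224, peak_lr=5e-5):
    """Linear warmup to peak_lr, then (assumed) linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 448 steps here
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Linear decay from peak_lr at the end of warmup to 0 at total_steps.
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

print(lr_at_step(448))   # peak of the schedule: 5e-05
print(lr_at_step(2000))  # decayed to 0.0
```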
Scoring
To assess performance, we evaluated with seqeval in strict mode using the IOB2 scheme. For each evaluation, the fine-tuned model that performed best on the validation set was selected to compute the final score on the test set. For reliability, we averaged the results of 10 evaluations run with different seeds.
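To make the strict scoring concrete, here is a minimal pure-Python reimplementation of entity-level precision/recall/F1 under IOB2 (a sketch of what seqeval computes in strict mode, not seqeval itself): an entity counts as correct only if both its type and its exact span match the gold annotation.

```python
def extract_entities(tags):
    """Extract (type, start, end) spans from an IOB2 tag sequence.
    Strict: an entity begins only at a B- tag; a stray I- tag is ignored."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last entity
        if start is not None and tag != f"I-{etype}":
            entities.append((etype, start, i))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return set(entities)

def strict_scores(true_tags, pred_tags):
    """Entity-level precision, recall, and F1 with exact span+type matching."""
    gold = extract_entities(true_tags)
    pred = extract_entities(pred_tags)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = ["B-DRUG", "I-DRUG", "O", "B-DOSE"]
pred = ["B-DRUG", "I-DRUG", "O", "B-FREQ"]  # wrong type on the second entity
print(strict_scores(gold, pred))  # (0.5, 0.5, 0.5)
```

Note that a prediction covering only part of an entity ("B-DRUG O" against gold "B-DRUG I-DRUG") earns no credit at all in strict mode, which is why strict scores are typically lower than token-level accuracy.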
Results
| Style | Dataset | Metric | CamemBERT | CamemBERT-bio |
|---|---|---|---|---|
| Clinical | CAS1 | F1 | 70.50 ± 1.75 | 73.03 ± 1.29 |
| | | P | 70.12 ± 1.93 | 71.71 ± 1.61 |
| | | R | 70.89 ± 1.78 | 74.42 ± 1.49 |
| | CAS2 | F1 | 79.02 ± 0.92 | 81.66 ± 0.59 |
| | | P | 77.3 ± 1.36 | 80.96 ± 0.91 |
| | | R | 80.83 ± 0.96 | 82.37 ± 0.69 |
| | E3C | F1 | 67.63 ± 1.45 | 69.85 ± 1.58 |
| | | P | 78.19 ± 0.72 | 79.11 ± 0.42 |
| | | R | 59.61 ± 2.25 | 62.56 ± 2.50 |
| Drug leaflets | EMEA | F1 | 74.14 ± 1.95 | 76.71 ± 1.50 |
| | | P | 74.62 ± 1.97 | 76.92 ± 1.96 |
| | | R | 73.68 ± 2.22 | 76.52 ± 1.62 |
| Scientific | MEDLINE | F1 | 65.73 ± 0.40 | 68.47 ± 0.54 |
| | | P | 64.94 ± 0.82 | 67.77 ± 0.88 |
| | | R | 66.56 ± 0.56 | 69.21 ± 1.32 |
🌱 Environmental Impact Estimate
- Hardware type: 2× Tesla V100
- Hours used: 39 hours
- Provider: INRIA clusters
- Compute region: Paris, France
- Carbon emitted: 0.84 kg CO2eq
📄 License
This project is released under the MIT license.
📖 Citation
@inproceedings{touchent-de-la-clergerie-2024-camembert-bio,
title = "{C}amem{BERT}-bio: Leveraging Continual Pre-training for Cost-Effective Models on {F}rench Biomedical Data",
author = "Touchent, Rian and
de la Clergerie, {\'E}ric",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.241",
pages = "2692--2701",
abstract = "Clinical data in hospitals are increasingly accessible for research through clinical data warehouses. However these documents are unstructured and it is therefore necessary to extract information from medical reports to conduct clinical studies. Transfer learning with BERT-like models such as CamemBERT has allowed major advances for French, especially for named entity recognition. However, these models are trained for plain language and are less efficient on biomedical data. Addressing this gap, we introduce CamemBERT-bio, a dedicated French biomedical model derived from a new public French biomedical dataset. Through continual pre-training of the original CamemBERT, CamemBERT-bio achieves an improvement of 2.54 points of F1-score on average across various biomedical named entity recognition tasks, reinforcing the potential of continual pre-training as an equally proficient yet less computationally intensive alternative to training from scratch. Additionally, we highlight the importance of using a standard evaluation protocol that provides a clear view of the current state-of-the-art for French biomedical models.",
}
@inproceedings{touchent:hal-04130187,
TITLE = {{CamemBERT-bio : Un mod{\`e}le de langue fran{\c c}ais savoureux et meilleur pour la sant{\'e}}},
AUTHOR = {Touchent, Rian and Romary, Laurent and De La Clergerie, Eric},
URL = {https://hal.science/hal-04130187},
BOOKTITLE = {{18e Conf{\'e}rence en Recherche d'Information et Applications \\ 16e Rencontres Jeunes Chercheurs en RI \\ 30e Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles \\ 25e Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues}},
ADDRESS = {Paris, France},
EDITOR = {Servan, Christophe and Vilnat, Anne},
PUBLISHER = {{ATALA}},
PAGES = {323-334},
YEAR = {2023},
KEYWORDS = {comptes rendus m{\'e}dicaux ; TAL clinique ; CamemBERT ; extraction d'information ; biom{\'e}dical ; reconnaissance d'entit{\'e}s nomm{\'e}es},
HAL_ID = {hal-04130187},
HAL_VERSION = {v1},
}
👥 Development Information