NuNER v1 Orgs
A numind/NuNER-v1.0 model fine-tuned on the NER-ORGS dataset, used to identify organization entities (ORG) in text
Downloads: 6,836
Release date: 3/28/2024
Model Overview
This is a NuNER model fine-tuned on the NER-ORGS dataset, specialized for the named entity recognition task of identifying organization names in text. The NuNER model uses RoBERTa-base as its backbone encoder and was pre-trained on a large and diverse dataset.
Model Features
High-quality pre-training
Pre-trained on a large and diverse dataset of one million sentences synthetically annotated with GPT-3.5-turbo-0301, yielding high-quality token embeddings
Specialized fine-tuning
Fine-tuned on the NER-ORGS dataset, specifically optimizing its ability to recognize organization entities
Balanced performance
Achieves a good balance between precision (0.76) and recall (0.80), reaching an F1 score of 0.78
Model Capabilities
Recognition of organization entities in text
Named entity tag classification
Use Cases
News Analysis
Extracting organization entities from news
Identifies organization entities such as companies and government agencies mentioned in news text
Can accurately recognize organization names such as CNN, Apple, and Google
Business Intelligence
Business document analysis
Analyzes the organizations mentioned in business documents, contracts, or reports
license: cc-by-sa-4.0
base_model: numind/NuNER-v1.0
tags:
- token-classification
- ner
- named-entity-recognition
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: nuner-v1_orgs
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: FewNERD, CoNLL2003, and OntoNotes v5
      type: tomaarsen/ner-orgs
      split: test
    metrics:
    - type: f1
      value: 0.7798010380622837
      name: F1
    - type: precision
      value: 0.7605247616637139
      name: Precision
    - type: recall
      value: 0.800079879293512
      name: Recall
    - type: accuracy
      value: 0.9769673789973878
      name: Accuracy
datasets:
- tomaarsen/ner-orgs
language:
- en
library_name: transformers
pipeline_tag: token-classification
widget:
- text: Concern and scepticism surround Niger uranium mining waste storage plans. Towering mounds dot the desert landscape in northern Niger's Arlit region, but they are heaps of partially radioactive waste left from four decades of operations at one of the world's biggest uranium mines. An ambitious 10-year scheme costing $160 million is underway to secure the waste and avoid risks to health and the environment, but many local people are worried or sceptical. France's nuclear giant Areva, now called Orano, worked the area under a subsidiary, the Akouta Mining Company (Cominak). Cominak closed the site in 2021 after extracting 75,000 tonnes of uranium, much of which went to fuelling the scores of nuclear reactors that provide the backbone of France's electricity supply. Cominak's director general Mahaman Sani Abdoulaye showcased the rehabilitation project to the first French journalists to visit the site since 2010, when seven Areva employees were kidnapped by jihadists.
- text: SE Michigan counties allege insulin gouging; Localities file lawsuit against pharmaceutical makers. Four metro Detroit counties filed federal lawsuits Wednesday against some of the nation's biggest pharmaceutical manufacturers and pharmacy benefit managers alleging illegal price fixing for insulin products. Macomb, Monroe, Wayne and Washtenaw counties filed the lawsuits in U.S. District Court in New Jersey against more than a dozen companies, including Lilly, Sanofi Aventis, Novo Nordisk, Express Scripts, Optum Rx and CVS Caremark, per their attorneys. "These are the first such lawsuits that have been filed in the state of Michigan and probably more to come," said attorney Melvin Butch Hollowell of the Miller Law Firm. He described the allegations during a news conference, saying that nationally "the pharmacies and manufacturers get together. They control about 90% of the market each, of the insulin market. They talk to each other secretly. And they jack up the prices through anticompetitive means. And what we've seen is over the past 20 years, when we talk about jacking up the prices, they jack them up 1,500% in the last 20 years. 1,500%."
- text: Foreign governments may be spying on your smartphone notifications, senator says. Washington (CNN) — Foreign governments have reportedly attempted to spy on iPhone and Android users through the mobile app notifications they receive on their smartphones - and the US government has forced Apple and Google to keep quiet about it, according to a top US senator. Through legal demands sent to the tech giants, governments have allegedly tried to force Apple and Google to turn over sensitive information that could include the contents of a notification - such as previews of a text message displayed on a lock screen, or an update about app activity, Oregon Democratic Sen. Ron Wyden said in a new report. Wyden's report reflects the latest example of long-running tensions between tech companies and governments over law enforcement demands, which have stretched on for more than a decade. Governments around the world have particularly battled with tech companies over encryption, which provides critical protections to users and businesses while in some cases preventing law enforcement from pursuing investigations into messages sent over the internet.
- text: Tech giants ‘could severely disable UK spooks from stopping online harms’. Silicon Valley tech giants’ actions could “severely disable” UK spooks from preventing harm caused by online paedophiles and fraudsters, Suella Braverman has suggested. The Conservative former home secretary named Facebook owner Meta , and Apple, and their use of technologies such as end-to-end encryption as a threat to attempts to tackle digital crimes. She claimed the choice to back these technologies without “safeguards” could “enable and indeed facilitate some of the worst atrocities that our brave men and women in law enforcement agencies deal with every day”, as MPs began considering changes to investigatory powers laws. The Investigatory Powers (Amendment) Bill includes measures to make it easier for agencies to examine and retain bulk datasets, such as publicly available online telephone records, and would allow intelligence agencies to use internet connection records to aid detection of their targets. We know that the terrorists, the serious organised criminals, and fraudsters, and the online paedophiles, all take advantage of the dark web and encrypted spaces
- text: Camargo Corrêa asks Toffoli to suspend the fine agreed with Lava Jato. The Camargo Corrêa group has asked Justice Dias Toffoli to suspend the R$1.4 billion fine it agreed to pay in its leniency agreement under Operation Car Wash. The company asked for an extension of the minister's decisions that benefited J&F and Odebrecht. Like the other companies, it claimed that it suffered undue pressure from members of the Federal Public Prosecutor's Office (MPF) to close the deal. Much of the request is based on messages exchanged between prosecutors from the Curitiba task force and former judge Sergio Moro - Camargo Corrêa requested full access to the material, seized in Operation Spoofing, which arrested the hackers who broke into cell phones. The dialogues, according to the group's defense, indicate that the executives did not freely agree to the deal, since they were the targets of lawsuits and pre-trial detentions.
numind/NuNER-v1.0 fine-tuned on the NER-ORGS dataset
This is a NuNER model fine-tuned on the NER-ORGS dataset that can be used for Named Entity Recognition. The NuNER model uses RoBERTa-base as its backbone encoder and was pre-trained on the NuNER dataset, a large and diverse dataset of 1M sentences synthetically labeled by gpt-3.5-turbo-0301. This further pre-training phase produced high-quality token embeddings, a good starting point for fine-tuning on more specialized datasets.
Model Details
The model was fine-tuned as a regular BERT-style token-classification model for the NER task using the Hugging Face `Trainer` class.
Model labels
Entity Types: ORG
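With a single entity type, the underlying token classifier emits per-token tags, presumably `O`, `B-ORG`, and `I-ORG` in the standard IOB2 scheme (an assumption; the card lists only the entity type). A minimal pure-Python sketch of how such tags are merged into entity spans, roughly what `aggregation_strategy="simple"` does in the pipeline example below:

```python
def merge_iob2(tokens, tags):
    """Merge per-token IOB2 tags into (entity_type, text) spans.
    B-X starts a span, I-X continues a span of the same type,
    anything else closes the open span."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

# Hypothetical tag sequence for illustration (not actual model output).
tokens = ["Apple", "and", "Akouta", "Mining", "Company", "are", "organizations"]
tags   = ["B-ORG", "O",   "B-ORG",  "I-ORG",  "I-ORG",   "O",   "O"]
print(merge_iob2(tokens, tags))  # [('ORG', 'Apple'), ('ORG', 'Akouta Mining Company')]
```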
Uses
Direct Use for Inference
>>> from transformers import pipeline
>>> text = """Foreign governments may be spying on your smartphone notifications, senator says. Washington (CNN) — Foreign governments have reportedly attempted to spy on iPhone and Android users through the mobile app notifications they receive on their smartphones - and the US government has forced Apple and Google to keep quiet about it, according to a top US senator. Through legal demands sent to the tech giants, governments have allegedly tried to force Apple and Google to turn over sensitive information that could include the contents of a notification - such as previews of a text message displayed on a lock screen, or an update about app activity, Oregon Democratic Sen. Ron Wyden said in a new report. Wyden's report reflects the latest example of long-running tensions between tech companies and governments over law enforcement demands, which have stretched on for more than a decade. Governments around the world have particularly battled with tech companies over encryption, which provides critical protections to users and businesses while in some cases preventing law enforcement from pursuing investigations into messages sent over the internet."""
>>> classifier = pipeline(
"ner",
model="guishe/nuner-v1_orgs",
aggregation_strategy="simple",
)
>>> classifier(text)
[{'entity_group': 'ORG',
'score': 0.9821347,
'word': 'CNN',
'start': 94,
'end': 97},
{'entity_group': 'ORG',
'score': 0.99382174,
'word': ' Apple',
'start': 288,
'end': 293},
{'entity_group': 'ORG',
'score': 0.99351865,
'word': ' Google',
'start': 298,
'end': 304},
{'entity_group': 'ORG',
'score': 0.992792,
'word': ' Apple',
'start': 449,
'end': 454},
{'entity_group': 'ORG',
'score': 0.99385214,
'word': ' Google',
'start': 459,
'end': 465}]
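The pipeline returns one dict per detected span; the leading space in words like `' Apple'` is an artifact of RoBERTa's byte-level tokenizer. A small post-processing sketch that collects the distinct organization names from output shaped like the above (the `unique_orgs` helper is illustrative, not part of the model's API):

```python
# Output shape from the pipeline example above (scores truncated for brevity).
entities = [
    {"entity_group": "ORG", "score": 0.982, "word": "CNN",     "start": 94,  "end": 97},
    {"entity_group": "ORG", "score": 0.994, "word": " Apple",  "start": 288, "end": 293},
    {"entity_group": "ORG", "score": 0.994, "word": " Google", "start": 298, "end": 304},
    {"entity_group": "ORG", "score": 0.993, "word": " Apple",  "start": 449, "end": 454},
    {"entity_group": "ORG", "score": 0.994, "word": " Google", "start": 459, "end": 465},
]

def unique_orgs(entities, min_score=0.5):
    """Collect distinct ORG names above a score threshold,
    stripping the tokenizer's leading-space artifact."""
    seen = []
    for ent in entities:
        if ent["entity_group"] != "ORG" or ent["score"] < min_score:
            continue
        name = ent["word"].strip()
        if name not in seen:
            seen.append(name)
    return seen

print(unique_orgs(entities))  # ['CNN', 'Apple', 'Google']
```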
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
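Under these settings the effective batch size is 32 × 2 = 64, and with the 1,710 optimizer steps per epoch shown in the results below, the linear scheduler warms up over the first 10% of the 6,840 total steps. A pure-Python sketch of that schedule shape (an illustration of the `linear` scheduler with warmup, not the Transformers implementation itself):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1, base_lr=5e-5):
    """Learning rate rises linearly from 0 to base_lr over the warmup steps,
    then decays linearly back to 0 by the end of training."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * (step / max(1, warmup_steps))
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# From the training log below: 1710 optimizer steps per epoch, 4 epochs.
total_steps = 1710 * 4                 # 6840
warmup_steps = int(total_steps * 0.1)  # 684 warmup steps

# Effective batch size = per-device batch size * gradient accumulation steps.
effective_batch = 32 * 2               # 64, the total_train_batch_size above

print(linear_schedule_with_warmup(684, total_steps))          # peak lr: 5e-05
print(linear_schedule_with_warmup(total_steps, total_steps))  # end of training: 0.0
```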
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.0631 | 1.0 | 1710 | 0.0566 | 0.7635 | 0.7952 | 0.7790 | 0.9778 |
| 0.0572 | 2.0 | 3420 | 0.0580 | 0.7816 | 0.7925 | 0.7870 | 0.9785 |
| 0.0429 | 3.0 | 5130 | 0.0562 | 0.7869 | 0.8084 | 0.7975 | 0.9790 |
| 0.0336 | 4.0 | 6840 | 0.0631 | 0.7912 | 0.8045 | 0.7978 | 0.9790 |
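As a quick consistency check, the F1 in each row is the harmonic mean of the corresponding precision and recall, and the same holds for the test-set scores reported in the metadata:

```python
import math

# Test-set scores from the model metadata above.
precision = 0.7605247616637139
recall = 0.800079879293512

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
assert math.isclose(f1, 0.7798010380622837, rel_tol=1e-6)
print(round(f1, 4))  # 0.7798
```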
Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
Citation
BibTeX
@misc{bogdanov2024nuner,
title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
year={2024},
eprint={2402.15343},
archivePrefix={arXiv},
primaryClass={cs.CL}
}