# Chonky ModernBERT Large 1

## Model Overview
This model processes text and splits it into semantically coherent segments. The resulting chunks can be fed into embedding-based retrieval systems or language models as part of a RAG pipeline.

## Highlights
- **Intelligent semantic chunking**: splits text into meaningful semantic chunks while preserving the coherence of the content.
- **Optimized for RAG**: designed for retrieval-augmented generation (RAG) systems, with chunk quality in mind.
- **Long-sequence support**: fine-tuned on sequences of length 1024 (the base model supports sequences up to 8192).

## Capabilities
- Semantic text chunking
- Paragraph segmentation
- Preprocessing for RAG systems

## Use Cases
- **Information retrieval / RAG preprocessing**: prepare semantically coherent text chunks for retrieval-augmented generation systems, improving retrieval accuracy and relevance.
- **Text processing / document segmentation**: split long documents into meaningful paragraphs for downstream processing and analysis.
# 🚀 Chonky ModernBERT Large v1

**Chonky** is a transformer model that intelligently segments text into meaningful semantic chunks. It can be used in retrieval-augmented generation (RAG) systems.
## 🚀 Quick Start
Chonky processes text and splits it into semantically coherent segments. These segments can then be fed into an embedding-based retrieval system or a language model as part of a RAG pipeline.

> ⚠️ **Important**
> The model was fine-tuned on sequences of length 1024 (by default, ModernBERT supports sequence lengths of up to 8192).
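If you call the model directly and your documents may exceed that 1024-token window, one simple workaround is to pre-split the input into windows that fit the model and chunk each window separately. The sketch below is only an illustration under stated assumptions: the sentence splitting is deliberately naive, `window_text` is a hypothetical helper rather than part of any library, and the chonky library itself may already handle long inputs internally.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mirth/chonky_modernbert_large_1")

def window_text(text: str, max_tokens: int = 1024):
    """Greedily pack sentences into windows of at most max_tokens tokens."""
    windows, current = [], ""
    for sentence in text.split(". "):  # naive sentence splitting, for illustration only
        candidate = f"{current}. {sentence}" if current else sentence
        if current and len(tokenizer(candidate)["input_ids"]) > max_tokens:
            windows.append(current)
            current = sentence  # note: a single over-long sentence is not split further
        else:
            current = candidate
    if current:
        windows.append(current)
    return windows
```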
## ✨ Key Features
- Intelligently segments text into meaningful semantic chunks.
- Can be used in retrieval-augmented generation (RAG) systems.
## 📦 Installation
You can use this model through the small Python library developed by the author: `chonky`.
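Assuming the library is published on PyPI under the same name (the original card does not state this), installation would typically be:

```bash
pip install chonky
```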
## 💻 Usage Examples
### Basic usage

```python
from chonky import ParagraphSplitter

# On first run, this downloads the transformer model
splitter = ParagraphSplitter(
    model_id="mirth/chonky_modernbert_large_1",
    device="cpu",
)

text = """Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep. The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing." This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights."""

for chunk in splitter(text):
    print(chunk)
    print("--")
```
Example output:

```text
Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories.
--
My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep. The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing."
--
This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it.
--
It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.
--
```
### Advanced usage
You can also use this model with a standard named-entity-recognition (NER) pipeline:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "mirth/chonky_modernbert_large_1"
tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=1024)

id2label = {
    0: "O",
    1: "separator",
}
label2id = {
    "O": 0,
    "separator": 1,
}

model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)

pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

text = """Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep. The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing." This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights."""

pipe(text)
```
Example output:

```python
[
    {'entity_group': 'separator', 'score': np.float32(0.91590524), 'word': ' stories.', 'start': 209, 'end': 218},
    {'entity_group': 'separator', 'score': np.float32(0.6210419), 'word': ' processing."', 'start': 455, 'end': 468},
    {'entity_group': 'separator', 'score': np.float32(0.7071036), 'word': '.', 'start': 652, 'end': 653}
]
```
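The pipeline only returns the predicted separator spans; to recover the actual chunks you still need to slice the input text at those offsets. A minimal sketch, assuming the output format shown above (`chunks_from_separators` is an illustrative helper, not part of any library):

```python
def chunks_from_separators(text, separators):
    """Split text at the end offset of each predicted separator."""
    chunks, start = [], 0
    for sep in separators:
        chunks.append(text[start:sep["end"]].strip())
        start = sep["end"]
    if start < len(text):  # keep any trailing text after the last separator
        chunks.append(text[start:].strip())
    return chunks

for chunk in chunks_from_separators(text, pipe(text)):
    print(chunk)
    print("--")
```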
## 📚 Documentation
### Training data
The model was trained on the minipile and bookcorpus datasets to perform paragraph splitting.
### Evaluation metrics

Token-based metrics on the minipile dataset:

| Metric    | Value |
|-----------|-------|
| F1        | 0.85  |
| Precision | 0.87  |
| Recall    | 0.82  |
| Accuracy  | 0.99  |

Token-based metrics on the bookcorpus dataset:

| Metric    | Value |
|-----------|-------|
| F1        | 0.79  |
| Precision | 0.85  |
| Recall    | 0.74  |
| Accuracy  | 0.99  |
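"Token-based" here means every token is labeled either `O` or `separator`, and standard binary classification metrics are computed over those per-token labels. A minimal sketch of that computation using scikit-learn (an assumed dependency, not mentioned in the original card, with toy label values):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Per-token binary labels: 1 = separator, 0 = O (toy values for illustration)
y_true = [0, 0, 1, 0, 0, 1, 0]
y_pred = [0, 1, 1, 0, 0, 1, 0]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f} "
      f"Acc={accuracy_score(y_true, y_pred):.2f}")
```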
### Hardware
The model was fine-tuned on a single H100 GPU for several hours.
## 📄 License
This project is released under the MIT License.