Clinical Trials All MiniLM L6 V2
This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L6-v2. It maps text into a 384-dimensional vector space and supports tasks such as semantic similarity computation.
Downloads: 49
Released: 1/25/2025
Model Overview
The model produces vector representations of sentences and paragraphs and can be applied to a range of NLP tasks, including semantic textual similarity, semantic search, text classification, and clustering.
Model Features
- Efficient semantic encoding: encodes sentences and paragraphs into 384-dimensional dense vectors while preserving semantic information.
- Medical-domain optimization: fine-tuned specifically on medical text, so it handles specialized medical terminology better.
- Combined loss functions: trained with a combination of Matryoshka loss and multiple negatives ranking loss.
Model Capabilities
- Semantic textual similarity computation
- Semantic search
- Paraphrase mining
- Text classification
- Text clustering
Use Cases
Medical research
- Clinical trial document matching: match similar clinical trial descriptions to support study design.
- Medical literature retrieval: semantics-based retrieval systems for the medical literature.
Biopharmaceuticals
- Drug research document analysis: analyze the similarity of drug research documents.
🚀 Sentence Transformer based on sentence-transformers/all-MiniLM-L6-v2
This model was fine-tuned from sentence-transformers/all-MiniLM-L6-v2 with the sentence-transformers framework. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
🚀 Quick Start
Install dependencies
First, install the sentence-transformers library:
pip install -U sentence-transformers
Run inference
Once installed, load the model and run inference:
from sentence_transformers import SentenceTransformer
# Download the model from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Define the sentences to encode
sentences = [
'open label trial safety efficacy sym001 treatment immune thrombocytopenic purpura itp. If your serious vaccine-induced adverse event has been entered in the CDC Vaccine Adverse Event Reporting System (VAERS) we are interested in enrolling you for this study in order to log your symptoms.\n\nThe primary goal of this study is to create a national database and gather vaccine-associated serious adverse events/injury data from newly vaccinated individuals in the US in order to identify the possible underlying causal relationships and plausible underlying biological mechanisms. The project aims to identify the genetic determinants of vaccine-induced adverse response by studying host genetics. We plan to use whole genome sequencing to identify single nucleotide polymorphisms associated with cardiovascular, neurological, gastrointestinal, musculoskeletal and immunological symptoms induced by vaccine administration.\n\nThe secondary goal is to establish criteria that enable classification of vaccine-induced adverse events/injuries compare data from our database with the official Vaccine Injury Table National Vaccine Injury Compensation Program on or after March 21, 2017.\n\nThe tertiary goal is to establish a database to gather detailed long-term adverse reaction data from subjects enrolled in FDA Emergency Use Authorized vaccine clinical trials.',
'Immune Thrombocytopenic Purpura inclusion criterion confirm presence thrombocytopenia platelet count 30000mm3 predose visit history isolated itp rhdpositive serology previous treatment response line therapy itpexclusion criterion know clinical picture suggestive cause thrombocytopenia especially systematic lupus erythematosusantiphospholipid syndrome evans syndrome immunodeficiency state lymphoproliferative disorder liver diseaseingestion drug quinidinequinine heparin sulfonamide hereditary thrombocytopenia confirm relevant laboratory finding suspect infection hiv hepatitis c h pylori clinical splenomegaly history abnormal bone marrow examination ongoing haemorrhage correspond grade 3 4 bleeding scale underlie haemolytic condition history splenectomy subject pregnant breast feeding intend pregnant incidence severity adverse event aes include adverse event saes 6 week post dose measurement platelet count day 1 week 6',
'Multiple System Atrophy inclusion criteriadiagnostic1 participant diagnosis possible probable msa modify gilman et al 2008 diagnostic criteria2 participant onset msa symptom occur 4 year screen assess investigator3 evidence msa specific symptom deficit measure umsars scaleexclusion criteriamedical history1 participant contraindication study proceduresdiagnostic assessments1 presence confound diagnosis andor condition affect participant safety study investigator judgement2 participant participation previous study diseasemodifye therapy prove receipt active treatment compromise interpretability datum present study consultation medical monitor designeeother1 participant participate study investigate active passive immunization αsynuclein αsyn progressive disease pd msa immunoglobulin g therapy 6 month screen change baseline modify unified multiple system atrophy rating scale umsar week 52 umsar historical review 11item scale adapt unify parkinson disease rating scale updrs assess activity relate motor disability relate autonomic dysfunction item score 0 normal 3 severe total score sum score domain range 0 33 high score mean poor health 52 week change baseline 11item umsar week 52 11 item umsar include 11 item ii assess motor autonomic disability umsar historical review assess activity relate motor disability autonomic dysfunction umsar ii motor examination measure functional impairment specific parkinsonian cerebellar feature item score 0 normal 4 severe total score sum score domain range 0 44 high score mean poor health 52 weekschange baseline umsar total score umsar ii week 52 umsar total scale consist item umsars part ii umsar historical review 12item scale assess activity relate motor disability autonomic dysfunction item score 0 normal 4 severe umsar ii motor examination 14item scale measure functional impairment eg speech rapid alternate movement hand finger tap leg agility select complex movement specific parkinsonian tremor rest cerebellar ocular motor 
dysfunction heelshin test feature item score 0 normal 4 severe 52 weekschange baseline umsars week 52 umsar historical review modified 11item scale adapt updrs assess activity relate motor disability 8 item 4 novel item relate autonomic dysfunction item score 0 normal 4 severe total score sum score item range 0 44 high score mean poor health 52 weekschange baseline umsars ii week 52 umsar ii motor examination 14item scale item eg speech rapid alternate movement hand finger tap leg agility measure functional impairment select complex movement item directly refer specific parkinsonian tremor rest cerebellar ocular motor dysfunction heelshin test feature motor examination section umsar base modify updrsiii item addition novel item heelkneeshin ataxia item score 0 normal 4 severe total score sum score item range 0 56 high score mean poor health 52 weeksclinical global impressionseverity cgis score cgis assess clinicians impression participant clinical condition clinician use total clinical experience participant population rate current severity participant illness 7point scale range 1 normal ill 7 extremely ill participant high score mean well health 52 weekschange baseline scale outcome parkinson disease autonomic dysfunction scopaaut total score scopaaut patientreported outcome assess autonomic function autonomic function critical symptom domain msa scale selfcomplete participant consist 25 item assess follow domain gastrointestinal 7 item urinary 6 item cardiovascular 3 item thermoregulatory 4 item pupillomotor 1 item sexual 2 item man 2 item woman score item range 0 experience symptom 3 experience symptom total composite score include domain report score range 0 symptom 69 high burden symptom 52 weeksoverall survival os os define time day study drug administration death cause 52 weekschange baseline level cerebrospinal fluid csf free alphasynuclein αsyn 52 weekscmax maximum observe serum concentration tak341 predose day 1 29 57 85 169 253 337 multiple timepoint 24 
hour postdose day 1 57 85 169 337 anytime day 365 427 early termination day 57 applicable early pk cohortstmax time occurrence cmax serum tak341 predose day 1 29 57 85 169 253 337 multiple timepoint 24 hour postdose day 1 57 85 169 337 anytime day 365 427 early termination day 57 applicable early pk cohortsaucτ area concentrationtime curve dose interval serum tak341 predose day 1 29 57 85 169 253 337 multiple timepoint 24 hour postdose day 1 57 85 169 337 anytime day 365 427 early termination day 57 applicable early pk cohortscsf concentration tak341 lumbar puncture csf sampling perform predose day 1 85 applicable early pk cohort 365number participant adverse event ae adverse event ae define untoward medical occurrence participant administer pharmaceutical product untoward medical occurrence necessarily causal relationship treatment datum report number participant analyze safety parameter include clinically significant abnormal value clinical laboratory evaluation vital sign ecg parameters physical examination neurological examination columbiasuicide severity rating scale cssrs 52 weeksnumber participant antidrug antibody 52 week',
]
# Encode the sentences
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Compute similarity scores between the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
✨ Key Features
- Semantic representation: maps sentences and paragraphs into a 384-dimensional dense vector space, capturing the semantics of the text.
- Multi-task applicability: usable for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and other NLP tasks.
- Fine-tuning flexibility: built on the sentence-transformers framework, so it can be fine-tuned on custom datasets to fit specific tasks.
📦 Installation
Install the required dependency with:
pip install -U sentence-transformers
📚 Documentation
Model Details
Model Description
Property | Details |
---|---|
Model type | Sentence Transformer |
Base model | sentence-transformers/all-MiniLM-L6-v2 |
Maximum sequence length | 256 tokens |
Output dimensionality | 384 |
Similarity function | Cosine similarity |
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
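The Pooling module above uses mean pooling over token embeddings (`pooling_mode_mean_tokens: True`), and the Normalize module scales the result to unit length. A sketch of what those two modules compute, using a placeholder token-embedding matrix in place of real BertModel outputs:

```python
import numpy as np

# Placeholder token embeddings: 10 tokens x 384 dims (stand-in for BertModel output).
rng = np.random.default_rng(42)
token_embeddings = rng.normal(size=(10, 384))
attention_mask = np.ones(10)  # 1 for real tokens, 0 for padding

# (1) Pooling: masked mean over the token axis.
masked = token_embeddings * attention_mask[:, None]
sentence_embedding = masked.sum(axis=0) / attention_mask.sum()

# (2) Normalize: scale to unit L2 norm.
sentence_embedding /= np.linalg.norm(sentence_embedding)
print(sentence_embedding.shape)  # (384,)
```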
Training Details
Training Dataset
Unnamed Dataset
- Size: 92,934 training samples
- Columns: <code>Text1</code> and <code>Text2</code>
- Approximate statistics based on the first 1000 samples:

 | Text1 | Text2 |
---|---|---|
type | string | string |
details | min: 29 tokens, mean: 104.36 tokens, max: 256 tokens | min: 8 tokens, mean: 227.98 tokens, max: 256 tokens |

- Samples:

Text1 | Text2 |
---|---|
study people normal kidney function people reduce kidney function test bi 1467335 process body. The primary objective of the current study is to investigate the influence of moderate renal impairment on the pharmacokinetics of multiple doses in comparison to a matched control group with normal renal function. | Renal Insufficiency |
16w interventional study titration doseefficacy assessment exelon chinese alzheimers disease patient. To investigate the efficacy of Exelon capsule at maximal tolerated dose in mild to moderate Chinese AD patients via dosage titration from 3mg/d to 12mg/d in a 16 weeks duration | Alzheimer's Disease key inclusion criterion diagnosis dementia alzheimers type accord dsmiv criterion clinical diagnosis probable ad accord nincdsadrda criteria mmse score 10 26 treatment naïve patient stop donepezil galantamine huperzine memantine 2 week stable medical condition sign inform consent form patient hisher legal guardiankey exclusion criterion severe ad patient history cerebrovascular disease active uncontrolled epilepsy active hypothyroidism asthma cns infection neurodegenerative disorder advanced severe progressive unstable medical condition attend clinical trial take clinical trial drug score 4 modify hachinski ischemic scale mhis patient achei memantine mean change baseline alzheimer disease assessment scale cognitive subscale adascog alzheimer disease assessment scale cognitive subscale adascog measure change cognitive function alzheimer disease assessment scale adas scale measure specific cognitive behavior disorder alzheimer disease ad patient alzheimer di... |
case series saneso 360 gastroscope. To confirm the procedural performance of the Saneso 360° gastroscope in Esophago-gastro-duodenoscopy (EGD) procedures. | EGD Procedure inclusion criterion 18 74 year age willing able comply study procedure provide write inform consent participate study schedule clinically indicate routine egd procedure asa class 13exclusion criterion alter esophageal gastric duodenal anatomy pregnant woman child 18 year age adult 75 year age subject routine endoscopic procedure contraindicate comorbid medical condition patient currently enrol investigational study directly interfere current study prior write approval sponsor asa class 45 successful egd procedure success assess end procedure 1 procedure success define successful intubation portion duodenum photograph portion duodenum take 24 hour study day endoscopist qualitative rating saneso 360 gastroscope endoscopist rate experience saneso 360 gastroscope immediately follow completion study procedure 1 fivepoint likert scale 5 excellent 4 good 3 acceptable 2 difficult 1unacceptable 24 hour study dayendoscopist qualitative rating saneso 360 gastroscope compare past... |
- Loss function: MatryoshkaLoss with the following parameters:
{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 384, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 }
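The matryoshka_dims above mean the loss was applied to the first 384, 256, 128, and 64 dimensions of each embedding, so truncated prefixes remain usable at inference time: slice off the first k dimensions and re-normalize. A numpy sketch with a placeholder embedding in place of a real model output:

```python
import numpy as np

# Placeholder for a full 384-dim, unit-norm model output.
rng = np.random.default_rng(7)
embedding = rng.normal(size=384)
embedding /= np.linalg.norm(embedding)

# Keep only the first 64 dimensions (the smallest trained Matryoshka prefix),
# then re-normalize so cosine similarity still behaves as expected.
truncated = embedding[:64].copy()
truncated /= np.linalg.norm(truncated)
print(truncated.shape)  # (64,)
```

In sentence-transformers this truncation can also be requested at load time via the `truncate_dim` argument of `SentenceTransformer`.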
Training Hyperparameters
Non-default hyperparameters
- per_device_train_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
All hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
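Put together, a training setup matching the non-default hyperparameters above might look like the following sketch. This is an assumption-laden illustration, not the card's actual training script: the dataset contents and the output path are placeholders, and running it downloads the base model and (with fp16) expects a GPU.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder dataset with the card's Text1/Text2 columns.
train_dataset = Dataset.from_dict({
    "Text1": ["short trial summary"],
    "Text2": ["detailed eligibility criteria"],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, as in the card.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[384, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
# trainer.train()
```

The no_duplicates batch sampler matters here because MultipleNegativesRankingLoss treats other in-batch examples as negatives; duplicate texts in a batch would create false negatives.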
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.0861 | 500 | 2.1564 |
0.1721 | 1000 | 1.6731 |
0.2582 | 1500 | 1.3615 |
0.3443 | 2000 | 1.331 |
0.4304 | 2500 | 1.2666 |
0.5164 | 3000 | 1.1645 |
0.6025 | 3500 | 1.159 |
0.6886 | 4000 | 1.0752 |
0.7747 | 4500 | 1.0458 |
0.8607 | 5000 | 1.0803 |
0.9468 | 5500 | 1.0237 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.48.1
- PyTorch: 2.1.0a0+32f93b1
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
📄 License
No license information is given in the documentation.
🔧 Technical Details
This model was fine-tuned from sentence-transformers/all-MiniLM-L6-v2 using the sentence-transformers framework. The architecture consists of a Transformer module, a pooling module, and a normalization module. Training used the MatryoshkaLoss loss function with the hyperparameters listed above.
📖 Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}