Clinical Trials All MiniLM L6 V2
A sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L6-v2 that maps text into a 384-dimensional vector space, supporting tasks such as semantic similarity computation.
Downloads: 49
Released: 1/25/2025
Model Overview
The model produces vector representations of sentences and paragraphs and can be applied to semantic textual similarity, semantic search, text classification, clustering, and many other natural language processing tasks.
Model Features
Efficient semantic encoding
Efficiently encodes sentences and paragraphs into 384-dimensional dense vectors while preserving semantic information
Medical-domain optimization
Specifically optimized for medical text, handling specialized medical terminology better
Combined loss functions
Trained with a combination of Matryoshka loss and multiple negatives ranking loss
Model Capabilities
Semantic textual similarity computation
Semantic search
Paraphrase mining
Text classification
Text clustering
Use Cases
Medical research
Clinical trial document matching
Matching similar clinical trial descriptions to support study design
Medical literature retrieval
Semantics-based medical literature retrieval systems
Biopharmaceuticals
Drug research document analysis
Analyzing the similarity of drug research documents
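The clinical-trial matching and retrieval use cases above all reduce to ranking documents by embedding similarity. A minimal pure-Python sketch of that ranking step, using toy 4-dimensional vectors as stand-ins for the model's 384-dimensional embeddings (`cosine` and `top_k` are illustrative helpers, not part of any library):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top_k(query_emb, doc_embs, k=2):
    """Indices of the k document embeddings most similar to the query."""
    ranked = sorted(range(len(doc_embs)),
                    key=lambda i: cosine(query_emb, doc_embs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 4-dimensional stand-ins for the model's 384-dimensional embeddings.
query = [0.9, 0.1, 0.0, 0.1]
docs = [
    [0.8, 0.2, 0.1, 0.0],  # similar trial description
    [0.0, 0.1, 0.9, 0.2],  # unrelated trial
    [0.7, 0.0, 0.1, 0.2],  # somewhat similar
]
print(top_k(query, docs))  # → [0, 2]
```

In practice the same ranking is done over embeddings produced by `model.encode`, as in the quick-start example below.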
🚀 Sentence Transformer Based on sentence-transformers/all-MiniLM-L6-v2
This model was fine-tuned from sentence-transformers/all-MiniLM-L6-v2 with the sentence-transformers framework. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
🚀 Quick Start
Installing Dependencies
First, install the sentence-transformers library:
pip install -U sentence-transformers
Running Inference
Once installed, you can load the model and run inference:
from sentence_transformers import SentenceTransformer
# Download the model from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Define the sentences to encode
sentences = [
'open label trial safety efficacy sym001 treatment immune thrombocytopenic purpura itp. If your serious vaccine-induced adverse event has been entered in the CDC Vaccine Adverse Event Reporting System (VAERS) we are interested in enrolling you for this study in order to log your symptoms.\n\nThe primary goal of this study is to create a national database and gather vaccine-associated serious adverse events/injury data from newly vaccinated individuals in the US in order to identify the possible underlying causal relationships and plausible underlying biological mechanisms. The project aims to identify the genetic determinants of vaccine-induced adverse response by studying host genetics. We plan to use whole genome sequencing to identify single nucleotide polymorphisms associated with cardiovascular, neurological, gastrointestinal, musculoskeletal and immunological symptoms induced by vaccine administration.\n\nThe secondary goal is to establish criteria that enable classification of vaccine-induced adverse events/injuries compare data from our database with the official Vaccine Injury Table National Vaccine Injury Compensation Program on or after March 21, 2017.\n\nThe tertiary goal is to establish a database to gather detailed long-term adverse reaction data from subjects enrolled in FDA Emergency Use Authorized vaccine clinical trials.',
'Immune Thrombocytopenic Purpura inclusion criterion confirm presence thrombocytopenia platelet count 30000mm3 predose visit history isolated itp rhdpositive serology previous treatment response line therapy itpexclusion criterion know clinical picture suggestive cause thrombocytopenia especially systematic lupus erythematosusantiphospholipid syndrome evans syndrome immunodeficiency state lymphoproliferative disorder liver diseaseingestion drug quinidinequinine heparin sulfonamide hereditary thrombocytopenia confirm relevant laboratory finding suspect infection hiv hepatitis c h pylori clinical splenomegaly history abnormal bone marrow examination ongoing haemorrhage correspond grade 3 4 bleeding scale underlie haemolytic condition history splenectomy subject pregnant breast feeding intend pregnant incidence severity adverse event aes include adverse event saes 6 week post dose measurement platelet count day 1 week 6',
'Multiple System Atrophy inclusion criteriadiagnostic1 participant diagnosis possible probable msa modify gilman et al 2008 diagnostic criteria2 participant onset msa symptom occur 4 year screen assess investigator3 evidence msa specific symptom deficit measure umsars scaleexclusion criteriamedical history1 participant contraindication study proceduresdiagnostic assessments1 presence confound diagnosis andor condition affect participant safety study investigator judgement2 participant participation previous study diseasemodifye therapy prove receipt active treatment compromise interpretability datum present study consultation medical monitor designeeother1 participant participate study investigate active passive immunization αsynuclein αsyn progressive disease pd msa immunoglobulin g therapy 6 month screen change baseline modify unified multiple system atrophy rating scale umsar week 52 umsar historical review 11item scale adapt unify parkinson disease rating scale updrs assess activity relate motor disability relate autonomic dysfunction item score 0 normal 3 severe total score sum score domain range 0 33 high score mean poor health 52 week change baseline 11item umsar week 52 11 item umsar include 11 item ii assess motor autonomic disability umsar historical review assess activity relate motor disability autonomic dysfunction umsar ii motor examination measure functional impairment specific parkinsonian cerebellar feature item score 0 normal 4 severe total score sum score domain range 0 44 high score mean poor health 52 weekschange baseline umsar total score umsar ii week 52 umsar total scale consist item umsars part ii umsar historical review 12item scale assess activity relate motor disability autonomic dysfunction item score 0 normal 4 severe umsar ii motor examination 14item scale measure functional impairment eg speech rapid alternate movement hand finger tap leg agility select complex movement specific parkinsonian tremor rest cerebellar ocular motor 
dysfunction heelshin test feature item score 0 normal 4 severe 52 weekschange baseline umsars week 52 umsar historical review modified 11item scale adapt updrs assess activity relate motor disability 8 item 4 novel item relate autonomic dysfunction item score 0 normal 4 severe total score sum score item range 0 44 high score mean poor health 52 weekschange baseline umsars ii week 52 umsar ii motor examination 14item scale item eg speech rapid alternate movement hand finger tap leg agility measure functional impairment select complex movement item directly refer specific parkinsonian tremor rest cerebellar ocular motor dysfunction heelshin test feature motor examination section umsar base modify updrsiii item addition novel item heelkneeshin ataxia item score 0 normal 4 severe total score sum score item range 0 56 high score mean poor health 52 weeksclinical global impressionseverity cgis score cgis assess clinicians impression participant clinical condition clinician use total clinical experience participant population rate current severity participant illness 7point scale range 1 normal ill 7 extremely ill participant high score mean well health 52 weekschange baseline scale outcome parkinson disease autonomic dysfunction scopaaut total score scopaaut patientreported outcome assess autonomic function autonomic function critical symptom domain msa scale selfcomplete participant consist 25 item assess follow domain gastrointestinal 7 item urinary 6 item cardiovascular 3 item thermoregulatory 4 item pupillomotor 1 item sexual 2 item man 2 item woman score item range 0 experience symptom 3 experience symptom total composite score include domain report score range 0 symptom 69 high burden symptom 52 weeksoverall survival os os define time day study drug administration death cause 52 weekschange baseline level cerebrospinal fluid csf free alphasynuclein αsyn 52 weekscmax maximum observe serum concentration tak341 predose day 1 29 57 85 169 253 337 multiple timepoint 24 
hour postdose day 1 57 85 169 337 anytime day 365 427 early termination day 57 applicable early pk cohortstmax time occurrence cmax serum tak341 predose day 1 29 57 85 169 253 337 multiple timepoint 24 hour postdose day 1 57 85 169 337 anytime day 365 427 early termination day 57 applicable early pk cohortsaucτ area concentrationtime curve dose interval serum tak341 predose day 1 29 57 85 169 253 337 multiple timepoint 24 hour postdose day 1 57 85 169 337 anytime day 365 427 early termination day 57 applicable early pk cohortscsf concentration tak341 lumbar puncture csf sampling perform predose day 1 85 applicable early pk cohort 365number participant adverse event ae adverse event ae define untoward medical occurrence participant administer pharmaceutical product untoward medical occurrence necessarily causal relationship treatment datum report number participant analyze safety parameter include clinically significant abnormal value clinical laboratory evaluation vital sign ecg parameters physical examination neurological examination columbiasuicide severity rating scale cssrs 52 weeksnumber participant antidrug antibody 52 week',
]
# Encode the sentences
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Compute similarity scores between the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
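Because the model's pipeline ends in a Normalize() module, its embeddings are L2-normalized, so the cosine similarity computed by model.similarity coincides with a plain dot product. A small pure-Python sketch of that identity, with toy 2-dimensional vectors:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

a = l2_normalize([3.0, 4.0])
b = l2_normalize([4.0, 3.0])

dot = sum(x * y for x, y in zip(a, b))   # dot product of the unit vectors
cos = (3 * 4 + 4 * 3) / (5.0 * 5.0)      # cosine similarity of the raw vectors
print(dot, cos)  # both ≈ 0.96
```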
✨ Key Features
- Semantic representation: maps sentences and paragraphs into a 384-dimensional dense vector space, capturing the semantic information of the text.
- Multi-task applicability: usable for semantic textual similarity computation, semantic search, paraphrase mining, text classification, clustering, and many other natural language processing tasks.
- Fine-tuning flexibility: built on the sentence-transformers framework, so it can be fine-tuned on custom datasets to fit specific task requirements.
📦 Installation
Install the required dependency with:
pip install -U sentence-transformers
📚 Documentation
Model Details
Model Description
Attribute | Details |
---|---|
Model type | Sentence transformer |
Base model | sentence-transformers/all-MiniLM-L6-v2 |
Maximum sequence length | 256 tokens |
Output dimensionality | 384 dimensions |
Similarity function | Cosine similarity |
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
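The architecture above applies mean pooling over token embeddings (pooling_mode_mean_tokens: True) and then L2 normalization. A rough pure-Python sketch of those two steps, using toy 2-dimensional token vectors and a hypothetical attention mask (the real modules operate on batched tensors):

```python
import math

def mean_pool(token_embs, attention_mask):
    """Average token embeddings, ignoring padding positions (mask == 0)."""
    dim = len(token_embs[0])
    sums = [0.0] * dim
    count = 0
    for emb, m in zip(token_embs, attention_mask):
        if m:
            count += 1
            for i, x in enumerate(emb):
                sums[i] += x
    return [s / count for s in sums]

def l2_normalize(v):
    """Scale a vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

tokens = [[1.0, 0.0], [0.0, 1.0], [9.0, 9.0]]  # last row is padding
mask = [1, 1, 0]
pooled = mean_pool(tokens, mask)      # padding is excluded from the average
sentence_emb = l2_normalize(pooled)   # final unit-length sentence embedding
print(pooled, sentence_emb)
```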
Training Details
Training Dataset
Unnamed Dataset
- Size: 92,934 training samples
- Columns: <code>Text1</code> and <code>Text2</code>
- Approximate statistics based on the first 1000 samples:

 | Text1 | Text2 |
---|---|---|
Type | string | string |
Details | min: 29 tokens, mean: 104.36 tokens, max: 256 tokens | min: 8 tokens, mean: 227.98 tokens, max: 256 tokens |

- Samples:
Text1 | Text2 |
---|---|
study people normal kidney function people reduce kidney function test bi 1467335 process body. The primary objective of the current study is to investigate the influence of moderate renal impairment on the pharmacokinetics of multiple doses in comparison to a matched control group with normal renal function. | Renal Insufficiency |
16w interventional study titration doseefficacy assessment exelon chinese alzheimers disease patient. To investigate the efficacy of Exelon capsule at maximal tolerated dose in mild to moderate Chinese AD patients via dosage titration from 3mg/d to 12mg/d in a 16 weeks duration | Alzheimer's Disease key inclusion criterion diagnosis dementia alzheimers type accord dsmiv criterion clinical diagnosis probable ad accord nincdsadrda criteria mmse score 10 26 treatment naïve patient stop donepezil galantamine huperzine memantine 2 week stable medical condition sign inform consent form patient hisher legal guardiankey exclusion criterion severe ad patient history cerebrovascular disease active uncontrolled epilepsy active hypothyroidism asthma cns infection neurodegenerative disorder advanced severe progressive unstable medical condition attend clinical trial take clinical trial drug score 4 modify hachinski ischemic scale mhis patient achei memantine mean change baseline alzheimer disease assessment scale cognitive subscale adascog alzheimer disease assessment scale cognitive subscale adascog measure change cognitive function alzheimer disease assessment scale adas scale measure specific cognitive behavior disorder alzheimer disease ad patient alzheimer di... |
case series saneso 360 gastroscope. To confirm the procedural performance of the Saneso 360° gastroscope in Esophago-gastro-duodenoscopy (EGD) procedures. | EGD Procedure inclusion criterion 18 74 year age willing able comply study procedure provide write inform consent participate study schedule clinically indicate routine egd procedure asa class 13exclusion criterion alter esophageal gastric duodenal anatomy pregnant woman child 18 year age adult 75 year age subject routine endoscopic procedure contraindicate comorbid medical condition patient currently enrol investigational study directly interfere current study prior write approval sponsor asa class 45 successful egd procedure success assess end procedure 1 procedure success define successful intubation portion duodenum photograph portion duodenum take 24 hour study day endoscopist qualitative rating saneso 360 gastroscope endoscopist rate experience saneso 360 gastroscope immediately follow completion study procedure 1 fivepoint likert scale 5 excellent 4 good 3 acceptable 2 difficult 1unacceptable 24 hour study dayendoscopist qualitative rating saneso 360 gastroscope compare past... |
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [384, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1], "n_dims_per_step": -1 }
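MatryoshkaLoss trains the model so that the leading dimensions of each embedding remain useful on their own: at inference time an embedding can be truncated to 256, 128, or 64 dimensions and re-normalized. A toy sketch of this truncate-and-renormalize step (8-dimensional stand-in vectors; this is not the training code itself):

```python
import math

def truncate_and_norm(emb, dim):
    """Keep the first `dim` components, then re-normalize to unit length."""
    head = emb[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

emb = [0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]  # toy 8-dim embedding
for dim in (8, 4, 2):  # stand-ins for the real [384, 256, 128, 64]
    t = truncate_and_norm(emb, dim)
    print(dim, round(sum(x * x for x in t), 6))  # each truncation is unit length
```

The matryoshka_weights of [1, 1, 1, 1] mean the ranking loss at each dimensionality contributes equally during training.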
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
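The no_duplicates batch sampler matters here because MultipleNegativesRankingLoss treats the other pairs in a batch as negatives; a repeated text inside one batch would act as a false negative. A simplified, illustrative sketch of such a sampler (not the library's actual implementation):

```python
def no_duplicates_batches(pairs, batch_size):
    """Greedily build batches so no text repeats within a batch,
    keeping in-batch negatives valid for the ranking loss."""
    remaining = list(range(len(pairs)))
    batches = []
    while remaining:
        batch, seen, leftover = [], set(), []
        for idx in remaining:
            t1, t2 = pairs[idx]
            if len(batch) < batch_size and t1 not in seen and t2 not in seen:
                batch.append(idx)
                seen.update((t1, t2))
            else:
                leftover.append(idx)  # deferred to a later batch
        batches.append(batch)
        remaining = leftover
    return batches

# Hypothetical (Text1, Text2) pairs; "trial A" appears twice.
pairs = [("trial A", "criteria A"), ("trial B", "criteria B"), ("trial A", "criteria C")]
print(no_duplicates_batches(pairs, batch_size=4))  # → [[0, 1], [2]]
```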
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.0861 | 500 | 2.1564 |
0.1721 | 1000 | 1.6731 |
0.2582 | 1500 | 1.3615 |
0.3443 | 2000 | 1.331 |
0.4304 | 2500 | 1.2666 |
0.5164 | 3000 | 1.1645 |
0.6025 | 3500 | 1.159 |
0.6886 | 4000 | 1.0752 |
0.7747 | 4500 | 1.0458 |
0.8607 | 5000 | 1.0803 |
0.9468 | 5500 | 1.0237 |
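As a sanity check, the epoch fractions in the log are consistent with one pass over 92,934 samples at batch size 16:

```python
import math

samples, batch_size = 92_934, 16
steps_per_epoch = math.ceil(samples / batch_size)
print(steps_per_epoch)                   # 5809
print(round(500 / steps_per_epoch, 4))   # 0.0861, matching the first log row
print(round(5500 / steps_per_epoch, 4))  # 0.9468, matching the last log row
```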
Framework Versions
- Python:3.10.12
- Sentence Transformers:3.3.1
- Transformers:4.48.1
- PyTorch:2.1.0a0+32f93b1
- Accelerate:1.3.0
- Datasets:3.2.0
- Tokenizers:0.21.0
📄 License
No license information is provided in the documentation.
🔧 Technical Details
This model is built on the sentence-transformers framework and fine-tuned from sentence-transformers/all-MiniLM-L6-v2. The architecture consists of a Transformer layer, a pooling layer, and a normalization layer. Training used the MatryoshkaLoss loss function together with the hyperparameters listed above.
📖 Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}