ModernBERT Base STS
This is a Sentence Transformer model fine-tuned from ModernBERT-base on the stsb dataset; it generates 768-dimensional dense vector representations of sentences and paragraphs.
Downloads: 315
Release Time: 1/12/2025
Model Overview
This model maps sentences and paragraphs into a 768-dimensional dense vector space, which can be used for tasks such as semantic text similarity, semantic search, paraphrase mining, text classification, clustering, etc.
Model Features
Long Text Support
Supports sequences of up to 8192 tokens, making it suitable for processing long texts (see the sketch after this list).
Efficient Similarity Calculation
Trained with the CoSENT loss (CoSENTLoss), it performs strongly on semantic similarity tasks.
Versatile Vector Representation
The generated 768-dimensional vectors can be used for various downstream NLP tasks.
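As a brief illustration of the long-text support, the following sketch encodes a multi-paragraph document in a single pass. The document here is synthetic, and anything beyond the 8192-token limit would be truncated by the model:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nickprock/ModernBERT-base-sts")

# Synthetic long document; real inputs up to 8192 tokens are encoded in one pass.
long_document = " ".join(["Sentence transformers map text to dense vectors."] * 500)

embedding = model.encode(long_document)
print(embedding.shape)
# (768,)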
Model Capabilities
Semantic text similarity calculation
Semantic search
Paraphrase mining
Text classification
Text clustering
Use Cases
Information Retrieval
Similar Document Retrieval
Recommends related documents by calculating document vector similarity.
Question Answering Systems
Question Matching
Calculates similarity between user questions and knowledge base questions to find the best matching answer.
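As a minimal sketch of the question-matching use case, the snippet below encodes a small invented FAQ and picks the knowledge-base question closest to a user query by cosine similarity (the questions are illustrative, not from any real system):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nickprock/ModernBERT-base-sts")

# Invented knowledge-base questions, for illustration only.
kb_questions = [
    "How do I reset my password?",
    "What payment methods do you accept?",
    "How can I cancel my subscription?",
]
user_question = "I forgot my password, how do I change it?"

kb_embeddings = model.encode(kb_questions)
query_embedding = model.encode([user_question])

# model.similarity computes cosine similarity for this model.
scores = model.similarity(query_embedding, kb_embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(kb_questions[best])
# How do I reset my password?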
SentenceTransformer based on answerdotai/ModernBERT-base
This is a Sentence Transformer model fine-tuned from answerdotai/ModernBERT-base on the stsb dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Quick Start
This model is fine-tuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset and maps text to a 768-dimensional vector space for a wide range of NLP tasks.
Features
- Semantic Similarity: calculates semantic similarity between sentences.
- Feature Extraction: extracts 768-dimensional feature vectors from sentences and paragraphs.
- Multiple Applications: applicable to semantic search, paraphrase mining, text classification, clustering, and more.
Installation
First, install the Sentence Transformers library:
pip install -U sentence-transformers
Usage Examples
Basic Usage
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("nickprock/ModernBERT-base-sts")
# Run inference
sentences = [
    'While Queen may refer to both Queen regent (sovereign) or Queen consort, the King has always been the sovereign.',
    'There is a very good reason not to refer to the Queen\'s spouse as "King" - because they aren\'t the King.',
    'A man sitting on the floor in a room is strumming a guitar.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Documentation
Model Details
Model Information
Property | Details |
---|---|
Model Type | Sentence Transformer |
Base Model | [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) |
Maximum Sequence Length | 8192 tokens |
Output Dimensionality | 768 dimensions |
Similarity Function | Cosine Similarity |
Training Dataset | [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) |
Language | en |
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
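For reference, the two modules above can also be assembled by hand with the sentence_transformers.models API. This is a sketch of an equivalent construction (loading the released checkpoint directly is the normal route):

from sentence_transformers import SentenceTransformer, models

# ModernBERT backbone with the 8192-token window.
word_embedding_model = models.Transformer("answerdotai/ModernBERT-base", max_seq_length=8192)

# Mean pooling over token embeddings produces one 768-dimensional sentence vector.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])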
Evaluation
Metrics - Semantic Similarity
- Datasets: sts-dev and sts-test
- Evaluated with: EmbeddingSimilarityEvaluator

Metric | sts-dev | sts-test |
---|---|---|
pearson_cosine | 0.8824 | 0.8564 |
spearman_cosine | 0.8877 | 0.8684 |
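These scores should be reproducible along the following lines; a sketch assuming the evaluator's defaults (Pearson and Spearman correlations over cosine similarity):

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("nickprock/ModernBERT-base-sts")

# stsb test split; gold similarity scores are normalized to [0, 1].
test = load_dataset("sentence-transformers/stsb", split="test")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=test["sentence1"],
    sentences2=test["sentence2"],
    scores=test["score"],
    name="sts-test",
)
print(evaluator(model))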
Training Details
Training Dataset - stsb
- Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
- Size: 5,749 training samples
- Columns: sentence1, sentence2, and score
- Approximate statistics based on the first 1000 samples:

 | sentence1 | sentence2 | score |
---|---|---|---|
type | string | string | float |
details | min: 6 tokens, mean: 10.16 tokens, max: 28 tokens | min: 6 tokens, mean: 10.12 tokens, max: 25 tokens | min: 0.0, mean: 0.45, max: 1.0 |

- Samples:

sentence1 | sentence2 | score |
---|---|---|
A plane is taking off. | An air plane is taking off. | 1.0 |
A man is playing a large flute. | A man is playing a flute. | 0.76 |
A man is spreading shreded cheese on a pizza. | A man is spreading shredded cheese on an uncooked pizza. | 0.76 |

- Loss: CoSENTLoss with parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
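The CoSENT loss ranks every pair of sentence pairs in a batch: whenever one pair has a higher gold score than another, its scaled cosine similarity is pushed above the other's. Below is a simplified, self-contained sketch of that computation, following the formulation in Su Jianlin's post cited in the Citation section rather than the library's exact code:

import torch

def cosent_loss(emb1, emb2, gold, scale=20.0):
    # Scaled cosine similarity for each (sentence1, sentence2) pair in the batch.
    sims = scale * torch.nn.functional.cosine_similarity(emb1, emb2)
    # diff[i, j] = sims[i] - sims[j]; should be negative whenever gold[i] < gold[j].
    diff = sims[:, None] - sims[None, :]
    # Keep only pair-of-pair terms where the gold ordering imposes a constraint.
    mask = gold[:, None] >= gold[None, :]
    diff = diff.masked_fill(mask, float("-inf"))
    # log(1 + sum(exp(diff))) via logsumexp with an extra zero term.
    return torch.logsumexp(torch.cat([diff.flatten(), diff.new_zeros(1)]), dim=0)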
Evaluation Dataset - stsb
- Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
- Size: 1,500 evaluation samples
- Columns: sentence1, sentence2, and score
- Approximate statistics based on the first 1000 samples:

 | sentence1 | sentence2 | score |
---|---|---|---|
type | string | string | float |
details | min: 5 tokens, mean: 15.11 tokens, max: 44 tokens | min: 6 tokens, mean: 15.1 tokens, max: 50 tokens | min: 0.0, mean: 0.42, max: 1.0 |

- Samples:

sentence1 | sentence2 | score |
---|---|---|
A man with a hard hat is dancing. | A man wearing a hard hat is dancing. | 1.0 |
A young child is riding a horse. | A child is riding a horse. | 0.95 |
A man is feeding a mouse to a snake. | The man is feeding a mouse to the snake. | 1.0 |

- Loss: CoSENTLoss with parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 4
- warmup_ratio: 0.1
- fp16: True
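Putting the dataset, loss, and non-default hyperparameters together, a run like this one could be reproduced roughly as follows; a sketch using the Sentence Transformers v3 training API, with an arbitrary output directory name:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("answerdotai/ModernBERT-base")
dataset = load_dataset("sentence-transformers/stsb")

args = SentenceTransformerTrainingArguments(
    output_dir="ModernBERT-base-sts",  # arbitrary name
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    loss=CoSENTLoss(model),
)
trainer.train()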
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
---|---|---|---|---|---|
0.2778 | 100 | 4.5713 | 4.3257 | 0.8018 | - |
0.5556 | 200 | 4.3301 | 4.3966 | 0.8042 | - |
0.8333 | 300 | 4.3008 | 4.2251 | 0.8613 | - |
1.1111 | 400 | 4.156 | 4.5078 | 0.8687 | - |
1.3889 | 500 | 4.0776 | 4.3005 | 0.8801 | - |
1.6667 | 600 | 4.0256 | 4.2623 | 0.8804 | - |
1.9444 | 700 | 4.0178 | 4.3090 | 0.8807 | - |
2.2222 | 800 | 3.7932 | 4.5140 | 0.8812 | - |
2.5 | 900 | 3.7444 | 4.5806 | 0.8803 | - |
2.7778 | 1000 | 3.7099 | 4.6048 | 0.8818 | - |
3.0556 | 1100 | 3.6924 | 4.7359 | 0.8841 | - |
3.3333 | 1200 | 3.4517 | 5.0212 | 0.8858 | - |
3.6111 | 1300 | 3.3672 | 5.1527 | 0.8871 | - |
3.8889 | 1400 | 3.3959 | 5.1539 | 0.8877 | - |
-1 | -1 | - | - | - | 0.8684 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.49.0.dev0
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.2.0
- Tokenizers: 0.21.0
License
No license information is provided in the original document.
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
CoSENTLoss
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
Featured Recommended AI Models

Jina Embeddings V3 (jinaai)
A multilingual sentence embedding model supporting over 100 languages, specializing in sentence similarity and feature extraction tasks.
Text Embedding · Transformers · Supports Multiple Languages · 3.7M · 911

Ms Marco MiniLM L6 V2 (cross-encoder, Apache-2.0)
A cross-encoder model trained on the MS MARCO passage ranking task for query-passage relevance scoring in information retrieval.
Text Embedding · English · 2.5M · 86

Opensearch Neural Sparse Encoding Doc V2 Distill (opensearch-project, Apache-2.0)
A distillation-based sparse retrieval model optimized for OpenSearch, supporting inference-free document encoding with improved search relevance and efficiency over V1.
Text Embedding · Transformers · English · 1.8M · 7

Sapbert From PubMedBERT Fulltext (cambridgeltl, Apache-2.0)
A biomedical entity representation model based on PubMedBERT, optimized for capturing semantic relations through self-aligned pre-training.
Text Embedding · English · 1.7M · 49

Gte Large (thenlper, MIT)
GTE-Large is a powerful sentence transformer model focused on sentence similarity and text embedding tasks, performing strongly on multiple benchmarks.
Text Embedding · English · 1.5M · 278

Gte Base En V1.5 (Alibaba-NLP, Apache-2.0)
GTE-base-en-v1.5 is an English sentence transformer model focused on sentence similarity tasks, performing strongly on multiple text embedding benchmarks.
Text Embedding · Transformers · Supports Multiple Languages · 1.5M · 63

Gte Multilingual Base (Alibaba-NLP, Apache-2.0)
GTE Multilingual Base is a multilingual sentence embedding model supporting over 50 languages, suitable for tasks such as sentence similarity calculation.
Text Embedding · Transformers · Supports Multiple Languages · 1.2M · 246

Polybert (kuelumbus)
polyBERT is a chemical language model designed for fully machine-driven, ultrafast polymer informatics. It maps PSMILES strings to 600-dimensional dense fingerprints that numerically represent polymer chemical structures.
Text Embedding · Transformers · 1.0M · 5

Bert Base Turkish Cased Mean Nli Stsb Tr (emrecan, Apache-2.0)
A sentence embedding model based on Turkish BERT, optimized for semantic similarity tasks.
Text Embedding · Transformers · Other · 1.0M · 40

GIST Small Embedding V0 (avsolatorio, MIT)
A text embedding model fine-tuned from BAAI/bge-small-en-v1.5, trained on the MEDI dataset and MTEB classification datasets, optimized for query encoding in retrieval tasks.
Text Embedding · Safetensors · English · 945.68k · 29