Measuring Embeddings V4.2
A sentence transformer fine-tuned on measurement-domain datasets that generates semantic embedding vectors for tasks such as semantic textual similarity and semantic search.
Downloads: 61
Release date: 3/12/2025
Model Overview
This model is fine-tuned from intfloat/multilingual-e5-large-instruct and is designed for processing texts in the field of measurement engineering, mapping sentences and paragraphs into a 1024-dimensional dense vector space.
Model Features
Optimized for measurement domain
Fine-tuned on the measuring-embeddings-v4 dataset, making it particularly well suited to the professional terminology and concepts of measurement engineering.
High-dimensional semantic space
Maps text into a 1024-dimensional dense vector space that captures subtle semantic differences.
Multilingual support
Built on the multilingual-e5-large-instruct base model, it retains multilingual processing capabilities.
Long text processing
Supports sequences of up to 512 tokens, so it can handle longer technical descriptions.
Model Capabilities
Semantic text similarity calculation
Semantic search
Text classification
Clustering analysis
Paraphrase mining
Use Cases
Measurement Engineering
Calibration record matching
Automatically matches and associates equipment calibration records with relevant technical documents.
Improves the efficiency and accuracy of calibration document management.
Technical document retrieval
Semantic similarity-based retrieval of measurement system technical documents.
Helps engineers quickly find relevant technical materials.
Quality Control
Uncertainty analysis
Associates uncertainty point data with relevant measurement system documents.
Supports a more comprehensive uncertainty assessment process.
🚀 SentenceTransformer based on intfloat/multilingual-e5-large-instruct
This is a sentence-transformers model fine-tuned from intfloat/multilingual-e5-large-instruct on the measuring-embeddings-v4 dataset. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
✨ Features
- Semantic Understanding: Maps sentences and paragraphs to a 1024-dimensional dense vector space, enabling semantic textual similarity analysis.
- Versatile Applications: Suitable for various tasks such as semantic search, paraphrase mining, text classification, and clustering.
- Fine-tuned Model: Fine-tuned from intfloat/multilingual-e5-large-instruct on the measuring-embeddings-v4 dataset.
📦 Installation
First, install the Sentence Transformers library:
pip install -U sentence-transformers
💻 Usage Examples
Basic Usage
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/measuring-embeddings-v4.2")
# Run inference
sentences = [
'uncertainty points',
'What is a Fluid?\nA Fluid is the substance measured within a measurement system. It can be a gas or liquid, such as hydrocarbons, water, or other industrial fluids. Proper classification of fluids is essential for ensuring measurement accuracy, regulatory compliance, and operational efficiency. By identifying fluids correctly, the system applies the appropriate measurement techniques, processing methods, and reporting standards.',
'What is a Calibration Point?\nA Calibration Point represents a specific data entry in a calibration process, comparing an expected reference value to an actual measured value. These points are fundamental in ensuring measurement accuracy and identifying deviations.\n\nKey Aspects of Calibration Points:\n- Calibration Report Association: Each calibration point belongs to a specific calibration report, linking it to a broader calibration procedure.\n- Reference Values: Theoretical or expected values used as a benchmark for measurement validation.\n- Measured Values: The actual recorded values during calibration, reflecting the instrument’s response.\n- Errors: The difference between reference and measured values, indicating possible measurement inaccuracies.\nCalibration points are essential for evaluating instrument performance, ensuring compliance with standards, and maintaining measurement reliability.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
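Beyond pairwise similarity, the same embeddings can drive corpus-level semantic search, as in the technical-document-retrieval use case above. A minimal sketch using the library's semantic_search utility; the query and corpus snippets below are illustrative placeholders, not taken from the training data:
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Lauther/measuring-embeddings-v4.2")

# Illustrative measurement-domain corpus (placeholder snippets)
corpus = [
    "Calibration report listing reference values, measured values, and errors per calibration point.",
    "Uncertainty file with components such as diameter, density, variance, and covariance.",
    "Flow computer configuration describing the meter streams of a measurement system.",
]
query = "which document describes the uncertainty components?"

# Encode corpus and query, then rank by cosine similarity
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")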
📚 Documentation
Model Details
Model Description
Property | Details |
---|---|
Model Type | Sentence Transformer |
Base model | intfloat/multilingual-e5-large-instruct |
Maximum Sequence Length | 512 tokens |
Output Dimensionality | 1024 dimensions |
Similarity Function | Cosine Similarity |
Training Dataset | measuring-embeddings-v4 |
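The sequence length and output dimensionality in the table can be checked programmatically once the model is loaded; a small sketch:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lauther/measuring-embeddings-v4.2")
print(model.max_seq_length)                       # 512
print(model.get_sentence_embedding_dimension())   # 1024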
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
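The three modules correspond to the XLM-RoBERTa encoder, mean pooling over non-padding tokens, and L2 normalization. As a rough illustration of what that pipeline does (assuming the repository also exposes the underlying transformer weights, as SentenceTransformers repositories normally do), the same embedding can be approximated with plain transformers:
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lauther/measuring-embeddings-v4.2")
encoder = AutoModel.from_pretrained("Lauther/measuring-embeddings-v4.2")

batch = tokenizer(["uncertainty points"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# Module (1): mean pooling over non-padding tokens; module (2): L2 normalization
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 1024])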
Training Details
Training Dataset
- Dataset: measuring-embeddings-v4 at 1e3ca2c
- Size: 3,075 training samples
- Columns: sentence1, sentence2, and score
- Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
| --- | --- | --- | --- |
| type | string | string | float |
| details | min: 3 tokens<br>mean: 7.55 tokens<br>max: 17 tokens | min: 80 tokens<br>mean: 180.22 tokens<br>max: 406 tokens | min: 0.07<br>mean: 0.21<br>max: 0.95 |
- Samples:
sentence1 | sentence2 | score |
---|---|---|
last calibrated span | What are historical report values?<br>These represent the recorded data points within flow computer reports. Unlike the report index, which serves as a reference to locate reports, these values contain the actual measurements and calculated data stored in the historical records.<br><br>Flow computer reports store two types of data values:<br><br>- **Hourly data values**: Contain measured or calculated values (e.g., operational minutes, alarms set, etc.) recorded on an hourly basis.<br>- **Daily data values**: Contain measured or calculated values (e.g., operational minutes, alarms set, etc.) recorded on a daily basis.<br>Each value is directly linked to its respective report index, ensuring traceability to the original flow computer record. These values maintain their raw integrity, providing a reliable source for analysis and validation. | 0.1 |
flow computer configuration | What is a Measurement Type?<br>Measurement types define the classification of measurements used within a system based on their purpose and regulatory requirements. These types include fiscal, appropriation, operational, and custody measurements.<br>- Fiscal measurements are used for tax and regulatory reporting, ensuring accurate financial transactions based on measured quantities.<br>- Appropriation measurements track resource allocation and ownership distribution among stakeholders.<br>- Operational measurements support real-time monitoring and process optimization within industrial operations.<br>- Custody measurements are essential for legal and contractual transactions, ensuring precise handover of fluids between parties.<br>These classifications play a crucial role in compliance, financial accuracy, and operational efficiency across industries such as oil and gas, water management, and energy distribution. | 0.1 |
uncertainty certificate number | What is an Uncertainty Composition?<br>An Uncertainty Composition represents a specific factor that contributes to the overall uncertainty of a measurement system. These components are essential for evaluating the accuracy and reliability of measurements by identifying and quantifying the sources of uncertainty.<br><br>Key Aspects of an Uncertainty Component:<br>- Component Name: Defines the uncertainty factor (e.g., diameter, density, variance, covariance) influencing the measurement system.<br>- Value of Composition: Quantifies the component’s contribution to the total uncertainty, helping to analyze which factors have the greatest impact.<br>- Uncertainty File ID: Links the component to a specific uncertainty dataset for traceability and validation.<br>Understanding these components is critical for uncertainty analysis, ensuring compliance with industry standards and improving measurement precision. | 0.1 |
- Loss: CoSENTLoss with these parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
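For reference, a hedged sketch of how a sentence1/sentence2/score dataset is paired with CoSENTLoss in recent sentence-transformers releases; the dataset identifier and the default trainer settings are assumptions, not a reproduction of the exact training run:
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# Assumed dataset id; columns must be (sentence1, sentence2, score) with a float score label
train_dataset = load_dataset("Lauther/measuring-embeddings-v4", split="train")

# CoSENTLoss ranks pairwise cosine similarities against the gold scores
loss = CoSENTLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()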
Evaluation Dataset
- Dataset: measuring-embeddings-v4 at 1e3ca2c
- Size: 659 evaluation samples
- Columns: sentence1, sentence2, and score
- Approximate statistics based on the first 659 samples:
| | sentence1 | sentence2 | score |
| --- | --- | --- | --- |
| type | string | string | float |
| details | min: 3 tokens<br>mean: 7.63 tokens<br>max: 17 tokens | min: 80 tokens<br>mean: 186.36 tokens<br>max: 406 tokens | min: 0.07<br>mean: 0.2<br>max: 0.9 |
- Samples:
sentence1 | sentence2 | score |
---|---|---|
measurement system details | What is an Uncertainty Composition?<br>An Uncertainty Composition represents a specific factor that contributes to the overall uncertainty of a measurement system. These components are essential for evaluating the accuracy and reliability of measurements by identifying and quantifying the sources of uncertainty.<br><br>Key Aspects of an Uncertainty Component:<br>- Component Name: Defines the uncertainty factor (e.g., diameter, density, variance, covariance) influencing the measurement system.<br>- Value of Composition: Quantifies the component’s contribution to the total uncertainty, helping to analyze which factors have the greatest impact.<br>- Uncertainty File ID: Links the component to a specific uncertainty dataset for traceability and validation.<br>Understanding these components is critical for uncertainty analysis, ensuring compliance with industry standards and improving measurement precision. | 0.15 |
measurement system tag EMED-3102-02-010 | What is a report index or historic index?<br>Indexes represent the recorded reports generated by flow computers, classified into two types: <br>- **Hourly reports Index**: Store data for hourly events.<br>- **Daily reports Index**: Strore data for daily events.<br><br>These reports, also referred to as historical data or flow computer historical records, contain raw, first-hand measurements directly collected from the flow computer. The data has not been processed or used in any calculations, preserving its original state for analysis or validation.<br><br>The index is essential for locating specific values within the report. | 0.24 |
static pressure | What is a Meter Stream?<br>A Meter Stream represents a measurement system configured within a flow computer. It serves as the interface between the physical measurement system and the computational processes that record and analyze flow data.<br><br>Key Aspects of a Meter Stream:<br>- Status: Indicates whether the meter stream is active or inactive.<br>- Measurement System Association: Links the meter stream to a specific measurement system, ensuring that the data collected corresponds to a defined physical setup.<br>- Flow Computer Association: Identifies the flow computer responsible for managing and recording the measurement system's data.<br>Why is a Meter Stream Important?<br>A **meter stream** is a critical component in flow measurement, as it ensures that the measurement system is correctly integrated into the flow computer for accurate monitoring and reporting. Since each flow computer can handle multiple meter streams, proper configuration is essential for maintaining data integrity and traceability. | 0.1 |
- Loss: CoSENTLoss with these parameters: { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- gradient_accumulation_steps: 4
- learning_rate: 2e-05
- num_train_epochs: 10
- warmup_ratio: 0.1
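These values map directly onto SentenceTransformerTrainingArguments; a sketch of the corresponding configuration (the output directory is a placeholder):
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="measuring-embeddings-v4.2",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
)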
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 4
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 4
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Click to expand
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
2.3953 | 460 | 0.8121 | - |
2.4473 | 470 | 1.7843 | - |
2.4993 | 480 | 3.0975 | - |
2.5514 | 490 | 0.8585 | - |
2.6034 | 500 | 2.7931 | - |
2.6554 | 510 | 1.4479 | - |
2.7074 | 520 | 1.6132 | - |
2.7594 | 530 | 0.8279 | - |
2.8114 | 540 | 2.0968 | - |
2.8635 | 550 | 1.5086 | - |
2.9155 | 560 | 1.7022 | - |
2.9675 | 570 | 1.7252 | - |
3.0208 | 580 | 0.329 | - |
3.0728 | 590 | 3.0231 | - |
3.1248 | 600 | 1.2077 | 0.4939 |
3.1769 | 610 | 1.7389 | - |
3.2289 | 620 | 1.747 | - |
3.2809 | 630 | 2.608 | - |
3.3329 | 640 | 2.3748 | - |
3.3849 | 650 | 0.9898 | - |
3.4369 | 660 | 3.6768 | - |
3.4889 | 670 | 1.7257 | - |
3.5410 | 680 | 1.2324 | - |
3.5930 | 690 | 1.4847 | - |
3.6450 | 700 | 0.5312 | - |
3.6970 | 710 | 2.6352 | - |
3.7490 | 720 | 3.3293 | - |
3.8010 | 730 | 1.0756 | - |
3.8531 | 740 | 1.2176 | - |
3.9051 | 750 | 1.4641 | 0.2318 |
3.9571 | 760 | 0.4642 | - |
4.0052 | 770 | 0.8467 | - |
4.0572 | 780 | 0.6422 | - |
4.1092 | 790 | 1.2341 | - |
4.1612 | 800 | 1.2382 | - |
4.2133 | 810 | 0.8518 | - |
4.2653 | 820 | 2.2545 | - |
4.3173 | 830 | 1.0461 | - |
4.3693 | 840 | 1.4097 | - |
4.4213 | 850 | 1.6382 | - |
4.4733 | 860 | 3.3653 | - |
4.5254 | 870 | 1.6778 | - |
4.5774 | 880 | 2.4592 | - |
4.6294 | 890 | 2.3244 | - |
4.6814 | 900 | 0.7048 | 0.2351 |
4.7334 | 910 | 1.507 | - |
4.7854 | 920 | 1.9508 | - |
4.8375 | 930 | 0.9046 | - |
4.8895 | 940 | 1.3923 | - |
4.9415 | 950 | 2.8222 | - |
4.9935 | 960 | 0.8341 | - |
5.0416 | 970 | 1.7129 | - |
5.0936 | 980 | 0.5792 | - |
5.1456 | 990 | 1.5091 | - |
5.1977 | 1000 | 0.8392 | - |
5.2497 | 1010 | 1.3499 | - |
5.3017 | 1020 | 1.1315 | - |
5.3537 | 1030 | 0.8192 | - |
5.4057 | 1040 | 0.3839 | - |
5.4577 | 1050 | 0.887 | 0.3572 |
5.5098 | 1060 | 0.9957 | - |
5.5618 | 1070 | 1.4341 | - |
5.6138 | 1080 | 0.5888 | - |
5.6658 | 1090 | 1.4963 | - |
5.7178 | 1100 | 1.5912 | - |
5.7698 | 1110 | 1.3382 | - |
5.8218 | 1120 | 1.4406 | - |
5.8739 | 1130 | 1.0845 | - |
5.9259 | 1140 | 0.2931 | - |
5.9779 | 1150 | 0.8994 | - |
6.0260 | 1160 | 1.1391 | - |
6.0780 | 1170 | 1.4646 | - |
6.1300 | 1180 | 0.509 | - |
6.1821 | 1190 | 0.4108 | - |
6.2341 | 1200 | 0.418 | 0.2573 |
6.2861 | 1210 | 1.4609 | - |
6.3381 | 1220 | 1.4237 | - |
6.3901 | 1230 | 0.6612 | - |
6.4421 | 1240 | 1.52 | - |
6.4941 | 1250 | 0.9426 | - |
6.5462 | 1260 | 1.5047 | - |
6.5982 | 1270 | 0.2918 | - |
6.6502 | 1280 | 0.96 | - |
6.7022 | 1290 | 1.6685 | - |
6.7542 | 1300 | 0.6779 | - |
6.8062 | 1310 | 0.0522 | - |
6.8583 | 1320 | 1.5055 | - |
6.9103 | 1330 | 0.2947 | - |
6.9623 | 1340 | 0.7499 | - |
7.0104 | 1350 | 2.6794 | 0.1881 |
7.0624 | 1360 | 1.4322 | - |
7.1144 | 1370 | 0.1859 | - |
7.1664 | 1380 | 1.0946 | - |
7.2185 | 1390 | 1.0941 | - |
7.2705 | 1400 | 0.8873 | - |
7.3225 | 1410 | 0.3996 | - |
7.3745 | 1420 | 0.159 | - |
7.4265 | 1430 | 0.7672 | - |
7.4785 | 1440 | 0.6511 | - |
7.5306 | 1450 | 0.2682 | - |
7.5826 | 1460 | 1.5488 | - |
7.6346 | 1470 | 0.4513 | - |
7.6866 | 1480 | 0.7482 | - |
7.7386 | 1490 | 1.4327 | - |
7.7906 | 1500 | 1.0277 | 0.1801 |
7.8427 | 1510 | 0.4197 | - |
7.8947 | 1520 | 3.3415 | - |
7.9467 | 1530 | 0.7131 | - |
7.9987 | 1540 | 0.7276 | - |
8.0468 | 1550 | 1.1939 | - |
8.0988 | 1560 | 0.4333 | - |
8.1508 | 1570 | 1.3594 | - |
8.2029 | 1580 | 0.9792 | - |
8.2549 | 1590 | 0.4581 | - |
8.3069 | 1600 | 0.5785 | - |
8.3589 | 1610 | 0.4015 | - |
8.4109 | 1620 | 0.5693 | - |
8.4629 | 1630 | 1.4925 | - |
8.5150 | 1640 | 0.6028 | - |
8.5670 | 1650 | 0.2087 | 0.1802 |
8.6190 | 1660 | 1.0404 | - |
8.6710 | 1670 | 0.8293 | - |
8.7230 | 1680 | 1.1231 | - |
8.7750 | 1690 | 0.4747 | - |
8.8270 | 1700 | 1.0668 | - |
8.8791 | 1710 | 1.2665 | - |
8.9311 | 1720 | 0.3004 | - |
8.9831 | 1730 | 0.1333 | - |
9.0312 | 1740 | 1.0171 | - |
9.0832 | 1750 | 1.3999 | - |
9.1352 | 1760 | 0.1939 | - |
9.1873 | 1770 | 0.1591 | - |
9.2393 | 1780 | 0.1243 | - |
9.2913 | 1790 | 0.8689 | - |
9.3433 | 1800 | 0.4325 | 0.1501 |
9.3953 | 1810 | 0.5094 | - |
9.4473 | 1820 | 0.3178 | - |
9.4993 | 1830 | 0.211 | - |
9.5514 | 1840 | 1.3497 | - |
9.6034 | 1850 | 0.6287 | - |
9.6554 | 1860 | 0.4895 | - |
9.7074 | 1870 | 0.3925 | - |
9.7594 | 1880 | 0.4384 | - |
9.8114 | 1890 | 0.8487 | - |
9.8635 | 1900 | 0.9134 | - |
9.9155 | 1910 | 0.1522 | - |
9.9675 | 1920 | 0.3798 | - |
Framework Versions
- Python: 3.11.0
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
📄 License
This model is based on the Sentence Transformers library, which is licensed under the Apache License 2.0. Please refer to the Sentence Transformers repository for more details.
📚 Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CoSENTLoss
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}