Entity Matching Jobs
This model is trained with the sentence-transformers framework and is designed specifically for semantic matching and similarity scoring of job titles.
Downloads: 47
Release Time: 1/8/2025
Model Overview
The model can map job titles and descriptions into a 1024-dimensional dense vector space, suitable for tasks such as semantic similarity calculation of job titles, job classification, and job search.
Model Features
Semantic Understanding of Job Titles
Optimized specifically for job titles and position descriptions, so it can accurately capture semantic relationships between different professions
Efficient Vector Representation
Converts text into 1024-dimensional dense vectors, facilitating subsequent similarity calculations and retrieval
Multiple Negatives Ranking Training
Trained with a multiple negatives ranking loss, which improves the model's ability to distinguish between similar professions
Model Capabilities
Job Title Similarity Calculation
Job Classification
Job Search
Job Matching
Text Vectorization
Use Cases
Human Resources
Job Matching System
Automatically scores the similarity between job descriptions in resumes and open positions
Improves recruitment efficiency and matching accuracy
Job Classification
Groups different surface forms of essentially the same job title under one standard title (see the code sketch at the end of this section)
Standardizes job classification systems
Data Analysis
Job Data Analysis
Analyzes the correlation and similarity between different professions
Identifies career development paths and transition possibilities
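To make the job classification and matching use cases concrete, the sketch below maps noisy job titles onto a small canonical taxonomy via cosine similarity. It reuses the engineai/entity_matching_jobs model ID from the usage section; the canonical title list itself is invented for illustration only.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("engineai/entity_matching_jobs")

# Illustrative canonical taxonomy (an assumption, not part of the model or its training data).
canonical_titles = ["Nurse", "Cashier", "Night Auditor", "Customer Service Representative"]
# Noisy inputs of the kind the model was trained on (misspellings, paraphrases).
noisy_titles = ["Nrs", "Cshier", "Consumer Services Agent"]

canonical_emb = model.encode(canonical_titles)
noisy_emb = model.encode(noisy_titles)

# Embeddings are L2-normalized, so model.similarity computes cosine similarity.
scores = model.similarity(noisy_emb, canonical_emb)  # shape: (3, 4)
for title, row in zip(noisy_titles, scores):
    best = int(row.argmax())
    print(f"{title!r} -> {canonical_titles[best]!r} (score={row[best].item():.3f})")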
SentenceTransformer
This is a model trained with sentence-transformers. It maps sentences and paragraphs to a 1024-dimensional dense vector space, enabling applications such as semantic textual similarity, semantic search, paraphrase mining, text classification, and clustering.
Quick Start
Features
- Maps sentences and paragraphs to a 1024-dimensional dense vector space.
- Applicable for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, etc.
Installation
First, install the Sentence Transformers library:
pip install -U sentence-transformers
Usage Examples
Basic Usage
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("engineai/entity_matching_jobs")
# Run inference
sentences = [
    'Nigh Auditor',  # misspelled variant of 'Night Auditor'; the model is trained to match noisy job titles
    'Night Auditor',
    'Security Shift Supervisor',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
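The same embeddings also support the job search use case: index a set of job titles once, then retrieve the closest matches for a query. Below is a minimal retrieval sketch using sentence_transformers.util.semantic_search; the corpus of titles is illustrative only.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("engineai/entity_matching_jobs")

# Small illustrative corpus of job titles to search over (not from the model card).
corpus = ["Night Auditor", "Security Shift Supervisor", "Front Desk Agent", "Hotel Night Manager"]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query_emb = model.encode("Nigh Auditor", convert_to_tensor=True)

# Rank corpus entries by cosine similarity and keep the top 3.
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))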
Documentation
Model Details
Model Description
| Property | Details |
|---|---|
| Model Type | Sentence Transformer |
| Maximum Sequence Length | 512 tokens |
| Output Dimensionality | 1024 dimensions |
| Similarity Function | Cosine Similarity |
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
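Because the Pooling module mean-pools token embeddings and the final Normalize() module rescales each vector to unit length, a plain dot product between outputs already equals cosine similarity. A quick sanity-check sketch (not part of the original card):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("engineai/entity_matching_jobs")
emb = model.encode(["Night Auditor", "Security Shift Supervisor"])

print(emb.shape)                    # (2, 1024): 1024-dimensional output
print(np.linalg.norm(emb, axis=1))  # approximately [1. 1.] thanks to the Normalize() module
print(float(emb[0] @ emb[1]))       # dot product of unit vectors equals cosine similarity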
Training Details
Training Dataset
Unnamed Dataset
- Size: 4,997 training samples
- Columns: text_a, text_b, and label
- Approximate statistics based on the first 1000 samples:

| | text_a | text_b | label |
|---|---|---|---|
| type | string | string | int |
| details | min: 3 tokens, mean: 5.66 tokens, max: 10 tokens | min: 3 tokens, mean: 5.48 tokens, max: 12 tokens | 1: 100.00% |

- Samples:

| text_a | text_b | label |
|---|---|---|
| Nrs | Nurse | 1 |
| Nirse | Nurse | 1 |
| Consumer Services Agent | Customer Service Representative | 1 |

- Loss: MultipleNegativesRankingLoss (sketched below) with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
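Every label in this dataset is 1 because MultipleNegativesRankingLoss only needs positive pairs: within a batch, text_b[i] is the positive for text_a[i], and every other text_b[j] acts as an in-batch negative. With the parameters above, the loss is conceptually a cross-entropy over scaled cosine similarities, as in this minimal sketch (an illustration, not the library's exact implementation):

import torch
import torch.nn.functional as F

def mnrl(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Pairwise cosine similarities between every text_a and text_b embedding in the batch,
    # scaled by 20.0 ("similarity_fct": "cos_sim", "scale": 20.0).
    scores = F.cosine_similarity(emb_a.unsqueeze(1), emb_b.unsqueeze(0), dim=-1) * scale
    # Diagonal entries are the true pairs; all off-diagonal entries are in-batch negatives.
    targets = torch.arange(len(emb_a))
    return F.cross_entropy(scores, targets)

# Tiny usage example with random stand-in embeddings (batch of 4, 1024-dim).
emb_a, emb_b = torch.randn(4, 1024), torch.randn(4, 1024)
print(mnrl(emb_a, emb_b))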
Evaluation Dataset
Unnamed Dataset
- Size: 5,707 evaluation samples
- Columns: text_a, text_b, and label
- Approximate statistics based on the first 1000 samples:

| | text_a | text_b | label |
|---|---|---|---|
| type | string | string | int |
| details | min: 3 tokens, mean: 5.69 tokens, max: 11 tokens | min: 3 tokens, mean: 5.54 tokens, max: 15 tokens | 1: 100.00% |

- Samples:

| text_a | text_b | label |
|---|---|---|
| Catering Supervisor | Food Service Supervisor | 1 |
| Catering Supervisor | Food Service Supervisor | 1 |
| Cshier | Cashier | 1 |

- Loss: MultipleNegativesRankingLoss with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 200
- learning_rate: 4e-05
- weight_decay: 0.01
- num_train_epochs: 40
- warmup_ratio: 0.2
- load_best_model_at_end: True
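Mapped onto the sentence-transformers 3.x training API, these non-default values look roughly as follows. This is a reconstruction under stated assumptions rather than the author's published script: the base checkpoint is not named in the card (xlm-roberta-large appears here only because the architecture lists a 1024-dimensional XLMRobertaModel), the output directory is invented, and the tiny datasets stand in for the real 4,997-pair and 5,707-pair datasets.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Assumed base checkpoint; the card only states an XLM-RoBERTa backbone with
# 1024-dim mean pooling and normalization.
model = SentenceTransformer("FacebookAI/xlm-roberta-large")

# Stand-ins for the real datasets, using the same (text_a, text_b, label) schema.
# MultipleNegativesRankingLoss ignores the all-1 label column and relies on in-batch negatives.
train_dataset = Dataset.from_dict({
    "text_a": ["Nrs", "Nirse", "Consumer Services Agent"],
    "text_b": ["Nurse", "Nurse", "Customer Service Representative"],
    "label": [1, 1, 1],
})
eval_dataset = Dataset.from_dict({
    "text_a": ["Cshier"],
    "text_b": ["Cashier"],
    "label": [1],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="entity_matching_jobs",  # assumed output path
    eval_strategy="steps",
    per_device_train_batch_size=200,
    learning_rate=4e-05,
    weight_decay=0.01,
    num_train_epochs=40,
    warmup_ratio=0.2,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()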
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 200
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 4e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 40
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.2
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 20.0 | 500 | 0.0692 |
| 40.0 | 1000 | 0.0508 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.1.0+cu118
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
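To approximate this environment, pinning the listed versions should be sufficient; the extra index URL for the CUDA 11.8 PyTorch wheel is an assumption about how the +cu118 build was installed.

pip install "sentence-transformers==3.3.1" "transformers==4.47.1" "accelerate==1.2.1" "datasets==3.2.0" "tokenizers==0.21.0"
pip install "torch==2.1.0" --index-url https://download.pytorch.org/whl/cu118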
License
No license information provided.
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Featured Recommended AI Models

- Jina Embeddings V3 (jinaai): a multilingual sentence embedding model supporting over 100 languages, specializing in sentence similarity and feature extraction tasks. Text Embedding, Transformers, multilingual.
- Ms Marco MiniLM L6 V2 (cross-encoder, Apache-2.0): a cross-encoder model trained on the MS MARCO passage ranking task for query-passage relevance scoring in information retrieval. Text Embedding, English.
- Opensearch Neural Sparse Encoding Doc V2 Distill (opensearch-project, Apache-2.0): a distillation-based sparse retrieval model optimized for OpenSearch, supporting inference-free document encoding with improved search relevance and efficiency over V1. Text Embedding, Transformers, English.
- SapBERT From PubMedBERT Fulltext (cambridgeltl, Apache-2.0): a biomedical entity representation model based on PubMedBERT, optimized to capture semantic relations through self-aligned pre-training. Text Embedding, English.
- GTE Large (thenlper, MIT): a sentence transformer model focused on sentence similarity and text embedding tasks, performing strongly across multiple benchmarks. Text Embedding, English.
- GTE Base En V1.5 (Alibaba-NLP, Apache-2.0): an English sentence transformer model focused on sentence similarity tasks, performing strongly on multiple text embedding benchmarks. Text Embedding, Transformers.
- GTE Multilingual Base (Alibaba-NLP, Apache-2.0): a multilingual sentence embedding model supporting over 50 languages, suitable for tasks such as sentence similarity calculation. Text Embedding, Transformers, multilingual.
- polyBERT (kuelumbus): a chemical language model for fully machine-driven, ultrafast polymer informatics; it maps PSMILES strings to 600-dimensional dense fingerprints that numerically represent polymer chemical structures. Text Embedding, Transformers.
- Bert Base Turkish Cased Mean Nli Stsb Tr (emrecan, Apache-2.0): a sentence embedding model based on Turkish BERT, optimized for semantic similarity tasks. Text Embedding, Transformers.
- GIST Small Embedding V0 (avsolatorio, MIT): a text embedding model fine-tuned from BAAI/bge-small-en-v1.5, trained on the MEDI dataset and MTEB classification task datasets, optimized for query encoding in retrieval tasks. Text Embedding, Safetensors, English.