3gpp Embedding Model V0
Developed by iris49
A sentence-transformers model fine-tuned from BAAI/bge-base-en-v1.5 and optimized for Q&A over 3GPP technical documents; it maps text into a 768-dimensional vector space.
Downloads: 104
Released: 3/14/2025
Model Overview
This model is intended mainly for semantic textual similarity, semantic search, paraphrase mining, and text classification and clustering, and it is particularly well suited to the specialised content of 3GPP technical documents.
Model Features
Domain-specific optimization
Fine-tuned specifically on 3GPP technical documentation, it performs well on telecommunications-specific content.
Multi-dimensional output
Supports several output dimensionalities (768/512/256/128/64), so accuracy and efficiency can be traded off as needed; see the sketch after this list.
High-performance retrieval
Performs strongly on information-retrieval tasks, reaching 83.47% accuracy@1 and 99.27% accuracy@10.
Long-text handling
Supports sequence lengths of up to 512 tokens, making it suitable for long passages from technical documents.
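Below is a minimal sketch, assuming the truncate_dim option available in recent Sentence Transformers releases, of how one of the smaller output sizes can be selected at load time; the query string is purely illustrative:
# Illustrative only: request 256-dimensional embeddings instead of the full 768.
# Smaller dimensions trade a little retrieval accuracy for faster search and less storage
# (see the per-dimension evaluation table further below).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("iris49/3gpp-embedding-model-v0", truncate_dim=256)
embedding = model.encode(["What is the purpose of the Namf_Communication Service API?"])
print(embedding.shape)  # (1, 256)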
Model Capabilities
Semantic textual similarity computation
Specialised document information retrieval
Support for technical Q&A systems
Text classification and clustering
Paraphrase mining
Use Cases
Telecommunications document processing
3GPP standards Q&A systems
Used to build intelligent Q&A systems over 3GPP technical standards that can quickly locate the relevant technical content
Reaches 83.47% accuracy@1 on technical-document retrieval
Technical document similarity analysis
Analyses the semantic similarity between passages of different technical documents, supporting document understanding and management
Specialised information retrieval
Telecommunications protocol search
Quickly retrieves technical descriptions and definitions related to a specific telecommunications protocol
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:56041
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: What is the significance of the tables 6.1.6.2.5-1 and 6.1.6.2.6-1
in the context of the Namf_Communication Service API?
sentences:
- The 'notifId' attribute in the PolicyDataSubscription type serves as a Notification Correlation ID assigned by the NF service consumer. It is included when the 'ConditionalSubscriptionwithPartialNotification' or the 'ConditionalSubscriptionWithExcludeNotification' feature is supported. This ID is used to correlate notifications with the specific subscription request, ensuring that the NF service consumer can track and manage notifications effectively.
- The 'sessRuleReports' attribute in the 'ErrorReport' type is specifically used to report failures related to session rules, whereas the 'ruleReports' attribute reports failures related to PCC rules. 'sessRuleReports' contains an array of 'SessionRuleReport' objects, which provide details about the session rule failures. Like 'ruleReports', it is optional and can have one or more entries (cardinality 1..N).
- Tables 6.1.6.2.5-1 and 6.1.6.2.6-1 are significant in the Namf_Communication Service API as they provide the definitions for the types 'AssignEbiData' and 'AssignedEbiData', respectively. These tables outline the structure, attributes, and possibly the constraints or rules associated with these data types, which are essential for understanding and implementing the API's functionality related to EBI assignment and management.
- source_sentence: What document defines the basic principles for online charging,
and where is this information referenced?
sentences:
- The UDM (Unified Data Management) returns the Ranging and Sidelink Positioning Subscription Data for the UE (User Equipment) identified by the supi (Subscription Permanent Identifier). This data is retrieved using the GET method, which supports the URI query parameters outlined in table 6.1.3.37.3.1-1.
- The Nsmf_PDUSession_SMContextStatusNotify service operation is used by the SMF (Session Management Function) to notify its consumers about the status of an SM (Session Management) context related to a PDU (Packet Data Unit) Session. In the context of I-SMF (Intermediate SMF) context transfer, this service operation is used to indicate the transfer of the SM context to a new I-SMF or SMF set. It also allows the SMF to update the SMF-derived CN (Core Network) assisted RAN (Radio Access Network) parameters tuning in the AMF (Access and Mobility Management Function). Additionally, it can report DDN (Downlink Data Notification) failures and provide target DNAI (Data Network Access Identifier) information for the current or next PDU session.
- The basic principles for online charging are defined in TS 32.240 [1]. This information is referenced in section 5.2.1 of the document, which is part of the '5.2 Online charging scenario' chapter.
- source_sentence: What are the possible values for the 'ReportingLevel' enumeration,
and what do they indicate?
sentences:
- If protected User Plane (UP) messages reach the SN before the SN has received the SN Counter value in the SN Reconfiguration Complete message, the SN chooses the first unused KSN key of the UE to establish the security association. This ensures that communication can proceed securely even if the SN Counter value has not yet been received. Once the SN Counter value is received, the SN verifies it to ensure there is no KSN mismatch.
- 'The ''ReportingLevel'' enumeration has three possible values: ''SER_ID_LEVEL'', ''RAT_GR_LEVEL'', and ''SPON_CON_LEVEL''. ''SER_ID_LEVEL'' indicates that usage should be reported at the service ID and rating group combination level. ''RAT_GR_LEVEL'' indicates that usage should be reported at the rating group level. ''SPON_CON_LEVEL'' indicates that usage should be reported at the sponsor identity and rating group combination level. These levels help in categorizing and reporting usage data based on different granularities.'
- Structured data types in the Nudr_GroupIDmap Service API are more complex than simple data types. While simple data types represent single values like integers or strings, structured data types are composed of multiple simple data types or other structured data types, forming a more complex data structure. For example, a structured data type might represent a user profile containing fields for name, age, and address, each of which could be a simple data type. This allows for the representation of more intricate and hierarchical data within the API.
- source_sentence: What is the purpose of the Intermediate Spending Limit Report Request
procedure described in the document?
sentences:
- The Resource URI variables defined in table 6.1.3.8.2-1 for the 'sm-data' resource serve to dynamically construct the URI based on specific parameters. These variables include {apiRoot}, , and {supi}. The {apiRoot} variable specifies the base URL of the API, indicates the version of the API to be used, and {supi} represents the Subscription Permanent Identifier, which is used to uniquely identify the subscriber. These variables ensure that the URI is correctly formatted and points to the appropriate resource for the given subscriber and API version.
- The purpose of the Intermediate Spending Limit Report Request procedure is to allow the PCF (Policy Control Function) to request the status of additional policy counters available at the CHF (Charging Function) or to remove the request for the status of policy counters. The PCF can modify the list of subscribed policy counters based on its policy decisions, and the CHF responds by providing the policy counter status, optionally including pending statuses and their activation times, for the requested policy counters.
- When ABC online charging is employed, the TDF uses Debit / Reserve Units Request[Initial], update, or termination to convey charging information related to the detected application traffic. The OCS responds with Debit / Reserve Units Response, which includes quotas for rating groups or instructions on handling the application traffic (e.g., terminate, continue, reroute). The TDF must request a quota before service delivery. If only certain quotas are authorized by the OCS (e.g., due to insufficient credit), the rating groups without authorized quotas are handled according to the received Result Code value. The quota supervision mechanism is further described in TS 32.299 [50].
- source_sentence: What types of data structures are supported by the GET request
body on the resource described in table 5.2.11.3.4-2, and how do they influence
the request?
sentences:
- In Direct Communication mode, the NF Service consumer can subscribe to status change notifications of NF instances from the NRF. If the NF Service consumer is notified by the NRF or detects by itself (e.g., through a lack of response to a request) that the NF producer instance is no longer available, it selects another available NF producer instance within the same NF Set. In Indirect Communication mode, the SCP or NF Service consumer may also subscribe to status change notifications from the NRF and select another NF producer instance within the same NF Set if the original instance serving the UE becomes unavailable. The specific implementation details of how the SCP detects the unavailability of an NF producer instance are left to the implementation.
- The data structures supported by the GET request body on the resource are detailed in table 5.2.11.3.4-2. These structures define the format and content of the data that can be sent in the request body. They might include fields such as 'filterCriteria', 'sortOrder', or 'pagination', which influence how the server processes the request and returns the appropriate data.
- 'The specific triggers on the Ro interface that can lead to the termination of the IMS service include: 1) Reception of an unsuccessful Operation Result different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE in the Debit/Reserve Units Response message. 2) Reception of an unsuccessful Result Code different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE within the multiple units operation in the Debit/Reserve Units Response message when only one instance of the multiple units operation field is used. 3) Execution of the termination action procedure as defined in TS 32.299 when only one instance of the Multiple Unit Operation field is used. 4) Execution of the failure handling procedures when the Failure Action is set to ''Terminate'' or ''Retry & Terminate''. 5) Reception in the IMS-GWF of an Abort-Session-Request message from OCS.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE_base_3gpp-qa-v2_Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1 value: 0.8347103013864849 name: Cosine Accuracy@1
- type: cosine_accuracy@3 value: 0.9628129405256866 name: Cosine Accuracy@3
- type: cosine_accuracy@5 value: 0.9806391748898128 name: Cosine Accuracy@5
- type: cosine_accuracy@10 value: 0.9927196159954319 name: Cosine Accuracy@10
- type: cosine_precision@1 value: 0.8347103013864849 name: Cosine Precision@1
- type: cosine_precision@3 value: 0.32093764684189546 name: Cosine Precision@3
- type: cosine_precision@5 value: 0.1961278349779626 name: Cosine Precision@5
- type: cosine_precision@10 value: 0.09927196159954321 name: Cosine Precision@10
- type: cosine_recall@1 value: 0.8347103013864849 name: Cosine Recall@1
- type: cosine_recall@3 value: 0.9628129405256866 name: Cosine Recall@3
- type: cosine_recall@5 value: 0.9806391748898128 name: Cosine Recall@5
- type: cosine_recall@10 value: 0.9927196159954319 name: Cosine Recall@10
- type: cosine_ndcg@10 value: 0.9235193716202091 name: Cosine Ndcg@10
- type: cosine_mrr@10 value: 0.9002603606826465 name: Cosine Mrr@10
- type: cosine_map@100 value: 0.9006611894428589 name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1 value: 0.8341214467978801 name: Cosine Accuracy@1
- type: cosine_accuracy@3 value: 0.9630270694669973 name: Cosine Accuracy@3
- type: cosine_accuracy@5 value: 0.980835459752681 name: Cosine Accuracy@5
- type: cosine_accuracy@10 value: 0.9925947074463339 name: Cosine Accuracy@10
- type: cosine_precision@1 value: 0.8341214467978801 name: Cosine Precision@1
- type: cosine_precision@3 value: 0.32100902315566576 name: Cosine Precision@3
- type: cosine_precision@5 value: 0.19616709195053625 name: Cosine Precision@5
- type: cosine_precision@10 value: 0.09925947074463341 name: Cosine Precision@10
- type: cosine_recall@1 value: 0.8341214467978801 name: Cosine Recall@1
- type: cosine_recall@3 value: 0.9630270694669973 name: Cosine Recall@3
- type: cosine_recall@5 value: 0.980835459752681 name: Cosine Recall@5
- type: cosine_recall@10 value: 0.9925947074463339 name: Cosine Recall@10
- type: cosine_ndcg@10 value: 0.9232781516394674 name: Cosine Ndcg@10
- type: cosine_mrr@10 value: 0.8999735171216805 name: Cosine Mrr@10
- type: cosine_map@100 value: 0.9003855301087177 name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1 value: 0.8326047001302618 name: Cosine Accuracy@1
- type: cosine_accuracy@3 value: 0.9624382148783927 name: Cosine Accuracy@3
- type: cosine_accuracy@5 value: 0.9801930729287486 name: Cosine Accuracy@5
- type: cosine_accuracy@10 value: 0.9922913581128102 name: Cosine Accuracy@10
- type: cosine_precision@1 value: 0.8326047001302618 name: Cosine Precision@1
- type: cosine_precision@3 value: 0.3208127382927975 name: Cosine Precision@3
- type: cosine_precision@5 value: 0.19603861458574973 name: Cosine Precision@5
- type: cosine_precision@10 value: 0.09922913581128105 name: Cosine Precision@10
- type: cosine_recall@1 value: 0.8326047001302618 name: Cosine Recall@1
- type: cosine_recall@3 value: 0.9624382148783927 name: Cosine Recall@3
- type: cosine_recall@5 value: 0.9801930729287486 name: Cosine Recall@5
- type: cosine_recall@10 value: 0.9922913581128102 name: Cosine Recall@10
- type: cosine_ndcg@10 value: 0.9223721780180253 name: Cosine Ndcg@10
- type: cosine_mrr@10 value: 0.898869719250338 name: Cosine Mrr@10
- type: cosine_map@100 value: 0.8993021227310489 name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1 value: 0.8294462982459271 name: Cosine Accuracy@1
- type: cosine_accuracy@3 value: 0.9610642208383148 name: Cosine Accuracy@3
- type: cosine_accuracy@5 value: 0.9796399064970289 name: Cosine Accuracy@5
- type: cosine_accuracy@10 value: 0.991720347602648 name: Cosine Accuracy@10
- type: cosine_precision@1 value: 0.8294462982459271 name: Cosine Precision@1
- type: cosine_precision@3 value: 0.3203547402794382 name: Cosine Precision@3
- type: cosine_precision@5 value: 0.19592798129940583 name: Cosine Precision@5
- type: cosine_precision@10 value: 0.09917203476026483 name: Cosine Precision@10
- type: cosine_recall@1 value: 0.8294462982459271 name: Cosine Recall@1
- type: cosine_recall@3 value: 0.9610642208383148 name: Cosine Recall@3
- type: cosine_recall@5 value: 0.9796399064970289 name: Cosine Recall@5
- type: cosine_recall@10 value: 0.991720347602648 name: Cosine Recall@10
- type: cosine_ndcg@10 value: 0.9204835891487085 name: Cosine Ndcg@10
- type: cosine_mrr@10 value: 0.8965493659262566 name: Cosine Mrr@10
- type: cosine_map@100 value: 0.897020544909686 name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1 value: 0.8210595813779198 name: Cosine Accuracy@1
- type: cosine_accuracy@3 value: 0.9574775610713585 name: Cosine Accuracy@3
- type: cosine_accuracy@5 value: 0.9771595795935119 name: Cosine Accuracy@5
- type: cosine_accuracy@10 value: 0.9906497028960939 name: Cosine Accuracy@10
- type: cosine_precision@1 value: 0.8210595813779198 name: Cosine Precision@1
- type: cosine_precision@3 value: 0.3191591870237861 name: Cosine Precision@3
- type: cosine_precision@5 value: 0.19543191591870243 name: Cosine Precision@5
- type: cosine_precision@10 value: 0.09906497028960942 name: Cosine Precision@10
- type: cosine_recall@1 value: 0.8210595813779198 name: Cosine Recall@1
- type: cosine_recall@3 value: 0.9574775610713585 name: Cosine Recall@3
- type: cosine_recall@5 value: 0.9771595795935119 name: Cosine Recall@5
- type: cosine_recall@10 value: 0.9906497028960939 name: Cosine Recall@10
- type: cosine_ndcg@10 value: 0.9158816707476002 name: Cosine Ndcg@10
- type: cosine_mrr@10 value: 0.8908051588080549 name: Cosine Mrr@10
- type: cosine_map@100 value: 0.8913320555914594 name: Cosine Map@100
BGE_base_3gpp-qa-v2_Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
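As a minimal sketch of what this pipeline does, and assuming the repository exposes the underlying BertModel weights as Sentence Transformers checkpoints normally do, the same embedding can be reproduced with plain transformers by taking the [CLS] token and L2-normalizing it:
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("iris49/3gpp-embedding-model-v0")
model = AutoModel.from_pretrained("iris49/3gpp-embedding-model-v0")

texts = ["What document defines the basic principles for online charging?"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)
cls_embedding = output.last_hidden_state[:, 0]       # (1) CLS-token pooling
embedding = F.normalize(cls_embedding, p=2, dim=1)   # (2) Normalize module
print(embedding.shape)  # torch.Size([1, 768])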
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iris49/3gpp-embedding-model-v0")
# Run inference
sentences = [
'What types of data structures are supported by the GET request body on the resource described in table 5.2.11.3.4-2, and how do they influence the request?',
"The data structures supported by the GET request body on the resource are detailed in table 5.2.11.3.4-2. These structures define the format and content of the data that can be sent in the request body. They might include fields such as 'filterCriteria', 'sortOrder', or 'pagination', which influence how the server processes the request and returns the appropriate data.",
"The specific triggers on the Ro interface that can lead to the termination of the IMS service include: 1) Reception of an unsuccessful Operation Result different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE in the Debit/Reserve Units Response message. 2) Reception of an unsuccessful Result Code different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE within the multiple units operation in the Debit/Reserve Units Response message when only one instance of the multiple units operation field is used. 3) Execution of the termination action procedure as defined in TS 32.299 when only one instance of the Multiple Unit Operation field is used. 4) Execution of the failure handling procedures when the Failure Action is set to 'Terminate' or 'Retry & Terminate'. 5) Reception in the IMS-GWF of an Abort-Session-Request message from OCS.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
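For the intended 3GPP Q&A use case, a typical next step is to embed a question and a set of candidate passages and rank the passages by similarity. The passages below are shortened versions of answers that appear elsewhere in this card and serve only as an illustration:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("iris49/3gpp-embedding-model-v0")

query = "What document defines the basic principles for online charging?"
passages = [
    "The basic principles for online charging are defined in TS 32.240 [1].",
    "Tables 6.1.6.2.5-1 and 6.1.6.2.6-1 define the AssignEbiData and AssignedEbiData types.",
    "The 'notifId' attribute serves as a Notification Correlation ID assigned by the NF service consumer.",
]

# Embed the query and the passages, then rank the passages by cosine similarity.
query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 3]
best = int(scores.argmax())
print(passages[best])  # expected: the TS 32.240 passage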
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
cosine_accuracy@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
cosine_accuracy@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
cosine_accuracy@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
cosine_precision@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
cosine_precision@3 | 0.3209 | 0.321 | 0.3208 | 0.3204 | 0.3192 |
cosine_precision@5 | 0.1961 | 0.1962 | 0.196 | 0.1959 | 0.1954 |
cosine_precision@10 | 0.0993 | 0.0993 | 0.0992 | 0.0992 | 0.0991 |
cosine_recall@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
cosine_recall@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
cosine_recall@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
cosine_recall@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
cosine_ndcg@10 | 0.9235 | 0.9233 | 0.9224 | 0.9205 | 0.9159 |
cosine_mrr@10 | 0.9003 | 0.9 | 0.8989 | 0.8965 | 0.8908 |
cosine_map@100 | 0.9007 | 0.9004 | 0.8993 | 0.897 | 0.8913 |
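As a hedged illustration (toy queries and corpus invented for this card, not the actual evaluation split), metrics of this kind can be reproduced with InformationRetrievalEvaluator, here at a truncated 256-dimensional output:
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("iris49/3gpp-embedding-model-v0", truncate_dim=256)

# Toy evaluation data: query id -> text, corpus id -> text, query id -> relevant corpus ids.
queries = {"q1": "What document defines the basic principles for online charging?"}
corpus = {
    "d1": "The basic principles for online charging are defined in TS 32.240 [1].",
    "d2": "Tables 6.1.6.2.5-1 and 6.1.6.2.6-1 define the AssignEbiData and AssignedEbiData types.",
    "d3": "The 'notifId' attribute serves as a Notification Correlation ID assigned by the NF service consumer.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    name="dim_256",
    # Small cut-offs because the toy corpus only has three documents.
    accuracy_at_k=[1, 3],
    precision_recall_at_k=[1, 3],
    mrr_at_k=[3],
    ndcg_at_k=[3],
    map_at_k=[3],
)
results = evaluator(model)
print(results)  # e.g. {'dim_256_cosine_accuracy@1': 1.0, ...}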
Training Details
Training Dataset
json
- Dataset: json
- Size: 56,041 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
 | anchor | positive |
---|---|---|
type | string | string |
details | min: 15 tokens, mean: 30.56 tokens, max: 66 tokens | min: 42 tokens, mean: 109.65 tokens, max: 298 tokens |
- Samples:
anchor: What does the 'dataStatProps' attribute represent in the 'AnalyticsMetadataInfo' type, and what is its data type?
positive: The 'dataStatProps' attribute in the 'AnalyticsMetadataInfo' type represents a list of dataset statistical properties of the data used to generate the analytics. It is defined as an optional attribute with a data type of 'array(DatasetStatisticalProperty)' and a cardinality of 1..N, meaning it can contain one or more elements.
anchor: Why is it important to have standardized methods for resource management in the Nudm_SubscriberDataManagement Service API?
positive: Standardized methods for resource management in the Nudm_SubscriberDataManagement Service API are important because they ensure uniformity, predictability, and compatibility across different implementations and systems. This standardization facilitates seamless integration, reduces errors, and enhances the efficiency of managing subscriber data, which is critical for maintaining reliable communication services.
anchor: What is the purpose of the Nsmf_PDUSession_SMContextStatusNotify service operation in the context of I-SMF context transfer?
positive: The Nsmf_PDUSession_SMContextStatusNotify service operation is used by the SMF (Session Management Function) to notify its consumers about the status of an SM (Session Management) context related to a PDU (Packet Data Unit) Session. In the context of I-SMF (Intermediate SMF) context transfer, this service operation is used to indicate the transfer of the SM context to a new I-SMF or SMF set. It also allows the SMF to update the SMF-derived CN (Core Network) assisted RAN (Radio Access Network) parameters tuning in the AMF (Access and Mobility Management Function). Additionally, it can report DDN (Downlink Data Notification) failures and provide target DNAI (Data Network Access Identifier) information for the current or next PDU session.
- Loss:
MatryoshkaLoss
with these parameters:{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 }
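The following is a minimal sketch, not the author's original script, of how a dataset with these anchor/positive columns and the loss configuration above are typically put together with Sentence Transformers; the example pair is taken from the samples listed above:
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# An (anchor, positive) pair dataset with the same column layout as the training data.
train_dataset = Dataset.from_dict({
    "anchor": [
        "What does the 'dataStatProps' attribute represent in the 'AnalyticsMetadataInfo' type, and what is its data type?",
    ],
    "positive": [
        "The 'dataStatProps' attribute in the 'AnalyticsMetadataInfo' type represents a list of dataset statistical properties of the data used to generate the analytics.",
    ],
})

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives over the (anchor, positive) pairs
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],     # dims and weights as listed above
    matryoshka_weights=[1, 1, 1, 1, 1],
)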
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
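Continuing from the dataset/loss sketch above, and assuming the Sentence Transformers v3 trainer API with a made-up output_dir and save_strategy (neither is stated in this card), these values map onto a training run roughly as follows:
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-3gpp-qa-v2-matryoshka",  # assumed output directory
    eval_strategy="epoch",
    save_strategy="epoch",                        # assumed; load_best_model_at_end needs matching strategies
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,    # avoid duplicate texts within a batch for the ranking loss
)

trainer = SentenceTransformerTrainer(
    model=model,                  # from the sketch above
    args=args,
    train_dataset=train_dataset,  # (anchor, positive) pairs from the sketch above
    eval_dataset=train_dataset,   # toy reuse for illustration; use a held-out split in practice
    loss=loss,                    # MatryoshkaLoss from the sketch above
)
trainer.train()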
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0.0913 | 10 | 1.4273 | - | - | - | - | - |
0.1826 | 20 | 0.5399 | - | - | - | - | - |
0.2740 | 30 | 0.1252 | - | - | - | - | - |
0.3653 | 40 | 0.0625 | - | - | - | - | - |
0.4566 | 50 | 0.0507 | - | - | - | - | - |
0.5479 | 60 | 0.0366 | - | - | - | - | - |
0.6393 | 70 | 0.029 | - | - | - | - | - |
0.7306 | 80 | 0.0239 | - | - | - | - | - |
0.8219 | 90 | 0.0252 | - | - | - | - | - |
0.9132 | 100 | 0.0237 | - | - | - | - | - |
0.9954 | 109 | - | 0.9199 | 0.9195 | 0.9180 | 0.9150 | 0.9081 |
1.0046 | 110 | 0.026 | - | - | - | - | - |
1.0959 | 120 | 0.017 | - | - | - | - | - |
1.1872 | 130 | 0.02 | - | - | - | - | - |
1.2785 | 140 | 0.0125 | - | - | - | - | - |
1.3699 | 150 | 0.0134 | - | - | - | - | - |
1.4612 | 160 | 0.0128 | - | - | - | - | - |
1.5525 | 170 | 0.0123 | - | - | - | - | - |
1.6438 | 180 | 0.0097 | - | - | - | - | - |
1.7352 | 190 | 0.0101 | - | - | - | - | - |
1.8265 | 200 | 0.0124 | - | - | - | - | - |
1.9178 | 210 | 0.0116 | - | - | - | - | - |
2.0 | 219 | - | 0.9220 | 0.9216 | 0.9206 | 0.9184 | 0.9130 |
2.0091 | 220 | 0.012 | - | - | - | - | - |
2.1005 | 230 | 0.0111 | - | - | - | - | - |
2.1918 | 240 | 0.0101 | - | - | - | - | - |
2.2831 | 250 | 0.0101 | - | - | - | - | - |
2.3744 | 260 | 0.009 | - | - | - | - | - |
2.4658 | 270 | 0.0103 | - | - | - | - | - |
2.5571 | 280 | 0.009 | - | - | - | - | - |
2.6484 | 290 | 0.0083 | - | - | - | - | - |
2.7397 | 300 | 0.0076 | - | - | - | - | - |
2.8311 | 310 | 0.0093 | - | - | - | - | - |
2.9224 | 320 | 0.0104 | - | - | - | - | - |
2.9954 | 328 | - | 0.9234 | 0.9230 | 0.9221 | 0.9201 | 0.9156 |
3.0137 | 330 | 0.0104 | - | - | - | - | - |
3.1050 | 340 | 0.0089 | - | - | - | - | - |
3.1963 | 350 | 0.0084 | - | - | - | - | - |
3.2877 | 360 | 0.0082 | - | - | - | - | - |
3.3790 | 370 | 0.0089 | - | - | - | - | - |
3.4703 | 380 | 0.0083 | - | - | - | - | - |
3.5616 | 390 | 0.0061 | - | - | - | - | - |
3.6530 | 400 | 0.0065 | - | - | - | - | - |
3.7443 | 410 | 0.0063 | - | - | - | - | - |
3.8356 | 420 | 0.0084 | - | - | - | - | - |
3.9269 | 430 | 0.0083 | - | - | - | - | - |
3.9817 | 436 | - | 0.9235 | 0.9233 | 0.9224 | 0.9205 | 0.9159 |
- The saved checkpoint corresponds to the final logged row (epoch 3.9817, step 436), whose ndcg@10 scores match the evaluation metrics reported above.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.2.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}