# Frozen word embeddings
## Distilbert Word2vec 256k MLM 250k
This model combines word2vec embeddings with the DistilBERT architecture for natural language processing tasks. The word2vec embedding layer (a 256k-token vocabulary) is trained on large-scale corpora and kept frozen, while the rest of the model is fine-tuned with masked language modeling (MLM); see the sketch after this entry.
Tags: Large Language Model, Transformers · Maintainer: vocab-transformers
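The frozen-embedding setup can be reproduced with the Hugging Face Transformers library. The sketch below is a minimal illustration, not the authors' training script: the checkpoint ID is an assumption, and the only step specific to this approach is disabling gradients on the input embedding matrix before MLM fine-tuning.

```python
# Minimal sketch: load a DistilBERT MLM model and freeze its word-embedding
# matrix so only the remaining weights are updated during fine-tuning.
from transformers import AutoModelForMaskedLM

# Assumed checkpoint name; substitute the actual model ID.
model_id = "vocab-transformers/distilbert-word2vec_256k-MLM_250k"
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Freeze the word2vec-initialized input embedding matrix.
model.get_input_embeddings().weight.requires_grad = False

# The remaining weights stay trainable. Note: if the MLM output projection
# is weight-tied to the input embeddings, that shared matrix is frozen too.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"trainable parameters: {trainable:,}, frozen parameters: {frozen:,}")
```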
## Dense Encoder Msmarco Distilbert Word2vec256k
A sentence encoder based on msmarco-word2vec256000-distilbert-base-uncased. It uses a word2vec-initialized 256k-token vocabulary and is designed for sentence-similarity tasks; see the usage sketch after this entry.
Tags: Text Embedding, Transformers · Maintainer: vocab-transformers
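For the sentence-similarity use case, a minimal sketch with the sentence-transformers library could look as follows; the model ID is assumed from the entry above, and the example sentences are placeholders.

```python
# Minimal sketch: encode sentences with the dense encoder and compare them
# by cosine similarity using the sentence-transformers library.
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint name; substitute the actual model ID.
model = SentenceTransformer(
    "vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k"
)

sentences = [
    "How do frozen word embeddings work?",
    "The embedding layer is kept fixed while the rest of the model is trained.",
    "MS MARCO is a passage ranking dataset.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first sentence and the other two.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)
```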