LLM2Vec-Sheared-LLaMA-mntp
Developed by McGill-NLP
LLM2Vec is a simple recipe for transforming decoder-only large language models into strong text encoders in three steps: enabling bidirectional attention, masked next-token prediction (MNTP) training, and unsupervised contrastive learning.
Downloads 2,430
Release Time : 4/4/2024
Model Overview
LLM2Vec is a technical solution for converting large language models into efficient text encoders, suitable for tasks such as text similarity calculation and information retrieval.
Model Features
Bidirectional Attention Mechanism
By replacing the causal attention mask with a bidirectional one, every token can attend to both preceding and following tokens, giving the encoder access to full-sequence context.
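The change from causal to bidirectional attention amounts to swapping the attention mask. A minimal sketch (the function names here are illustrative, not from the LLM2Vec codebase):

```python
import numpy as np

def causal_mask(n):
    # Standard decoder mask: position i may attend only to positions <= i
    # (lower triangle is True, upper triangle is masked out).
    return np.tril(np.ones((n, n), dtype=bool))

def bidirectional_mask(n):
    # LLM2Vec's first step: drop the causal constraint so every token
    # attends to the full sequence, as in an encoder.
    return np.ones((n, n), dtype=bool)

# For a 4-token sequence, the causal mask hides the upper triangle
# (6 of 16 entries) that the bidirectional mask exposes.
m_causal = causal_mask(4)
m_bi = bidirectional_mask(4)
```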
Masked Next-Token Prediction
Adapts the model to its new bidirectional attention: tokens are randomly masked, and each masked token is predicted from the hidden state at the preceding position, matching the decoder's next-token convention.
Unsupervised Contrastive Learning
Refines the embeddings with an unsupervised contrastive objective: two representations of the same sentence are pulled together while other sentences in the batch are pushed apart, requiring no labeled data.
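The contrastive objective can be sketched as a SimCSE-style InfoNCE loss, where two embeddings of the same sentence (e.g. from two dropout passes) are the positive pair and other sentences in the batch serve as negatives. A minimal NumPy version (illustrative, not the training code):

```python
import numpy as np

def info_nce(z1, z2, tau=0.05):
    # z1, z2: (batch, dim) arrays; row i of z1 and row i of z2 are two
    # views of the same sentence. Cross-entropy pulls matching rows
    # (the diagonal) together and pushes other rows apart.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sims = (z1 @ z2.T) / tau  # temperature-scaled cosine similarities
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```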
Simple Conversion Solution
Converts a decoder LLM into an efficient text encoder in just three simple steps.
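Assuming the public `llm2vec` package and the McGill-NLP checkpoints on the Hugging Face Hub, usage looks roughly like this (model loading requires downloading the weights, so treat this as a sketch rather than a verified script):

```python
import torch
from llm2vec import LLM2Vec

# Base MNTP checkpoint plus the unsupervised-SimCSE PEFT weights;
# names follow the McGill-NLP Hub naming and may differ per release.
l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp",
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-unsup-simcse",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

# encode() returns one embedding per input text.
embeddings = l2v.encode(["LLM2Vec turns decoders into text encoders."])
```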
Model Capabilities
Text Embedding
Text Semantic Similarity Calculation
Information Retrieval
Text Classification
Text Clustering
Feature Extraction
Use Cases
Information Retrieval
Web Search Query Matching
Retrieves relevant passages based on user queries
Highly accurate query-document matching
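Once queries and passages are embedded, retrieval reduces to ranking passages by cosine similarity to the query vector. A minimal sketch with placeholder vectors (in practice these would come from the encoder above):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_passages(query_vec, passage_vecs):
    # Return passage indices sorted by similarity to the query,
    # most similar first - the core of embedding-based retrieval.
    scores = [cosine(query_vec, p) for p in passage_vecs]
    return sorted(range(len(passage_vecs)), key=lambda i: -scores[i])
```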
Text Analysis
Document Similarity Analysis
Calculates semantic similarity between different documents
Effective document clustering and classification
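Clustering follows the same pattern: compute pairwise cosine similarities between document embeddings, then group documents that exceed a similarity threshold. A greedy sketch (the threshold and grouping strategy are illustrative choices, not from the model card):

```python
import numpy as np

def similarity_matrix(doc_vecs):
    # Pairwise cosine similarities between L2-normalized embeddings.
    v = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return v @ v.T

def threshold_clusters(doc_vecs, thresh=0.8):
    # Greedy clustering: join the first existing cluster whose seed
    # document is similar enough, otherwise start a new cluster.
    sims = similarity_matrix(np.asarray(doc_vecs, dtype=float))
    clusters = []  # each cluster is a list of document indices
    for i in range(len(doc_vecs)):
        for c in clusters:
            if sims[i, c[0]] >= thresh:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```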