
LLM2Vec-Meta-Llama-31-8B-Instruct-mntp

Developed by McGill-NLP
LLM2Vec is a simple method to convert decoder-only large language models into text encoders by enabling bidirectional attention, masked next-token prediction, and unsupervised contrastive learning.
Downloads: 386
Release date: 10/8/2024

Model Overview

LLM2Vec transforms decoder-only large language models into powerful text encoders through three simple steps: enabling bidirectional attention, masked next-token prediction (MNTP), and unsupervised contrastive learning. The resulting encoder applies to tasks such as text embedding, information retrieval, and text classification.

Model Features

Bidirectional Attention
Enables a bidirectional attention mechanism so every token can attend to the full context, improving the model's text comprehension.
Masked Next-Token Prediction
Adapts the model to its new bidirectional attention through a masked next-token prediction (MNTP) training task, improving its text encoding ability.
Unsupervised Contrastive Learning
Further improves embedding quality with unsupervised contrastive learning in the style of SimCSE.
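The three adaptation steps above can be sketched schematically. This is an illustrative numpy toy, not the library's API: the mask shapes, the MNTP position convention, and the InfoNCE helper are all assumptions made for the sketch.

```python
import numpy as np

seq_len = 4

# Step 1: bidirectional attention -- replace the causal
# (lower-triangular) mask with an all-ones mask so every token
# can attend to every other token.
causal_mask = np.tril(np.ones((seq_len, seq_len)))
bidirectional_mask = np.ones((seq_len, seq_len))

# Step 2: masked next-token prediction (MNTP) -- mask a token and
# apply the next-token loss at the position just before it
# (schematic: indices only, no real model).
tokens = ["The", "cat", "[MASK]", "here"]
masked_pos = tokens.index("[MASK]")
prediction_pos = masked_pos - 1  # loss applied at position i-1

# Step 3: unsupervised contrastive learning (SimCSE-style) -- the
# same batch is encoded twice with different dropout noise, and an
# InfoNCE loss pulls matching rows together.
def info_nce(a, b, temp=0.05):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sims = a @ b.T / temp  # (batch, batch); positives on the diagonal
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
view1 = rng.normal(size=(8, 16))
view2 = view1 + 0.01 * rng.normal(size=(8, 16))  # stand-in for a dropout-perturbed view
loss = info_nce(view1, view2)
```

With nearly identical views the loss is close to zero; training on real sentence pairs drives the encoder toward dropout-invariant embeddings.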

Model Capabilities

Text embedding
Information retrieval
Text classification
Text clustering
Semantic textual similarity
Feature extraction
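Semantic similarity between two embedded texts is typically scored with cosine similarity. A minimal sketch on toy vectors (the hand-written vectors stand in for real LLM2Vec sentence embeddings):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for sentence embeddings.
emb_a = np.array([0.2, 0.9, 0.1])
emb_b = np.array([0.25, 0.85, 0.05])  # near-duplicate meaning
emb_c = np.array([-0.8, 0.1, 0.6])   # unrelated meaning

sim_ab = cosine_similarity(emb_a, emb_b)  # close to 1
sim_ac = cosine_similarity(emb_a, emb_c)  # much lower
```

The same score underlies the clustering and retrieval capabilities: texts whose embeddings point in similar directions are treated as semantically close.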

Use Cases

Information retrieval
Web search queries
Retrieve passages that answer a given query; relevant passages receive the highest similarity scores.
Text classification
Document classification
Assign documents to categories based on the semantic similarity of their embeddings, yielding accurate classification results.
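The retrieval use case reduces to ranking passage embeddings by cosine similarity to a query embedding. A minimal sketch with toy vectors (a real pipeline would first encode the query and passages with the model; the `rank_passages` helper is an assumption for illustration):

```python
import numpy as np

def rank_passages(query_emb, passage_embs):
    """Return passage indices and scores sorted by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    p = passage_embs / np.linalg.norm(passage_embs, axis=1, keepdims=True)
    scores = p @ q
    order = np.argsort(-scores)  # best match first
    return order, scores[order]

# Toy stand-ins for encoder outputs.
query = np.array([1.0, 0.0, 0.5])
passages = np.array([
    [0.9, 0.1, 0.4],   # on-topic
    [-0.5, 0.8, 0.0],  # off-topic
    [1.0, 0.0, 0.6],   # near-exact match
])
order, scores = rank_passages(query, passages)
# order → [2, 0, 1]: the near-exact match ranks first, the off-topic passage last
```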