
LLMLingua-2 XLM-RoBERTa Large MeetingBank

Developed by Microsoft
LLMLingua-2 is a token classification model fine-tuned based on the XLM-RoBERTa large model, designed for task-agnostic prompt compression.
Downloads: 33.74k
Release Date: 3/17/2024

Model Overview

This model is used for token classification in task-agnostic prompt compression, with the retention probability of each token serving as a compression metric.
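To make the compression metric concrete, here is a minimal sketch of the selection step: given a retention probability per token (which the model's token-classification head would produce), keep the top fraction of tokens corresponding to the target compression rate, in original order. The function name and the probability values are invented for illustration, not part of the model's API.

```python
def compress_by_retention(tokens, probs, rate):
    """Keep the `rate` fraction of tokens with the highest retention
    probability, preserving their original order.

    Hypothetical helper illustrating LLMLingua-2's compression metric;
    in practice the probabilities come from the fine-tuned
    XLM-RoBERTa token-classification head.
    """
    assert len(tokens) == len(probs)
    keep = max(1, int(len(tokens) * rate))
    # Indices of the `keep` tokens with the highest retention probability.
    top = sorted(range(len(tokens)), key=lambda i: probs[i], reverse=True)[:keep]
    # Re-sort indices so the kept tokens appear in their original order.
    return [tokens[i] for i in sorted(top)]

tokens = ["The", "meeting", "was", "held", "on", "Monday", "to", "discuss", "budget"]
probs = [0.2, 0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.85, 0.95]  # invented scores
print(compress_by_retention(tokens, probs, 0.5))
# → ['meeting', 'Monday', 'discuss', 'budget']
```

Because selection is a simple top-k over per-token scores, the same procedure works regardless of the downstream task, which is what makes the approach task-agnostic.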

Model Features

Task-Agnostic Prompt Compression
Capable of performing efficient prompt compression without relying on specific tasks.
Data Distillation Method
Trained using a data distillation approach, improving compression efficiency and fidelity.
Multilingual Support
Based on the XLM-RoBERTa model, supporting multilingual processing.

Model Capabilities

Text Compression
Token Classification
Multilingual Processing

Use Cases

Meeting Minutes Processing
Meeting Minutes Compression
Compress lengthy meeting minutes while retaining key information.
Improves efficiency for downstream tasks (e.g., QA and summarization).
Prompt Optimization
LLM Prompt Compression
Reduce the length of input prompts while maintaining semantic integrity.
Lowers computational costs and improves inference speed.
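The use cases above can be tried through the `llmlingua` Python package, which wraps this model. The sketch below defers loading the model inside a function, since constructing the compressor downloads the weights; the exact keyword arguments are based on the library's documented interface, so treat this as a starting point rather than a definitive recipe.

```python
def compress_meeting_prompt(text: str, rate: float = 0.33) -> str:
    """Compress a long prompt (e.g., meeting minutes) with LLMLingua-2.

    Requires `pip install llmlingua`; the import and model download are
    deferred so that merely defining this function stays cheap.
    """
    from llmlingua import PromptCompressor

    compressor = PromptCompressor(
        model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
        use_llmlingua2=True,  # select the LLMLingua-2 token-classification path
    )
    result = compressor.compress_prompt(text, rate=rate)
    return result["compressed_prompt"]

# Example (downloads model weights on first call):
# short_prompt = compress_meeting_prompt(long_meeting_minutes, rate=0.3)
```

A rate of 0.33 asks the compressor to keep roughly a third of the original tokens; lower rates compress harder at the cost of more information loss.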