
Distil Ita Legal Bert

Developed by dlicari
A lightweight BERT model for the Italian legal domain, built with knowledge distillation and comprising only 4 Transformer layers.
Downloads 353
Release Time: 12/10/2022

Model Overview

This is a lightweight sentence-embedding model compressed from ITALIAN-LEGAL-BERT through knowledge distillation. It is optimized specifically for Italian legal texts and produces semantic vector representations close to those of the original teacher model.

Model Features

Lightweight and efficient
With only 4 Transformer layers, it significantly reduces computational resource requirements compared to a full BERT model.
Legal domain optimization
Trained specifically on Italian legal texts, giving it stronger semantic understanding in this domain.
Knowledge distillation technology
Achieves efficient compression by minimizing the MSE loss between its sentence embeddings and those of the teacher model (ITALIAN-LEGAL-BERT).
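The distillation objective described above can be sketched as a plain mean-squared error between student and teacher embeddings. Everything in the snippet below (the shapes, the linear projection head, the random data) is a simulated stand-in for illustration, not the actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins: teacher embeddings (as ITALIAN-LEGAL-BERT would
# produce) and the student's last hidden states, both hypothetical.
teacher_emb = rng.normal(size=(8, 768))
hidden = rng.normal(size=(8, 768))
W = rng.normal(scale=0.01, size=(768, 768))  # student projection head


def mse(a, b):
    """Mean-squared error between two embedding matrices."""
    return ((a - b) ** 2).mean()


# One gradient-descent step on the projection head, pushing the
# student's embeddings toward the teacher's.
student_emb = hidden @ W
loss_before = mse(student_emb, teacher_emb)
grad = 2 * hidden.T @ (student_emb - teacher_emb) / teacher_emb.size
W -= 0.1 * grad
loss_after = mse(hidden @ W, teacher_emb)
```

In real training the student is the full 4-layer Transformer, but the loss term works exactly like this `mse` call: one step should shrink the gap to the teacher.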

Model Capabilities

Generates sentence embeddings
Calculates sentence similarity
Legal text feature extraction
Semantic search
Text clustering
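As a sketch of how these capabilities combine for semantic search, the snippet below ranks a toy corpus by cosine similarity against a query vector; the random vectors are stand-ins for real model output:

```python
import numpy as np

rng = np.random.default_rng(1)
corpus_emb = rng.normal(size=(5, 256))   # stand-ins for document embeddings
query_emb = rng.normal(size=(256,))      # stand-in for a query embedding


def cosine_scores(query, corpus):
    """Cosine similarity of one query vector against each corpus row."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return c @ q


scores = cosine_scores(query_emb, corpus_emb)
ranking = np.argsort(scores)[::-1]  # indices of documents, best match first
```

With real embeddings from the model, `ranking[0]` would be the corpus document semantically closest to the query.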

Use Cases

Legal text processing
Legal document similarity retrieval
Quickly finds other documents semantically similar to a query legal document
Legal case clustering analysis
Automatically groups large volumes of legal cases and surfaces their common themes
Intelligent legal assistant
Relevant legal provisions recommendation
Automatically recommends related legal provisions based on user queries
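For the clustering use case, a minimal k-means over simulated embeddings gives the flavor (pure NumPy, toy data with two obvious topics; in practice the vectors would come from the model):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic "topics": 10 embeddings centered at 0, 10 centered at 1.
emb = np.vstack([rng.normal(0.0, 0.1, size=(10, 32)),
                 rng.normal(1.0, 0.1, size=(10, 32))])


def kmeans(x, k, iters=10):
    """Tiny k-means; initializes centers from evenly spaced points."""
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for c in range(k):
            pts = x[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return labels


labels = kmeans(emb, 2)
```

On well-separated embeddings like these, the two synthetic topics end up in two distinct clusters; a production pipeline would more likely use scikit-learn's `KMeans` on the model's real output.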