
ALBERT XXLarge v2

Developed by: albert (Google Research)
ALBERT XXLarge v2 is a large language model pre-trained with a masked language modeling objective, featuring a parameter-shared Transformer architecture with 12 repeating layers and 223 million parameters.
Downloads: 19.79k
Release date: 3/2/2022

Model Overview

This model is the largest member of the ALBERT family. It reduces memory usage through cross-layer parameter sharing and performs strongly across a range of NLP tasks; it is primarily used for text feature extraction and for fine-tuning on downstream tasks.
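A minimal sketch of using the model for text feature extraction, assuming the Hugging Face transformers and torch packages and the albert-xxlarge-v2 checkpoint (this snippet is illustrative and not taken from the original card):

```python
# Minimal sketch: extract contextual features with albert-xxlarge-v2
# (assumes transformers, sentencepiece, and torch are installed).
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
model = AlbertModel.from_pretrained("albert-xxlarge-v2")

text = "ALBERT shares parameters across its Transformer layers."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# One hidden-state vector per input token (shape: [1, seq_len, 4096]).
features = outputs.last_hidden_state
print(features.shape)
```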

Model Features

Parameter-shared Architecture
Significantly reduces model memory consumption by sharing parameters across all Transformer layers (see the sketch after this list)
Dual-objective Pre-training
Utilizes both masked language modeling (MLM) and sentence order prediction (SOP) for pre-training
Large-scale Pre-training Data
Trained on BookCorpus and English Wikipedia, covering diverse text types
Version Improvements
The v2 version outperforms v1 thanks to different dropout rates, additional training data, and longer training
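A minimal sketch of what cross-layer parameter sharing looks like in practice, assuming the Hugging Face transformers implementation of ALBERT (the attribute names below are library internals, not something stated in this card):

```python
# Minimal sketch: inspect cross-layer parameter sharing in albert-xxlarge-v2
# (assumes the Hugging Face transformers package; attribute names are library-specific).
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert-xxlarge-v2")

# The config asks for 12 hidden layers, but the encoder stores only one
# layer group whose weights are reused at every depth.
print(model.config.num_hidden_layers)          # 12
print(len(model.encoder.albert_layer_groups))  # 1 shared group of weights
```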

Model Capabilities

Text feature extraction
Masked language prediction (see the sketch after this list)
Sentence order judgment
Downstream task fine-tuning
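A minimal sketch of the masked language prediction capability, assuming the Hugging Face transformers fill-mask pipeline and the albert-xxlarge-v2 checkpoint (the example sentence is illustrative):

```python
# Minimal sketch: masked language prediction with albert-xxlarge-v2
# (assumes the Hugging Face transformers package is installed).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert-xxlarge-v2")

# ALBERT's mask token is "[MASK]"; the pipeline returns the top candidate fills.
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```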

Use Cases

Natural Language Understanding
Text Classification
Applicable to sentiment analysis and topic classification tasks (a fine-tuning sketch follows the use cases below)
Achieves 96.8% accuracy on the SST-2 sentiment analysis task
Question Answering Systems
Used for building open-domain question answering systems
Scores 89.8/86.9 (F1/EM) on the SQuAD 2.0 QA task
Language Model Research
Language Representation Learning
Investigating the impact of the parameter-shared architecture on learned language representations
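A minimal fine-tuning sketch for the text classification use case, assuming the Hugging Face transformers Trainer API and the GLUE SST-2 dataset from the datasets library (the dataset choice and hyperparameters are illustrative assumptions, not settings from this card):

```python
# Minimal sketch: fine-tune albert-xxlarge-v2 for binary sentiment classification
# (assumes transformers, datasets, and torch are installed; hyperparameters are illustrative).
from datasets import load_dataset
from transformers import (AlbertTokenizer, AlbertForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
model = AlbertForSequenceClassification.from_pretrained("albert-xxlarge-v2", num_labels=2)

# SST-2 via the GLUE builder in the datasets library.
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(output_dir="albert-sst2",
                         per_device_train_batch_size=8,
                         num_train_epochs=3,
                         learning_rate=1e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()
```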