
# Pre-trained Language Models

## Nezha Cn Base
NEZHA is a neural contextualized representation model for Chinese language understanding, based on the Transformer architecture and developed by Huawei Noah's Ark Lab.
Tags: Large Language Model, Transformers · By sijunhe · 1,443 downloads · 12 likes
## Bros Base Uncased
BROS is a pre-trained language model focused on text and layout understanding, designed to efficiently extract key information from documents.
Tags: Large Language Model, Transformers · By naver-clova-ocr · 53.22k downloads · 18 likes
## Longformer Base Plagiarism Detection
This model, based on the Longformer architecture, is trained to detect machine-paraphrased plagiarized text, a task with clear value for maintaining academic integrity.
Tags: Text Classification, Transformers, English · By jpwahle · 59.47k downloads · 13 likes
## Biosyn Sapbert Ncbi Disease
A biomedical entity recognition model based on BioBERT, developed by DMIS Lab at Korea University and specializing in feature extraction on the NCBI disease dataset.
Tags: Text Embedding, Transformers · By dmis-lab · 580 downloads · 2 likes
## Bert Base Arabic Camelbert Ca
License: Apache-2.0
CAMeLBERT is a collection of BERT models optimized for Arabic language variants; the CA version is pre-trained specifically on Classical Arabic texts.
Tags: Large Language Model, Arabic · By CAMeL-Lab · 1,128 downloads · 12 likes
© 2025 AIbase