Multilingual ModernBert Large Preview

Developed by makiart
A large multilingual BERT model developed by the Algomatic team. It supports an 8192-token context length, was trained on approximately 60 billion tokens, and is suited to mask-filling tasks.
Released: 2/11/2025

Model Overview

This is a large multilingual BERT model designed for mask-filling tasks. Its large vocabulary and long-context support let it process text in many languages.
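
A minimal mask-filling sketch using the Hugging Face transformers fill-mask pipeline. The repository id `makiart/multilingual-ModernBert-large-preview` is inferred from the page title and author and may differ from the actual one; the mask token is read from the tokenizer rather than hard-coded.

```python
from transformers import pipeline

# Repo id inferred from the page title and author; verify before use.
model_id = "makiart/multilingual-ModernBert-large-preview"
fill_mask = pipeline("fill-mask", model=model_id)

# Use the tokenizer's own mask token instead of assuming "[MASK]".
mask = fill_mask.tokenizer.mask_token
for pred in fill_mask(f"The capital of France is {mask}."):
    print(pred["token_str"], round(pred["score"], 3))
```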

Model Features

Long Context Support
Supports a context length of 8192 tokens, making it suitable for long-text tasks.
Multilingual Capability
Capable of processing text in multiple languages (e.g., Korean, English, Chinese).
Efficient Inference
Supports FlashAttention, enabling efficient inference on compatible GPUs (see the loading sketch after this list).
Large Vocabulary
A vocabulary of 151,680 tokens, optimized for code as well as natural-language text; the tokenizer can distinguish levels of indentation.
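
A sketch of loading the model for masked-LM inference with FlashAttention 2 and feeding it a long input within the 8192-token window. The `attn_implementation` flag is a standard transformers option; it assumes the flash-attn package is installed and the GPU supports it, and the repo id is the same assumption as above.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "makiart/multilingual-ModernBert-large-preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    # Requires the flash-attn package and a supported GPU; use "sdpa" otherwise.
    attn_implementation="flash_attention_2",
).to("cuda")
model.eval()

# With an 8192-token window, inputs below that length need no chunking.
long_text = "A long multilingual document. " * 500
inputs = tokenizer(
    long_text, return_tensors="pt", truncation=True, max_length=8192
).to("cuda")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (batch, sequence_length, vocab_size)
```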

Model Capabilities

Multilingual text processing
Mask filling prediction
Long-text understanding

Use Cases

Text Processing
Korean Text Filling
Predict masked words in Korean sentences.
English Text Filling
Predict masked words in English sentences.
Chinese Text Filling
Predict masked words in Chinese sentences. A combined sketch covering all three languages follows this list.
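
A sketch of mask filling across the three listed languages. The example sentences are illustrative, not from the model card, and the repo id remains the assumption noted earlier.

```python
from transformers import pipeline

model_id = "makiart/multilingual-ModernBert-large-preview"  # assumed repo id
fill_mask = pipeline("fill-mask", model=model_id)
mask = fill_mask.tokenizer.mask_token

sentences = [
    f"우리의 대부분의 고민은 {mask}에서 시작된다.",   # Korean
    f"The weather today is {mask} than yesterday.",   # English
    f"今天天气非常{mask}。",                           # Chinese
]
for s in sentences:
    best = fill_mask(s)[0]  # top prediction for the masked position
    print(best["sequence"], f"(score={best['score']:.3f})")
```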