Bert Base Multilingual Cased Pos English
Developed by QCRI
This is a multilingual BERT model fine-tuned for English part-of-speech (POS) tagging. It was fine-tuned on the Penn Treebank and achieves an F1 score of 96.69.
Downloads 4,074
Release Time: 4/27/2022
Model Overview
This model is a fine-tuned multilingual BERT model for English POS tagging, used to assign a part-of-speech tag to each token in a text.
Model Features
High-Accuracy POS Tagging
Fine-tuned on the Penn Treebank, achieving an F1 score of 96.69 and high tagging accuracy.
Based on Multilingual BERT
Uses bert-base-multilingual-cased as the base model, providing multilingual understanding capabilities.
Easy to Use
Can be directly called via the transformers pipeline, making it easy to integrate into existing systems.
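A minimal sketch of the pipeline-based usage described above. The Hugging Face model id `QCRI/bert-base-multilingual-cased-pos-english` is assumed from the card title and developer and should be verified before use.

```python
from transformers import pipeline

# Assumed model id, derived from the card title and the developer (QCRI).
MODEL_ID = "QCRI/bert-base-multilingual-cased-pos-english"

# Token-classification pipeline; each output entry carries a Penn Treebank POS tag.
pos_tagger = pipeline("token-classification", model=MODEL_ID)

for item in pos_tagger("The quick brown fox jumps over the lazy dog."):
    print(item["word"], item["entity"])
```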
Model Capabilities
English POS Tagging
Text Token Classification
Use Cases
Natural Language Processing
Text Preprocessing
Used in the text preprocessing stage of NLP pipelines to provide POS information for subsequent tasks (see the sketch after this list).
Improves the performance of downstream NLP tasks.
Linguistic Research
Assists linguists with grammatical analysis and the study of linguistic phenomena.
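A minimal sketch of the text-preprocessing use case above, assuming the same model id as in the earlier example: keep only content words (Penn Treebank noun, verb, and adjective tags) before handing text to a downstream task. The tag prefixes and the helper function are illustrative choices, not part of the model card.

```python
from transformers import pipeline

# Same assumed model id as in the earlier sketch.
pos_tagger = pipeline(
    "token-classification",
    model="QCRI/bert-base-multilingual-cased-pos-english",
)

# Penn Treebank tag prefixes for nouns, verbs, and adjectives (illustrative filter).
CONTENT_TAG_PREFIXES = ("NN", "VB", "JJ")

def content_words(text: str) -> list[str]:
    """Return tokens whose POS tag marks them as content words."""
    return [
        item["word"]
        for item in pos_tagger(text)
        if item["entity"].startswith(CONTENT_TAG_PREFIXES)
    ]

print(content_words("The quick brown fox jumps over the lazy dog."))
```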