
Distilbert Base Uncased Finetuned Pos

Developed by tbosse
A lightweight token classification model based on DistilBERT, fine-tuned on the conll2003 dataset for part-of-speech tagging tasks.
Downloads: 17
Release Time: 3/20/2022

Model Overview

This model is a lightweight DistilBERT variant fine-tuned on the conll2003 dataset for token classification tasks such as part-of-speech tagging. It performs well on the evaluation set, achieving an F1 score of 0.9126. A minimal usage sketch follows.
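For illustration, here is a minimal sketch of loading the model with the Hugging Face transformers library. The repo id "tbosse/distilbert-base-uncased-finetuned-pos" is an assumption inferred from the model name and developer, and should be checked against the Hub before use.

```python
# Minimal sketch, assuming the model is hosted on the Hugging Face Hub
# under the repo id "tbosse/distilbert-base-uncased-finetuned-pos".
from transformers import pipeline

# The token-classification pipeline handles tokenization, inference,
# and mapping predicted label ids back to tag names.
tagger = pipeline(
    "token-classification",
    model="tbosse/distilbert-base-uncased-finetuned-pos",
)

for pred in tagger("The quick brown fox jumps over the lazy dog."):
    print(pred["word"], pred["entity"], round(pred["score"], 4))
```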

Model Features

Lightweight and Efficient
Based on the DistilBERT architecture, the model is 40% smaller than standard BERT while retaining about 97% of its performance.
High-Precision Token Classification
Fine-tuned on the conll2003 dataset, achieving an F1 score of 0.9126 and accuracy of 0.9246 on the evaluation set.
Fast Inference
The distilled architecture makes inference roughly 60% faster than the original BERT model.

Model Capabilities

Part-of-Speech Tagging
Named Entity Recognition
Text Token Classification

Use Cases

Natural Language Processing
Text Preprocessing
Used in preprocessing steps of NLP pipelines to identify parts of speech and named entities in text
Accuracy: 92.46%
Information Extraction
Extracting structured information from unstructured text; see the sketch after this list
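As a sketch of the extraction use case, the snippet below filters the pipeline's output down to words carrying a chosen set of tags. The helper name extract_by_tag and the tag names "NOUN" and "PROPN" are hypothetical; the model's actual label set should be read from its configuration.

```python
# Hypothetical extraction helper, built on the same assumed repo id as above.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="tbosse/distilbert-base-uncased-finetuned-pos",
    aggregation_strategy="simple",  # merge word-piece sub-tokens into whole words
)

def extract_by_tag(text, wanted_tags):
    """Return (word, tag) pairs whose predicted tag is in wanted_tags."""
    return [
        (pred["word"], pred["entity_group"])
        for pred in tagger(text)
        if pred["entity_group"] in wanted_tags
    ]

# Example: pull nouns and proper nouns out of raw text. The tag names are
# assumptions; inspect tagger.model.config.id2label for the real label set.
print(extract_by_tag("Apple opened a new office in Berlin.", {"NOUN", "PROPN"}))
```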