NepaliBERT

Developed by Shushant
A masked language model based on Nepali news data, trained on approximately 10 million Nepali sentences sourced from multiple Nepali news websites, primarily containing news content.
Downloads 701.51k
Release Date: 3/2/2022

Model Overview

This model is a fine-tuned Nepali masked language model based on the BERT architecture, intended mainly for Nepali natural language processing tasks.

Model Features

Large-scale Nepali training data
The training data includes approximately 10 million Nepali sentences, primarily from news websites, with a text volume of about 4.6GB.
High-performance evaluation results
Performs well on the evaluation set, with a loss of 1.0495 and a perplexity of 8.56.
GPU-accelerated training
Trained using a Tesla V100 GPU, taking approximately 3 days, 8 hours, and 57 minutes.

Model Capabilities

Nepali text understanding
Nepali text generation
Nepali sentiment analysis

Use Cases

Natural Language Processing
Nepali sentiment analysis
Used to analyze the sentiment of Nepali tweets
Outperforms other existing Nepali masked language models
Nepali text completion
Fills in missing parts of Nepali sentences
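
As a masked language model, NepaliBERT can be queried through the standard Hugging Face fill-mask interface. The sketch below shows the typical usage pattern; the model identifier `Shushant/nepaliBERT` is an assumption based on the developer name listed above, and the `[MASK]` token is the usual BERT convention — check the actual model card for the exact id and mask token.

```python
def mask_word(sentence: str, index: int, mask_token: str = "[MASK]") -> str:
    """Replace the word at `index` (whitespace-split) with the mask token,
    producing an input suitable for a fill-mask model."""
    words = sentence.split()
    words[index] = mask_token
    return " ".join(words)


def predict_masked(sentence: str, model_id: str = "Shushant/nepaliBERT"):
    """Query the model via the fill-mask pipeline (downloads weights on first use).

    The model id is a hypothetical guess from the author name; replace it with
    the real repository id. Returns a list of dicts with `token_str`, `score`,
    and the completed `sequence`.
    """
    from transformers import pipeline  # heavy import kept local to the call

    fill = pipeline("fill-mask", model=model_id)
    return fill(sentence)
```

For example, masking one word of a Nepali sentence with `mask_word(...)` and passing the result to `predict_masked(...)` returns the model's top candidates for the missing word, which is how the text-completion use case above would be exercised in practice.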