🚀 Bias Detection Model
An English sequence classification model for detecting bias and fairness in sentences.
🚀 Quick Start
This English sequence classification model is trained on the MBAD Dataset to detect bias and fairness in sentences (news articles). It's built on the distilbert-base-uncased model and trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
✨ Features
- Bias Detection: Effectively detects bias and fairness in English sentences, especially in news articles.
- Trained on MBAD Dataset: Utilizes the MBAD Dataset for training to ensure high-quality performance.
📦 Installation
No specific installation steps are provided in the original document.
💻 Usage Examples
Basic Usage
The easiest way is to use the Hugging Face Inference API; alternatively, you can use the `pipeline` object offered by the transformers library:
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("d4data/bias-detection-model")
model = TFAutoModelForSequenceClassification.from_pretrained("d4data/bias-detection-model")

classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)
classifier("The irony, of course, is that the exhibit that invites people to throw trash at vacuuming Ivanka Trump lookalike reflects every stereotype feminists claim to stand against, oversexualizing Ivanka's body and ignoring her hard work.")
```
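A `text-classification` pipeline returns a list of dicts, each with a `label` and a `score`. The helper below is a minimal sketch (not part of the model card) for extracting the top prediction; the label strings shown in the mocked output are assumptions about this model's config and may differ from the actual label names.

```python
# Hypothetical helper: pick the highest-scoring prediction from a
# transformers text-classification pipeline result.
def top_prediction(results):
    """Return the (label, score) pair with the highest score, score rounded to 4 places."""
    best = max(results, key=lambda r: r["score"])
    return best["label"], round(best["score"], 4)

# Mocked pipeline output (label names are assumed, not verified):
mock_output = [{"label": "Biased", "score": 0.9938}]
label, score = top_prediction(mock_output)
print(label, score)  # Biased 0.9938
```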
📚 Documentation
Model Information
| Property | Details |
|----------|---------|
| Model Type | English sequence classification model |
| Training Data | MBAD Dataset |
| Carbon Emission | 0.319355 kg |
Performance Metrics
| Train Accuracy | Validation Accuracy | Train Loss | Test Loss |
|----------------|---------------------|------------|-----------|
| 76.97 | 62.00 | 0.45 | 0.96 |
🔧 Technical Details
This model was built on top of the distilbert-base-uncased model. It was trained for 30 epochs with a batch size of 16, a learning rate of 5e-5, and a maximum sequence length of 512.
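The hyperparameters above can be collected into a single config for reference. This is an illustrative sketch, not the authors' actual training script; the variable names and the example dataset size are assumptions.

```python
# Hyperparameters as stated in the model card (names are illustrative).
training_config = {
    "base_model": "distilbert-base-uncased",
    "epochs": 30,
    "batch_size": 16,
    "learning_rate": 5e-5,
    "max_sequence_length": 512,
}

def steps_per_epoch(num_examples, batch_size=16):
    """Optimizer steps per epoch, using ceiling division."""
    return -(-num_examples // batch_size)

# Hypothetical dataset size of 1700 examples (the actual MBAD size is not
# stated here): ceil(1700 / 16) = 107 steps per epoch.
print(steps_per_epoch(1700))  # 107
```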
📄 License
No license information is provided in the original document.
👨‍💻 Author
This model is part of the research topic "Bias and Fairness in AI" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model, or dataset), please star the repository:
Bias & Fairness in AI, (2022), GitHub repository, https://github.com/dreji18/Fairness-in-AI
Example Widgets
- Biased example 1: "Nevertheless, Trump and other Republicans have tarred the protests as havens for terrorists intent on destroying property."
- Biased example 2: "Billie Eilish issues apology for mouthing an anti-Asian derogatory term in a resurfaced video."
- Biased example 3: "Christians should make clear that the perpetuation of objectionable vaccines and the lack of alternatives is a kind of coercion."
- Non-Biased example 1: "There have been a protest by a group of people"
- Non-Biased example 2: "While emphasizing he's not singling out either party, Cohen warned about the danger of normalizing white supremacist ideology."