
Phi3 Hallucination Judge Merge

Developed by grounded-ai
This model is designed to detect hallucinations in language model outputs, i.e., responses that are fluent and coherent but factually incorrect or unsupported by the given context.
Release Time: 4/25/2025

Model Overview

A specialized binary classification model for detecting hallucinations in language model outputs, fine-tuned from a Phi-3 base model to deliver strong detection performance.

Model Features

High-performance Hallucination Detection
Excels in hallucination detection tasks with an F1 score of 0.81, surpassing multiple cutting-edge language models.
Lightweight Adapter
Uses a PEFT adapter for efficient fine-tuning without modifying the base model weights (a loading sketch follows this feature list).
Standardized Prompt Strategy
Provides standardized input formats and prompt strategies for easy integration into existing systems.
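Since the card describes the judge as a PEFT adapter on top of a Phi-3 base model, the following is a minimal loading sketch under those assumptions; the repository ids, dtype, and device settings are placeholders for illustration, not names documented on this card.

```python
# Minimal sketch, assuming the judge ships as a PEFT adapter on a Phi-3 base
# checkpoint. Repo ids below are placeholders, not confirmed names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "microsoft/Phi-3-mini-4k-instruct"        # assumed base model
ADAPTER_REPO = "grounded-ai/phi3-hallucination-judge"  # placeholder adapter id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Attach the lightweight adapter; the base model weights stay untouched.
model = PeftModel.from_pretrained(base, ADAPTER_REPO)
model.eval()
```

Because the title mentions a "Merge", the adapter may already be merged into a standalone checkpoint; in that case the PeftModel step can be skipped and the merged repository loaded directly with AutoModelForCausalLM.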

Model Capabilities

Hallucination Detection
Text Classification
Language Model Output Evaluation

Use Cases

Language Model Quality Assessment
Model Output Verification: verify the factual accuracy of language model-generated content; accurately identifies 85% of hallucinated outputs.
Content Moderation
Fact-checking: automatically detect factual errors in generated content, as sketched below; achieves an 87% recall rate in error detection.
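Continuing the loading sketch above, a fact-checking loop such as the one below could classify each generated answer as hallucinated or grounded. The prompt template and the HALLUCINATED/GROUNDED labels are illustrative assumptions, not the card's documented prompt strategy.

```python
# Illustrative verification loop (continues the loading sketch above).
# The prompt wording and output labels are assumptions for demonstration.
def judge(question: str, answer: str) -> str:
    prompt = (
        "You are a hallucination judge. Decide whether the answer is "
        "factually supported by the question and general knowledge.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Respond with exactly one word: HALLUCINATED or GROUNDED.\n"
        "Verdict:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens after the prompt.
    verdict = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return "HALLUCINATED" if "HALLUCINATED" in verdict.upper() else "GROUNDED"

print(judge("What is the capital of Australia?",
            "Sydney is the capital of Australia."))  # a correct judge flags this
```

For a calibrated score rather than a hard label, one could instead compare the model's probabilities for the two label tokens instead of parsing generated text.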