
TD HallOumi 3B

Developed by TEEN-D
A claim verification model fine-tuned from Llama-3.2-3B-Instruct, specifically designed to detect hallucinations or unsupported statements in AI-generated text.
Downloads: 46
Release Time: 4/4/2025

Model Overview

This model evaluates whether the claims in a response are supported by the given context documents. It is intended primarily for claim verification and hallucination detection tasks.
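
A minimal usage sketch with Hugging Face transformers is shown below. The repository id (TEEN-D/TD_HallOumi-3B) and the chat-style prompt framing are assumptions made for illustration, not the confirmed template; consult the model card on the hosting hub for the exact input format the checkpoint expects.

```python
# Minimal usage sketch with Hugging Face transformers.
# The repository id and the prompt framing below are assumptions made for
# illustration; check the hosting hub's model card for the exact template
# the checkpoint was trained with.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TEEN-D/TD_HallOumi-3B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def verify_claim(context: str, claim: str, max_new_tokens: int = 16) -> str:
    """Ask the model whether `claim` is supported by `context`."""
    messages = [
        {
            "role": "user",
            "content": (
                f"Context:\n{context}\n\n"
                f"Claim:\n{claim}\n\n"
                "Is the claim supported by the context?"
            ),
        }
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Decode only the newly generated tokens, keeping special tokens so the
    # <|supported|>/<|unsupported|> labels survive.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=False
    )
```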

Model Features

Efficient Hallucination Detection
With only 3 billion parameters, it outperforms much larger models such as Llama 3.1 405B and Gemini 1.5 Pro on hallucination detection tasks.
Structured Output
Trained to output specific labels (<|supported|> or <|unsupported|>), which makes automated processing straightforward (see the parsing sketch after this list).
Long Context Support
A maximum sequence length of 8192 tokens lets it handle lengthy context documents.
Specialized Fine-tuning
Supervised fine-tuning on a claim verification dataset curated by Oumi AI.
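
Because the label vocabulary is fixed, downstream code can turn a generation into a verdict with simple string checks. The helper below is a small sketch assuming the output contains at most one of the two label strings; real outputs may carry per-claim labels or additional text that needs richer parsing.

```python
from typing import Optional

def parse_verdict(generated_text: str) -> Optional[bool]:
    """Map the model's structured labels to a verdict.

    Returns True for <|supported|>, False for <|unsupported|>,
    and None when neither label is found.
    """
    if "<|unsupported|>" in generated_text:
        return False
    if "<|supported|>" in generated_text:
        return True
    return None
```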

Model Capabilities

Claim Verification
Hallucination Detection
Text Classification
Fact-based Verification

Use Cases

AI-generated Content Verification
AI-generated Summary Verification
Verify whether an AI-generated summary accurately reflects the content of its source document (see the end-to-end sketch after this list)
Can identify statements in a summary that are not supported by the source document
QA System Verification
Verify whether answers produced by a QA system are supported by the reference documents
Can detect answers that are fabricated rather than grounded in the documents
Content Moderation
Fact-checking
Check whether facts claimed in news stories or articles are supported by the cited sources
Can flag factual claims that the cited sources do not support
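
As an end-to-end illustration of the summary verification use case, the sketch below checks each sentence of a generated summary against its source document. It reuses the hypothetical verify_claim() and parse_verdict() helpers from the earlier sketches, and the document and sentences are invented examples.

```python
# Illustrative check of an AI-generated summary, sentence by sentence, against
# its source document. verify_claim() and parse_verdict() are the hypothetical
# helpers defined in the earlier sketches.
source_document = "The report was published in March and covers Q4 revenue only."
summary_sentences = [
    "The report covers Q4 revenue.",
    "The report also forecasts next year's profits.",  # not stated in the source
]

for sentence in summary_sentences:
    verdict = parse_verdict(verify_claim(source_document, sentence))
    label = {True: "supported", False: "unsupported", None: "unclear"}[verdict]
    print(f"{label}: {sentence}")
```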