HHEM-2.1-Open: Hallucination Evaluation Model

Developed by Vectara
HHEM-2.1-Open is a hallucination detection model that evaluates the factual consistency of content generated by large language models (LLMs) against the evidence it was given.
Downloads: 229.46k
Release Date: 10/25/2023

Model Overview

A text classification model built to detect hallucinations in large language model (LLM) outputs, particularly in Retrieval-Augmented Generation (RAG) settings, by quantifying how factually consistent the generated content is with the supplied evidence.
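
For reference, a minimal usage sketch with the Hugging Face transformers library; it assumes the open checkpoint `vectara/hallucination_evaluation_model` and the custom `predict()` helper the repository exposes via `trust_remote_code`:

```python
from transformers import AutoModelForSequenceClassification

# Load the open HHEM checkpoint; trust_remote_code pulls in Vectara's
# custom classification head and its predict() helper.
model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each pair is (evidence/premise, generated text/hypothesis).
pairs = [
    ("The sky was overcast all day in Paris.", "It rained in Paris today."),
]

# predict() returns one consistency score per pair, in [0, 1]:
# scores near 0 suggest hallucination, scores near 1 suggest the
# generation is supported by the evidence.
scores = model.predict(pairs)
print(scores)
```

Higher scores mean the generated text is better supported by the evidence; where to draw the hallucination cutoff is application-dependent.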

Model Features

High-Performance Detection
Outperforms GPT-3.5-Turbo and GPT-4 in balanced accuracy on the RAGTruth benchmark
Lightweight and Efficient
Uses under 600MB of memory at 32-bit precision; a 2k-token input takes roughly 1.5 seconds to process on a modern x86 CPU
Unlimited Context Support
Handles long-context inputs, removing the 512-token limit of the previous HHEM-1.0
Asymmetric Detection
Identifies hallucinations where a statement is factually true in the real world but unsupported by the provided evidence (illustrated in the sketch after this list)
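
To make the asymmetry concrete, a small sketch (same assumed checkpoint and `predict()` helper as above); the statement below is true in the real world, yet the evidence says nothing about it, so the consistency score should be low:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# HHEM scores support-by-evidence, not real-world truthfulness, so a
# true-but-unsupported statement should still receive a low score.
pairs = [
    ("Vectara released an open hallucination detection model.",
     "Paris is the capital of France."),  # true, but unsupported here
]
print(model.predict(pairs))
```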

Model Capabilities

Text Consistency Evaluation
RAG Scenario Hallucination Detection
Cross-Sentence Logical Relationship Analysis

Use Cases

Retrieval-Augmented Generation (RAG)
Summary Fact-Checking
When an LLM generates a summary from retrieval results, verify that the summary is supported by the retrieved evidence
Achieves 64.42% balanced accuracy on the RAGTruth-Summ benchmark
QA System Verification
Evaluating whether answers generated by a QA system strictly adhere to the provided context (a gating sketch follows this subsection)
Achieves 74.28% balanced accuracy on the RAGTruth-QA benchmark
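
One way to wire the detector into a RAG pipeline is to gate generated answers on a consistency threshold. The sketch below is hypothetical: the 0.5 cutoff, the example strings, and the `verify_answer` helper are illustrative assumptions, not part of the model:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

THRESHOLD = 0.5  # illustrative cutoff; tune it on a validation set


def verify_answer(evidence: str, answer: str) -> bool:
    """Return True when the answer looks consistent with the evidence."""
    score = model.predict([(evidence, answer)])[0].item()
    return score >= THRESHOLD


# Hypothetical RAG step: only surface answers the detector accepts.
evidence = "The report states revenue grew 12% year over year in 2023."
answer = "Revenue grew 12% in 2023."
if verify_answer(evidence, answer):
    print("Answer passes the consistency check.")
else:
    print("Potential hallucination; regenerate or flag for review.")
```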
Content Moderation
Factual Claim Verification
Detecting statements in user-generated content (UGC) that contradict a supplied set of reference facts