
Self-RAG Llama2 7B

Developed by selfrag
A 7-billion-parameter Self-RAG model that generates responses to diverse user queries, adaptively invokes a retrieval system, and critiques both its own outputs and the retrieved passages by generating reflection tokens.
Downloads 1,318
Release Time: 10/18/2023

Model Overview

The model is trained with a standard next-token prediction objective on an instruction-following corpus interleaved with retrieved passages and reflection tokens, so it learns efficiently and stably from this fine-grained feedback. At inference time, the reflection tokens assess the generated content along multiple dimensions and are used to select the output that best aligns with user preferences.
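
As a rough illustration of the inference flow, the sketch below loads the checkpoint with Hugging Face transformers and decodes without stripping special tokens so the reflection tokens stay visible. The model id matches the Hugging Face Hub listing; the "### Instruction / ### Response" prompt format is an assumption taken from the Self-RAG repository and may need adjusting for your setup.

```python
# Minimal inference sketch (assumes transformers is installed and enough GPU memory).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "selfrag/selfrag_llama2_7b"  # Hugging Face Hub id of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt format assumed from the Self-RAG repository.
prompt = "### Instruction:\nWho wrote 'Pride and Prejudice'?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Keep special tokens so reflection tokens such as [Retrieval] or [Utility:5]
# remain visible in the decoded text.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```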

Model Features

Adaptive retrieval invocation
The model automatically decides, based on the query, whether to invoke the retrieval system, which improves resource efficiency.
Self-critique mechanism
Generates reflection tokens during content creation to provide fine-grained quality feedback.
Retrieval-augmented generation
Conditions on retrieved passages to generate more accurate, factually grounded responses (see the sketch after this list).
Fine-grained feedback learning
Uses reflection tokens during training to achieve stable learning across multiple quality dimensions.
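
A hedged sketch of retrieval-augmented prompting, reusing the `model` and `tokenizer` loaded above: wrapping the evidence in a [Retrieval]<paragraph>...</paragraph> segment follows the convention in the Self-RAG repository, and the query and passage shown here are illustrative only.

```python
# Sketch of retrieval-augmented prompting; reuses `model` and `tokenizer` from
# the loading example above. The [Retrieval]<paragraph>...</paragraph> format
# is assumed from the Self-RAG repository.
def format_prompt(query, passage=None):
    prompt = f"### Instruction:\n{query}\n\n### Response:\n"
    if passage is not None:
        # Supply the retrieved evidence explicitly instead of letting the
        # model decide on its own whether to call retrieval.
        prompt += f"[Retrieval]<paragraph>{passage}</paragraph>"
    return prompt

passage = "Pride and Prejudice is an 1813 novel written by Jane Austen."
prompt = format_prompt("Who wrote 'Pride and Prejudice'?", passage)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```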

Model Capabilities

Text generation
Retrieval-augmented generation
Self-evaluation
Multi-turn dialogue
Fact-checking

Use Cases

Information query
Factual Q&A
Automatically retrieves relevant information when answering fact-based questions.
Generates answers with cited passages and reliability scores (a parsing sketch follows below).
Content analysis
Text classification
Identifies and classifies different elements in the input text.
Outputs classification results together with a self-evaluation.
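
For the factual Q&A use case, a simple post-processing step can surface the critique signals from the decoded text. The sketch below is hypothetical: the reflection-token names follow the Self-RAG paper, and the example string and regular expressions are illustrative only.

```python
# Hypothetical post-processing of a decoded Self-RAG output: extract the
# relevance, support, and utility reflection tokens to use as a rough
# reliability score for the answer.
import re

decoded = ("[Relevant]Jane Austen wrote 'Pride and Prejudice'."
           "[Fully supported][Utility:5]")

relevance = re.findall(r"\[(Relevant|Irrelevant)\]", decoded)
support = re.findall(
    r"\[(Fully supported|Partially supported|No support / Contradictory)\]",
    decoded,
)
utility = [int(u) for u in re.findall(r"\[Utility:(\d)\]", decoded)]

print(relevance)  # ['Relevant']
print(support)    # ['Fully supported']
print(utility)    # [5]
```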