# Open-domain QA
Qwen2.5 0.5B Instruct Gensyn Swarm Fierce Placid Whale
A fine-tuned version of Gensyn/Qwen2.5-0.5B-Instruct, trained with the TRL framework and the GRPO algorithm (a training sketch follows this entry).
Large Language Model
Transformers

gangchen
3,053
2
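
The entry above names TRL and GRPO but not the training recipe. The following is a minimal, hypothetical sketch of GRPO fine-tuning with TRL's `GRPOTrainer` (assuming a recent TRL release); the toy prompt dataset and length-based reward are illustrative assumptions, not the Gensyn swarm setup.

```python
# Hypothetical GRPO fine-tuning sketch with TRL; reward and dataset are toy placeholders.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a dataset with a "prompt" column.
train_dataset = Dataset.from_dict(
    {"prompt": ["Explain open-domain question answering in one sentence."] * 64}
)

def reward_conciseness(completions, **kwargs):
    """Toy reward: prefer completions near 200 characters (placeholder for a real reward)."""
    return [-abs(len(c) - 200) / 200.0 for c in completions]

args = GRPOConfig(output_dir="qwen2.5-0.5b-grpo")

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",  # base checkpoint named in the entry above
    reward_funcs=reward_conciseness,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```

GRPO samples several completions per prompt and pushes the policy toward the higher-reward ones within each group, so the reward function is the main design choice here.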
Llama4some SOVL 4x8B L3 V1
A Mixture of Experts model created by merging multiple pre-trained language models with mergekit, aiming for maximally unconstrained text generation.
Large Language Model
Transformers

saishf
22
3
Mistral 7B OpenOrca Q4 K M GGUF
Apache-2.0
A GGUF-format conversion of Open-Orca/Mistral-7B-OpenOrca, suitable for text generation tasks (a loading sketch follows this entry).
Large Language Model English
munish0838
81
2
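
A GGUF quant like Q4_K_M is normally run with llama.cpp rather than Transformers. Below is a minimal sketch using llama-cpp-python; the repository id and filename are assumptions for illustration and should be replaced with the actual values from this listing.

```python
# Hypothetical loading sketch for a Q4_K_M GGUF quant via llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id and filename; substitute the real GGUF file from this entry.
gguf_path = hf_hub_download(
    repo_id="munish0838/Mistral-7B-OpenOrca-Q4_K_M-GGUF",
    filename="mistral-7b-openorca.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who wrote the Declaration of Independence?"}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```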
JARVIS
Apache-2.0
A dialogue AI based on Causal Language Modeling (CLM) architecture, designed for natural language interaction, capable of generating coherent and contextually appropriate responses.
Large Language Model
Transformers Supports Multiple Languages

VAIBHAV22334455
38
12
CAG Mistral 7b
MIT
A 7-billion-parameter credibility-aware generation model fine-tuned from Mistral-7B, capable of understanding and making use of contextual credibility during generation.
Large Language Model
Transformers English

ruotong-pan
37
1
Strangemerges 17 7B Dare Ties
Apache-2.0
StrangeMerges_17-7B-dare_ties is the result of merging two models, Gille/StrangeMerges_16-7B-slerp and Gille/StrangeMerges_12-7B-slerp, using the dare_ties merging method via LazyMergekit.
Large Language Model
Transformers

Gille
20
1
Blurdus 7b V0.1
Apache-2.0
Blurdus-7b-v0.1 is a merge of three 7B-parameter models produced with LazyMergekit, demonstrating strong performance across multiple benchmarks.
Large Language Model
Transformers

gate369
80
1
Open Llama 3b V2 Chat
Apache-2.0
A chat model built on Open LLaMA 3B v2, supporting text generation tasks, with moderate results on the Open LLM Leaderboard.
Large Language Model
Transformers

mediocredev
134
3
Q Align Iqa
MIT
A multimodal model released with arXiv paper 2312.17090 (Q-Align), combining visual and text processing for image quality assessment (IQA).
Large Language Model
Transformers

q-future
43
1
Causallm 7B DPO Alpha GGUF
A 7B-parameter large language model based on the Llama 2 architecture, optimized with DPO training and supporting Chinese and English text generation.
Large Language Model Supports Multiple Languages
tastypear
367
36
Idefics 9b Instruct
Other
IDEFICS is an open-source reproduction of DeepMind's proprietary visual language model Flamingo: a multimodal model that accepts arbitrary sequences of images and text as input and generates text output (an inference sketch follows this entry).
Image-to-Text
Transformers English

HuggingFaceM4
28.34k
104
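
A minimal inference sketch following the usage documented at the IDEFICS release, where prompts interleave text, image URLs, and `<end_of_utterance>` markers. Newer transformers versions may expect the `text=`/`images=` keyword form of the processor call, and the image URL and question here are placeholders.

```python
# IDEFICS-9B-instruct inference sketch; image URL and question are placeholders.
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b-instruct"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompts are lists that interleave text and images (URLs or PIL images).
prompts = [
    [
        "User: What is shown in this image?",
        "http://images.cocodataset.org/val2017/000000039769.jpg",  # sample image
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```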
Orca Mini 13b
orca_mini_13b is a text generation model trained on multiple high-quality datasets, focusing on instruction following and dialogue tasks.
Large Language Model
Transformers English

pankajmathur
79
100
CAMEL 33B Combined Data
CAMEL-33B is a large language model fine-tuned on LLaMA-33B, integrating CAMEL framework dialogue data, ShareGPT public dialogues, and Alpaca instruction data, excelling in multi-turn conversations and instruction comprehension.
Large Language Model
Transformers

camel-ai
97
6
Instructblip Vicuna 7b
Other
InstructBLIP is a vision instruction-tuned version of BLIP-2 that uses Vicuna-7B as its language model, focusing on vision-language tasks (an inference sketch follows this entry).
Image-to-Text
Transformers English

Salesforce
20.99k
91
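
A minimal inference sketch using the standard transformers InstructBLIP classes; the image URL and question are placeholders, and half precision or device mapping can be added to reduce memory.

```python
# InstructBLIP (Vicuna-7B) inference sketch; image URL and question are placeholders.
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

name = "Salesforce/instructblip-vicuna-7b"
processor = InstructBlipProcessor.from_pretrained(name)
model = InstructBlipForConditionalGeneration.from_pretrained(name)  # fp16/device_map can cut memory

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample image
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, text="What is happening in this picture?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip())
```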
Bert Finetuned On Nq Short
A BERT-based open-domain question answering model trained on the full Natural Questions (NQ) dataset, capable of answering a wide range of factual questions.
Large Language Model
Transformers

eibakke
13
1
Spar Wiki Bm25 Lexmodel Query Encoder
A dense retriever built on the BERT-base architecture and trained on Wikipedia articles to emulate BM25 behavior (a query-encoding sketch follows this entry).
Text Embedding
Transformers

facebook
80
2
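
A query-encoding sketch assuming the encoder loads as a standard BERT backbone via `AutoModel`, with the [CLS] vector used as the query embedding; the repo id is reconstructed from the listing title, and the companion context-encoder id in the comment is an assumption.

```python
# Encode a question with the SPAR BM25-emulating query encoder; the [CLS] vector is the embedding.
import torch
from transformers import AutoModel, AutoTokenizer

name = "facebook/spar-wiki-bm25-lexmodel-query-encoder"
tokenizer = AutoTokenizer.from_pretrained(name)
query_encoder = AutoModel.from_pretrained(name)

question = "who wrote the declaration of independence?"
inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    query_emb = query_encoder(**inputs).last_hidden_state[:, 0, :]  # [CLS] vector

# Passages would be embedded with the matching context encoder
# (assumed id: facebook/spar-wiki-bm25-lexmodel-context-encoder) and scored by dot product.
print(query_emb.shape)
```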
Blenderbot 1B Distill
Apache-2.0
A high-performance open-domain chatbot that blends multiple conversational skills, such as asking and answering questions, displaying knowledge, and showing empathy (a chat sketch follows this entry).
Dialogue System
Transformers English

facebook
2,413
37
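
A minimal single-turn chat sketch, assuming the checkpoint id matches the listing title under the facebook namespace; multi-turn use would concatenate the conversation history into the input.

```python
# Single-turn BlenderBot chat sketch.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/blenderbot-1B-distill"  # repo id reconstructed from the listing
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("My garden was ruined by the storm last night.", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```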
BERT NLP
A general-purpose language model presumed to handle various natural language processing tasks (description inferred).
Large Language Model
subbareddyiiit
18
0
Dpr Question Encoder Single Nq Base
DPR (Dense Passage Retrieval) is a set of tools and models for open-domain question answering research; this model is a BERT-based question encoder trained on the Natural Questions (NQ) dataset (an encoding sketch follows this entry).
Question Answering System
Transformers English

facebook
32.90k
30
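
A minimal encoding sketch using the DPR classes in transformers. In a full open-domain QA pipeline, this question embedding would be matched against passage embeddings produced by the companion DPR context encoder (for example via a FAISS index); the multiset variant listed next loads the same way.

```python
# Encode a question with the DPR single-NQ question encoder.
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

name = "facebook/dpr-question_encoder-single-nq-base"
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(name)
encoder = DPRQuestionEncoder.from_pretrained(name)

inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
question_embedding = encoder(**inputs).pooler_output  # shape: (1, 768)
print(question_embedding.shape)
```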
Dpr Question Encoder Multiset Base
A BERT-based Dense Passage Retrieval (DPR) question encoder for open-domain QA research, trained on a mixture of multiple QA datasets.
Question Answering System
Transformers English

facebook
17.51k
4
Kogpt2 Base V2
KoGPT2 is a Korean GPT-2 model developed by SKT-AI, based on the Transformer architecture, suitable for various Korean text generation tasks.
Large Language Model Korean
skt
105.25k
47
Sparta Msmarco Distilbert Base V1
SPARTA is an efficient open-domain QA model based on sparse Transformer matching retrieval, designed for information retrieval tasks.
Question Answering System
Transformers

BeIR
50
2