# Multi-task Reasoning
## Midm 2.0 Base Instruct Gguf

Mi:dm 2.0 is a "Korea-centered AI" model developed with KT's proprietary technology, deeply internalizing the values, cognitive frameworks, and common-sense reasoning of South Korean society.

- License: MIT
- Task: Large Language Model
- Framework: Transformers (multiple languages)
- Author: mykor · Downloads: 517 · Likes: 1

## GuardReasoner 1B

GuardReasoner 1B is fine-tuned from meta-llama/Llama-3.2-1B via R-SFT and HS-DPO, focusing on classification tasks that analyze human-AI interactions.

- License: Other
- Task: Large Language Model
- Framework: Transformers (English)
- Author: yueliu1999 · Downloads: 154 · Likes: 4

## GuardReasoner 3B

A safety-guard model fine-tuned from Llama-3.2-3B using R-SFT and HS-DPO, designed to analyze harmful content in human-AI interactions.

- License: Other
- Task: Large Language Model
- Framework: Transformers
- Author: yueliu1999 · Downloads: 172 · Likes: 3

## GuardReasoner 8B

GuardReasoner 8B is fine-tuned from meta-llama/Llama-3.1-8B, specializing in reasoning-based LLM safety protection.

- License: Apache-2.0
- Task: Large Language Model
- Framework: Transformers
- Author: yueliu1999 · Downloads: 480 · Likes: 2

## Llama3 German 8B 32k

A German-optimized large language model based on Meta Llama3-8B, continually pretrained on 65 billion German tokens and supporting a 32k-token long context.

- Task: Large Language Model
- Framework: Transformers (German)
- Author: DiscoResearch · Downloads: 91 · Likes: 13

## Gemma 7B Zephyr SFT

A large language model based on Google's Gemma 7B, fine-tuned with the Zephyr SFT recipe, primarily for text generation tasks.

- License: Other
- Task: Large Language Model
- Framework: Transformers
- Author: wandb · Downloads: 19 · Likes: 2

## DareBeagle 7B

DareBeagle-7B is a 7B-parameter large language model created by merging mlabonne/NeuralBeagle14-7B and mlabonne/NeuralDaredevil-7B with LazyMergekit, performing strongly across multiple benchmarks.

- License: Apache-2.0
- Task: Large Language Model
- Framework: Transformers
- Author: shadowml · Downloads: 77 · Likes: 1

## TinyLLaVA V1 HF

TinyLLaVA is a compact large multimodal model framework focused on vision-language tasks, offering strong performance at a small parameter count.

- License: Apache-2.0
- Task: Image-to-Text
- Framework: Transformers (multiple languages)
- Author: bczhou · Downloads: 2,372 · Likes: 57

## Galactica 6.7B EssayWriter

A 6.7-billion-parameter large language model based on the Galactica architecture, specialized for essay writing, with an average score of 37.75 on the Open LLM Leaderboard.

- Task: Large Language Model
- Framework: Transformers
- Author: KnutJaegersberg · Downloads: 105 · Likes: 4
