# DARE-TIES Merge
Legion V2.2 LLaMa 70B
A pre-trained language model merged using the DARE-TIES method, based on L-BASE-V1 and fusing multiple MERGE models.
Large Language Model
Transformers

TareksTesting
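DARE-TIES merges like this are commonly specified as a mergekit configuration: each donor model contributes its delta against the base model, DARE randomly drops a fraction of those deltas (controlled by `density`), and TIES resolves sign conflicts before the weighted sum. The model names below are placeholders, not the actual ingredients of Legion V2.2; a minimal sketch assuming the mergekit YAML schema:

```yaml
# Hypothetical mergekit config illustrating a DARE-TIES merge.
# Model names are placeholders, not the real merge recipe.
models:
  - model: example/donor-model-a
    parameters:
      density: 0.5   # fraction of delta parameters kept (DARE drops the rest)
      weight: 0.5    # scaling of this model's contribution
  - model: example/donor-model-b
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: example/L-BASE-V1   # deltas are computed against this base
dtype: bfloat16
```

With mergekit installed, such a config is typically run via `mergekit-yaml config.yml ./output-dir`.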
BioLlama-Ko-8B
Apache-2.0
BioLlama-Ko-8B is a Korean medical large language model based on the Llama-3 architecture, constructed by merging the beomi/Llama-3-Open-Ko-8B and ProbeMedicalYonseiMAILab/medllama3-v20 models, specializing in Korean medical Q&A tasks.
Large Language Model
Transformers

iRASC