# LoRA Efficient Training

## Thinkygemma 4b

- **Author:** xsanskarx
- **Tags:** Large Language Model, Transformers
- **Downloads:** 19 · **Likes:** 1

A pseudo-reasoning expert model fine-tuned from Google's Gemma-3-4b-pt, designed for structured and pseudo-inductive reasoning.
## T3Q Qwen2.5 14b V1.0 E3

- **License:** Apache-2.0
- **Author:** JungZoona
- **Tags:** Large Language Model, Transformers, Supports Multiple Languages
- **Downloads:** 1,557 · **Likes:** 25

A post-trained version of Qwen/Qwen2.5-14B-Instruct-1M, fine-tuned on train_data_v1.0 with a LoRA-8-4-0.0001-cosine-32-16 configuration (one plausible reading of that configuration string is sketched below).
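
The `LoRA-8-4-0.0001-cosine-32-16` string above looks like an undocumented run-name encoding of LoRA hyperparameters. Below is a minimal sketch with Hugging Face PEFT under one plausible reading: rank 8, 4 epochs, learning rate 1e-4, a cosine schedule, batch size 32, and LoRA alpha 16. Every one of these mappings is an assumption, as is the output path; the actual recipe is not published.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face PEFT. The hyperparameter
# mapping is an assumed reading of "LoRA-8-4-0.0001-cosine-32-16", not the
# author's documented setup.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M")

lora_config = LoraConfig(
    r=8,                      # assumed: "8" = LoRA rank
    lora_alpha=16,            # assumed: trailing "16" = scaling factor alpha
    lora_dropout=0.05,        # not encoded in the string; a common default
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

args = TrainingArguments(
    output_dir="t3q-qwen2.5-14b-lora",  # hypothetical output path
    num_train_epochs=4,                 # assumed: "4" = epoch count
    learning_rate=1e-4,                 # assumed: "0.0001" = learning rate
    lr_scheduler_type="cosine",         # assumed: "cosine" = LR schedule
    per_device_train_batch_size=32,     # assumed: "32" = batch size
)
```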
## WiroAI Finance Qwen 1.5B

- **License:** Apache-2.0
- **Author:** WiroAI
- **Tags:** Large Language Model, Transformers
- **Downloads:** 886 · **Likes:** 16

A financial-domain language model based on the Qwen architecture, fine-tuned on more than 500k financial instructions.
## Llama3.1 1B Neo BAAI 1000k

- **License:** Apache-2.0
- **Author:** yang31210999
- **Tags:** Large Language Model, Transformers
- **Downloads:** 39 · **Likes:** 2

Llama3.1-Neo-1B-100w is an efficient language model pruned to 1.4B parameters from Meta-Llama-3.1-8B-Instruct and fine-tuned with the LLM-Neo method, which combines LoRA with knowledge distillation (sketched after this entry). The training data consists of 1 million samples from BAAI/Infinity-Instruct.
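
LLM-Neo's combination of LoRA and knowledge distillation can be pictured as training only the student's low-rank adapters while mixing the usual next-token loss with a KL term against the frozen 8B teacher. The sketch below shows one such combined training step; the loss weighting, temperature, and function names are illustrative assumptions, not the published LLM-Neo recipe.

```python
# Minimal sketch: one training step combining LoRA with knowledge
# distillation (KL divergence against a frozen teacher). Illustrative only;
# alpha and temperature are assumed values, not the LLM-Neo paper's settings.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, alpha=0.5, temperature=2.0):
    """student: LoRA-wrapped pruned model; teacher: frozen full-size model."""
    outputs = student(input_ids=batch["input_ids"],
                      attention_mask=batch["attention_mask"],
                      labels=batch["labels"])
    ce_loss = outputs.loss  # standard next-token cross-entropy on the data

    with torch.no_grad():  # the teacher only supplies soft targets
        teacher_logits = teacher(input_ids=batch["input_ids"],
                                 attention_mask=batch["attention_mask"]).logits

    # KL divergence between temperature-softened distributions,
    # scaled by T^2 as in standard distillation practice.
    kd_loss = F.kl_div(
        F.log_softmax(outputs.logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # alpha balances ground-truth supervision against teacher imitation
    return alpha * ce_loss + (1 - alpha) * kd_loss
```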
## Qra 1b Dolly Instruction 0.1

- **Author:** nie3e
- **Tags:** Large Language Model, Transformers, Other
- **Downloads:** 16 · **Likes:** 2

A question-answering model based on Qra-1b and fine-tuned on Polish instruction datasets, used primarily to answer user questions.
## ZhiLu 13B Instruct

- **License:** Apache-2.0
- **Author:** SYSU-MUCFC-FinTech-Research-Center
- **Tags:** Large Language Model, Transformers
- **Downloads:** 26 · **Likes:** 3

ZhiLu is a financial large language model built on Chinese-Alpaca-2-13B. It achieves its capability gains through large-scale incremental pre-training on Chinese and English corpora and alignment on high-quality instruction data, with a particular focus on financial-domain performance.