
Luxia 21.4b Alignment V1.2

Developed by saltlux
LUXIA-21.4B-Alignment is a large language model with 21.4 billion parameters, demonstrating outstanding performance across various natural language processing tasks.
Downloads 1,839
Release date: 5/27/2024

Model Overview

This model exhibits top-tier performance among models with fewer than 35B parameters, even surpassing 72B-parameter models and 34Bx2 mixture-of-experts models. It was derived from the luxia-21.4b-instruct model through DPO training.

Model Features

High Performance
Demonstrates top-tier performance among models with fewer than 35B parameters, surpassing larger-scale models
Advanced Training Methods
Utilizes cutting-edge techniques such as supervised fine-tuning (SFT) and direct preference optimization (DPO)
High-Quality Training Data
Incorporates multiple curated datasets, including alpaca-gpt4-data, SlimOrca, and more
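DPO, one of the training methods listed above, optimizes a policy directly on preference pairs (a chosen and a rejected response) against a frozen reference model, without training a separate reward model. A minimal sketch of the DPO objective for a single pair follows; the function name and the example log-probabilities are illustrative, not taken from the Luxia training setup.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Inputs are total log-probabilities of the chosen and rejected
    responses under the trained policy and the frozen reference model.
    beta controls how strongly the policy may deviate from the reference.
    """
    # Implicit reward margin: how much more the policy favors the chosen
    # response over the rejected one, relative to the reference model.
    margin = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    # Loss is -log sigmoid(beta * margin); it shrinks as the margin grows.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Hypothetical values: the policy prefers the chosen response more than
# the reference does, so the loss falls below log(2) (the zero-margin value).
loss = dpo_loss(-10.0, -14.0, -11.0, -12.0)
```

Averaging this loss over a dataset of preference pairs is what the DPO stage adds on top of the SFT checkpoint.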

Model Capabilities

Text generation
Question answering systems
Natural language understanding
Mathematical reasoning

Use Cases

Education
Math problem solving
Solving math problems from the GSM8K dataset
Achieved a score of 66.94 in GSM8K evaluation
Knowledge Q&A
Common sense Q&A
Answering common sense questions from the ARC dataset
Achieved a score of 77.73 in ARC evaluation