# Uploaded Model
This is a fine-tuned Llama model designed to improve performance on financial tasks.
## Model Information
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

## Model Card
The goal of this model is to improve the base model's performance on financial tasks by fine-tuning it on a specialized financial dataset. It was fine-tuned with LoRA (Low-Rank Adaptation), which trains only a small set of low-rank adapter weights and therefore requires far fewer resources than full fine-tuning.
## Model Details
| Property | Details |
|---|---|
| Base Model | unsloth/DeepSeek-R1-Distill-Llama-8B |
| Model Type | Language Model (Distilled) |
| Fine-Tuning Technique | LoRA (Low-Rank Adaptation) |
| Fine-Tuned Model | DeepSeek-R1-Distill-Llama-8B-finance-v1 |
| Dataset | Josephgflowers/Finance-Instruct-500k (reduced to 5k JSONL entries) |
| Platform | Free-tier Kaggle Notebook |
| Library | Hugging Face Transformers, Unsloth, and PyTorch |
This model is a fine-tuned version of unsloth/DeepSeek-R1-Distill-Llama-8B, using LoRA for efficient parameter adaptation. It has been tuned on a reduced (5k-sample) version of the Josephgflowers/Finance-Instruct-500k dataset to improve performance on finance-related tasks.
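For reference, below is a minimal sketch of how a LoRA adapter can be attached to this base model with Unsloth. The rank, alpha, dropout, target modules, and sequence length shown are illustrative assumptions, not the exact values used for this release.

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit to fit free-tier GPU memory
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",
    max_seq_length=2048,      # assumed context length for fine-tuning
    load_in_4bit=True,
)

# Attach LoRA adapters: only these low-rank matrices are trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                     # assumed LoRA rank
    lora_alpha=16,            # assumed scaling factor
    lora_dropout=0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

Training itself can then be run on the formatted dataset, for example with TRL's `SFTTrainer`.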
## Intended Use
The model is intended for financial question answering, text generation, and instruction-following tasks that require domain-specific knowledge in finance. It can also be used for other natural language understanding and generation tasks that benefit from fine-tuning on a finance-specific dataset.
## Dataset
The model was fine-tuned on a subset of the Finance-Instruct-500k dataset from Hugging Face, reduced to 5,000 JSONL entries for the fine-tuning run. The dataset contains financial questions and answers, providing a rich set of training examples.
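A minimal sketch of how such a 5k subset can be drawn with the `datasets` library is shown below. The card does not state how the subset was selected, so taking the first 5,000 rows is an assumption (a shuffled sample would work equally well).

```python
from datasets import load_dataset

# Load the full 500k-example dataset from the Hugging Face Hub
dataset = load_dataset("Josephgflowers/Finance-Instruct-500k", split="train")

# Keep only 5,000 entries for a lightweight fine-tuning run
# (assumption: the first 5k rows; call .shuffle(seed=42) first for a random sample)
subset = dataset.select(range(5000))
print(subset)
```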
## Training Data
| Property | Details |
|---|---|
| Dataset Name | Josephgflowers/Finance-Instruct-500k |
| Data Size | 5k samples (subset of the original dataset) |
| Domain | Finance |
| Task | Instruction-based fine-tuning for financial information retrieval and generation |
## Notes

**Important:** This fine-tuning was performed on the free tier of Kaggle Notebooks, so training time and available resources were limited.

**Usage tips:**
- Make sure your Colab/Kaggle runtime is set to a GPU environment to speed up training (a quick check is sketched below).
- The reduced 5k dataset is a small sample intended for experimentation; scale it up according to your needs and available resources.
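As a quick sanity check before launching training, you can confirm that PyTorch actually sees a GPU:

```python
import torch

# Verify that a CUDA device is available before training
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```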
## Performance
The model performs well on financial instruction-following tasks, producing accurate responses despite the reduced training set. Performance can be evaluated further on finance-specific benchmarks.
## Usage Examples
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("abhi9ab/DeepSeek-R1-Distill-Llama-8B-finance-v1")
model = AutoModelForCausalLM.from_pretrained("abhi9ab/DeepSeek-R1-Distill-Llama-8B-finance-v1")

# Tokenize a finance-related prompt and generate a response
inputs = tokenizer("Example finance-related query", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
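On memory-constrained GPUs (such as the free Kaggle or Colab tiers), you may want to pass `torch_dtype=torch.bfloat16` and `device_map="auto"` (which requires the `accelerate` package) to `from_pretrained`; the exact settings depend on your hardware.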
## Acknowledgements
- Josephgflowers for the dataset.
- The Hugging Face Transformers library for the model implementation and Unsloth for LoRA-based fine-tuning.