🚀 orca_mini_13b
orca_mini_13b is a fine-tune of the OpenLLaMA-13b base model, trained on custom datasets. It adopts the dataset-construction approach from the Orca research paper, which helps the model learn a better thought process from its teacher model.
✨ Key Features
- Built on the OpenLLaMA-13b base model and trained on explain-tuned datasets.
- Uses the instructions and inputs from the WizardLM, Alpaca, and Dolly-V2 datasets, combined with the dataset-construction approach from the Orca research paper.
- Leverages the 15 system instructions from the Orca research paper to generate a custom dataset that helps the model learn the thought process of its teacher model (ChatGPT); see the dataset sketch in the Documentation section below.
💻 Usage Examples
Basic Usage
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# Generate text function
def generate_text(system, instruction, input=None):
    # Build the prompt in the layout the model was trained on
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )
    # Strip the prompt tokens and decode only the newly generated text
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'

# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
print(generate_text(system, instruction))
```
Running the basic example above produces output like the following:
```
[!] Response:
Dear Sam Altman,

I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.

While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.

Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.

I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.

Thank you for your consideration.

Sincerely,

[Your Name]
```
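Advanced Usage
The source card provides no advanced example. Below is a minimal streaming sketch that reuses the `model`, `tokenizer`, `system`, and `instruction` objects from the basic example above; `TextStreamer` is a standard `transformers` utility that prints tokens to stdout as they are generated, so you see the response appear incrementally instead of waiting for the full completion.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced (skip echoing the prompt).
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    model.generate(
        **inputs,
        max_new_tokens=1024,
        do_sample=True,
        top_p=1.0,
        top_k=50,
        temperature=0.7,
        streamer=streamer,
    )
```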
📚 Documentation
Dataset
We built explain-tuned versions of the WizardLM (~70K), Alpaca (~52K), and Dolly-V2 (~15K) datasets using the approach from the Orca research paper. Unlike the vanilla instruction tuning used with the original datasets, we leveraged the 15 system instructions provided in the Orca research paper to generate a custom dataset. This helps the student model (i.e., this model) learn the thought process of its teacher model, ChatGPT (the gpt-3.5-turbo-0301 version).
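As a rough illustration, the sketch below wraps a standard instruction record with a sampled system instruction into the prompt layout shown in the usage example. The helper, the record, and the partial instruction list are hypothetical, not the author's actual pipeline; in the real construction, the `output` field would come from the teacher's (ChatGPT's) explanation-style response rather than the original dataset.

```python
import random

# One of the system instructions appears in the usage example above; the card
# says 15 such instructions from the Orca paper were used (only one shown here).
SYSTEM_INSTRUCTIONS = [
    "You are an AI assistant that follows instruction extremely well. Help as much as you can.",
    # ... the remaining system instructions from the Orca paper
]

def build_training_example(record):
    """Wrap an Alpaca/WizardLM/Dolly-style record in the model's prompt layout
    (hypothetical helper, not the author's actual script)."""
    system = random.choice(SYSTEM_INSTRUCTIONS)
    if record.get("input"):
        return (f"### System:\n{system}\n\n### User:\n{record['instruction']}\n\n"
                f"### Input:\n{record['input']}\n\n### Response:\n{record['output']}")
    return (f"### System:\n{system}\n\n### User:\n{record['instruction']}\n\n"
            f"### Response:\n{record['output']}")

# Hypothetical Alpaca-style record for illustration
example = {
    "instruction": "Name three primary colors.",
    "input": "",
    "output": "The three primary colors are red, yellow, and blue.",
}
print(build_training_example(example))
```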
Training
The training configuration is shown in the table below:

| Parameter | Value |
|---|---|
| batch_size | 16 |
| train_micro_batch_size_per_gpu | 2 |
| gradient_accumulation_steps | 1 |
| Learning rate | 2e-5 |
| Max length | 1024 |
| Epochs | 3 |
| Optimizer | AdamW |
Training took around 15 hours on 8x A100 (80G) GPUs and cost about $180 on Lambda Labs. We used DeepSpeed with fully sharded data parallelism (i.e., [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/)), writing our own fine-tuning script and reusing some of the model-training code provided by the OpenAlpaca repo.
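The exact DeepSpeed configuration is not published in this card. A minimal ZeRO stage-3 config consistent with the table above might look like the Python dict below (an assumption, not the author's actual file). Note that the effective batch size of 16 = 2 (micro-batch per GPU) × 8 (GPUs) × 1 (accumulation step).

```python
# Hypothetical DeepSpeed ZeRO stage-3 configuration consistent with the
# training table above (not the author's actual config file).
ds_config = {
    "train_batch_size": 16,                 # 2 micro-batch x 8 GPUs x 1 accumulation step
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 1,
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5},
    },
    "fp16": {"enabled": True},              # assumption: mixed precision on A100s
    "zero_optimization": {"stage": 3},      # fully sharded data parallelism (ZeRO-3)
}

# Typical usage (sketch):
# import deepspeed
# model_engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)
```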
Next Goals
- Try more data, such as actually using FLAN-v2, just like the Orca research paper (suggestions are welcome).
- Provide more options for text-generation UIs (maybe via https://github.com/oobabooga/text-generation-webui).
- Provide 4-bit GGML/GPTQ quantized models (maybe TheBloke can help here); a hypothetical loading sketch follows this list.
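No quantized artifacts are published in this card. If a 4-bit GPTQ export does appear, loading it with the AutoGPTQ library might look like the sketch below; the repository id is a placeholder, not a real artifact.

```python
# Hypothetical example: loading a community 4-bit GPTQ export with AutoGPTQ.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

quantized_repo = "some-user/orca_mini_13b-GPTQ"  # placeholder repo id, not a published model
tokenizer = AutoTokenizer.from_pretrained(quantized_repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    quantized_repo,
    device="cuda:0",
    use_safetensors=True,
)
```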
Limitations & Biases
This model can produce factually incorrect output and should not be relied on to produce factually accurate information. It was trained on various public datasets; while great effort has been taken to clean the pretraining data, it is still possible for this model to generate lewd, biased, or otherwise offensive outputs.
Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
Citation
If you find wizardlm_alpaca_dolly_orca_open_llama_13b useful in your research or applications, please cite it using the following BibTeX:
```bibtex
@misc{orca_mini_13b,
  author = {Pankaj Mathur},
  title = {orca_mini_13b: An explain tuned OpenLLaMA-13b model on custom wizardlm, alpaca, & dolly datasets},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_13b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b}},
}
@misc{mukherjee2023orca,
  title = {Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author = {Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year = {2023},
  eprint = {2306.02707},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
@software{openlm2023openllama,
  author = {Xinyang Geng and Hao Liu},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = {May},
  year = {2023},
  url = {https://github.com/openlm-research/open_llama}
}
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
Open LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_13b).

| Metric | Value |
|---|---|
| Average | 37.06 |
| ARC (25-shot) | 42.06 |
| HellaSwag (10-shot) | 63.4 |
| MMLU (5-shot) | 35.43 |
| TruthfulQA (0-shot) | 43.1 |
| Winogrande (5-shot) | 64.17 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 11.23 |
Open LLM Leaderboard Evaluation Results (without DROP)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_13b). The average below differs from the table above because it is computed over six benchmarks, excluding DROP.

| Metric | Value |
|---|---|
| Average | 41.36 |
| AI2 Reasoning Challenge (25-Shot) | 42.06 |
| HellaSwag (10-Shot) | 63.40 |
| MMLU (5-Shot) | 35.43 |
| TruthfulQA (0-shot) | 43.10 |
| Winogrande (5-shot) | 64.17 |
| GSM8k (5-shot) | 0.00 |
📄 License
This model is released under the CC-BY-NC-SA 4.0 license.