# 🚀 orca_mini_13b
orca_mini_13b is a model fine-tuned from OpenLLaMA-13B on custom datasets. It adopts the dataset-construction approach from the Orca research paper, which helps the model learn better thought processes.
## ✨ Key Features
- Based on the OpenLLaMA-13B model and trained on explain-tuned datasets.
- Uses the instructions and inputs from the WizardLM, Alpaca, and Dolly-V2 datasets, applying the dataset-construction approach from the Orca research paper.
- Leverages the 15 system instructions from the Orca research paper to generate custom datasets that help the model learn the thought process of the teacher model (ChatGPT).
## 📦 Installation
The original documentation does not describe installation steps. (Presumably a standard `pip install torch transformers sentencepiece accelerate` setup suffices: the usage example below relies on `torch` and `transformers`' Llama classes, `LlamaTokenizer` needs `sentencepiece`, and `device_map='auto'` requires `accelerate`.)
## 💻 Usage Examples

### Basic Usage
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# generate text function
def generate_text(system, instruction, input=None):
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )
    # Strip the prompt tokens and decode only the newly generated text
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'

# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
print(generate_text(system, instruction))
```
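The same helper also supports the optional `input` field of the prompt template. A small illustrative call (the instruction and context text here are made up for demonstration, not taken from the original card):

```python
# Hypothetical example exercising the optional `input` branch of the prompt template
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Summarize the following text in one sentence.'
context = 'OpenLLaMA is an open reproduction of LLaMA trained on the RedPajama dataset.'
print(generate_text(system, instruction, input=context))
```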
Sample output for the letter-writing instruction above:

```
[!] Response:
Dear Sam Altman,

I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.

While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.

Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.

I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.

Thank you for your consideration.

Sincerely,

[Your Name]
```

### Advanced Usage
The original documentation does not include an advanced usage example.
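That said, a minimal sketch of streaming generation using `transformers`' `TextStreamer` might look like the following; it reuses the `model` and `tokenizer` loaded above and mirrors the sampling parameters of the basic example, but it is an illustration, not part of the original card:

```python
import torch
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting for the full output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
tokens = tokenizer(prompt, return_tensors='pt').input_ids.to('cuda')
with torch.no_grad():
    model.generate(
        input_ids=tokens,
        max_new_tokens=1024,
        do_sample=True,
        top_p=1.0,
        temperature=0.7,
        top_k=50,
        streamer=streamer,
    )
```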
## 📚 Documentation

### Dataset
We built explain-tuned versions of the WizardLM (~70K), Alpaca (~52K), and Dolly-V2 (~15K) datasets using the approaches from the Orca research paper. Unlike the vanilla instruction tuning used by the original datasets, we leveraged the 15 system instructions provided in the Orca research paper to generate custom datasets. This helps the student model (i.e., this model) learn the thought process of the teacher model, ChatGPT (gpt-3.5-turbo-0301 version).
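To make the construction concrete, here is a minimal sketch of how one explain-tuned record might be assembled. The specific system instruction and record fields are illustrative assumptions, not taken from the actual dataset, and `query_teacher` is a hypothetical helper:

```python
# Illustrative sketch: pair each source instruction with an Orca-style system
# instruction and collect the teacher's (ChatGPT's) explained answer.
ORCA_STYLE_SYSTEM = (
    "You are an AI assistant. Provide a detailed answer so the user "
    "does not need to search outside to understand the answer."
)

def build_explain_tuned_record(instruction, input_text, query_teacher):
    """Build one training record from a source (instruction, input) pair."""
    response = query_teacher(system=ORCA_STYLE_SYSTEM,
                             instruction=instruction,
                             input=input_text)
    return {
        'system': ORCA_STYLE_SYSTEM,
        'instruction': instruction,
        'input': input_text,
        'output': response,  # teacher's step-by-step, explained answer
    }
```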
### Training
The training configuration is shown in the table below:

| Parameter | Value |
|---|---|
| batch_size | 16 |
| train_micro_batch_size_per_gpu | 2 |
| gradient_accumulation_steps | 1 |
| Learning rate | 2e-5 |
| Max length | 1024 |
| Epochs | 3 |
| Optimizer | AdamW |
Training took around 15 hours on 8× A100 (80G) GPUs and cost about $180 on Lambda Labs. We used DeepSpeed with fully sharded data parallelism (i.e., [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/)), writing our own fine-tuning script and reusing some of the model-training code provided by the OpenAlpaca repo.
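For reference, a DeepSpeed configuration consistent with the table above might look like the following. This is a reconstruction under stated assumptions (fp16 mixed precision, matching the float16 inference dtype), not the authors' actual config file:

```python
# Hypothetical DeepSpeed config consistent with the reported hyperparameters:
# 8 GPUs x micro-batch 2 x grad-accum 1 = global batch size 16.
ds_config = {
    "train_batch_size": 16,
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 1,
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5},
    },
    "fp16": {"enabled": True},          # assumption: mixed-precision training
    "zero_optimization": {"stage": 3},  # ZeRO stage 3 (fully sharded)
}
```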
### Next Goals
- Try more data, such as actually using FLAN-v2 as in the Orca research paper (suggestions welcome).
- Offer more options for text-generation UIs (perhaps via https://github.com/oobabooga/text-generation-webui).
- Provide 4-bit GGML/GPTQ quantized models (perhaps TheBloke can help here).
### Limitations and Bias
This model may produce factually incorrect output and should not be relied on to generate factually accurate information. It was trained on various public datasets, and despite great effort to clean the pretraining data, it may still generate lewd, biased, or otherwise offensive outputs.
### Disclaimer
The license of this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
### Citation
If you find wizardlm_alpaca_dolly_orca_open_llama_13b useful in your research or applications, please cite it using the following BibTeX:
```bibtex
@misc{orca_mini_13b,
  author = {Pankaj Mathur},
  title = {orca_mini_13b: An explain tuned OpenLLaMA-13b model on custom wizardlm, alpaca, & dolly datasets},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_13b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_13b}},
}

@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@software{openlm2023openllama,
  author = {Xinyang Geng and Hao Liu},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = May,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}

@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}

@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Open LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_13b).

| Metric | Value |
|---|---|
| Average | 37.06 |
| ARC (25-shot) | 42.06 |
| HellaSwag (10-shot) | 63.4 |
| MMLU (5-shot) | 35.43 |
| TruthfulQA (0-shot) | 43.1 |
| Winogrande (5-shot) | 64.17 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 11.23 |
### Open LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_13b).

| Metric | Value |
|---|---|
| Average | 41.36 |
| AI2 Reasoning Challenge (25-Shot) | 42.06 |
| HellaSwag (10-Shot) | 63.40 |
| MMLU (5-Shot) | 35.43 |
| TruthfulQA (0-shot) | 43.10 |
| Winogrande (5-shot) | 64.17 |
| GSM8k (5-shot) | 0.00 |
## 🔧 Technical Details
The original documentation does not provide sufficient implementation details.
## 📄 License
This model is released under the CC-BY-NC-SA 4.0 license.



