🚀 aryashah00/survey-finetuned-TinyLlama-1.1B-Chat-v1.0
This model is a fine-tuned version optimized for generating synthetic survey responses across multiple domains. It was instruction-tuned on a custom dataset of survey responses, where each response reflects a specific persona.
🚀 Quick Start
This model is designed to generate synthetic survey responses from different personas. It performs best when given:
- A detailed persona description
- A specific survey question
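The two inputs above map onto a persona-bearing system turn and a survey-question user turn. A minimal sketch of that structure (the helper name is illustrative; the prompt wording follows the usage example in this card):

```python
def build_survey_messages(persona: str, question: str) -> list:
    """Build the chat messages this model expects: a system turn that sets
    the persona, then a user turn that carries the survey question."""
    system_prompt = f"You are embodying the following persona: {persona}"
    user_prompt = (
        f"Survey Question: {question}\n\n"
        "Please provide your honest and detailed response to this question."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_survey_messages(
    "A nurse who educates the child about modern medical treatments",
    "How often was your pain well controlled during this hospital stay?",
)
```

The returned list can be passed directly to `tokenizer.apply_chat_template` as shown in the usage example.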
✨ Key Features
📦 Installation
No installation steps are documented for this model; this section is omitted.
💻 Usage Examples
Basic Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "aryashah00/survey-finetuned-TinyLlama-1.1B-Chat-v1.0",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "aryashah00/survey-finetuned-TinyLlama-1.1B-Chat-v1.0",
    trust_remote_code=True,
)

persona = "A nurse who educates the child about modern medical treatments and encourages a balanced approach to healthcare"
question = "How often was your pain well controlled during this hospital stay?"

# Single braces so the f-strings actually interpolate the variables
system_prompt = f"You are embodying the following persona: {persona}"
user_prompt = f"Survey Question: {question}\n\nPlease provide your honest and detailed response to this question."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=256,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
    )

# Decode only the newly generated tokens; searching the decoded output for the
# prompt text is unreliable once special tokens have been stripped.
new_tokens = output_ids[0][input_ids.shape[-1]:]
generated_response = tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
print(f"Generated response: {generated_response}")
```
Advanced Usage
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/aryashah00/survey-finetuned-TinyLlama-1.1B-Chat-v1.0"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()  # surface HTTP errors instead of parsing an error body
    return response.json()

messages = [
    {"role": "system", "content": "You are embodying the following persona: A nurse who educates the child about modern medical treatments and encourages a balanced approach to healthcare"},
    {"role": "user", "content": "Survey Question: How often was your pain well controlled during this hospital stay?\n\nPlease provide your honest and detailed response to this question."},
]

# Note: the plain text-generation task of the Inference API expects `inputs` to
# be a string; if a list of messages is rejected, apply the chat template
# locally first and send the resulting string instead.
output = query({"inputs": messages})
print(output)
```
📚 Documentation
Model Information
Training Details
Limitations
- The model is optimized for survey-response generation and may underperform on other tasks.
- Response quality depends on how clear and specific the persona and question are.
- The model may occasionally generate responses that do not fully match the given persona.
🔧 Technical Details
This model is fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0. Instruction tuning on a custom survey-response dataset lets it generate survey responses conditioned on a given persona. Training used parameter-efficient fine-tuning with LoRA (r=16, alpha=32, dropout=0.05), a batch size of 8, a learning rate of 0.0002, and 5 epochs, so that the model could learn the dataset's characteristics and produce high-quality responses.
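Assuming the run used standard peft/transformers argument names (an assumption; the card does not show the training script), the stated hyperparameters map roughly onto the following configuration sketch:

```python
# LoRA settings stated in this card (peft-style key names are assumed)
lora_config = {
    "r": 16,           # LoRA rank
    "lora_alpha": 32,  # scaling numerator
    "lora_dropout": 0.05,
}

# Training settings stated in this card (TrainingArguments-style names are assumed)
training_config = {
    "per_device_train_batch_size": 8,
    "learning_rate": 2e-4,
    "num_train_epochs": 5,
}

# Effective LoRA scaling applied to the adapter update is alpha / r
scaling = lora_config["lora_alpha"] / lora_config["r"]  # 2.0
```

With alpha twice the rank, the adapter update is scaled by a factor of 2, a common default in LoRA recipes.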
📄 License
This model is released under the license of its base model, TinyLlama/TinyLlama-1.1B-Chat-v1.0.