# 🚀 Fine-tuned Chatbot Model

The fine-tuned chatbot model is built on t5-small and optimized for customer-support scenarios. It handles customer inquiries efficiently, providing accurate and timely replies that improve the customer-service experience.
## 🚀 Quick Start

### Model Information
| Attribute | Details |
|-----------|---------|
| Supported languages | English, French |
| Maximum generation length | 512 |
| Tags | text-generation-inference, customer support |
| License | apache-2.0 |
### Model Overview

The fine-tuned chatbot model is built on t5-small and customized for customer-support use cases. The training data used for fine-tuning comes from the Bitext Customer Support LLM Chatbot Training Dataset.

### Model Details
## 💻 Usage Examples

### Basic Usage
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a text2text-generation pipeline
pipe = pipeline("text2text-generation", model="mrSoul7766/CUSsupport-chat-t5-small")

# Queries are wrapped with the "answer:" prefix used during fine-tuning
user_query = "How could I track the compensation?"
answer = pipe(f"answer: {user_query}", max_length=512)
print(answer[0]['generated_text'])
```

Example output:

```
I'm on it! I'm here to assist you in tracking the compensation. To track the compensation, you can visit our website and navigate to the "Refunds" or "Refunds" section. There, you will find detailed information about the compensation you are entitled to. If you have any other questions or need further assistance, please don't hesitate to let me know. I'm here to help!
```
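Because the checkpoint was fine-tuned with an `answer:` task prefix, keeping the prompt format consistent matters when serving many queries. The helper below is a hypothetical sketch (the function names are illustrative, not part of the model card) of wrapping the prefix and passing a batch of queries to the pipeline:

```python
def build_prompt(user_query: str) -> str:
    # The checkpoint was fine-tuned with an "answer:" task prefix,
    # so every query should be wrapped the same way at inference time.
    return f"answer: {user_query}"

def answer_queries(queries, model_id="mrSoul7766/CUSsupport-chat-t5-small"):
    # Imported lazily so the prompt helper stays usable
    # even if transformers is not installed.
    from transformers import pipeline

    pipe = pipeline("text2text-generation", model=model_id)
    # Pipelines accept a list of inputs and process them in one call.
    results = pipe([build_prompt(q) for q in queries], max_length=512)
    return [r["generated_text"] for r in results]
```

Passing a list to the pipeline avoids reloading the model per query, which dominates latency for a small checkpoint like t5-small.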
### Advanced Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and seq2seq model directly for finer control
tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")

max_length = 512

# Encode the query, generate a reply, and decode it back to text
input_ids = tokenizer.encode("I am waiting for a refund of $2?", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```

Example output:

```
I'm on it! I completely understand your anticipation for a refund of $2. Rest assured, I'm here to assist you every step of the way. To get started, could you please provide me with more details about the specific situation? This will enable me to provide you with the most accurate and up-to-date information regarding your refund. Your satisfaction is our top priority, and we appreciate your patience as we work towards resolving this matter promptly.
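With `max_length` alone, `generate()` uses greedy decoding. Beam search and repetition penalties often produce more fluent support replies; the settings below are illustrative assumptions, not values from the model card, and `generate_reply` is a hypothetical wrapper:

```python
# Illustrative decoding settings; the values are assumptions, not from the model card.
GENERATION_KWARGS = {
    "max_length": 512,          # matches the model's maximum generation length
    "num_beams": 4,             # beam search for more fluent replies
    "no_repeat_ngram_size": 3,  # discourage repeated phrases in long answers
    "early_stopping": True,     # stop beams once they emit end-of-sequence
}

def generate_reply(model, tokenizer, user_query: str) -> str:
    # Truncate overlong queries so they still fit the encoder.
    input_ids = tokenizer.encode(
        user_query, return_tensors="pt", truncation=True, max_length=512
    )
    output_ids = model.generate(input_ids, **GENERATION_KWARGS)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

All of these keyword arguments are standard parameters of `transformers`' `generate()` API and can be tuned per deployment.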
## 📄 License

This project is released under the apache-2.0 license.