Flow Judge v0.1
Flow Judge v0.1 is a lightweight yet powerful 3.8B-parameter model for customizable evaluation of large language model (LLM) systems across multiple domains.
Downloads: 6,094
Released: 9/15/2024
Model Overview
Flow Judge v0.1 is a lightweight evaluation model built on the Phi-3.5-mini instruct architecture, focused on customizable performance evaluation of LLM systems.
Key Features
Customizable evaluation
Users can define their own evaluation criteria and scoring rubrics, tailoring Flow Judge to specific needs for precise evaluation of LLM system performance.
Multiple scoring scales
Supports three scoring scales: binary pass/fail, 3-point Likert, and 5-point Likert, covering evaluation needs at different levels of granularity.
Structured evaluation results
Produces structured evaluations with <feedback> and <score> tags, combining qualitative feedback with a numeric score.
Lightweight yet high-performing
Despite its small size, the model performs on par with much larger models on held-out test sets and out-of-domain benchmarks.
Capabilities
LLM system evaluation
Customized scoring
Structured feedback generation
Multi-scale scoring
Use Cases
Customer service
Evaluating complaint handling
Assess the quality of an AI system's replies to customer complaint emails
Provide detailed feedback and scores that highlight strengths and weaknesses of the reply
Content generation
Evaluating generated content quality
Assess the accuracy, relevance, and fluency of AI-generated content
Provide structured scores and feedback based on custom criteria
🚀 Flow Judge v0.1
Flow Judge v0.1 is a lightweight yet powerful 3.8B-parameter model for customizable evaluation of large language model (LLM) systems across multiple domains. It inherits the architecture of the Phi-3.5-mini instruct model, delivering high-quality evaluation results while keeping a small footprint. Despite its size, it performs on par with much larger models on held-out test sets and out-of-domain benchmarks.
🚀 Flow Judge | 📄 Technical Report | 💻 flow-judge
✨ Key Features
- Customizable evaluation: users can define their own evaluation criteria and scoring rubrics, tailoring Flow Judge to specific needs for precise evaluation of LLM system performance.
- Multiple scoring scales: supports three scoring scales: binary pass/fail, 3-point Likert, and 5-point Likert, covering evaluation needs at different levels of granularity.
- Easily interpretable results: produces structured evaluations with <feedback> and <score> tags, as illustrated below. The qualitative feedback explains the reasoning behind the score and points out problematic parts of the response; the score is a numeric value on the binary, 3-Likert, or 5-Likert scale defined by the rubric.
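For illustration only (a hypothetical transcript, not output reproduced from the model), an evaluation on a 5-point Likert rubric might look like:

```
<feedback>
The response covers the main points requested in the task and explains them clearly, but it leaves one of the stated requirements unaddressed, which prevents a top score under the rubric.
</feedback>
<score>4</score>
```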
📦 Installation
The original documentation provides no installation steps, so this section is omitted.
💻 Usage Examples
Basic Usage
Prompt template with inputs
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# INPUT
Below are the inputs required for performing the task:
<inputs>
{INPUTS}
</inputs>
# OUTPUT
Below is the output of the task:
<output>
{OUTPUT}
</output>
# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>
<scoring_rubric>
{RUBRIC}
</scoring_rubric>
# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.
## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
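As a minimal sketch of how the template above can be filled from Python (this is not the official flow-judge library; the stand-in `PROMPT_TEMPLATE` and all values shown are illustrative assumptions), the placeholders map directly onto `str.format`:

```python
# Minimal sketch: fill a Flow Judge prompt template with task data.
# For brevity this stand-in only contains the placeholder sections; in
# practice PROMPT_TEMPLATE would hold the full template text shown above.
PROMPT_TEMPLATE = """# INPUT
<inputs>
{INPUTS}
</inputs>
# OUTPUT
<output>
{OUTPUT}
</output>
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>
<scoring_rubric>
{RUBRIC}
</scoring_rubric>"""

prompt = PROMPT_TEMPLATE.format(
    INPUTS="<customer_query>Subject: Issues with recent order #12345 ...</customer_query>",
    OUTPUT="Dear Jane Smith, Thank you for reaching out ...",
    EVALUATION_CRITERIA=(
        "How well the response addresses the specific issues raised "
        "in the customer's query?"
    ),
    RUBRIC=(
        "- Score 1: The response completely fails to address the customer's needs.\n"
        "- Score 5: The response fully addresses all issues raised."
    ),
)
print(prompt)
```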
Prompt template without inputs
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided the output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# OUTPUT
Below is the output of the task:
<output>
{OUTPUT}
</output>
# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>
<scoring_rubric>
{RUBRIC}
</scoring_rubric>
# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the output: Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.
## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
Formatted prompt example with inputs
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# INPUT
Below are the inputs required for performing the task:
<inputs>
<customer_query>
Subject: Issues with recent order #12345
Hello,
I recently received my order #12345, but I'm having a few problems:
1. The blue t-shirt I ordered is actually green. Can I exchange it?
2. The phone case doesn't fit my iPhone 13 Pro Max. The website said it was compatible.
3. I was charged for express shipping, but it took over a week to arrive. Can I get a refund on the shipping cost?
Please let me know how we can resolve these issues. I've been a loyal customer for years and this is the first time I've had problems like this.
Thank you,
Jane Smith
</customer_query>
</inputs>
# OUTPUT
Below is the output of the task:
<output>
Dear Jane Smith,
Thank you for reaching out about the issues with your recent order #12345. We sincerely apologize for the inconvenience you've experienced. Let me address each of your concerns:
1. T-shirt color discrepancy: We're sorry about the mix-up with the t-shirt color. We'd be happy to exchange it for the correct blue t-shirt. I'll send you a prepaid return label for the green shirt, and we'll ship out the blue one as soon as we receive the return.
2. Phone case compatibility: We apologize for the confusion regarding the phone case compatibility. Our website information should have been more accurate. We'll gladly accept a return of the incompatible case and issue a full refund for it. Additionally, I can recommend a case that will fit your iPhone 13 Pro Max if you're interested.
3. Shipping charge: You're absolutely right about the shipping charge. Since the delivery didn't meet our express shipping standards, we will refund the entire shipping cost to your original payment method. This refund should process within 3-5 business days.
To initiate these resolutions, please reply to this email confirming that you'd like to proceed with the t-shirt exchange and phone case return. Once confirmed, I'll send you the necessary return labels and process the shipping refund immediately.
We truly value your loyalty as a long-time customer and sincerely regret that you've encountered these issues. Rest assured, we're taking steps to prevent similar problems in the future. As a gesture of goodwill, we'd like to offer you a 20% discount on your next order.
If you have any further questions or concerns, please don't hesitate to reach out. We're here to ensure your complete satisfaction.
Best regards,
Alex Johnson
Customer Service Representative
</output>
# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
How well the response addresses the specific issues raised in the customer's query?
</evaluation_criteria>
<scoring_rubric>
- Score 1: The response completely fails to address the customer's needs and ignores the specific issues raised.
- Score 2: The response barely addresses the customer's query and misses most of the specific issues raised.
- Score 3: The response partially addresses the customer's query, touching on some of the specific issues but leaving others unaddressed.
- Score 4: The response adequately addresses most aspects of the customer's query and the specific issues raised.
- Score 5: The response fully and comprehensively addresses all aspects of the customer's query and all specific issues raised in a highly satisfactory manner.
</scoring_rubric>
# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion, decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.
## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
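The card stops at the formatted prompt; the judge's reply for this example is not shown. Purely as a hypothetical illustration of the expected output format (not actual model output), a response could look like:

```
<feedback>
The response directly addresses all three issues raised in the query: an exchange for the mis-colored t-shirt, a return and refund for the incompatible phone case, and a full refund of the express shipping charge. It also provides clear next steps and a goodwill gesture, fully satisfying the customer's query.
</feedback>
<score>5</score>
```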
📚 Detailed Documentation
Intended Use Cases
Flow Judge is intended for customized LLM system evaluation tasks:
- Customizable evaluation: users can define their own evaluation criteria and scoring rubrics, making evaluations targeted and an accurate measure of LLM system performance.
- Multiple scoring scales:
  - Binary pass/fail: suited to binary judgments, such as whether a text meets a given standard or contains errors.
  - 3-point Likert: allows more nuanced evaluation, with scores ranging from negative through neutral to positive, useful for judging the overall quality or sentiment of a text.
  - 5-point Likert: provides still finer granularity, from strongly negative to strongly positive, capturing subtle differences in quality or sentiment.
- Easily interpretable results: Flow Judge produces structured evaluations with <feedback> and <score> tags. The qualitative feedback explains the reasoning behind the score and points out problematic parts of the response; the score is a numeric value on the binary, 3-Likert, or 5-Likert scale defined by the rubric. A parsing sketch follows this list.
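Because the output format is fixed, the two tags can be extracted with a simple regular expression. A minimal sketch (not part of any official library; the function name is hypothetical):

```python
import re

def parse_evaluation(text: str) -> tuple[str, int]:
    """Extract the feedback text and numeric score from a Flow Judge response."""
    feedback_match = re.search(r"<feedback>\s*(.*?)\s*</feedback>", text, re.DOTALL)
    score_match = re.search(r"<score>\s*(\d+)\s*</score>", text)
    if not feedback_match or not score_match:
        raise ValueError("Response does not contain well-formed <feedback>/<score> tags")
    return feedback_match.group(1), int(score_match.group(1))

feedback, score = parse_evaluation(
    "<feedback>The response addresses all issues raised.</feedback>\n<score>5</score>"
)
print(score)  # 5
```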
Quantized Weights
Quick Start
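The quick-start content is missing from this page. As a hedged sketch only (the Hugging Face model id `flowaicom/Flow-Judge-v0.1` and the generation settings are assumptions not confirmed by this document), inference via transformers might look like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model id; verify against the official model page.
model_id = "flowaicom/Flow-Judge-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card states the weights are bfloat16
    device_map="auto",
)

# `prompt` should be a fully formatted evaluation prompt built from one of
# the templates shown earlier in this card.
prompt = "..."
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)  # expected to contain <feedback>...</feedback> and <score>...</score>
```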
🔧 Technical Details
Model
Flow Judge is based on the Phi-3.5-mini architecture, specifically starting from the base checkpoint of its instruct version. It uses the same tokenizer and supports multi-query attention (MQA) and Flash Attention 2, with weights stored in bfloat16 precision. However, the fine-tuned model's support for additional languages and long context lengths has not been fully tested. Because of its specialized supervised fine-tuning (SFT), Flow Judge may show different benchmark results, and its maximum context length is 8192 tokens, shorter than the base model's.
Training Dataset
Flow-Judge-v0.1 was trained on a synthetically generated dataset. The training data was constructed through a multi-step process:
- Manually curating seed scoring rubrics as a foundation.
- Synthetically generating domain-adapted metrics and scoring rubrics for a range of domains.
- Synthetically generating training instances with multiple inputs, such as user queries and contextual information.
- Applying a dual-evaluation strategy with consensus to ensure quality and consistency.
This process produced a comprehensive and diverse set of training instances that enables accurate, domain-specific evaluation of LLM systems in generative AI products while reducing the need for human involvement. More information on dataset construction is available here.
Fine-tuning
Fine-tuning used Axolotl's preprocessing to ensure consistent formatting of the input training data, followed by supervised fine-tuning on top of microsoft/Phi-3.5-mini-instruct using the RSLoRA method; a rough configuration sketch follows. More details on the fine-tuning process are available in the technical report.
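The report's exact training configuration is not reproduced here. As a rough sketch of what an RSLoRA setup looks like with the peft library (the hyperparameters and target module names are illustrative assumptions, not the authors' reported values):

```python
from peft import LoraConfig

# Rank-stabilized LoRA (RSLoRA) scales adapter updates by alpha/sqrt(r)
# instead of alpha/r; in peft this is enabled with use_rslora=True.
lora_config = LoraConfig(
    r=16,                                   # illustrative rank, not the reported value
    lora_alpha=32,                          # illustrative scaling factor
    use_rslora=True,                        # the rank-stabilized variant named in the card
    target_modules=["qkv_proj", "o_proj"],  # assumed Phi-3.5 projection names
    task_type="CAUSAL_LM",
)
```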
📄 License
This project is licensed under Apache 2.0; see LICENSE for details.