# 🚀 Model Card for Coherence Testing Model
This model is a fine-tuned version of `sentence-transformers/all-mpnet-base-v2`, specifically designed for coherence testing in dialogues. It uses the cross-encoder architecture to evaluate the relevance and coherence of responses to prompts or questions, and can be used to enhance chatbots and dialogue systems.
## 🚀 Quick Start

You can use the model as follows:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('enochlev/coherence-all-mpnet-base-v2')
output = model.predict([
    ["What is your favorite color?", "Blue!"],
    ["Do you like playing outside?", "I like ice cream."],
    ["What is your favorite animal?", "I like dogs!"],
    ["Do you want to go to the park?", "Yes, I want to go on the swings!"],
    ["What is your favorite food?", "I like playing with blocks."],
    ["Do you have a pet?", "Yes, I have a cat named Whiskers."],
    ["What is your favorite thing to do on a sunny day?", "I like playing soccer with my friends."],
])
print(output)
```
The output is an array of coherence scores, one per prompt–response pair; higher scores indicate greater coherence.
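These scores can be used directly as a filter in a dialogue pipeline. A minimal sketch, keeping only responses whose score clears a cutoff (the pairs and scores below are the examples from this card; the 0.5 threshold and the `filter_coherent` helper are illustrative assumptions, not part of the model's API):

```python
# Hypothetical helper: keep only (prompt, response) pairs whose coherence
# score clears a cutoff. The 0.5 threshold is an assumption to tune per use case.
def filter_coherent(pairs, scores, threshold=0.5):
    """Return the pairs whose coherence score is at least `threshold`."""
    return [pair for pair, score in zip(pairs, scores) if score >= threshold]

pairs = [
    ["What is your favorite color?", "Blue!"],
    ["Do you like playing outside?", "I like ice cream."],
    ["What is your favorite animal?", "I like dogs!"],
]
scores = [0.88097143, 0.04521223, 0.943173]  # example scores from this card
print(filter_coherent(pairs, scores))  # keeps the first and third pair
```

In practice you would pass the scores returned by `model.predict` straight into such a filter before surfacing a candidate response.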
## ✨ Features

- **Dialogue Coherence Evaluation**: Specifically designed to evaluate the coherence of responses in dialogues.
- **Cross-Encoder Architecture**: Leverages the cross-encoder architecture from the `sentence-transformers` library.
- **Fine-Tuned Model**: Fine-tuned from `sentence-transformers/all-mpnet-base-v2` for better performance in dialogue coherence testing.
## 📚 Documentation

### Model Details

#### Model Description

This model is a fine-tuned version of `sentence-transformers/all-mpnet-base-v2`, designed specifically for coherence testing in dialogues. Leveraging the cross-encoder architecture from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library, it evaluates the relevance and coherence of a response given a prompt or question.
| Property | Details |
|----------|---------|
| Developed by | Enoch Levandovsky |
| Model Type | Cross-encoder |
| Language(s) | English |
| License | Check the repository for more information |
| Finetuned from model | `sentence-transformers/all-mpnet-base-v2` |
#### Model Sources

- **Repository**: [Model on Hugging Face](https://huggingface.co/enochlev/coherence-all-mpnet-base-v2)
- **Space Demo**: [Coherence Testing Space](https://huggingface.co/spaces/enochlev/coherence-all-mpnet-base-v2-space)
### Uses

#### Direct Use
This model is designed to evaluate the coherence of a response to a given question or prompt. It can be directly used to enhance chatbots or dialogue systems by predicting how coherent or relevant a response is, thus improving the quality of conversational agents.
#### Downstream Use

This model can be fine-tuned further for specific dialogue systems, or used as a component in larger conversational AI frameworks to ensure responses are meaningful and contextually appropriate.
#### Out-of-Scope Use
This model is not intended for applications requiring complex sentiment analysis, emotional tone recognition, or tasks outside dialogue coherence assessment.
### Results

Coherent or relevant responses receive scores closer to 1. For the Quick Start example above:

```
array([0.88097143, 0.04521223, 0.943173  , 0.9436357 , 0.04369843,
       0.94450355, 0.8392763 ], dtype=float32)
```
### Evaluation & Limitations

#### Testing Data, Factors & Metrics

The model was fine-tuned and evaluated on the CHILDES dataset to ensure it captures conversational coherence effectively.
#### Recommendations

⚠️ **Important Note**: The model may not fully capture nuanced conversational elements such as sarcasm or humor.
### Environmental Impact

Carbon emissions can be estimated with the Machine Learning Impact calculator. Details specific to training this model are not available, but consider general best practices to minimize environmental impact.
### Citation

To cite this model, please credit the Hugging Face repository page and the model creator, Enoch Levandovsky.