# 🚀 FastChat-T5 Model Card

FastChat-T5 is an open-source chatbot model fine-tuned from Flan-T5-XL on user-shared conversations. It is intended for both commercial use and research in language processing.
## 🚀 Quick Start

For more information about using the FastChat-T5 model, please refer to the official repository: [FastChat](https://github.com/lm-sys/FastChat#FastChat-T5).
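As a starting point, here is a minimal sketch of loading the model with Hugging Face Transformers. It assumes the published checkpoint id `lmsys/fastchat-t5-3b-v1.0`; adjust the id if your copy of the weights lives elsewhere.

```python
# Minimal sketch: load FastChat-T5 via Hugging Face Transformers.
# Assumes the checkpoint id "lmsys/fastchat-t5-3b-v1.0".
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "What are the key advantages of an encoder-decoder architecture?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```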
## ✨ Features

- Open-source: Based on the Flan-T5-XL architecture and trained on user-shared conversations from ShareGPT.
- Encoder-decoder architecture: Autoregressively generates responses to user inputs.
- Versatile use cases: Suitable for commercial applications of large language models and chatbots, as well as for research purposes.
## 📦 Installation

No specific installation steps are provided in the original document.
## 📚 Documentation

### Model Details
| Property | Details |
|----------|---------|
| Model Type | FastChat-T5 is an open-source chatbot trained by fine-tuning Flan-T5-XL (3B parameters) on user-shared conversations collected from ShareGPT. It is based on an encoder-decoder transformer architecture and can autoregressively generate responses to users' inputs. |
| Model Date | FastChat-T5 was trained in April 2023. |
| Organizations Developing the Model | The FastChat developers, primarily Dacheng Li, Lianmin Zheng, and Hao Zhang. |
| Paper or Resources for More Information | [FastChat Repository](https://github.com/lm-sys/FastChat#FastChat-T5) |
| License | Apache License 2.0 |
| Where to Send Questions or Comments about the Model | [FastChat Issues](https://github.com/lm-sys/FastChat/issues) |
### Intended Use

- Primary Intended Uses: The primary use of FastChat-T5 is commercial deployment of large language models and chatbots. It can also be used for research purposes.
- Primary Intended Users: Entrepreneurs and researchers in natural language processing, machine learning, and artificial intelligence.
### Training Dataset

70K conversations collected from ShareGPT.com.
### Training Details
The ShareGPT data is processed into question-answer form: each ChatGPT response is treated as an answer, and the preceding conversation between the user and ChatGPT is treated as the question, as sketched below.
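The following is an illustrative sketch of that question-answer framing; the actual FastChat preprocessing script may differ in its exact formatting, and the `role`/`text` field names here are hypothetical.

```python
# Illustrative sketch: flatten a multi-turn conversation into (question, answer)
# training pairs, as described above. Not the exact FastChat preprocessing code.
def to_qa_pairs(conversation):
    """Each assistant reply becomes an answer; all prior turns form the question."""
    pairs, history = [], []
    for turn in conversation:
        if turn["role"] == "assistant":
            question = "\n".join(f'{t["role"]}: {t["text"]}' for t in history)
            pairs.append((question, turn["text"]))
        history.append(turn)
    return pairs

example = [
    {"role": "user", "text": "What is Flan-T5?"},
    {"role": "assistant", "text": "An instruction-tuned T5 model."},
    {"role": "user", "text": "Who released it?"},
    {"role": "assistant", "text": "Google."},
]
for q, a in to_qa_pairs(example):
    print(q, "->", a)
```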
The encoder bidirectionally encodes a question into a hidden representation, and the decoder attends to that representation via cross-attention while generating an answer unidirectionally from a start token.
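To make the encode-once, decode-step-by-step flow concrete, here is a greedy decoding sketch using the same assumed checkpoint id as in Quick Start. In practice `model.generate` handles this loop for you; this is purely illustrative.

```python
# Sketch of encoder-decoder generation: encode the question once (bidirectional),
# then decode one token at a time from the decoder start token (unidirectional).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "Name one use case for chatbots."
enc = tokenizer(question, return_tensors="pt")

with torch.no_grad():
    encoder_outputs = model.get_encoder()(**enc)  # bidirectional encoding, done once

    decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
    for _ in range(32):  # greedy, token-by-token decoding
        logits = model(
            encoder_outputs=encoder_outputs,
            attention_mask=enc["attention_mask"],
            decoder_input_ids=decoder_ids,
        ).logits
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == model.config.eos_token_id:
            break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```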
The model is fine-tuned for 3 epochs with a maximum learning rate of 2e-5, a warmup ratio of 0.03, and a cosine learning rate schedule.
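For orientation, those stated hyperparameters map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows; the batch size and output path are assumptions, since the card does not specify them.

```python
# Hedged sketch of the stated fine-tuning hyperparameters; not the original
# FastChat training configuration.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fastchat-t5-finetune",  # hypothetical output path
    num_train_epochs=3,                 # from the card: 3 epochs
    learning_rate=2e-5,                 # from the card: max learning rate 2e-5
    warmup_ratio=0.03,                  # from the card: warmup ratio 0.03
    lr_scheduler_type="cosine",         # from the card: cosine schedule
    per_device_train_batch_size=4,      # assumption: not specified in the card
)
```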
### Evaluation Dataset

A preliminary evaluation of model quality was conducted by creating a set of 80 diverse questions and using GPT-4 to judge the model outputs. See Vicuna for more details.
## 📄 License

This model is released under the Apache License 2.0.