
Tulu 65B

Developed by allenai
Tulu 65B is a 65-billion-parameter LLaMA model fine-tuned on a mixture of instruction datasets. It is an outcome of open-resource instruction-tuning research and delivers strong, well-rounded performance.
Downloads: 20
Release Time: 6/7/2023

Model Overview

This model is fine-tuned on multiple instruction datasets, including FLAN V2, CoT, and Dolly, making it suitable for a wide range of natural language processing tasks, with particular strength in instruction following.
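For reference, the sketch below shows one way to load the model with Hugging Face transformers. It assumes the checkpoint is published under the id allenai/tulu-65b and that sufficient accelerator memory is available; a 65B model in half precision needs well over 100 GB, so the weights are sharded across devices.

```python
# Minimal loading sketch, assuming the checkpoint id "allenai/tulu-65b".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-65b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # shard the 65B weights across available GPUs
)
```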

Model Features

Multi-instruction dataset fine-tuning
Trained on seven high-quality instruction datasets, including FLAN V2, CoT, and Dolly.
Strict input format requirements
Requires a specific dialogue format (<|user|>/<|assistant|> tags) for the best generation results; see the sketch after this list.
Outstanding comprehensive performance
Performs strongly on multiple benchmarks, including MMLU, GSM, and BBH.
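A minimal sketch of the expected prompt layout follows. The <|user|>/<|assistant|> tags come from the model card; the helper name build_prompt is illustrative, not part of any official API.

```python
# Build a single-turn prompt in the dialogue format Tulu was trained on.
def build_prompt(user_message: str) -> str:
    # The trailing "<|assistant|>\n" cues the model to generate its reply.
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

print(build_prompt("What is the capital of France?"))
# <|user|>
# What is the capital of France?
# <|assistant|>
```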

Model Capabilities

Instruction understanding and execution
Multi-turn dialogue generation
Complex problem solving
Code generation and explanation
Knowledge reasoning

Use Cases

Intelligent assistant
Task-oriented dialogue systems: handles complex multi-turn instruction dialogues (a multi-turn prompt sketch follows this list) and outperforms the Davinci-003 model in the AlpacaFarm evaluation.
Educational research
Open-domain Q&A: answers a wide range of knowledge-based questions and achieves 61.1% 5-shot accuracy on the MMLU benchmark.
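For multi-turn use, one plausible approach is to alternate the same tags across turns, as sketched below. This assumes the single-turn tag convention extends to conversations; the helper name and example messages are illustrative.

```python
# Assemble a multi-turn prompt by alternating <|user|>/<|assistant|> tags.
# Assumption: the single-turn tag convention carries over to multiple turns.
def build_multiturn_prompt(history: list[tuple[str, str]], next_user_message: str) -> str:
    parts = []
    for user_msg, assistant_msg in history:
        parts.append(f"<|user|>\n{user_msg}\n<|assistant|>\n{assistant_msg}\n")
    # End with an open assistant tag so the model produces the next reply.
    parts.append(f"<|user|>\n{next_user_message}\n<|assistant|>\n")
    return "".join(parts)

history = [("What is 12 * 9?", "12 * 9 = 108.")]
print(build_multiturn_prompt(history, "Now divide that by 4."))
```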