# Tessa-T1: React-Focused Reasoning Model
Tessa-T1 is a Transformer-based reasoning model specialized for React. Fine-tuned from the Qwen2.5-Coder-3B-Instruct base model, it autonomously generates well-structured, semantic React components, making it a practical tool for web interface development and frontend code intelligence.
## Quick Start
Use the following example to get started with Tessa-T1:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/Tessa-T1"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# ChatML-style prompt used by the Qwen2.5 base; the trailing <|im_start|>think
# tag prompts the model to produce its reasoning before the component code.
prompt = """<|im_start|>user
Create a React component for a user profile card.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=1500, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Features
- React-specific Reasoning: Accurately generates functional and semantic React components.
- Agent Integration: Seamlessly fits into AI-driven coding agents and autonomous frontend systems (a rough integration sketch follows this list).
- Context-Aware Generation: Effectively understands and utilizes UI context to provide relevant code solutions.
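As a rough illustration of the agent-integration point above, the model can be wrapped in a small helper that an agent calls with a component description. The helper name and prompt wrapper below are illustrative assumptions, not part of the model's API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "smirki/Tessa-T1"  # model id from the Quick Start example

_tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
_model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).to(
    "cuda" if torch.cuda.is_available() else "cpu"
)

def generate_component(description: str, max_new_tokens: int = 1500) -> str:
    """Hypothetical helper an agent could call to get React code for `description`."""
    prompt = (
        "<|im_start|>user\n"
        f"{description}<|im_end|>\n"
        "<|im_start|>assistant\n"
        "<|im_start|>think\n"
    )
    inputs = _tokenizer(prompt, return_tensors="pt").to(_model.device)
    outputs = _model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Return only the newly generated tokens (reasoning followed by component code).
    return _tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# An agent loop could then call, for example:
# jsx = generate_component("Create a React component for a searchable product list.")
```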
## Usage Examples
### Basic Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/Tessa-T1"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

prompt = """<|im_start|>user
Create a React component for a user profile card.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=1500, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
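For interactive use, generation can also be streamed token by token. A minimal sketch using the Transformers `TextStreamer`, continuing from the variables defined above:

```python
from transformers import TextStreamer

# Streams decoded text to stdout as it is generated, skipping the prompt tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=1500, do_sample=True, temperature=0.7)
```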
## Documentation
### Use Cases
#### Recommended Uses
- Automatic Component Generation: Quickly produce React components from textual prompts.
- Agent-based Web Development: Integrate into automated coding systems for faster frontend workflows.
- Frontend Refactoring: Automate the optimization and semantic enhancement of existing React code (a prompt sketch follows this list).
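As a rough sketch of the refactoring use case above, existing component source can be embedded in the prompt. The instruction wording and sample component below are illustrative assumptions, not a documented prompt format:

```python
existing_component = """
function ProfileCard(props) {
  return <div><div>{props.name}</div><div>{props.bio}</div></div>;
}
"""

refactor_prompt = (
    "<|im_start|>user\n"
    "Refactor the following React component to use semantic HTML and a cleaner structure:\n"
    f"{existing_component}<|im_end|>\n"
    "<|im_start|>assistant\n"
    "<|im_start|>think\n"
)
# `refactor_prompt` can then be tokenized and passed to model.generate()
# exactly as in the Quick Start example.
```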
#### Limitations
- Focused on React: Of limited use outside the React ecosystem.
- Complex State Management: May require manual adjustments for highly dynamic state management scenarios.
### Performance and Evaluation
- Strengths:
  - Strong semantic React component generation.
  - Excellent integration capabilities with agent-based systems.
- Weaknesses:
  - Complex JavaScript logic may require manual post-processing.
## Technical Details
| Property | Details |
|---|---|
| Architecture | Transformer-based LLM |
| Base Model | Qwen2.5-Coder-3B-Instruct |
| Precision | bf16 mixed precision, quantized to q8 |
| Hardware Requirements | 12 GB VRAM recommended |
| Software Dependencies | Hugging Face Transformers, PyTorch |
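Given the bf16 precision and roughly 12 GB VRAM recommendation above, the model can be loaded in bfloat16, or with 8-bit quantization to reduce memory further. A minimal sketch, assuming a CUDA GPU and that the `accelerate` and `bitsandbytes` packages are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "smirki/Tessa-T1"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Option 1: load in bfloat16 (matches the bf16 precision noted in the table above).
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Option 2: load with 8-bit quantization (requires `pip install bitsandbytes`).
# model = AutoModelForCausalLM.from_pretrained(
#     model_name,
#     quantization_config=BitsAndBytesConfig(load_in_8bit=True),
#     device_map="auto",
# )
```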
## License
This project is licensed under the Apache 2.0 license.
## Citation
```bibtex
@misc{smirki_Tessa-T1,
  title={Tessa-T1: React-Focused Reasoning Model for Component Generation},
  author={tesslate},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/tesslate/Tessa-T1}
}
```
đĨ Contact & Community
- Creator: smirki
- Repository & Demo: Coming soon!
Sponsored by vichar ai (Hugging Face | Website).

See the example images on the model page demonstrating the reasoning and component creation capabilities of Tessa-T1:

- Landing Page
- AI upload
- Virtual Machine Console
- Playlist Management
- Prompt: "add in a calendar"