# Low-resource Efficiency

## Phi 4 Reasoning Plus

- Author: unsloth · License: MIT · Downloads: 189 · Likes: 2
- Tags: Large Language Model, Transformers, Multilingual

Phi-4 Reasoning Plus is a 14-billion-parameter open-source reasoning model from Microsoft Research, post-trained with supervised fine-tuning and reinforcement learning to strengthen reasoning in mathematics, science, and programming.
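A minimal chat-style generation sketch with recent transformers (plus accelerate for `device_map="auto"`). The repo id `microsoft/Phi-4-reasoning-plus` is an assumption; substitute the unsloth mirror or a local path as needed.

```python
from transformers import pipeline

# Assumed repo id; swap in the unsloth-hosted checkpoint if preferred.
generator = pipeline(
    "text-generation",
    model="microsoft/Phi-4-reasoning-plus",
    device_map="auto",
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
out = generator(messages, max_new_tokens=512)
# Chat-format input returns the full message list; take the assistant turn.
print(out[0]["generated_text"][-1]["content"])
```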
## T5 Small Finetuned Xsum

- Author: bdwjaya · License: Apache-2.0 · Downloads: 103 · Likes: 0
- Tags: Text Generation, Transformers

A T5-small model fine-tuned on the XSum dataset for abstractive text summarization.
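A minimal usage sketch via the summarization pipeline; the repo id `bdwjaya/t5-small-finetuned-xsum` is an assumption based on this entry's name and author.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bdwjaya/t5-small-finetuned-xsum")

article = (
    "The city council voted on Tuesday to approve a new transit plan that "
    "adds three bus routes and extends evening service on existing lines."
)
# XSum-style models aim for a single-sentence summary.
print(summarizer(article, max_length=60, min_length=5, do_sample=False)[0]["summary_text"])
```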
## Ket5 News Summarizer

- Author: onebeans · License: Apache-2.0 · Downloads: 40 · Likes: 1
- Tags: Text Generation, Multilingual

A Korean text summarization model based on the T5 architecture, fine-tuned specifically for news articles.
## Drama Large

- Author: facebook · Downloads: 55 · Likes: 7
- Tags: Text Embedding, Transformers, Multilingual

DRAMA-large (0.3B) is a dense retrieval model built on a pruned large language model backbone, optimized for efficient and generalizable multilingual text retrieval.
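A generic dense-retrieval sketch (mean pooling plus cosine similarity). The repo id `facebook/drama-large` is an assumption, and DRAMA ships its own usage recipe via remote code, so treat this as an illustration of dense retrieval rather than the model's documented API.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/drama-large", trust_remote_code=True)
model = AutoModel.from_pretrained("facebook/drama-large", trust_remote_code=True)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    # Mask-aware mean pooling, then L2-normalize so dot product = cosine.
    mask = batch["attention_mask"].unsqueeze(-1)
    return F.normalize((hidden * mask).sum(1) / mask.sum(1), dim=-1)

query = embed(["how do transformers work?"])
docs = embed(["Transformers use self-attention.", "Bread rises because of yeast."])
print(query @ docs.T)  # similarity scores, higher = more relevant
```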
## Granite Embedding 30m English

- Author: ibm-granite · License: Apache-2.0 · Downloads: 78.53k · Likes: 10
- Tags: Text Embedding, Transformers, English

Granite Embedding 30M English is a transformer-based English text embedding model developed and released by IBM.
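A minimal sketch using sentence-transformers; the repo id `ibm-granite/granite-embedding-30m-english` follows this entry's author and name.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ibm-granite/granite-embedding-30m-english")

# Encode two texts and compare them with cosine similarity.
emb = model.encode([
    "What is machine learning?",
    "Machine learning is a subfield of artificial intelligence.",
])
print(util.cos_sim(emb[0], emb[1]))
```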
## Llama 3 8B Summarization QLoRA

- Author: pkbiswas · License: Other · Downloads: 29 · Likes: 0
- Tags: Large Language Model, TensorBoard

A summarization model built on Meta-Llama-3-8B, fine-tuned with QLoRA on the SciTLDR dataset.
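A minimal QLoRA setup sketch (4-bit quantized base model plus LoRA adapters) using transformers, bitsandbytes, and peft, assuming Meta-Llama-3-8B as the base per the description; the LoRA hyperparameters are illustrative and the training loop is omitted.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit NF4 - the "Q" in QLoRA.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # gated repo; requires access approval
    quantization_config=bnb,
    device_map="auto",
)

# Attach small trainable LoRA adapters; illustrative hyperparameters.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```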
## Bloomz 560m Reranking

- Author: cmarkea · License: OpenRAIL · Downloads: 17 · Likes: 1
- Tags: Large Language Model, Transformers, Multilingual

A bilingual (French/English) reranking model based on Bloomz-560m that scores the semantic relevance between a query and candidate contexts.
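A generic cross-encoder reranking sketch. Whether this checkpoint exposes a sequence-classification head consumable this way is an assumption; check the model card for its exact input format.

```python
from transformers import pipeline

reranker = pipeline("text-classification", model="cmarkea/bloomz-560m-reranking")

query = "Quelle est la capitale de la France ?"
contexts = [
    "Paris est la capitale de la France.",
    "Berlin is the capital of Germany.",
]
# Score each query/context pair; higher score = more relevant.
for ctx in contexts:
    print(ctx, reranker({"text": query, "text_pair": ctx}))
```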
## Prodigy 7B GGUF Imatrix

- Author: Lewdiculous · Downloads: 58 · Likes: 7
- Tags: Large Language Model

GGUF quantizations of Prodigy_7B produced with an importance matrix (imatrix) to improve quantization quality.
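A minimal sketch for running a GGUF quant locally with llama-cpp-python; the file name below is hypothetical, so point `model_path` at whichever quant level you actually downloaded from the repo.

```python
from llama_cpp import Llama

# Hypothetical file name - use the .gguf file you downloaded.
llm = Llama(model_path="Prodigy_7B-Q4_K_M-imat.gguf", n_ctx=4096)

out = llm("Q: What does imatrix quantization do?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```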
## Openchat 3.5 GPTQ

- Author: TheBloke · License: Apache-2.0 · Downloads: 107 · Likes: 17
- Tags: Large Language Model, Transformers

A GPTQ-quantized build of OpenChat 3.5, a 7B-parameter Mistral-based large language model developed by the OpenChat team and released under Apache 2.0.
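A minimal sketch for loading a GPTQ quant through transformers (a GPTQ backend such as auto-gptq must be installed); the repo id `TheBloke/openchat_3.5-GPTQ` is an assumption based on this entry, and the prompt follows OpenChat 3.5's documented "GPT4 Correct" format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/openchat_3.5-GPTQ"  # assumed repo id
tok = AutoTokenizer.from_pretrained(repo)
# transformers detects the GPTQ config and loads the quantized weights.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "GPT4 Correct User: Hello!<|end_of_turn|>GPT4 Correct Assistant:"
ids = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=64)[0], skip_special_tokens=True))
```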
## Tst Summarization

- Author: ChaniM · Downloads: 23 · Likes: 0
- Tags: Text Generation, Transformers, English

A news summarization model initialized from google/pegasus-xsum and fine-tuned on the cnn_dailymail dataset.
## T5 Small Finetuned Cnn V2

- Author: ubikpt · License: Apache-2.0 · Downloads: 20 · Likes: 1
- Tags: Text Generation, Transformers

A text summarization model based on T5-small, fine-tuned on the cnn_dailymail dataset.
## T5 Small Finetuned Cnn

- Author: ubikpt · License: Apache-2.0 · Downloads: 55 · Likes: 0
- Tags: Text Generation, Transformers

A T5-small summarization model fine-tuned on the cnn_dailymail dataset, well suited to news summarization.
## Distilbert Base Uncased Squad2 With Ner With Neg With Multi With Repeat

- Author: andi611 · Downloads: 20 · Likes: 0
- Tags: Question Answering, Transformers

A question-answering and named-entity-recognition model, starting from distilbert-base-uncased-squad2 and further fine-tuned on the CoNLL-2003 dataset.
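A minimal extractive-QA sketch; the long repo id below follows this entry's title and author but is an assumption.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/distilbert-base-uncased-squad2-with-ner-with-neg-with-multi-with-repeat",
)

# Extracts the answer span from the context, with a confidence score.
print(qa(question="Who developed DistilBERT?",
         context="DistilBERT was developed by Hugging Face."))
```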
## Distilbart Qgen 3 3

- Author: gpssohi · License: Apache-2.0 · Downloads: 21 · Likes: 3
- Tags: Question Answering, Transformers, English

A distilled BART variant fine-tuned on the SQuAD dataset to generate questions from a text passage and its answer.
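A minimal question-generation sketch via a text2text pipeline; the `answer: ... context: ...` input layout is an assumption, so consult the model card for the exact prompt format this checkpoint was trained on.

```python
from transformers import pipeline

qgen = pipeline("text2text-generation", model="gpssohi/distilbart-qgen-3-3")

# Assumed input format: the answer plus the passage it came from.
inp = "answer: Paris  context: Paris is the capital and largest city of France."
print(qgen(inp, max_length=32)[0]["generated_text"])
```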
## Distilbert Base Uncased Squad2 With Ner With Neg With Multi

- Author: andi611 · Downloads: 20 · Likes: 0
- Tags: Question Answering, Transformers

A multi-task DistilBERT model for question answering and named entity recognition, fine-tuned on the CoNLL-2003 dataset.