# English Processing

- **Sam Reason S2.1 GGUF** (MIT, mradermacher, 299 downloads, 1 like)
  Static quantized version of Sam-reason-S2.1, offering multiple quantization options to suit different hardware requirements. Tags: Large Language Model, English.
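
GGUF quantizations like this one are typically run with llama.cpp or its Python bindings. Below is a minimal sketch, assuming a hypothetical repo id and a Q4_K_M quant file (both are guesses; pick whichever quant matches your hardware budget):

```python
# Minimal sketch: running a GGUF quantization with llama-cpp-python.
# The repo id and quant filename pattern are assumptions; requires
# `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Sam-reason-S2.1-GGUF",  # assumed repo id
    filename="*Q4_K_M.gguf",                      # assumed quant choice
    n_ctx=4096,
)

out = llm("Explain the difference between Q4 and Q8 quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```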
- **Emotion Model** (Apache-2.0, umeshkaushik610, 15 downloads, 1 like)
  A sentiment analysis model fine-tuned from distilbert-base-uncased for text sentiment classification tasks. Tags: Text Classification, Transformers.
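
Fine-tuned DistilBERT classifiers like this one are usually driven through the transformers text-classification pipeline. A minimal sketch follows, with the repo id assumed; the label set depends on the specific fine-tune:

```python
# Minimal sketch: sentiment classification with a DistilBERT fine-tune.
# The repo id is a placeholder assumption; substitute the actual model id.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="umeshkaushik610/emotion-model")  # assumed repo id

print(classifier("I absolutely loved this movie!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]; labels depend on the fine-tune
```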
- **T5 Small Title Ft** (Apache-2.0, swarup3204, 25 downloads, 0 likes)
  T5 Small is the compact version of Google's T5 (Text-to-Text Transfer Transformer) model, suitable for various natural language processing tasks. Tags: Text Generation, Transformers, English.
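
T5 fine-tunes of this kind are used as text-to-text generators, for example to produce a title from an article. A minimal sketch, assuming a hypothetical repo id; check the model card for any task prefix the fine-tune expects:

```python
# Minimal sketch: seq2seq generation with a T5-small fine-tune.
# The repo id is an assumption; the fine-tune may expect a task prefix.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "swarup3204/t5-small-title-ft"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Researchers release a compact open-source model for generating headlines."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=16, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```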
- **Sentiment Analysis With Distilbert Base Uncased** (Apache-2.0, sherif-911, 50 downloads, 1 like)
  A sentiment analysis model fine-tuned on distilbert-base-uncased, achieving 93.2% accuracy on the evaluation set. Tags: Text Classification, Transformers.

- **Chonky Distilbert Base Uncased 1** (MIT, mirth, 1,486 downloads, 12 likes)
  Chonky is a Transformer model that intelligently segments text into meaningful semantic chunks, suitable for RAG systems. Tags: Sequence Labeling, Transformers, English.
- **L3 GothicMaid Upscaled 11B** (yamatazen, 14 downloads, 3 likes)
  A language model produced with the mergekit tool's Passthrough merge method, upscaling an 8B base model to roughly 11B parameters. Tags: Large Language Model, Transformers, English.
- **T5 Finetuned Finance** (Apache-2.0, AdityaSai1234, 21 downloads, 3 likes)
  T5 Small is the compact version of Google's T5 (Text-to-Text Transfer Transformer) series, suitable for various text processing tasks. Tags: Text Classification, Transformers, English.

- **Toriigate V0.4 7B I1 GGUF** (Apache-2.0, mradermacher, 410 downloads, 1 like)
  A weighted/importance-matrix quantized version of the Minthy/ToriiGate-v0.4-7B model, offering multiple quantization options to suit different needs. Tags: Image-to-Text, English.
- **Deepseek R1 Distill Llama 8B Abliterated** (stepenZEN, 119 downloads, 9 likes)
  DeepSeek-R1-Distill-Llama-8B is a distilled large language model based on the Llama architecture, with a parameter scale of 8B, primarily designed for English text generation and comprehension tasks. Tags: Large Language Model, Transformers, English.

- **Question Decomposer T5** (thenHung, 317 downloads, 4 likes)
  A sequence-to-sequence model based on T5-base, specifically designed for decomposing complex questions into multiple sub-questions. Tags: Text Generation, Safetensors, English.

- **Prompt Saturation Attack Detector** (GuardrailsAI, 4,762 downloads, 1 like)
  A small BERT model for detecting saturation-type jailbreak attacks; not suitable for independently defending against other types of jailbreak attacks. Tags: Text Classification, Transformers, English.

- **Xflux Text Encoders** (Apache-2.0, XLabs-AI, 481.59k downloads, 17 likes)
  T5 (Text-to-Text Transfer Transformer) is a general-purpose text-to-text conversion model developed by Google, capable of handling various natural language processing tasks. Tags: Large Language Model, Transformers, English.
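
Text-encoder repositories like the one above are normally consumed as the text_encoder component of an image-generation pipeline, but the encoder can also be inspected on its own. A minimal sketch follows, assuming the repo ships standard Hugging Face T5 encoder and tokenizer files (an assumption; if it only contains weights, load a tokenizer from a matching T5 repo instead):

```python
# Minimal sketch: producing prompt embeddings with a T5 text encoder.
# The repo id and its file layout are assumptions; in practice these weights
# are usually loaded by a diffusion pipeline as its text encoder.
import torch
from transformers import AutoTokenizer, T5EncoderModel

repo = "XLabs-AI/xflux_text_encoders"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = T5EncoderModel.from_pretrained(repo, torch_dtype=torch.float16)

tokens = tokenizer("a watercolor painting of a lighthouse", return_tensors="pt")
with torch.no_grad():
    prompt_embeds = encoder(**tokens).last_hidden_state
print(prompt_embeds.shape)  # (1, sequence_length, hidden_size)
```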
- **Phi 3 Mini 4k Instruct Gguf Derived** (Apache-2.0, zhhan, 39 downloads, 0 likes)
  Phi-3 is an open-source model released under the Apache-2.0 license; it supports English and is primarily used for summarization tasks. Tags: Large Language Model, English.

- **Finetuning Sentiment Ditilbert** (Apache-2.0, Neo111x, 15 downloads, 1 like)
  A sentiment analysis model fine-tuned on distilbert-base-uncased, achieving 87.67% accuracy on the evaluation set. Tags: Text Classification, Transformers.
- **Negmpnet** (tum-nlp, 31 downloads, 0 likes)
  NegMPNet is a negation-aware version of all-mpnet-base-v2 that maps sentences and paragraphs into a 768-dimensional dense vector space and is particularly adept at handling negation semantics. Tags: Text Embedding, Transformers, English.
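
Sentence-transformers models like this one embed text and compare it by cosine similarity; a negation-aware encoder should keep a sentence and its negation relatively far apart. A minimal sketch, with the repo id assumed:

```python
# Minimal sketch: 768-dimensional sentence embeddings with sentence-transformers.
# The repo id is an assumption; substitute the actual model id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tum-nlp/NegMPNet")  # assumed repo id

sentences = [
    "The medication reduced the patient's pain.",
    "The medication did not reduce the patient's pain.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)
print(embeddings.shape)                            # (2, 768)
print(util.cos_sim(embeddings[0], embeddings[1]))  # negation-aware similarity score
```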
- **Resume Summary** (OpenRAIL, burberg92, 21 downloads, 0 likes)
  A model for resume summarization, primarily used in the job-search domain. Tags: Text Generation, Transformers, English.

- **Englishmodel** (Apache-2.0, Foxasdf, 24 downloads, 1 like)
  A speech recognition model fine-tuned from facebook/wav2vec2-xls-r-300m, primarily used for English speech-to-text tasks. Tags: Speech Recognition, Transformers.
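
Wav2Vec2 fine-tunes like the one above can be driven through the transformers automatic-speech-recognition pipeline. A minimal sketch, assuming a hypothetical repo id and a local 16 kHz audio file:

```python
# Minimal sketch: English speech-to-text with a fine-tuned wav2vec2 model.
# The repo id and audio path are assumptions; decoding from a file may require ffmpeg.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="Foxasdf/Englishmodel")  # assumed repo id

result = asr("sample_english_clip.wav")  # 16 kHz mono audio works best
print(result["text"])
```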
- **Xtremedistil L6 H384 Uncased Finetuned Squad** (MIT, tachyon-11, 20 downloads, 0 likes)
  A fine-tuned version of microsoft/xtremedistil-l6-h384-uncased on the SQuAD dataset, primarily used for question answering tasks. Tags: Question Answering System, Transformers.
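
SQuAD-style fine-tunes such as this one (and the several BERT/DistilBERT variants below) perform extractive question answering: the answer is a span copied from the supplied context. A minimal sketch with an assumed repo id:

```python
# Minimal sketch: extractive question answering with a SQuAD fine-tune.
# The repo id is an assumption; any of the SQuAD fine-tunes listed here
# can be dropped in the same way.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="tachyon-11/xtremedistil-l6-h384-uncased-finetuned-squad",  # assumed id
)

answer = qa(
    question="Where does the answer come from?",
    context="Extractive QA models select the answer span directly from the given context.",
)
print(answer["answer"], round(answer["score"], 3))
```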
- **Spanbert Qa** (vaibhav9, 24 downloads, 0 likes)
  A question-answering model fine-tuned from SpanBERT/spanbert-base-cased, suitable for reading comprehension tasks. Tags: Question Answering System, Transformers.

- **Roberta Base Squad** (MIT, DLL888, 14 downloads, 0 likes)
  A question-answering model fine-tuned from roberta-base, trained on SQuAD-format datasets. Tags: Question Answering System, Transformers.

- **Large Email Classifier** (lewispons, 24 downloads, 1 like)
  A sentence-similarity model built on sentence-transformers that maps text into a 384-dimensional vector space, suitable for clustering and semantic search tasks. Tags: Text Embedding.
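
Embedding models like the email classifier above are typically used for semantic search: encode a corpus once, then rank it against a query by cosine similarity. A minimal sketch with an assumed repo id; any 384-dimensional sentence-transformers encoder behaves the same way:

```python
# Minimal sketch: semantic search with a sentence-transformers encoder.
# The repo id is an assumption; substitute the actual model id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("lewispons/large-email-classifier")  # assumed repo id

corpus = [
    "Your invoice for March is attached.",
    "Team offsite rescheduled to Friday.",
    "Password reset requested for your account.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)   # shape (3, 384)
query_emb = model.encode("billing statement", convert_to_tensor=True)

hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```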
- **Distilbert Base Uncased Finetuned Squad** (Apache-2.0, BillZou, 14 downloads, 0 likes)
  A question-answering model fine-tuned from DistilBERT on the SQuAD dataset, designed for reading comprehension tasks. Tags: Question Answering System, Transformers.

- **Bert Base Uncased Finetuned Squad** (Apache-2.0, harveyagraphcore, 15 downloads, 0 likes)
  A BERT model fine-tuned on the SQuAD dataset for question answering tasks. Tags: Question Answering System, Transformers.

- **Distilbert Base Uncased Becasv3 1** (Apache-2.0, Evelyn18, 15 downloads, 0 likes)
  A fine-tuned version of distilbert-base-uncased on the becasv3 dataset, primarily used for text generation tasks. Tags: Large Language Model, Transformers.

- **Distilbert Base Uncased Becas 5** (Apache-2.0, Evelyn18, 16 downloads, 0 likes)
  A fine-tuned version of distilbert-base-uncased on the becasv2 dataset, primarily used for text classification or related tasks. Tags: Large Language Model, Transformers.

- **Distilbert Base Uncased Becas 1** (Apache-2.0, Evelyn18, 18 downloads, 0 likes)
  A text classification model fine-tuned from distilbert-base-uncased on the becasv2 dataset. Tags: Large Language Model, Transformers.

- **Distilbert Base Uncased Finetuned Squad** (Apache-2.0, lingchensanwen, 16 downloads, 0 likes)
  A question-answering model based on DistilBERT, fine-tuned on the SQuAD dataset for extractive question answering tasks. Tags: Question Answering System, Transformers.

- **Distilbert Base Uncased Finetuned Squad** (Apache-2.0, lorenzkuhn, 15 downloads, 0 likes)
  A question-answering model fine-tuned from the DistilBERT base model on the SQuAD v2 dataset, suitable for reading comprehension tasks. Tags: Question Answering System, Transformers.

- **Bert Base Uncased Finetuned Squad V2** (Apache-2.0, HomayounSadri, 621 downloads, 0 likes)
  A question-answering model fine-tuned from bert-base-uncased on the SQuAD dataset. Tags: Question Answering System, Transformers.

- **Distilbert Base Uncased Finetuned Squad** (Apache-2.0, jhoonk, 15 downloads, 0 likes)
  A DistilBERT-base model fine-tuned on question-answering data, suitable for QA tasks. Tags: Question Answering System, Transformers.

- **Distilbert Base Uncased Finetuned Test Headline** (Apache-2.0, lucypallent, 15 downloads, 0 likes)
  A fine-tuned version of distilbert-base-uncased on an unspecified dataset, primarily used for text-related tasks. Tags: Large Language Model, Transformers.

- **Distilbert Base Uncased Finetuned Squad** (Apache-2.0, jsunster, 16 downloads, 0 likes)
  A fine-tuned version of the DistilBERT base model on the SQuAD question-answering dataset, suitable for question answering tasks. Tags: Question Answering System, Transformers.

- **Bert Base Cased Squad2** (ydshieh, 39 downloads, 0 likes)
  A BERT-based model trained on the SQuAD v2 dataset, suitable for question answering tasks. Tags: Question Answering System, Transformers.

- **Roberta Base Squad2 Finetuned Squad** (deepakvk, 14 downloads, 0 likes)
  A question-answering model fine-tuned from RoBERTa-base on the SQuAD 2.0 dataset, performing well on reading comprehension tasks. Tags: Question Answering System, Transformers.

- **Bert Base Uncased Sports** (Apache-2.0, amanm27, 39 downloads, 1 like)
  A BERT model fine-tuned from bert-base-uncased on sports-related data. Tags: Large Language Model, Transformers.

- **BERT NER Ep5 Finetuned Ner** (Apache-2.0, suwani, 15 downloads, 0 likes)
  A named entity recognition (NER) model fine-tuned from bert-base-cased, achieving an F1 score of 0.6868 on the evaluation set. Tags: Sequence Labeling, Transformers.
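
NER fine-tunes such as the two BERT models in this list are run through the token-classification pipeline, which groups word pieces back into entity spans. A minimal sketch with an assumed repo id:

```python
# Minimal sketch: named entity recognition with a BERT fine-tune.
# The repo id is an assumption; aggregation merges word pieces into entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="suwani/BERT_NER_Ep5-finetuned-ner",  # assumed repo id
    aggregation_strategy="simple",
)

for entity in ner("Ada Lovelace worked with Charles Babbage in London."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```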
- **Distilbert Base Uncased 3feb 2022 Finetuned Squad** (Apache-2.0, sunitha, 26 downloads, 0 likes)
  A fine-tuned version of the DistilBERT base model on the SQuAD question-answering dataset, designed for question answering tasks. Tags: Question Answering System, Transformers.

- **Distilbert Base Uncased Sst2 Train 8 8** (Apache-2.0, SetFit, 17 downloads, 0 likes)
  A fine-tuned version of distilbert-base-uncased on the SST-2 dataset for sentiment analysis tasks. Tags: Text Classification, Transformers.

- **Distilbert Base Uncased Sst2 Train 8 3** (Apache-2.0, SetFit, 25 downloads, 0 likes)
  A fine-tuned version of distilbert-base-uncased on the SST-2 dataset, primarily used for text classification tasks. Tags: Text Classification, Transformers.

- **BERT NER Ep5 PAD 50 Finetuned Ner** (Apache-2.0, suwani, 16 downloads, 0 likes)
  A named entity recognition model fine-tuned from bert-base-cased, achieving an F1 score of 0.6920 on the evaluation set. Tags: Sequence Labeling, Transformers.