cnmoro/TinyLlama-ContextQuestionPair-Classifier-Reranker (GGUF)
This is a lightweight text ranking model based on TinyLlama, specifically designed for classifying and reordering context-question pairs.
Downloads: 391
Release Time: 10/16/2024
Model Overview
The model optimizes relevance ranking for information retrieval and Q&A systems by classifying and reordering context-question pairs.
Model Features
Lightweight Quantization
Offers multiple quantized versions, with the smallest being only 0.4GB, suitable for resource-constrained environments.
Context-Question Pair Processing
Specially optimized for assessing the relevance between a context and a question.
Multiple Quantization Options
Provides 21 quantization variants, from Q2_K to Q8_0, so a build can be matched to the available hardware (see the loading sketch below).
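Picking and loading one of the quants is straightforward with llama-cpp-python. The following is a minimal sketch, not the author's published instructions; the repository id and filename are assumptions and should be checked against the quantized repository's actual file list.

```python
# Minimal sketch: download one GGUF quant and load it locally.
# ASSUMPTION: repo_id and filename are guesses -- verify them against the
# repository's file list (21 quants are available, Q2_K through Q8_0).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/cnmoro_-_TinyLlama-ContextQuestionPair-Classifier-Reranker-gguf",
    filename="TinyLlama-ContextQuestionPair-Classifier-Reranker.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
```

A mid-sized quant such as Q4_K_M is usually a reasonable default: noticeably smaller than Q8_0 while retaining most of the classification accuracy.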
Model Capabilities
Text Relevance Scoring
Question-Answer Pair Ranking
Context Understanding
Information Retrieval Optimization
Use Cases
Q&A Systems
FAQ Ranking
Ranks candidate answers by their relevance to the question, improving the accuracy of answer selection in Q&A systems.
Information Retrieval
Document Passage Ranking
Reorders retrieved document passages by their relevance to the query, improving the relevance of retrieval results.
🚀 TinyLlama-ContextQuestionPair-Classifier-Reranker - GGUF
This project provides a text-ranking model, TinyLlama-ContextQuestionPair-Classifier-Reranker, in GGUF format. It determines whether a context contains information relevant to answering a question and responds in JSON format.
📚 Documentation
Model Information
- Quantization: made by Richard Erkhov.
- Model Creator: https://huggingface.co/cnmoro/
- Original Model: https://huggingface.co/cnmoro/TinyLlama-ContextQuestionPair-Classifier-Reranker/
Model Quantization Details
Property | Details |
---|---|
Model Type | TinyLlama-ContextQuestionPair-Classifier-Reranker-GGUF |
Quantization Creator | Richard Erkhov |
Quantized Model List
Original Model Description
- License: cc-by-nc-2.0
- Language: en, pt
- Tags: classification, llama, tinyllama, rag, rerank
💻 Usage Examples
Basic Usage
template = """<s><|system|>
You are a chatbot who always responds in JSON format indicating if the context contains relevant information to answer the question</s>
<|user|>
Context:
{Text}
Question:
{Prompt}</s>
<|assistant|>
"""
# Output should be:
{"relevant": true}
# or
{"relevant": false}
Example
```text
<s><|system|>
You are a chatbot who always responds in JSON format indicating if the context contains relevant information to answer the question</s>
<|user|>
Context:
old. NFT were observed in almost all patients over 60 years of age, but the incidence was low.
Many ubiquitin-positive small-sized granules were observed in the second and third layer of the parahippocampal gyrus of aged patients,
and the incidence rose with increasing age. On the other hand, few of these granules were in patients with Alzheimer's type dementia.
Granulovacuolar degeneration was examined. Many centrally-located granules were positive for ubiquitin. Based on electron microscopic
observation of these granules at several stages, the granules were thought to be a type of autophagosome. During the first stage of
granulovacuolar degeneration, electron-dense materials appeared in the cytoplasm, following which they were surrounded by smooth
endoplasmic reticulum. Analytical electron microscopy disclosed that the granules contained
some aluminium. Several senile changes in the central nervous system in cadavers were examined. The pattern of extension of Alzheimer's
neurofibrillary tangles (NFT) and senile plaques (SP) in the olfactory bulbs of 100 specimens was examined during routine autopsy by
immunohistochemical staining. NFT were first observed in the anterior olfactory nucleus after the age of 60, and incidence rose with
increasing age. Senile plaques were found in the nucleus when there were many SP in the cerebral cortex. Of 25 non-demented amyotrophic
lateral sclerosis patients, SP were found in the cerebral cortices of 10, and 9 of 10 were over 60 years old. NFT were observed in almost
all patients over

Question:
What is granulovacuolar degeneration and what was its observation on electron microscopy?</s>
<|assistant|>
{"relevant": true}</s>
```
vLLM Recommended Request Parameters
prompt = "<s><|system|>\nYou are a chatbot who always responds in JSON format indicating if the context contains relevant information to answer the question</s>\n<|user|>\nContext:\nConhecida como missão de imagem de raios-x e espectroscopia (da sigla em inglês XRISM), a estratégia é utilizar o telescópio para ampliar os estudos da humanidade a níveis celestiais com uma fração dos pixels da tela de um Gameboy original, lançado em 1989. Isso é possível por meio de uma ferramenta chamada “Resolve”. Apesar de utilizar a medição em pixels, a tecnologia é bastante diferente de uma câmera. Com um conjunto de microcalorímetros de seis pixels quadrados que mede 0,5 cm², ela detecta a temperatura de cada raio-x que o atinge. Como funciona o Resolve do telescópio XRISM? Cientista do projeto XRISM da NASA, Brian Williams explicou em um comunicado o funcionamento do telescópio. “Chamamos o Resolve de espectrômetro de microcalorímetros porque cada um de seus 36 pixels está medindo pequenas quantidades de calor entregues por cada raio-x recebido, nos permitindo ver as impressões digitais químicas dos elementos que compõem as fontes com detalhes sem precedentes”.\n\nQuestion:\nQual é a sigla em alemão mencionada?</s>\n<|assistant|>\n{\"relevant\":"
headers = {
"Accept": "text/event-stream",
"Authorization": "Bearer EMPTY"
}
body = {
"model": model,
"prompt": [prompt],
"best_of": 5,
"max_tokens": 1,
"temperature": 0,
"top_p": 1,
"use_beam_search": True,
"top_k": -1,
"min_p": 0,
"repetition_penalty": 1,
"length_penalty": 1,
"min_tokens": 1,
"logprobs": 1
}
result = requests.post(base_uri, headers=headers, json=body)
result = result.json()
boolean_response = bool(eval(json_result['choices'][0]['text'].strip().title()))
print(boolean_response)
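Because the request primes the assistant turn with `{"relevant":` and asks for a single token, the completion is just `true` or `false`, which makes a reranking loop over candidate passages cheap. A minimal sketch reusing `base_uri`, `headers`, and `model` from the snippet above; the helper name and the `passages` list are hypothetical:

```python
import requests

SYSTEM = ("<s><|system|>\nYou are a chatbot who always responds in JSON format indicating "
          "if the context contains relevant information to answer the question</s>\n")

def is_relevant(context: str, question: str) -> bool:
    # Prime the assistant with '{"relevant":' so a single token decides the verdict.
    prompt = (SYSTEM
              + f"<|user|>\nContext:\n{context}\n\nQuestion:\n{question}</s>\n"
              + '<|assistant|>\n{"relevant":')
    body = {"model": model, "prompt": [prompt], "max_tokens": 1, "temperature": 0}
    result = requests.post(base_uri, headers=headers, json=body).json()
    return result["choices"][0]["text"].strip() == "true"

# Hypothetical usage: keep only the retrieved passages judged relevant.
relevant_passages = [p for p in passages if is_relevant(p, question)]
```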
Featured Recommended AI Models
All of the following are text classification models.

Model | Author | License | Downloads | Likes | Description |
---|---|---|---|---|---|
Distilbert Base Uncased Finetuned Sst 2 English | distilbert | Apache-2.0 | 5.2M | 746 | English text classification model fine-tuned on the SST-2 sentiment analysis dataset from DistilBERT-base-uncased, reaching 91.3% accuracy. |
Xlm Roberta Base Language Detection | papluca | MIT | 2.7M | 333 | Multilingual language detection model based on XLM-RoBERTa, classifying text across 20 languages. |
Roberta Hate Speech Dynabench R4 Target | facebook | - | 2.0M | 80 | English model that improves online hate detection through dynamic dataset generation, focusing on learning from worst-case scenarios. |
Bert Base Multilingual Uncased Sentiment | nlptown | MIT | 1.8M | 371 | Multilingual sentiment analysis model fine-tuned from bert-base-multilingual-uncased, covering product reviews in 6 languages. |
Emotion English Distilroberta Base | j-hartmann | - | 1.1M | 402 | English emotion classification model fine-tuned from DistilRoBERTa-base, predicting Ekman's six basic emotions plus a neutral category. |
Robertuito Sentiment Analysis | pysentimiento | - | 1.0M | 88 | Spanish tweet sentiment analysis model based on RoBERTuito, with three-class POS/NEG/NEU classification. |
Finbert Tone | yiyanghkust | - | 998.46k | 178 | FinBERT is a BERT model pre-trained on financial communication texts; finbert-tone is its fine-tuned version for financial sentiment analysis (English). |
Roberta Base Go Emotions | SamLowe | MIT | 848.12k | 565 | English multi-label emotion classification model based on RoBERTa-base, trained on the go_emotions dataset and recognizing 28 emotion labels. |
Xlm Emo T | MilaNLProc | - | 692.30k | 7 | XLM-EMO, a multilingual emotion model fine-tuned from XLM-T, supporting 19 languages and targeting emotion prediction in social media text. |
Deberta V3 Base Mnli Fever Anli | MoritzLaurer | MIT | 613.93k | 204 | DeBERTa-v3 model trained on MultiNLI, Fever-NLI, and ANLI, strong at zero-shot classification and natural language inference. |