Cnmoro TinyLlama ContextQuestionPair Classifier Reranker Gguf
Model Overview
This model classifies and reranks context-question pairs, optimizing relevance ranking for information retrieval and question-answering systems.

Model Features
- Lightweight quantization: multiple quantized versions are provided, the smallest only 0.4 GB, suitable for resource-constrained environments.
- Context-question pair processing: purpose-built for scoring the relevance between a context and a question.
- Multiple quantization options: 21 quantization levels to choose from, ranging from Q2_K to Q8_0.

Model Capabilities
- Text relevance scoring
- Question-answer pair ranking
- Context understanding
- Information retrieval optimization

Use Cases
- Question-answering systems: FAQ ranking. Rank candidate answers by relevance to improve answer selection accuracy.
- Information retrieval: document passage reranking. Rerank retrieved passages against the query question to improve result relevance.
🚀 TinyLlama-ContextQuestionPair-Classifier-Reranker - GGUF
This project provides quantized versions (GGUF format) of the TinyLlama-ContextQuestionPair-Classifier-Reranker model for text-ranking tasks. The quantization was done by Richard Erkhov; the original model was created by https://huggingface.co/cnmoro/.
✨ Key Features
- Multilingual support: English (en) and Portuguese (pt).
- Classification and reranking: suited to classification and reranking tasks, especially in RAG (retrieval-augmented generation) scenarios.
📦 Model Information

| Attribute | Details |
|---|---|
| Model type | TinyLlama-ContextQuestionPair-Classifier-Reranker - GGUF |
| Model creator | https://huggingface.co/cnmoro/ |
| Original model | https://huggingface.co/cnmoro/TinyLlama-ContextQuestionPair-Classifier-Reranker/ |
| License | cc-by-nc-2.0 |
| Supported languages | English (en), Portuguese (pt) |
| Tags | classification, llama, tinyllama, rag, rerank |
Quantized Model List
Twenty-one quantized variants are available, from Q2_K through Q8_0; the smallest is about 0.4 GB.
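As a minimal sketch of loading one of these GGUF files locally, assuming the llama-cpp-python package is installed and a quant file has been downloaded (the filename below is an assumption, not a name taken from this card):

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and classify one pair.
# The model_path filename is an assumption; use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="TinyLlama-ContextQuestionPair-Classifier-Reranker.Q4_K_M.gguf",
    n_ctx=2048,     # enough room for a passage plus the question
    verbose=False,
)

prompt = (
    "<s><|system|>\n"
    "You are a chatbot who always responds in JSON format indicating if the "
    "context contains relevant information to answer the question</s>\n"
    "<|user|>\nContext:\nParis is the capital of France.\n\n"
    "Question:\nWhat is the capital of France?</s>\n<|assistant|>\n"
)

out = llm(prompt, max_tokens=8, temperature=0)
print(out["choices"][0]["text"])  # expected: {"relevant": true}
```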
💻 Usage Examples
Basic usage
```python
template = """<s><|system|>
You are a chatbot who always responds in JSON format indicating if the context contains relevant information to answer the question</s>
<|user|>
Context:
{Text}

Question:
{Prompt}</s>
<|assistant|>
"""

# Output should be:
# {"relevant": true}
# or
# {"relevant": false}
```
Example input and output
```text
<s><|system|>
You are a chatbot who always responds in JSON format indicating if the context contains relevant information to answer the question</s>
<|user|>
Context:
old. NFT were observed in almost all patients over 60 years of age, but the incidence was low.
Many ubiquitin-positive small-sized granules were observed in the second and third layer of the parahippocampal gyrus of aged patients,
and the incidence rose with increasing age. On the other hand, few of these granules were in patients with Alzheimer's type dementia.
Granulovacuolar degeneration was examined. Many centrally-located granules were positive for ubiquitin. Based on electron microscopic
observation of these granules at several stages, the granules were thought to be a type of autophagosome. During the first stage of
granulovacuolar degeneration, electron-dense materials appeared in the cytoplasm, following which they were surrounded by smooth cytoplasm,
following which they were surrounded by smooth endoplasmic reticulum. Analytical electron microscopy disclosed that the granules contained
some aluminium. Several senile changes in the central nervous system in cadavers were examined. The pattern of extension of Alzheimer's
neurofibrillary tangles (NFT) and senile plaques (SP) in the olfactory bulbs of 100 specimens was examined during routine autopsy by
immunohistochemical staining. NFT were first observed in the anterior olfactory nucleus after the age of 60, and incidence rose with
increasing age. Senile plaques were found in the nucleus when there were many SP in the cerebral cortex. Of 25 non-demented amyotrophic
lateral sclerosis patients, SP were found in the cerebral cortices of 10, and 9 of 10 were over 60 years old. NFT were observed in almost
all patients over

Question:
What is granulovacuolar degeneration and what was its observation on electron microscopy?</s>
<|assistant|>
{"relevant": true}</s>
```
Recommended request parameters for vLLM
```python
import requests

# Placeholders (assumptions): point these at your own vLLM deployment.
base_uri = "http://localhost:8000/v1/completions"
model = "TinyLlama-ContextQuestionPair-Classifier-Reranker"

prompt = "<s><|system|>\nYou are a chatbot who always responds in JSON format indicating if the context contains relevant information to answer the question</s>\n<|user|>\nContext:\nConhecida como missão de imagem de raios-x e espectroscopia (da sigla em inglês XRISM), a estratégia é utilizar o telescópio para ampliar os estudos da humanidade a níveis celestiais com uma fração dos pixels da tela de um Gameboy original, lançado em 1989. Isso é possível por meio de uma ferramenta chamada “Resolve”. Apesar de utilizar a medição em pixels, a tecnologia é bastante diferente de uma câmera. Com um conjunto de microcalorímetros de seis pixels quadrados que mede 0,5 cm², ela detecta a temperatura de cada raio-x que o atinge. Como funciona o Resolve do telescópio XRISM? Cientista do projeto XRISM da NASA, Brian Williams explicou em um comunicado o funcionamento do telescópio. “Chamamos o Resolve de espectrômetro de microcalorímetros porque cada um de seus 36 pixels está medindo pequenas quantidades de calor entregues por cada raio-x recebido, nos permitindo ver as impressões digitais químicas dos elementos que compõem as fontes com detalhes sem precedentes”.\n\nQuestion:\nQual é a sigla em alemão mencionada?</s>\n<|assistant|>\n{\"relevant\":"

headers = {
    "Accept": "text/event-stream",
    "Authorization": "Bearer EMPTY"
}
body = {
    "model": model,
    "prompt": [prompt],
    "best_of": 5,
    "max_tokens": 1,
    "temperature": 0,
    "top_p": 1,
    "use_beam_search": True,
    "top_k": -1,
    "min_p": 0,
    "repetition_penalty": 1,
    "length_penalty": 1,
    "min_tokens": 1,
    "logprobs": 1
}

result = requests.post(base_uri, headers=headers, json=body)
result = result.json()

# The prompt is pre-filled up to '{"relevant":', so the single generated token
# is " true" or " false"; .title() turns it into a Python boolean literal.
boolean_response = bool(eval(result['choices'][0]['text'].strip().title()))
print(boolean_response)
```
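These parameters make the request behave like a classifier: the prompt is pre-filled up to {"relevant":, temperature is 0, and max_tokens is 1, so the server returns exactly one token, true or false. Building on that, here is a hedged sketch of a passage reranker for the retrieval use case; the helper names and endpoint are assumptions for illustration, not part of this card:

```python
# Hypothetical wrapper (names are placeholders): classify each candidate
# passage against the question, then put relevant passages first.
import requests

BASE_URI = "http://localhost:8000/v1/completions"  # placeholder vLLM endpoint
MODEL = "TinyLlama-ContextQuestionPair-Classifier-Reranker"  # placeholder name

SYSTEM = ("You are a chatbot who always responds in JSON format indicating "
          "if the context contains relevant information to answer the question")

def is_relevant(context: str, question: str) -> bool:
    prompt = (
        f"<s><|system|>\n{SYSTEM}</s>\n"
        f"<|user|>\nContext:\n{context}\n\n"
        f"Question:\n{question}</s>\n"
        '<|assistant|>\n{"relevant":'
    )
    body = {"model": MODEL, "prompt": [prompt], "max_tokens": 1,
            "temperature": 0, "logprobs": 1}
    resp = requests.post(BASE_URI, json=body).json()
    return resp["choices"][0]["text"].strip() == "true"

def rerank(passages: list[str], question: str) -> list[str]:
    # False sorts before True, so passages judged relevant come first.
    return sorted(passages, key=lambda p: not is_relevant(p, question))
```

Because sorted is stable, passages keep their original retrieval order within the relevant and non-relevant groups.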
📄 License
The original model is released under the cc-by-nc-2.0 license.