🚀 PhoRanker: A Cross-encoder Model for Vietnamese Text Ranking
PhoRanker is a cross-encoder model for Vietnamese text ranking that achieves strong results on the MS MMarco Passage Reranking - Vi - Dev benchmark.
🚀 Quick Start
This section provides a quick guide on how to install and use PhoRanker.
✨ Features
- Cross-encoder: Ideal for text classification and reranking tasks.
- Vietnamese Support: Specifically trained for Vietnamese text.
- High Performance: Demonstrates excellent results on the MS MMarco Passage Reranking - Vi - Dev dataset.
📦 Installation
- Install VnCoreNLP (the py_vncorenlp Python wrapper) for word segmentation:
pip install py_vncorenlp
- Install sentence-transformers (recommended):
pip install sentence-transformers
- Install transformers (optional):
pip install transformers
💻 Usage Examples
Basic Usage
Before using the model, you need to pre-process the text with Vietnamese word segmentation.
import py_vncorenlp

# Download the VnCoreNLP model files to an absolute path and load the word segmenter
py_vncorenlp.download_model(save_dir='/absolute/path/to/vncorenlp')
rdrsegmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir='/absolute/path/to/vncorenlp')
query = "Trường UIT là gì?"
sentences = [
"Trường Đại học Công nghệ Thông tin có tên tiếng Anh là University of Information Technology (viết tắt là UIT) là thành viên của Đại học Quốc Gia TP.HCM.",
"Trường Đại học Kinh tế – Luật (tiếng Anh: University of Economics and Law – UEL) là trường đại học đào tạo và nghiên cứu khối ngành kinh tế, kinh doanh và luật hàng đầu Việt Nam.",
"Quĩ uỷ thác đầu tư (tiếng Anh: Unit Investment Trusts; viết tắt: UIT) là một công ty đầu tư mua hoặc nắm giữ một danh mục đầu tư cố định"
]
# Word-segment the query and the candidate passages, then build (query, passage) pairs
tokenized_query = rdrsegmenter.word_segment(query)
tokenized_sentences = [rdrsegmenter.word_segment(sent) for sent in sentences]

tokenized_pairs = [[tokenized_query, sent] for sent in tokenized_sentences]
MODEL_ID = 'itdainb/PhoRanker'
MAX_LENGTH = 256
Usage with sentence-transformers
from sentence_transformers import CrossEncoder

model = CrossEncoder(MODEL_ID, max_length=MAX_LENGTH)
# Optional: cast the underlying model to fp16 for faster GPU inference
model.model.half()

scores = model.predict(tokenized_pairs)
print(scores)
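The scores estimate the relevance of each (query, passage) pair. As a minimal sketch (not part of the original card), assuming the scores and sentences variables from the snippets above, the candidates can be reordered by score:

# Pair each original (unsegmented) sentence with its score and sort by descending relevance
ranked = sorted(zip(sentences, scores), key=lambda pair: pair[1], reverse=True)
for sentence, score in ranked:
    print(f"{score:.4f}\t{sentence}")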
Usage with transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Optional: cast to fp16 for faster GPU inference
model.half()

features = tokenizer(tokenized_pairs, padding=True, truncation="longest_first", return_tensors="pt", max_length=MAX_LENGTH)

model.eval()
with torch.no_grad():
    model_predictions = model(**features, return_dict=True)
    logits = model_predictions.logits
    logits = torch.nn.Sigmoid()(logits)
    scores = [logit[0] for logit in logits]
    print(scores)
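The fp16 cast is intended for GPU execution (the reported runtimes were measured on an A100); running a half-precision model on CPU may be slow or unsupported. As a hedged sketch, not part of the original card, the model and inputs could be moved to a CUDA device before scoring:

# Move the fp16 model and the tokenized features to the GPU (if available) before inference
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
features = {name: tensor.to(device) for name, tensor in features.items()}

with torch.no_grad():
    logits = model(**features, return_dict=True).logits
    scores = torch.sigmoid(logits)[:, 0].tolist()
print(scores)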
📚 Documentation
Performance
The following table lists various pre-trained cross-encoders together with their performance on the MS MMarco Passage Reranking - Vi - Dev dataset.
Note: Runtime was computed on an A100 GPU with fp16.
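As a rough, hedged sketch (this is not the benchmark script used for the table), fp16 latency can be estimated by timing a scoring pass with the sentence-transformers model from the usage example above:

import time
import torch

# Rough timing of one scoring pass over the segmented (query, passage) pairs,
# reusing the CrossEncoder `model` and `tokenized_pairs` defined earlier
if torch.cuda.is_available():
    torch.cuda.synchronize()
start = time.perf_counter()
scores = model.predict(tokenized_pairs)
if torch.cuda.is_available():
    torch.cuda.synchronize()
print(f"Scored {len(tokenized_pairs)} pairs in {time.perf_counter() - start:.3f}s")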
🔧 Technical Details
PhoRanker is a cross-encoder model for Vietnamese text ranking. It is trained on the unicamp-dl/mmarco dataset, achieving high performance in text classification and reranking tasks.
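As a hedged sketch (the exact configuration name is an assumption, not something stated in this card), the Vietnamese portion of the unicamp-dl/mmarco dataset could be inspected with the datasets library:

from datasets import load_dataset

# Assumption: the Vietnamese triples configuration of unicamp-dl/mmarco is named "vietnamese";
# check the dataset card for the configuration names that are actually available.
mmarco_vi = load_dataset("unicamp-dl/mmarco", "vietnamese", split="train", streaming=True)
print(next(iter(mmarco_vi)))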
📄 License
This project is licensed under the Apache-2.0 license.
Support me
If you find this work useful and would like to support its continued development, here are a few ways you can help:
- Star the Repository: If you appreciate this work, please give it a star. Your support encourages continued development and improvement.
- Contribute: Contributions are always welcome! You can help by reporting issues, submitting pull requests, or suggesting new features.
- Share: Share this project with your colleagues, friends, or community. The more people know about it, the more feedback and contributions it can attract.
- Buy me a coffee: If you’d like to provide financial support, consider making a donation. You can donate via:
- Momo: 0948798843
- BIDV Bank: DAINB
- Paypal: 0948798843
Citation
Please cite as
@misc{PhoRanker,
title={PhoRanker: A Cross-encoder Model for Vietnamese Text Ranking},
author={Dai Nguyen Ba ({ORCID:0009-0008-8559-3154})},
year={2024},
publisher={Huggingface},
journal={huggingface repository},
howpublished={\url{https://huggingface.co/itdainb/PhoRanker}},
}