🚀 Efficient SPLADE
An efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the query model; please also download the document model (https://huggingface.co/naver/efficient-splade-VI-BT-large-doc). For more details, see:
- Paper: https://dl.acm.org/doi/10.1145/3477495.3531833
- Code: https://github.com/naver/splade
🚀 Quick Start
This model is designed for passage retrieval: queries and documents are encoded by two separate models, trading a small amount of effectiveness for much lower query-side latency.
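As a minimal sketch of the query-side encoding, the snippet below implements the standard SPLADE pooling from the paper, log(1 + ReLU(logits)) max-pooled over token positions. In real use the logits would come from loading this checkpoint with transformers' `AutoModelForMaskedLM`; the tensor values here are illustrative toy numbers, not actual model output:

```python
import torch

def splade_pool(logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Collapse per-token MLM logits into one sparse vocabulary vector.

    SPLADE pooling: w_j = max_i log(1 + ReLU(logit_{i,j})), with padding
    positions zeroed out via the attention mask.
    """
    weights = torch.log1p(torch.relu(logits))         # (batch, seq, vocab)
    weights = weights * attention_mask.unsqueeze(-1)  # mask out padding tokens
    return weights.max(dim=1).values                  # (batch, vocab)

# Toy example: 1 query, 3 token positions, a 5-term "vocabulary".
# The third position is padding and must not contribute to the pooled vector.
logits = torch.tensor([[[1.0, -2.0, 0.5, -1.0, 3.0],
                        [0.0,  4.0, -0.5, 2.0, -3.0],
                        [9.0,  9.0,  9.0, 9.0,  9.0]]])
mask = torch.tensor([[1, 1, 0]])

vec = splade_pool(logits, mask)
print(vec)  # non-negative and sparse: negative logits map to exactly 0
```

Note how the ReLU makes the representation sparse (negative logits become zero weight), which is what lets the resulting vectors be served by an inverted index such as PISA.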
✨ Key Features
- Dual-model architecture: two distinct models handle query and document inference separately, improving retrieval efficiency.
- Strong effectiveness: competitive MRR@10 and R@1000 on the MS MARCO dev set.
- Low latency: fast both at retrieval time (PISA) and during query inference.
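Once both encoders have produced sparse vocabulary vectors, scoring reduces to a dot product, so only terms with nonzero weight on both sides contribute. A toy illustration (the vectors are made-up values over a 5-term vocabulary, not real SPLADE output):

```python
import torch

# Hypothetical sparse query/document vectors over a 5-term vocabulary.
query_vec = torch.tensor([0.7, 1.6, 0.0, 1.1, 0.0])
doc_vecs = torch.tensor([[0.5, 0.0, 2.0, 1.0, 0.0],   # document 0
                         [0.0, 1.2, 0.0, 0.0, 3.0]])  # document 1

# Relevance score = dot product between query and document vectors.
scores = doc_vecs @ query_vec
ranking = torch.argsort(scores, descending=True)
print(scores.tolist(), ranking.tolist())
```

Because the vectors live in vocabulary space, this is exactly the scoring model of a classical inverted index, which is how SPLADE approaches BM25-like latency.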
📚 Documentation
Model performance
| Model | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (inference) ms |
| --- | --- | --- | --- | --- |
| naver/efficient-splade-V-large | 38.8 | 98.0 | 29.0 | 45.3 |
| naver/efficient-splade-VI-BT-large | 38.0 | 97.8 | 31.1 | 0.7 |
📄 License
This model is released under the CC BY-NC-SA 4.0 license.
🔖 Citation
If you use our model checkpoints, please cite our work:
@inproceedings{10.1145/3477495.3531833,
author = {Lassance, Carlos and Clinchant, St\'{e}phane},
title = {An Efficiency Study for SPLADE Models},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531833},
doi = {10.1145/3477495.3531833},
abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2220–2226},
numpages = {7},
keywords = {splade, latency, information retrieval, sparse representations},
location = {Madrid, Spain},
series = {SIGIR '22}
}