🚀 Efficient SPLADE
An efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the query model; please also download the document model (https://huggingface.co/naver/efficient-splade-VI-BT-large-doc). For more details, see:
- 论文:https://dl.acm.org/doi/10.1145/3477495.3531833
- 代码:https://github.com/naver/splade
🚀 Quick Start
This model is designed for passage retrieval: query and document inference are handled by two distinct models, providing an efficient solution for retrieval tasks.
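SPLADE turns a passage or query into a sparse vector over the vocabulary by applying a log-saturated ReLU to the MLM logits and max-pooling over token positions. Below is a minimal NumPy sketch of that pooling step only; in the real pipeline the logits come from a `transformers` masked-LM forward pass of this query encoder, and the shapes and values here are purely illustrative:

```python
import numpy as np

def splade_pool(logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Collapse per-token MLM logits of shape (seq_len, vocab_size) into a
    single sparse vocabulary-sized vector: max over tokens of log(1 + ReLU(logit))."""
    weights = np.log1p(np.maximum(logits, 0.0))   # log-saturation keeps term weights bounded
    weights = weights * attention_mask[:, None]   # zero out padding positions
    return weights.max(axis=0)                    # max-pool over the sequence

# Toy example: 3 token positions, a vocabulary of 5 terms
logits = np.array([[ 1.0, -2.0, 0.5, -1.0, 3.0],
                   [-0.5,  2.0, 0.0,  4.0, 1.0],
                   [ 9.0,  9.0, 9.0,  9.0, 9.0]])
mask = np.array([1.0, 1.0, 0.0])                  # last position is padding
rep = splade_pool(logits, mask)
print(rep)                                        # non-negative, mostly sparse vector
```

Negative logits are clipped to zero by the ReLU, which is what produces the sparsity that makes inverted-index retrieval possible.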
✨ Key Features
- Dual-model architecture: two distinct models handle query and document inference separately, improving retrieval efficiency.
- Strong effectiveness: competitive MRR@10 and R@1000 on the MS MARCO dev set.
- Low latency: fast both at retrieval time (PISA) and at query encoding time.
📚 Documentation
Model performance
| Model | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (inference) ms |
| --- | --- | --- | --- | --- |
| naver/efficient-splade-V-large | 38.8 | 98.0 | 29.0 | 45.3 |
| naver/efficient-splade-VI-BT-large | 38.0 | 97.8 | 31.1 | 0.7 |
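Because both encoders emit non-negative sparse vectors over the same vocabulary, relevance scoring reduces to an inner product, which is what lets an inverted-index engine such as PISA serve the model. A toy sketch (the vectors below are made up for illustration, not real model outputs):

```python
import numpy as np

# Hypothetical sparse representations over a shared 5-term vocabulary
query_rep = np.array([0.0, 1.2, 0.0, 0.8, 0.0])
doc_reps = {
    "doc_a": np.array([0.0, 0.9, 0.3, 0.0, 0.0]),
    "doc_b": np.array([0.5, 0.2, 0.0, 1.1, 0.0]),
}

# Score each document by its inner product with the query representation
scores = {doc_id: float(query_rep @ rep) for doc_id, rep in doc_reps.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # doc_b: 1.2*0.2 + 0.8*1.1 = 1.12 > doc_a: 1.2*0.9 = 1.08
```

Only vocabulary entries that are non-zero in both vectors contribute to a score, so an inverted index needs to visit only the postings for the query's active terms.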
📄 License
This project is released under the CC BY-NC-SA 4.0 license.
🔖 Citation
If you use our model checkpoints, please cite our work:
@inproceedings{10.1145/3477495.3531833,
author = {Lassance, Carlos and Clinchant, St\'{e}phane},
title = {An Efficiency Study for SPLADE Models},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531833},
doi = {10.1145/3477495.3531833},
abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2220–2226},
numpages = {7},
keywords = {splade, latency, information retrieval, sparse representations},
location = {Madrid, Spain},
series = {SIGIR '22}
}