Bert2bert Base Arxiv Titlegen
Developed by Callidior
An automatic paper title generation model based on the BERT2BERT architecture, specifically designed to generate titles from arXiv paper abstracts in the computer science field.
Downloads 19
Release Time: 3/2/2022
Model Overview
This model automatically generates appropriate titles from computer science paper abstracts using an encoder-decoder architecture, making it particularly well suited for academic writing assistance.
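For orientation, here is a minimal sketch of how an encoder-decoder title generator like this can be queried with the Hugging Face transformers library; the Hub identifier Callidior/bert2bert-base-arxiv-titlegen, the example abstract, and the generation settings are illustrative assumptions rather than details taken from this card.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Assumed Hugging Face Hub identifier for this model
model_name = "Callidior/bert2bert-base-arxiv-titlegen"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

# Placeholder abstract; any computer science abstract can be used here
abstract = (
    "We study automatic title generation for scientific papers and fine-tune "
    "an encoder-decoder model on abstracts from the computer science domain."
)

inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=32,       # titles are short
    num_beams=5,         # beam search tends to yield cleaner titles
    early_stopping=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```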
Model Features
Attention-based Transformer architecture
Dispenses entirely with recurrent and convolutional structures, relying solely on attention mechanisms for better parallelism and training efficiency
Large-scale domain-specific training
Fine-tuned on 318,500 arXiv papers from the computer science field, giving the model strong domain expertise
Efficient training
Trains significantly faster than models based on recurrence or convolution, completing training in just 3.5 days on 8 GPUs
Model Capabilities
Academic text generation
Automatic paper title generation
Natural language processing
Use Cases
Academic writing assistance
Paper title suggestions
Researchers input a paper abstract and receive suitable title suggestions
Generated titles follow academic conventions and accurately reflect the paper's content
Academic literature organization
Libraries or document management systems automatically generate titles for unnamed documents
Improves document management efficiency (a batch-processing sketch follows the use cases below)
Machine translation evaluation
Translation quality assessment
The underlying Transformer architecture was evaluated on the WMT 2014 English-German and English-French translation tasks
It achieved 28.4 BLEU on English-German and 41.8 BLEU on English-French, setting new single-model records at the time
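For the document-management use case above, a minimal batch-processing sketch could look as follows; the helper name suggest_titles, the batch size, and the generation settings are illustrative assumptions.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_name = "Callidior/bert2bert-base-arxiv-titlegen"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name)

def suggest_titles(abstracts, batch_size=8):
    """Generate one title suggestion per abstract, processing in small batches."""
    titles = []
    for i in range(0, len(abstracts), batch_size):
        batch = abstracts[i:i + batch_size]
        enc = tokenizer(batch, return_tensors="pt", padding=True,
                        truncation=True, max_length=512)
        out = model.generate(enc.input_ids,
                             attention_mask=enc.attention_mask,
                             max_length=32, num_beams=5, early_stopping=True)
        titles.extend(tokenizer.batch_decode(out, skip_special_tokens=True))
    return titles
```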