Roberta2Roberta L-24 WikiSplit
This is an encoder-decoder model based on the RoBERTa architecture, specifically fine-tuned for sentence splitting tasks.
Downloads: 16
Release Time: 3/2/2022
Model Overview
The model uses an encoder-decoder architecture in which both the encoder and the decoder are initialized from roberta-large checkpoints; the combined model is then fine-tuned on the WikiSplit dataset to split long sentences into shorter, more readable ones.
Model Features
RoBERTa-based architecture
Both the encoder and decoder are initialized from the powerful roberta-large model.
Specialized sentence splitting
Fine-tuned specifically on the WikiSplit dataset, excelling at breaking down complex long sentences into shorter, clearer ones.
Special character handling
For best results, double quotation marks in the input should be replaced with two consecutive single quotes (`''`) before tokenization, matching the formatting of the WikiSplit training data.
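The quote preprocessing described above can be sketched as a one-line string replacement; the helper name here is our own, not part of any library API:

```python
def preprocess_for_wikisplit(sentence: str) -> str:
    """Replace double quotation marks with two consecutive single quotes,
    matching the formatting the model saw in the WikiSplit training data."""
    return sentence.replace('"', "''")

text = 'He said "hello" and left.'
print(preprocess_for_wikisplit(text))  # He said ''hello'' and left.
```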
Model Capabilities
Text rewriting
Sentence splitting
Text simplification
Use Cases
Text processing
Long sentence splitting
Splitting long, complex compound sentences into multiple simple sentences, improving text readability and comprehension.
Content rewriting
Restructuring sentences without altering their original meaning, producing more natural and fluent expressions.
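A minimal inference sketch with Hugging Face Transformers, assuming the checkpoint id `google/roberta2roberta_L-24_wikisplit` (the publicly released checkpoint for this model); running it downloads the model weights, and the example sentence is illustrative only:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed checkpoint id for this model on the Hugging Face Hub.
model_name = "google/roberta2roberta_L-24_wikisplit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

long_sentence = (
    "Due to the hurricane, Lobsterfest has been canceled, making Bob very "
    "happy about it and he decides to open Bob's Burgers for customers."
)

# Replace double quotes with two single quotes, as the model expects.
inputs = tokenizer(long_sentence.replace('"', "''"), return_tensors="pt")
output_ids = model.generate(
    inputs.input_ids, attention_mask=inputs.attention_mask
)
split_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(split_text)
```

The decoded output should contain the input rewritten as two or more shorter sentences.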