
# Linear Complexity Self-Attention

YOSO-4096
YOSO is an efficient Transformer variant that reduces self-attention complexity from quadratic to linear via a Bernoulli-sampling attention mechanism, and it supports sequence lengths up to 4096 tokens (a minimal sketch of the idea follows the model card below).
Tags: Large Language Model, Transformers
Organization: uw-madison
Downloads: 2,072 · Likes: 0
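To make the "Bernoulli sampling" idea concrete, here is a minimal NumPy sketch of LSH-based attention estimation in the spirit of YOSO, not the authors' implementation: each hash round buckets queries and keys with random hyperplanes, and a query attends to the aggregate of values whose keys collide with it, so attention weights are estimated by collision probability rather than by forming the full n×n softmax matrix. The function name `lsh_bernoulli_attention` and all parameters are hypothetical, chosen for illustration.

```python
import numpy as np

def lsh_bernoulli_attention(Q, K, V, n_hashes=32, n_bits=8, seed=0):
    """Toy Bernoulli-sampling attention sketch (YOSO-style, illustrative).

    Per hash round: hash all queries and keys with random hyperplanes,
    accumulate per-bucket value sums in O(n), and let each query read its
    own bucket. Averaging over rounds estimates attention weights by the
    LSH collision probability, avoiding the O(n^2) attention matrix.
    """
    rng = np.random.default_rng(seed)
    n, d = Q.shape
    # Unit-normalize so collision probability is a monotone function of
    # the query-key dot product (an assumption of angular LSH).
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=1, keepdims=True)

    out = np.zeros_like(V, dtype=float)
    counts = np.zeros((n, 1))
    powers = 1 << np.arange(n_bits)  # pack sign bits into bucket ids

    for _ in range(n_hashes):
        planes = rng.standard_normal((d, n_bits))  # random hyperplanes
        q_code = (Qn @ planes > 0) @ powers        # bucket id per query
        k_code = (Kn @ planes > 0) @ powers        # bucket id per key
        # Accumulate value sums and key counts per bucket in O(n).
        bucket_v = np.zeros((1 << n_bits, V.shape[1]))
        bucket_c = np.zeros((1 << n_bits, 1))
        np.add.at(bucket_v, k_code, V)
        np.add.at(bucket_c, k_code, 1.0)
        # Each query collects its bucket's aggregate (a Bernoulli "hit").
        out += bucket_v[q_code]
        counts += bucket_c[q_code]

    # Normalize by the estimated total collision mass.
    return out / np.maximum(counts, 1e-9)
```

Because each round touches every token a constant number of times, total cost grows linearly in sequence length, which is what makes lengths like 4096 practical; the actual YOSO model additionally corrects the raw collision probability toward the softmax kernel, a step this toy sketch omits.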