# Linear Complexity Self-Attention
## YOSO-4096
YOSO is an efficient Transformer variant that reduces self-attention complexity from quadratic to linear through a Bernoulli-sampling attention mechanism, supporting sequence lengths of up to 4,096 tokens.
Tags: Large Language Model, Transformers
Organization: uw-madison
Downloads: 2,072 · Likes: 0
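
Given the uw-madison organization tag above, the checkpoint presumably corresponds to the `uw-madison/yoso-4096` model on the Hugging Face Hub; YOSO model classes have shipped in the `transformers` library since v4.17. The sketch below shows how such a masked-language-model checkpoint would typically be loaded and queried; the model id and the expected output are assumptions drawn from this page's metadata, not verified output.

```python
# A minimal usage sketch, assuming the checkpoint is published on the
# Hugging Face Hub as "uw-madison/yoso-4096" (inferred from the
# organization tag on this page).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("uw-madison/yoso-4096")
model = AutoModelForMaskedLM.from_pretrained("uw-madison/yoso-4096")

# Fill in a masked token; YOSO's linear-complexity attention lets inputs
# scale toward the 4,096-token limit without quadratic memory growth.
text = "Paris is the [MASK] of France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode the highest-scoring token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```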