
MusicGen Melody

Developed by facebook
MusicGen is a simple and controllable music generation model capable of producing high-quality music based on text descriptions or melody inputs.
Downloads 3,632
Release Time: 6/8/2023

Model Overview

MusicGen is a single-stage autoregressive Transformer model trained over a 32 kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods, it does not require a self-supervised semantic representation and generates all 4 codebooks in a single pass.
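
As a sketch of how the model is typically driven, the snippet below generates short clips from text descriptions. It assumes the audiocraft library and the facebook/musicgen-melody checkpoint; the exact checkpoint name and method signatures depend on the audiocraft version, so treat this as illustrative rather than canonical.

```python
# Minimal text-to-music sketch (assumes the audiocraft package is installed).
from audiocraft.models import MusicGen

# Load the pretrained melody checkpoint (downloads weights on first use).
model = MusicGen.get_pretrained('facebook/musicgen-melody')

# Generate roughly 8 seconds of 32 kHz audio per description.
model.set_generation_params(duration=8)

descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
wav = model.generate(descriptions)  # tensor of shape (batch, channels, samples)
```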

Model Features

Parallel prediction
Achieves parallel prediction by introducing small delays between the codebooks, so one second of audio requires only 50 autoregressive steps (see the delay-pattern sketch after this list).
Melody-guided generation
Can generate music based on given audio melodies and text descriptions while preserving the original melodic characteristics.
Simple and controllable
Doesn't require self-supervised semantic representations, featuring a straightforward model architecture that's easy to control.
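
To make the parallel-prediction idea concrete, the sketch below applies a simple per-codebook delay to a grid of token IDs: the interleaving that lets one Transformer step advance all 4 codebooks at once. The array shapes and padding token are illustrative assumptions, not the model's internal implementation.

```python
import numpy as np

def apply_delay_pattern(tokens: np.ndarray, pad: int = -1) -> np.ndarray:
    """Shift codebook k right by k steps so all codebooks can be
    predicted in parallel, one Transformer step per audio frame.

    tokens: (K, T) grid of token IDs, K codebooks, T frames (50 per second).
    Returns a (K, T + K - 1) grid padded with `pad` where no token exists yet.
    """
    K, T = tokens.shape
    out = np.full((K, T + K - 1), pad, dtype=tokens.dtype)
    for k in range(K):
        out[k, k:k + T] = tokens[k]
    return out

# 4 codebooks, 6 frames of dummy token IDs.
grid = np.arange(24).reshape(4, 6)
print(apply_delay_pattern(grid))
```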

Model Capabilities

Text-to-music generation
Melody-guided music generation (see the conditioning sketch after this list)
Multiple music style generation
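
The snippet below sketches melody conditioning: a reference recording is loaded and passed alongside the text prompts so the output follows its melody. It again assumes the audiocraft API; the file path is a placeholder and torchaudio is used only to load the reference audio.

```python
import torchaudio
from audiocraft.models import MusicGen

model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=8)

# Placeholder path to a reference recording whose melody should be followed.
melody, sr = torchaudio.load('reference_melody.wav')

descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
# Broadcast the single reference melody across the batch of text prompts.
wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr)
```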

Use Cases

Music creation
Background music generation
Generates customized background music for videos, games, and other content.
Can generate clips of 8 seconds or longer, with the length chosen at generation time (see the duration-control sketch at the end of this section).
Melody extension
Generates complete musical works based on existing melody fragments.
Expands musical content while preserving original melodic characteristics
Research
AI music generation research
Used to explore applications of generative models in the music field.
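
For background-music use cases, the clip length is set before generation and the results can be written to disk. A minimal sketch, assuming audiocraft's audio_write helper; the 30-second duration, prompt, and file names are illustrative.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-melody')
# Request a longer clip than the 8-second default.
model.set_generation_params(duration=30)

wav = model.generate(['calm ambient background for a product video'])

for idx, one_wav in enumerate(wav):
    # Saves as background_{idx}.wav with loudness normalization.
    audio_write(f'background_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```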