SeaLLMs-Audio: Large Audio-Language Models for Southeast Asia
SeaLLMs-Audio is the multimodal (audio) extension of the SeaLLMs family. It is the first large audio-language model (LALM) to support multiple Southeast Asian languages, and it excels at a wide range of audio-related tasks.
[Website](https://damo-nlp-sg.github.io/SeaLLMs-Audio/) | 🤗 Demo | [GitHub](https://github.com/DAMO-NLP-SG/SeaLLMs-Audio) | [🤗 Model](https://huggingface.co/SeaLLMs/SeaLLMs-Audio-7B)
✨ Features
- Multilingual: Supports 5 languages, including 🇮🇩 Indonesian, 🇹🇭 Thai, 🇻🇳 Vietnamese, 🇬🇧 English, and 🇨🇳 Chinese.
- Multimodal: Supports flexible input formats: audio only, text only, and audio with text (see the sketch below).
- Multi-task: Supports various tasks, such as audio captioning, automatic speech recognition, speech-to-text translation, speech emotion recognition, speech question answering, and speech summarization. It also handles voice chat tasks.
We open-weight SeaLLMs-Audio on Hugging Face and have built a demo for user interaction.
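For reference, all examples below use a Qwen2-Audio-style conversation format, in which each user turn is a list of `audio` and/or `text` elements. Below is a minimal sketch of the three input formats; the file names and prompts are illustrative placeholders, not files shipped with the model:

```python
# Audio-only user turn: the model responds to the speech itself (voice chat).
audio_only = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "question.wav"},  # placeholder file name
    ]},
]

# Text-only user turn: behaves like a regular chat LLM.
text_only = [
    {"role": "user", "content": [
        {"type": "text", "text": "What is the capital of Vietnam?"},  # illustrative prompt
    ]},
]

# Audio plus text: the text instructs the model what to do with the audio.
audio_with_text = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "speech.wav"},  # placeholder file name
        {"type": "text", "text": "Please transcribe the audio."},
    ]},
]
```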
💻 Usage Examples
Basic Usage

The example below loads the model with Hugging Face Transformers and defines a helper that renders a conversation with the chat template, loads any referenced audio, and generates a response.
```python
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor
import librosa
import os

# Load the model and processor from Hugging Face
model = Qwen2AudioForConditionalGeneration.from_pretrained("SeaLLMs/SeaLLMs-Audio-7B", device_map="auto")
processor = AutoProcessor.from_pretrained("SeaLLMs/SeaLLMs-Audio-7B")

def response_to_audio(conversation, model=None, processor=None):
    # Render the conversation with the chat template and collect any audio inputs
    text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
    audios = []
    for message in conversation:
        if isinstance(message["content"], list):
            for ele in message["content"]:
                if ele["type"] == "audio" and ele["audio_url"] is not None:
                    audios.append(librosa.load(
                        ele["audio_url"],
                        sr=processor.feature_extractor.sampling_rate)[0]
                    )
    if audios:
        inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True, sampling_rate=16000)
    else:
        inputs = processor(text=text, return_tensors="pt", padding=True)
    inputs = {k: v.to("cuda") for k, v in inputs.items() if v is not None}
    # Greedy decoding; strip the prompt tokens from the generated sequence
    generate_ids = model.generate(**inputs, max_new_tokens=2048, do_sample=False)
    generate_ids = generate_ids[:, inputs["input_ids"].size(1):]
    response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    return response

# Voice Chat
os.system("wget -O fact_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/fact_en.wav")
os.system("wget -O general_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/general_en.wav")

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "fact_en.wav"},
    ]},
    {"role": "assistant", "content": "The most abundant gas in Earth's atmosphere is nitrogen. It makes up about 78 percent of the atmosphere by volume."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "general_en.wav"},
    ]},
]

response = response_to_audio(conversation, model=model, processor=processor)
print(response)

# Audio Analysis
os.system("wget -O ASR_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/ASR_en.wav")

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "ASR_en.wav"},
        {"type": "text", "text": "Please write down what is spoken in the audio file."},
    ]},
]

response = response_to_audio(conversation, model=model, processor=processor)
print(response)
```
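Because the helper falls back to text-only processing when a conversation contains no audio, it can also serve plain text queries. A small sketch continuing from the snippet above, with a made-up question:

```python
# Text-only request: exercises the no-audio branch of response_to_audio.
conversation = [
    {"role": "user", "content": [
        {"type": "text", "text": "Name three countries in Southeast Asia."},  # illustrative prompt
    ]},
]

response = response_to_audio(conversation, model=model, processor=processor)
print(response)
```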
Advanced Usage

For faster, higher-throughput inference, the model can also be served with vLLM:
```python
from vllm import LLM, SamplingParams
import librosa
import os
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("SeaLLMs/SeaLLMs-Audio-7B")
llm = LLM(
    model="SeaLLMs/SeaLLMs-Audio-7B", trust_remote_code=True, gpu_memory_utilization=0.5,
    enforce_eager=True, device="cuda",
    limit_mm_per_prompt={"audio": 5},  # allow up to 5 audio clips per prompt
)

def response_to_audio(conversation, model=None, processor=None, temperature=0.1,
                      repetition_penalty=1.1, top_p=0.9, max_new_tokens=4096):
    # Render the conversation with the chat template and collect any audio inputs
    text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
    audios = []
    for message in conversation:
        if isinstance(message["content"], list):
            for ele in message["content"]:
                if ele["type"] == "audio" and ele["audio_url"] is not None:
                    audios.append(librosa.load(
                        ele["audio_url"],
                        sr=processor.feature_extractor.sampling_rate)[0]
                    )
    sampling_params = SamplingParams(
        temperature=temperature, max_tokens=max_new_tokens,
        repetition_penalty=repetition_penalty, top_p=top_p, top_k=20,
        stop_token_ids=[],
    )
    inputs = {
        "prompt": text,
        "multi_modal_data": {
            "audio": [(audio, 16000) for audio in audios]
        },
    }
    output = model.generate([inputs], sampling_params=sampling_params)[0]
    return output.outputs[0].text

# Voice Chat
os.system("wget -O fact_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/fact_en.wav")
os.system("wget -O general_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/general_en.wav")

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "fact_en.wav"},
    ]},
    {"role": "assistant", "content": "The most abundant gas in Earth's atmosphere is nitrogen. It makes up about 78 percent of the atmosphere by volume."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "general_en.wav"},
    ]},
]

response = response_to_audio(conversation, model=llm, processor=processor)
print(response)

# Audio Analysis
os.system("wget -O ASR_en.wav https://DAMO-NLP-SG.github.io/SeaLLMs-Audio/static/audios/ASR_en.wav")

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "ASR_en.wav"},
        {"type": "text", "text": "Please write down what is spoken in the audio file."},
    ]},
]

response = response_to_audio(conversation, model=llm, processor=processor)
print(response)
```
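Beyond single requests, vLLM can batch many multimodal requests in one `generate` call. A minimal sketch continuing from the setup above; the wav file names are placeholders for your own clips:

```python
# Batched offline inference: vLLM schedules all requests in a single generate() call.
requests = []
for wav in ["clip1.wav", "clip2.wav"]:  # placeholder file names
    conv = [
        {"role": "user", "content": [
            {"type": "audio", "audio_url": wav},
            {"type": "text", "text": "Please write down what is spoken in the audio file."},
        ]},
    ]
    prompt = processor.apply_chat_template(conv, add_generation_prompt=True, tokenize=False)
    audio = librosa.load(wav, sr=processor.feature_extractor.sampling_rate)[0]
    requests.append({
        "prompt": prompt,
        "multi_modal_data": {"audio": [(audio, 16000)]},
    })

outputs = llm.generate(requests, sampling_params=SamplingParams(temperature=0.1, max_tokens=512))
for out in outputs:
    print(out.outputs[0].text)
```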
📚 Documentation
Training Information
SeaLLMs-Audio is built upon Qwen2-Audio-7B and Qwen2.5-7B-Instruct. We replaced the LLM module in Qwen2-Audio-7B with Qwen2.5-7B-Instruct and then performed full-parameter fine-tuning on a large-scale audio dataset.
This dataset contains 1.58M conversations covering multiple tasks, 93% of which are single-turn. The tasks fall roughly into the following categories: automatic speech recognition (ASR), audio captioning (AC), speech-to-text translation (S2TT), question answering (QA), speech summarization (SS), speech question answering (SQA), chat, math, fact, and mixed tasks (mixed).
The figure below shows the distribution of data across languages and tasks.

[Figure: Distribution of SeaLLMs-Audio training data across languages and tasks]
The training dataset was curated from multiple data sources, including public datasets and in-house data. Public datasets include GigaSpeech, GigaSpeech 2, Common Voice, AudioCaps, VoiceAssistant-400K, YODAS2, and Multitask-National-Speech-Corpus.
We trained the model on the dataset for 1 epoch, which took about 6 days to complete on 32 A800 GPUs.
Performance
Due to the lack of standard audio benchmarks for evaluating audio LLMs in Southeast Asia, we created a benchmark called SeaBench-Audio, which consists of nine tasks:
- Tasks with both audio and text inputs: Audio Captioning (AC), Automatic Speech Recognition (ASR), Speech-to-Text Translation (S2TT), Speech Emotion Recognition (SER), Speech Question Answering (SQA), and Speech Summarization (SS).
- Tasks with only audio inputs: Factuality, Math, and General.
We manually annotated 15 questions per task per language. For evaluation, qualified native speakers rated each response on a scale of 1 to 5, with 5 representing the highest quality.
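To make the scoring concrete, here is a hypothetical sketch of how such per-response ratings could be averaged into per-language model scores; the file name and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical ratings file: one row per (model, language, task, question)
# with a 1-5 "score" column; all names here are illustrative, not released data.
ratings = pd.read_csv("seabench_audio_ratings.csv")

# Average over tasks and questions to get one score per model and language
avg_scores = ratings.groupby(["model", "language"])["score"].mean().unstack("language")
print(avg_scores.round(2))
```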
Due to the lack of LALMs covering all three Southeast Asian languages, we compared the performance of SeaLLMs-Audio with relevant LALMs of similar sizes, including: [Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct) (Qwen2-Audio), [MERaLiON-AudioLLM-Whisper-SEA-LION](https://huggingface.co/MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION) (MERaLiON), [llama3.1-typhoon2-audio-8b-instruct](https://huggingface.co/scb10x/llama3.1-typhoon2-audio-8b-instruct) (typhoon2-audio), and [DiVA-llama-3-v0-8b](https://huggingface.co/WillHeld/DiVA-llama-3-v0-8b) (DiVA). All of these LALMs accept audio with text as input. The results are shown in the figure below.
[Figure: Average scores of SeaLLMs-Audio vs. other LALMs on SeaBench-Audio]
The results show that SeaLLMs-Audio achieves state-of-the-art performance in all five languages, demonstrating its effectiveness in supporting audio-related tasks in Southeast Asia.
📄 License

This project is released under the SeaLLMs license. More details are available in the repository.
📝 Citation
If you find our project useful, we would appreciate it if you starred our repo and cited our work as follows. Corresponding author: Wenxuan Zhang (wxzhang@sutd.edu.sg)
```bibtex
@misc{SeaLLMs-Audio,
  author = {Chaoqun Liu and Mahani Aljunied and Guizhen Chen and Hou Pong Chan and Weiwen Xu and Yu Rong and Wenxuan Zhang},
  title = {SeaLLMs-Audio: Large Audio-Language Models for Southeast Asia},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/DAMO-NLP-SG/SeaLLMs-Audio}},
}
```