StepFun AI Releases Step-Audio 2 Mini: An Open-Source 8B Speech-to-Speech AI Model that Surpasses GPT-4o-Audio

The StepFun AI team has released Step-Audio 2 Mini, an 8B-parameter speech-to-speech large audio language model (LALM) that delivers expressive, grounded, and real-time audio interaction. Released under the Apache 2.0 license, this open-source model achieves state-of-the-art performance across speech recognition, audio understanding, and speech conversation benchmarks, surpassing commercial systems such as GPT-4o-Audio.

Key Features
1. Unified Audio–Text Tokenization
Unlike cascaded ASR+LLM+TTS pipelines, Step-Audio 2 integrates Multimodal Discrete Token Modeling, in which text and audio tokens share a single modeling stream; a minimal token-routing sketch follows the list below.
This enables:
- Seamless reasoning across text and audio.
- On-the-fly voice style switching during inference.
- Consistency in semantic, prosodic, and emotional outputs.
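To make the shared-stream idea concrete, here is a minimal token-routing sketch in Python. The vocabulary sizes and the offset scheme are illustrative assumptions, not Step-Audio 2's actual tokenizer layout.

```python
# Minimal sketch of routing text and audio tokens through one shared ID
# space. The vocabulary sizes and offset scheme are illustrative
# assumptions, not Step-Audio 2's actual tokenizer layout.
TEXT_VOCAB_SIZE = 64_000   # assumed size of the text vocabulary
AUDIO_VOCAB_SIZE = 6_561   # assumed size of the audio codec codebook

def audio_token(code: int) -> int:
    """Shift an audio codec code into the shared ID space, after the text range."""
    assert 0 <= code < AUDIO_VOCAB_SIZE
    return TEXT_VOCAB_SIZE + code

def is_audio(token_id: int) -> bool:
    """Decide whether a generated token goes to the audio or text detokenizer."""
    return token_id >= TEXT_VOCAB_SIZE

# One autoregressive stream can then interleave both modalities:
stream = [101, 2043, audio_token(17), audio_token(998), 102]
print([("audio" if is_audio(t) else "text", t) for t in stream])
```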
2. Expressive and Emotion-Aware Generation
The model doesn’t just transcribe speech; it interprets paralinguistic features like pitch, rhythm, emotion, timbre, and style. This allows conversations with realistic emotional tones such as whispering, sadness, or excitement. Benchmarks on StepEval-Audio-Paralinguistic show Step-Audio 2 achieving 83.1% accuracy, far beyond GPT-4o Audio (43.5%) and Qwen-Omni (44.2%).
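As a rough illustration, a style-controlled turn might look like the following; this message schema is a guess for readability, not Step-Audio 2's documented request format.

```python
# Hypothetical style-controlled turn; this message schema is a guess for
# illustration, not Step-Audio 2's documented request format.
messages = [
    {"role": "system", "content": "Reply in a soft, sympathetic whisper."},
    {"role": "user", "content": [{"type": "audio", "path": "question.wav"}]},
]
print(messages)
```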
3. Retrieval-Augmented Speech Generation
Step-Audio 2 incorporates multimodal RAG (Retrieval-Augmented Generation), sketched after this list:
- Web search integration for factual grounding.
- Audio search: a novel capability that retrieves real voices from a large library and fuses them into responses, enabling voice timbre and style imitation at inference time.
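Below is a hedged sketch of that retrieval flow. Every function is a stub standing in for the model's internal tools, whose real interfaces ship with StepFun's released code.

```python
# Hedged sketch of the retrieval flow above; every function is a stub
# standing in for the model's internal tools, whose real interfaces ship
# with StepFun's released code.
def transcribe(audio: bytes) -> str:
    return "what is the tallest mountain?"             # stub: the model's ASR pass

def web_search(query: str, top_k: int = 3) -> list[str]:
    return ["Mount Everest is 8,849 m tall."][:top_k]  # stub: factual grounding

def audio_search(description: str) -> bytes:
    return b"<reference-voice>"                        # stub: voice retrieved from the library

def generate_speech(question: str, context: list[str], voice_prompt: bytes) -> bytes:
    return b"<synthesized-waveform>"                   # stub: grounded, timbre-conditioned synthesis

def answer_with_grounding(question_audio: bytes) -> bytes:
    transcript = transcribe(question_audio)
    documents = web_search(transcript)                 # ground the answer in retrieved text
    voice_ref = audio_search("calm narrator voice")    # retrieve a reference timbre
    return generate_speech(transcript, documents, voice_ref)

print(answer_with_grounding(b"<question-waveform>"))
```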
4. Tool Calling and Multimodal Reasoning
The system extends beyond speech synthesis by supporting tool invocation. Benchmarks show that Step-Audio 2 matches textual LLMs in tool selection and parameter accuracy, while uniquely excelling at audio search tool calls—a capability unavailable in text-only LLMs.
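For illustration, an audio-search tool could be declared and invoked as below, using the common JSON function-calling convention; the exact schema Step-Audio 2 expects is an assumption here.

```python
import json

# Hypothetical tool declaration in the common JSON function-calling
# convention; the exact schema Step-Audio 2 expects is an assumption.
tools = [{
    "name": "audio_search",
    "description": "Retrieve a reference voice clip matching a text description.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

# A well-formed call the model might emit; tool selection and argument
# accuracy are exactly what the benchmarks above measure.
call = {"name": "audio_search", "arguments": {"query": "cheerful young male voice"}}
print(json.dumps(call))
```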
Training and Data Scale
- Text + Audio Corpus: 1.356T tokens
- Audio Hours: 8M+ real and synthetic hours
- Speaker Diversity: ~50K voices across languages and dialects
- Pretraining Pipeline: multi-stage curriculum covering ASR, TTS, speech-to-speech translation, and emotion-labeled conversational synthesis.
This large-scale training allows Step-Audio 2 Mini to retain strong text reasoning (via its Qwen2-Audio and CosyVoice foundation) while mastering fine-grained audio modeling.
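Fetching the released checkpoint is a one-liner; the snippet below assumes the weights are published as stepfun-ai/Step-Audio-2-mini on Hugging Face. Inference itself relies on the code shipped in StepFun's repository, which this sketch omits.

```python
# Fetch the released weights; this assumes the checkpoint is published as
# stepfun-ai/Step-Audio-2-mini on Hugging Face. Inference itself uses the
# code shipped in StepFun's repository, which this sketch omits.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("stepfun-ai/Step-Audio-2-mini")
print(f"weights downloaded to {local_dir}")
```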
Performance Benchmarks
Automatic Speech Recognition (ASR)
- English: Average WER 3.14%, beating GPT-4o Transcribe’s 4.5% average (a reference WER computation is sketched below).
- Chinese: Average CER 3.08% (significantly lower than GPT-4o and Qwen-Omni).
- Robust across dialects and accents.
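For reference, the WER figures above are word-level edit distance divided by reference length. A minimal implementation (not StepFun's evaluation script):

```python
# Word error rate: word-level edit distance over reference length.
# A minimal reference implementation, not StepFun's evaluation script.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.333
```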
Audio Understanding (MMAU Benchmark)
- Step-Audio 2: 78.0 average, outperforming Omni-R1 (77.0) and Audio Flamingo 3 (73.1).
- Strongest in sound and speech reasoning tasks.
Speech Translation
- CoVoST 2 (S2TT): BLEU 39.26 (highest among open and closed models).
- CVSS (S2ST): BLEU 30.87, ahead of GPT-4o (23.68); see the BLEU snippet below.
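The BLEU scores above follow the standard corpus-level definition and can be computed with the sacrebleu package; the sentences below are toy examples, not benchmark data.

```python
import sacrebleu  # pip install sacrebleu

# Toy hypotheses and references, not CoVoST 2 / CVSS data.
hyps = ["the cat is on the mat", "it is raining today"]
refs = [["the cat sits on the mat", "it rains today"]]  # one reference per hypothesis

bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"BLEU = {bleu.score:.2f}")
```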
Conversational Benchmarks (URO-Bench)
- Chinese Conversations: Best overall at 83.3 (basic) and 68.2 (pro).
- English Conversations: Competitive with GPT-4o (83.9 vs. 84.5), far ahead of other open models.


Conclusion
Step-Audio 2 Mini makes advanced multimodal speech intelligence accessible to the developer and research communities. By combining Qwen2-Audio’s reasoning capacity with CosyVoice’s tokenization pipeline and augmenting them with retrieval-based grounding, StepFun has delivered one of the most capable open audio LLMs.
Check out the paper and the model on Hugging Face.
