Audio programming presents unique challenges that demand both high performance and reliability. From real-time digital signal processing to music creation tools, audio applications require low latency, predictable memory usage, and freedom from unexpected crashes or glitches. Rust, with its combination of performance comparable to C/C++ and memory safety guarantees without garbage collection, has emerged as an excellent choice for audio development.
In this comprehensive guide, we’ll explore Rust’s ecosystem for audio programming as it stands in early 2025. We’ll examine the libraries, frameworks, and tools that have matured over the years, providing developers with robust building blocks for creating efficient and reliable audio applications. Whether you’re building digital audio workstations, audio plugins, embedded audio devices, or game audio engines, this guide will help you navigate the rich landscape of Rust’s audio programming ecosystem.
Audio Foundations
At the core of audio programming are libraries for handling audio data and interfacing with audio hardware:
CPAL: Cross-Platform Audio Library
// Using CPAL for cross-platform audio I/O
// Cargo.toml:
// [dependencies]
// cpal = "0.15"
// anyhow = "1.0"
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
use cpal::{Sample, SampleFormat};
fn main() -> Result<(), anyhow::Error> {
// Get the default host
let host = cpal::default_host();
// Get the default output device
let device = host.default_output_device()
.expect("No output device available");
println!("Output device: {}", device.name()?);
// Get the default output config
let config = device.default_output_config()?;
println!("Default output config: {:?}", config);
// Create a sine wave generator
let sample_rate = config.sample_rate().0 as f32;
let mut sample_clock = 0f32;
let mut next_value = move || {
sample_clock = (sample_clock + 1.0) % sample_rate;
(sample_clock * 440.0 * 2.0 * std::f32::consts::PI / sample_rate).sin() * 0.2
};
// Build an output stream
let err_fn = |err| eprintln!("an error occurred on the output audio stream: {}", err);
let stream = match config.sample_format() {
SampleFormat::F32 => device.build_output_stream(
&config.into(),
move |data: &mut [f32], _: &cpal::OutputCallbackInfo| {
for sample in data.iter_mut() {
*sample = next_value();
}
},
err_fn,
None,
)?,
SampleFormat::I16 => device.build_output_stream(
&config.into(),
move |data: &mut [i16], _: &cpal::OutputCallbackInfo| {
for sample in data.iter_mut() {
                    *sample = Sample::from_sample(next_value());
}
},
err_fn,
None,
)?,
SampleFormat::U16 => device.build_output_stream(
&config.into(),
move |data: &mut [u16], _: &cpal::OutputCallbackInfo| {
for sample in data.iter_mut() {
                    *sample = Sample::from_sample(next_value());
}
},
err_fn,
None,
)?,
_ => return Err(anyhow::Error::msg("Unsupported sample format")),
};
// Play the stream
stream.play()?;
// Keep the program running
println!("Playing a sine wave. Press Enter to exit...");
let mut input = String::new();
std::io::stdin().read_line(&mut input)?;
Ok(())
}
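The closure passed to build_output_stream runs on a real-time audio thread, so it must never block: no allocation, locking, or file I/O. A common pattern is to produce samples on an ordinary thread and hand them to the callback through a bounded queue. The sketch below illustrates the idea with a std::sync::mpsc::sync_channel and an assumed f32 output format; production code typically swaps in a wait-free SPSC ring buffer such as the rtrb or ringbuf crates.
// Sketch: feeding a CPAL output stream from a producer thread.
// Assumes the device's default format is f32; a robust version would match
// on sample_format as in the example above.
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
use std::sync::mpsc;
fn main() -> Result<(), anyhow::Error> {
    let host = cpal::default_host();
    let device = host.default_output_device().expect("No output device available");
    let config = device.default_output_config()?;
    let sample_rate = config.sample_rate().0 as f32;
    let channels = config.channels() as usize;
    // Bounded queue: the producer blocks when it gets too far ahead,
    // while the audio callback never blocks
    let (tx, rx) = mpsc::sync_channel::<f32>(4096);
    // Producer thread: synthesizes a 220 Hz sine wave off the audio thread
    std::thread::spawn(move || {
        let mut phase = 0f32;
        loop {
            phase = (phase + 220.0 / sample_rate) % 1.0;
            let value = (phase * 2.0 * std::f32::consts::PI).sin() * 0.2;
            if tx.send(value).is_err() {
                break; // The stream (and its receiver) has been dropped
            }
        }
    });
    // Audio callback: pulls one value per frame, outputs silence if the queue runs dry
    let stream = device.build_output_stream(
        &config.into(),
        move |data: &mut [f32], _: &cpal::OutputCallbackInfo| {
            for frame in data.chunks_mut(channels) {
                let value = rx.try_recv().unwrap_or(0.0);
                for sample in frame.iter_mut() {
                    *sample = value;
                }
            }
        },
        |err| eprintln!("an error occurred on the output audio stream: {}", err),
        None,
    )?;
    stream.play()?;
    std::thread::sleep(std::time::Duration::from_secs(3));
    Ok(())
}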
Symphonia: Audio Decoding Library
// Using Symphonia for audio decoding
// Cargo.toml:
// [dependencies]
// symphonia = { version = "0.5", features = ["mp3", "aac", "flac", "wav"] }
use std::fs::File;
use std::path::Path;
use symphonia::core::audio::SampleBuffer;
use symphonia::core::codecs::{DecoderOptions, CODEC_TYPE_NULL};
use symphonia::core::errors::Error;
use symphonia::core::formats::FormatOptions;
use symphonia::core::io::MediaSourceStream;
use symphonia::core::meta::MetadataOptions;
use symphonia::core::probe::Hint;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Open the media source
let path = Path::new("audio.mp3");
let file = File::open(path)?;
// Create a media source stream
let mss = MediaSourceStream::new(Box::new(file), Default::default());
// Create a hint to help the format registry determine the format
let mut hint = Hint::new();
// Provide the file extension as a hint
if let Some(extension) = path.extension() {
if let Some(extension_str) = extension.to_str() {
hint.with_extension(extension_str);
}
}
// Use the default options for format and metadata
let format_opts = FormatOptions::default();
let metadata_opts = MetadataOptions::default();
// Probe the media source
let probed = symphonia::default::get_probe().format(&hint, mss, &format_opts, &metadata_opts)?;
// Get the format reader
let mut format = probed.format;
    // Find the first track with a supported codec
    let track = format
        .tracks()
        .iter()
        .find(|t| t.codec_params.codec != CODEC_TYPE_NULL)
        .ok_or(Error::Unsupported("no supported audio track"))?;
    // Create a decoder for the track
    let mut decoder = symphonia::default::get_codecs()
        .make(&track.codec_params, &DecoderOptions::default())?;
    // Remember the track ID so only packets from this track are decoded
    let track_id = track.id;
// Print track information
println!("Track info:");
println!(" Codec: {}", track.codec_params.codec);
if let Some(n_frames) = track.codec_params.n_frames {
println!(" Frames: {}", n_frames);
}
if let Some(sample_rate) = track.codec_params.sample_rate {
println!(" Sample Rate: {}", sample_rate);
}
if let Some(n_channels) = track.codec_params.channels {
println!(" Channels: {}", n_channels.count());
}
// Decode the first few packets
let mut sample_count = 0;
let max_samples = 10000; // Limit the number of samples to process
while sample_count < max_samples {
// Get the next packet from the format reader
let packet = match format.next_packet() {
Ok(packet) => packet,
Err(Error::ResetRequired) => {
// Handle reset required error
break;
}
            Err(Error::IoError(err)) if err.kind() == std::io::ErrorKind::UnexpectedEof => {
                // The format reader reached the end of the stream
                break;
            }
            Err(err) => {
                // Propagate any other error
                return Err(Box::new(err));
            }
};
        // Skip packets that belong to other tracks
        if packet.track_id() != track_id {
            continue;
        }
        // Decode the packet
        let decoded = decoder.decode(&packet)?;
// Create a sample buffer
let spec = *decoded.spec();
let duration = decoded.capacity() as u64;
let mut sample_buffer = SampleBuffer::<f32>::new(duration, spec);
// Copy the decoded audio data to the sample buffer
sample_buffer.copy_interleaved_ref(decoded);
// Process the samples (here we just count them)
sample_count += sample_buffer.samples().len();
}
println!("Processed {} samples", sample_count);
Ok(())
}
Dasp: Digital Audio Signal Processing
// Using Dasp for audio signal processing
// Cargo.toml:
// [dependencies]
// dasp = { version = "0.11", features = ["all"] }
// hound = "3.5"
use dasp::{signal, Signal};
use hound::{SampleFormat, WavSpec, WavWriter};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Define audio parameters
let sample_rate = 44100;
let duration_secs = 5.0;
let num_samples = (sample_rate as f64 * duration_secs) as usize;
// Create a sine wave oscillator at 440 Hz
let sine = signal::rate(sample_rate as f64).const_hz(440.0).sine();
// Create a tremolo effect (amplitude modulation)
let tremolo_freq = 5.0; // 5 Hz tremolo
let tremolo = signal::rate(sample_rate as f64).const_hz(tremolo_freq).sine();
// Apply the tremolo to the sine wave
let signal = sine.zip_map(tremolo, |sine, tremolo| {
// Map tremolo from [-1, 1] to [0.5, 1.0] range for amplitude modulation
let amplitude = 0.75 + 0.25 * tremolo;
sine * amplitude
});
// Take the specified number of samples
let signal = signal.take(num_samples);
// Convert to Vec<f32>
let samples: Vec<f32> = signal.map(|sample| sample as f32).collect();
// Write to a WAV file
let spec = WavSpec {
channels: 1,
sample_rate: sample_rate as u32,
bits_per_sample: 32,
sample_format: SampleFormat::Float,
};
let mut writer = WavWriter::create("sine_with_tremolo.wav", spec)?;
for sample in samples {
writer.write_sample(sample)?;
}
writer.finalize()?;
println!("Created sine_with_tremolo.wav");
Ok(())
}
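Dasp's signal combinators cover oscillators and modulation, but much real-world DSP is still written as small per-sample routines. As a companion to the example above, here is a minimal hand-rolled one-pole low-pass filter (plain Rust, not a dasp API) using the standard difference equation y[n] = y[n-1] + a * (x[n] - y[n-1]), with the coefficient derived from the cutoff frequency.
// A minimal sketch of a one-pole low-pass filter; the cutoff-to-coefficient
// mapping is the usual a = 1 - exp(-2*pi*fc/fs) approximation.
struct OnePoleLowPass {
    a: f32,     // smoothing coefficient in (0, 1]
    state: f32, // previous output sample
}
impl OnePoleLowPass {
    fn new(cutoff_hz: f32, sample_rate: f32) -> Self {
        let a = 1.0 - (-2.0 * std::f32::consts::PI * cutoff_hz / sample_rate).exp();
        Self { a, state: 0.0 }
    }
    fn process(&mut self, input: f32) -> f32 {
        // y[n] = y[n-1] + a * (x[n] - y[n-1])
        self.state += self.a * (input - self.state);
        self.state
    }
}
fn main() {
    let sample_rate = 44100.0;
    let mut filter = OnePoleLowPass::new(1000.0, sample_rate);
    // Filter one second of noise generated from a simple linear congruential generator
    let mut seed: u32 = 1;
    let filtered: Vec<f32> = (0..44100)
        .map(|_| {
            seed = seed.wrapping_mul(1664525).wrapping_add(1013904223);
            let noise = (seed as f32 / u32::MAX as f32) * 2.0 - 1.0;
            filter.process(noise)
        })
        .collect();
    println!("Filtered {} samples", filtered.len());
}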
Audio Plugins and DAWs
Rust is increasingly used for building audio plugins and digital audio workstations:
NIH-plug: Audio Plugin Framework
// Using NIH-plug for audio plugin development
// Cargo.toml:
// [lib]
// crate-type = ["cdylib"]
//
// [dependencies]
// nih_plug = { git = "https://github.com/robbert-vdh/nih-plug.git", features = ["assert_process_allocs"] }
// nih_plug_vizia = { git = "https://github.com/robbert-vdh/nih-plug.git" }
use nih_plug::prelude::*;
use nih_plug_vizia::ViziaState;
use std::sync::Arc;
// Define the plugin struct
struct GainPlugin {
params: Arc<GainParams>,
}
// Define the plugin parameters
#[derive(Params)]
struct GainParams {
#[id = "gain"]
pub gain: FloatParam,
#[persist = "editor-state"]
editor_state: Arc<ViziaState>,
}
impl Default for GainPlugin {
fn default() -> Self {
Self {
params: Arc::new(GainParams::default()),
}
}
}
impl Default for GainParams {
fn default() -> Self {
Self {
gain: FloatParam::new(
"Gain",
0.0,
FloatRange::Linear { min: -30.0, max: 30.0 },
)
.with_unit(" dB")
            // Linear smoothing: this parameter is in dB and crosses zero, which
            // logarithmic smoothing cannot handle
            .with_smoother(SmoothingStyle::Linear(50.0))
.with_step_size(0.1),
            // ViziaState::new already returns an Arc; the closure supplies the editor's
            // initial size. The editor UI itself would be built in Plugin::editor().
            editor_state: ViziaState::new(|| (300, 180)),
}
}
}
impl Plugin for GainPlugin {
const NAME: &'static str = "Gain Plugin";
const VENDOR: &'static str = "Rust Audio Examples";
const URL: &'static str = "https://example.com";
const EMAIL: &'static str = "[email protected]";
const VERSION: &'static str = env!("CARGO_PKG_VERSION");
const AUDIO_IO_LAYOUTS: &'static [AudioIOLayout] = &[
AudioIOLayout {
main_input_channels: NonZeroU32::new(2),
main_output_channels: NonZeroU32::new(2),
..AudioIOLayout::const_default()
},
];
const MIDI_INPUT: MidiConfig = MidiConfig::None;
const MIDI_OUTPUT: MidiConfig = MidiConfig::None;
const SAMPLE_ACCURATE_AUTOMATION: bool = true;
type SysExMessage = ();
type BackgroundTask = ();
fn params(&self) -> Arc<dyn Params> {
self.params.clone()
}
fn process(
&mut self,
buffer: &mut Buffer,
_aux: &mut AuxiliaryBuffers,
_context: &mut impl ProcessContext<Self>,
) -> ProcessStatus {
// Apply gain to all samples in the buffer
for channel_samples in buffer.iter_samples() {
let gain = self.params.gain.smoothed.next();
let gain_linear = util::db_to_gain(gain);
for sample in channel_samples {
*sample *= gain_linear;
}
}
ProcessStatus::Normal
}
}
impl ClapPlugin for GainPlugin {
const CLAP_ID: &'static str = "com.example.gain-plugin";
const CLAP_DESCRIPTION: Option<&'static str> = Some("A simple gain plugin");
const CLAP_MANUAL_URL: Option<&'static str> = Some(Self::URL);
const CLAP_SUPPORT_URL: Option<&'static str> = None;
const CLAP_FEATURES: &'static [ClapFeature] = &[
ClapFeature::AudioEffect,
ClapFeature::Stereo,
ClapFeature::Utility,
];
}
impl Vst3Plugin for GainPlugin {
const VST3_CLASS_ID: [u8; 16] = *b"GainPluginRustEx";
const VST3_SUBCATEGORIES: &'static [Vst3SubCategory] = &[
Vst3SubCategory::Fx,
Vst3SubCategory::Tools,
];
}
nih_export_clap!(GainPlugin);
nih_export_vst3!(GainPlugin);
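The db_to_gain call in process() converts the decibel parameter into a linear amplitude factor; it is essentially the standard 10^(dB/20) mapping. A quick standalone check of the conversion:
// The dB-to-linear mapping used when applying gain to samples
// (nih_plug's util::db_to_gain implements essentially the same formula).
fn db_to_gain(db: f32) -> f32 {
    10.0f32.powf(db / 20.0)
}
fn main() {
    for db in [-30.0f32, -6.0, 0.0, 6.0, 30.0] {
        // -6 dB is roughly half amplitude, +6 dB roughly double
        println!("{:>6.1} dB -> {:.4}x", db, db_to_gain(db));
    }
}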
Audio Analysis and Visualization
Rust provides tools for analyzing and visualizing audio data:
Spectrum Analysis with RustFFT
// Using RustFFT for spectrum analysis
// Cargo.toml:
// [dependencies]
// rustfft = "6.1"
// hound = "3.5"
use hound::WavReader;
use rustfft::{FftPlanner, num_complex::Complex};
use std::f32::consts::PI;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Open a WAV file
let mut reader = WavReader::open("audio.wav")?;
let spec = reader.spec();
println!("Audio format: {:?}", spec);
// Read audio samples
let samples: Vec<f32> = if spec.sample_format == hound::SampleFormat::Float {
reader.samples::<f32>().map(|s| s.unwrap()).collect()
} else {
// Convert integer samples to float
match spec.bits_per_sample {
16 => {
let scale = 1.0 / 32768.0;
reader.samples::<i16>().map(|s| s.unwrap() as f32 * scale).collect()
}
24 => {
let scale = 1.0 / 8388608.0;
reader.samples::<i32>().map(|s| s.unwrap() as f32 * scale).collect()
}
32 => {
let scale = 1.0 / 2147483648.0;
reader.samples::<i32>().map(|s| s.unwrap() as f32 * scale).collect()
}
_ => return Err("Unsupported bit depth".into()),
}
};
// Take a segment of the audio for analysis
let segment_size = 4096;
let segment = if samples.len() > segment_size {
// Take a segment from the middle of the audio
let start = (samples.len() - segment_size) / 2;
&samples[start..start + segment_size]
} else {
&samples
};
// Apply a Hann window to reduce spectral leakage
let mut windowed_segment: Vec<Complex<f32>> = segment
.iter()
.enumerate()
.map(|(i, &sample)| {
let window = 0.5 * (1.0 - (2.0 * PI * i as f32 / segment.len() as f32).cos());
Complex::new(sample * window, 0.0)
})
.collect();
// Perform FFT
let mut planner = FftPlanner::new();
let fft = planner.plan_fft_forward(windowed_segment.len());
fft.process(&mut windowed_segment);
// Calculate magnitude spectrum
    let fft_len = windowed_segment.len();
    let spectrum: Vec<f32> = windowed_segment
        .iter()
        .map(|c| 20.0 * (c.norm() / fft_len as f32).log10()) // Convert to dB
        .collect();
// Find the peak frequency
let mut max_magnitude = -f32::INFINITY;
let mut peak_bin = 0;
    for i in 1..fft_len / 2 {
if spectrum[i] > max_magnitude {
max_magnitude = spectrum[i];
peak_bin = i;
}
}
    let peak_frequency = peak_bin as f32 * spec.sample_rate as f32 / fft_len as f32;
println!("Peak frequency: {:.2} Hz", peak_frequency);
Ok(())
}
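With a 4096-point FFT at 44.1 kHz, the peak bin only resolves frequency to about 10.8 Hz (sample rate divided by FFT length). A common refinement is parabolic interpolation over the dB magnitudes of the peak bin and its two neighbours. Here is a minimal sketch of that formula, using hypothetical spectrum values rather than the file analyzed above:
// Sketch: parabolic interpolation of a spectral peak. `spectrum` holds dB
// magnitudes and `peak_bin` is the index of the largest bin.
fn interpolate_peak(spectrum: &[f32], peak_bin: usize, sample_rate: f32, fft_len: usize) -> f32 {
    // A neighbour is needed on each side of the peak to fit a parabola
    if peak_bin == 0 || peak_bin + 1 >= spectrum.len() {
        return peak_bin as f32 * sample_rate / fft_len as f32;
    }
    let alpha = spectrum[peak_bin - 1];
    let beta = spectrum[peak_bin];
    let gamma = spectrum[peak_bin + 1];
    // Offset of the parabola's vertex from the peak bin, in bins (between -0.5 and 0.5)
    let delta = 0.5 * (alpha - gamma) / (alpha - 2.0 * beta + gamma);
    (peak_bin as f32 + delta) * sample_rate / fft_len as f32
}
fn main() {
    // Hypothetical values for illustration: a 4096-point FFT at 44.1 kHz with a
    // peak at bin 41 whose true frequency lies between bins 41 and 42
    let mut spectrum = vec![-120.0f32; 2048];
    spectrum[40] = -32.0;
    spectrum[41] = -26.0;
    spectrum[42] = -28.0;
    let freq = interpolate_peak(&spectrum, 41, 44100.0, 4096);
    println!("Refined peak frequency: {:.2} Hz", freq);
}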
Music and MIDI
Rust offers libraries for working with music theory concepts and MIDI:
MIDI Processing
// MIDI processing with midir
// Cargo.toml:
// [dependencies]
// midir = "0.9"
// wmidi = "4.0"
use midir::MidiOutput;
use std::error::Error;
use std::io::{stdin, stdout, Write};
use std::thread::sleep;
use std::time::Duration;
use wmidi::{MidiMessage, Note, U7};
fn main() -> Result<(), Box<dyn Error>> {
// List available MIDI output ports
let midi_out = MidiOutput::new("midir output")?;
let out_ports = midi_out.ports();
println!("Available output ports:");
for (i, p) in out_ports.iter().enumerate() {
println!("{}: {}", i, midi_out.port_name(p)?);
}
// Select output port
print!("Select output port: ");
stdout().flush()?;
let mut input = String::new();
stdin().read_line(&mut input)?;
let out_port = out_ports.get(input.trim().parse::<usize>()?).ok_or("Invalid port number")?;
// Open output connection
let mut conn_out = midi_out.connect(out_port, "midir-test")?;
println!("Connection open. Playing a scale...");
// Play a C major scale
let channel = wmidi::Channel::Ch1;
let velocity = U7::try_from(64).unwrap();
let notes = [
Note::C4, Note::D4, Note::E4, Note::F4,
Note::G4, Note::A4, Note::B4, Note::C5,
];
    for &note in &notes {
// Note On message
let msg = MidiMessage::NoteOn(channel, note, velocity);
        let mut bytes = [0u8; 3];
        msg.copy_to_slice(&mut bytes)?;
        conn_out.send(&bytes)?;
// Wait a bit
sleep(Duration::from_millis(300));
// Note Off message
let msg = MidiMessage::NoteOff(channel, note, velocity);
        let mut bytes = [0u8; 3];
        msg.copy_to_slice(&mut bytes)?;
        conn_out.send(&bytes)?;
// Small gap between notes
sleep(Duration::from_millis(50));
}
println!("Scale finished.");
Ok(())
}
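Receiving MIDI works the same way in reverse: midir delivers raw bytes to a callback on its own thread, and wmidi can parse them back into typed messages. A minimal sketch, assuming at least one MIDI input port is available:
// Sketch: reading MIDI input with midir and parsing messages with wmidi.
use midir::{Ignore, MidiInput};
use std::convert::TryFrom;
use std::error::Error;
use std::io::stdin;
use wmidi::MidiMessage;
fn main() -> Result<(), Box<dyn Error>> {
    let mut midi_in = MidiInput::new("midir input")?;
    midi_in.ignore(Ignore::None);
    // Use the first available input port for simplicity
    let in_ports = midi_in.ports();
    let in_port = in_ports.first().ok_or("no MIDI input ports available")?;
    println!("Listening on: {}", midi_in.port_name(in_port)?);
    // The callback runs on a thread owned by midir
    let _conn_in = midi_in.connect(
        in_port,
        "midir-read-input",
        move |timestamp, bytes, _| match MidiMessage::try_from(bytes) {
            Ok(MidiMessage::NoteOn(channel, note, velocity)) => {
                println!("{}: NoteOn {:?} {:?} vel {:?}", timestamp, channel, note, velocity);
            }
            Ok(MidiMessage::NoteOff(channel, note, _)) => {
                println!("{}: NoteOff {:?} {:?}", timestamp, channel, note);
            }
            Ok(other) => println!("{}: {:?}", timestamp, other),
            Err(err) => eprintln!("failed to parse MIDI message: {}", err),
        },
        (),
    )?;
    // Keep the connection alive until Enter is pressed
    println!("Press Enter to exit...");
    let mut input = String::new();
    stdin().read_line(&mut input)?;
    Ok(())
}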
Music Theory
// Music theory with rust_music_theory
// Cargo.toml:
// [dependencies]
// rust_music_theory = "0.2"
use rust_music_theory::chord::{Chord, Number as ChordNumber, Quality as ChordQuality};
use rust_music_theory::note::{Note, Notes, PitchClass};
use rust_music_theory::scale::{Direction, Mode, Scale, ScaleType};
// Format a list of notes as a comma-separated string of pitch classes
fn note_names(notes: &[Note]) -> String {
    notes
        .iter()
        .map(|n| n.pitch_class.to_string())
        .collect::<Vec<_>>()
        .join(", ")
}
fn main() {
    // Create a C major scale (the Ionian mode of the diatonic scale)
    let c_major = Scale::new(
        ScaleType::Diatonic,
        PitchClass::C,
        4,
        Some(Mode::Ionian),
        Direction::Ascending,
    )
    .expect("valid scale");
    println!("C Major Scale: {}", note_names(&c_major.notes()));
    // Create a D natural minor scale (Aeolian mode)
    let d_minor = Scale::new(
        ScaleType::Diatonic,
        PitchClass::D,
        4,
        Some(Mode::Aeolian),
        Direction::Ascending,
    )
    .expect("valid scale");
    println!("\nD Minor Scale: {}", note_names(&d_minor.notes()));
    // Create a G dominant seventh chord
    let g7 = Chord::new(PitchClass::G, ChordQuality::Dominant, ChordNumber::Seventh);
    println!("\nG7 Chord: {}", note_names(&g7.notes()));
    // The relative minor of C major is A minor: the same notes, starting on A
    let a_minor = Scale::new(
        ScaleType::Diatonic,
        PitchClass::A,
        4,
        Some(Mode::Aeolian),
        Direction::Ascending,
    )
    .expect("valid scale");
    println!("\nRelative minor of C major (A minor): {}", note_names(&a_minor.notes()));
    // Create a chord progression (I-IV-V-I in C major)
    let progression = vec![
        Chord::new(PitchClass::C, ChordQuality::Major, ChordNumber::Triad),
        Chord::new(PitchClass::F, ChordQuality::Major, ChordNumber::Triad),
        Chord::new(PitchClass::G, ChordQuality::Major, ChordNumber::Triad),
        Chord::new(PitchClass::C, ChordQuality::Major, ChordNumber::Triad),
    ];
    println!("\nI-IV-V-I progression in C major:");
    for (i, chord) in progression.iter().enumerate() {
        println!("  Chord {}: {}", i + 1, note_names(&chord.notes()));
    }
}
Game Audio
Rust provides tools for game audio development:
Kira: Game Audio Library
// Using Kira for game audio
// Cargo.toml:
// [dependencies]
// kira = "0.8"
use kira::{
manager::{
AudioManager, AudioManagerSettings,
backend::DefaultBackend,
},
sound::static_sound::{StaticSoundData, StaticSoundSettings},
tween::Tween,
};
use std::time::Duration;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create an audio manager
let mut manager = AudioManager::<DefaultBackend>::new(AudioManagerSettings::default())?;
// Load a sound
let sound_data = StaticSoundData::from_file(
"sound.wav",
StaticSoundSettings::default(),
)?;
// Play the sound
let mut sound = manager.play(sound_data.clone())?;
println!("Playing sound...");
std::thread::sleep(Duration::from_secs(2));
// Fade out the sound
sound.set_volume(0.0, Tween {
duration: Duration::from_secs_f32(1.0),
..Default::default()
})?;
println!("Fading out...");
std::thread::sleep(Duration::from_secs(2));
// Load another sound with different settings
let sound_data_2 = StaticSoundData::from_file(
"sound2.wav",
StaticSoundSettings::default()
.loop_region(0.0..10.0)
.volume(0.5),
)?;
// Play the looping sound
let mut sound_2 = manager.play(sound_data_2)?;
println!("Playing looping sound...");
std::thread::sleep(Duration::from_secs(5));
// Stop the sound
sound_2.stop(Tween {
duration: Duration::from_secs_f32(0.5),
..Default::default()
})?;
println!("Stopping...");
std::thread::sleep(Duration::from_secs(1));
Ok(())
}
Conclusion
Rust’s ecosystem for audio programming has matured significantly, offering a comprehensive set of tools and libraries for building efficient and reliable audio applications. From low-level audio I/O and signal processing to high-level music creation tools and audio plugins, Rust provides the building blocks needed to tackle the unique challenges of audio development.
The key takeaways from this exploration of Rust’s audio programming ecosystem are:
- Strong foundations with libraries like CPAL and Symphonia providing robust audio I/O and decoding capabilities
- Powerful DSP tools like Dasp enabling efficient audio signal processing
- Plugin frameworks such as NIH-plug and Baseplug for creating professional audio plugins
- Analysis and visualization tools for understanding and working with audio data
- Music and MIDI libraries for working with musical concepts and MIDI communication
- Game audio solutions for interactive audio experiences
As audio technology continues to evolve, Rust’s focus on performance, safety, and expressiveness makes it an excellent choice for developers building the next generation of audio applications. Whether you’re creating professional audio tools, game audio engines, or embedded audio devices, Rust’s audio ecosystem provides the tools you need to succeed.