Drop audio or click (WAV/MP3/OGG/OPUS)
Your audio never leaves your device.
Hear the 3D effect right away. Loading a demo replaces the current scene and starts playback.
Inside-the-hull bed, bubbles at different depths, front sonar ping, distant hull groans.
Wind bed, light rain, stream, leaves, and two bird layers with gentle motion.
Three soft pulses orbit your head with gentle height changes.
Two calming chimes drifting in height with a roomy feel.
Two soft whooshes glide front↔back for a wide, externalized bed.
Four pulses at front/back/left/right for quick spatial A/B.
Kick/snare/hat + clave + bass at 100 BPM, staged like a mini band.
Like the Spatial Audio Mixer?
If this tool helps your work or study, you can keep it 100% free by buying us a coffee. Tips go to hosting, new features, and better docs.
Drop your audio files and position each source in 3D. The mixer applies HRTF panning, early reflections, occlusion, distance curves, and per-track EQ for a clear headphone mix. When you’re ready, render stereo binaural or export Ambisonics for XR.
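A minimal sketch of that signal path, using the Web Audio API the mixer runs on. The node settings and coordinates are illustrative assumptions, not the mixer's internal code: one decoded track routed through an HRTF panner with a distance roll-off curve.

  // Sketch only: position one decoded track in 3D with HRTF panning
  // and an inverse distance curve, entirely in the browser.
  function playTrackAt(ctx: AudioContext, buffer: AudioBuffer, x: number, y: number, z: number): void {
    const source = new AudioBufferSourceNode(ctx, { buffer });
    const panner = new PannerNode(ctx, {
      panningModel: "HRTF",      // binaural cues from a head-related transfer function
      distanceModel: "inverse",  // level falls off with distance from the listener
      refDistance: 1,
      maxDistance: 50,
      positionX: x,              // left/right
      positionY: y,              // height
      positionZ: z,              // front/back (negative Z is in front of the listener)
    });
    source.connect(panner).connect(ctx.destination);
    source.start();
  }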
Drag multiple files or use the picker. Each file becomes a track with its own 3D position.
Pan around the listener, set distance and height, choose room size, and enable early reflections or occlusion.
Use per-track EQ and gain to avoid masking while keeping spatial cues intact.
Render headphone-ready binaural or Ambisonics for head-tracked XR pipelines.
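A rough sketch of the per-track chain these steps describe, again in plain Web Audio API. The EQ bands, trim value, and position are placeholder assumptions chosen for illustration: source → high-pass → peaking cut → gain trim → HRTF panner.

  // Sketch of one track's chain: EQ and gain staging ahead of the spatializer.
  function buildTrackChain(ctx: AudioContext, buffer: AudioBuffer): AudioBufferSourceNode {
    const source = new AudioBufferSourceNode(ctx, { buffer });
    const lowCut = new BiquadFilterNode(ctx, { type: "highpass", frequency: 80 });                    // remove rumble
    const dip = new BiquadFilterNode(ctx, { type: "peaking", frequency: 3000, gain: -3, Q: 1 });      // ease a masking band
    const trim = new GainNode(ctx, { gain: 0.8 });                                                    // gain staging
    const panner = new PannerNode(ctx, { panningModel: "HRTF", positionX: -1.5, positionZ: -2 });     // place in the scene
    source.connect(lowCut).connect(dip).connect(trim).connect(panner).connect(ctx.destination);
    return source;
  }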
Processing stays on-device. No uploads, no logins, no watermarks.
Accurate HRTF panning, height cues, distance roll-off, occlusion, and early reflections.
Per-track EQ, gain staging, and width controls preserve localization and intelligibility.
Render binaural stereo for headphones or Ambisonics for XR pipelines.
Yes. Move sources during playback and hear position, distance, and occlusion changes instantly.
Yes. Place sources on a 2D plane and add height (Z axis). Elevation cues are created via HRTF filtering and early reflections.
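A small sketch of how live movement and height changes can be driven, assuming the Web Audio API underneath: PannerNode positions are AudioParams, so short ramps give smooth, click-free motion during playback. Note that Web Audio's up axis is Y, even though the mixer's UI labels height as Z.

  // Sketch: sweep a source left-to-right while raising it above ear height.
  function glideOverhead(ctx: AudioContext, panner: PannerNode): void {
    const now = ctx.currentTime;
    panner.positionX.setValueAtTime(-2, now);
    panner.positionX.linearRampToValueAtTime(2, now + 4);   // left to right over 4 s
    panner.positionY.setValueAtTime(0, now);
    panner.positionY.linearRampToValueAtTime(1.5, now + 4); // rise for an elevation cue
  }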
HRTF and room processing run natively in the browser. If your device struggles, mute or freeze heavy tracks and reduce reflection order.
Yes. Position voices and ambiences for a sense of space, then export a headphone-ready binaural stereo file.
Lossless WAV is preferred for final renders. For stems, WAV or high-quality OGG/Opus are fine; everything is decoded locally.
Yes. You can render stereo binaural for headphones or export Ambisonics for engines that decode to head-tracked output.
Audio is processed locally via the Web Audio API. Files are not uploaded to a server.
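For the curious, this is roughly what the on-device step looks like, assuming the standard Web Audio API path: a dropped File is read and decoded in the page itself, with no upload involved.

  // Sketch: read and decode a dropped file locally; no network request is made.
  async function decodeDroppedFile(ctx: AudioContext, file: File): Promise<AudioBuffer> {
    const bytes = await file.arrayBuffer();  // read the file in the browser
    return ctx.decodeAudioData(bytes);       // decode WAV/MP3/Ogg/Opus on-device
  }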
The mixer ships with common HRTF datasets (e.g., CIPIC). You can switch profiles to fit different ear shapes and perceived width.
Modern laptops handle dozens of tracks. For large sessions, prefer 48 kHz sources, freeze heavy chains, and keep impulse-response (IR) lengths reasonable.
Live monitoring is near-instant for panning and EQ. Convolution adds a small buffer; render is offline and sample-accurate.
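A sketch of an offline render under those assumptions: an OfflineAudioContext processes faster than real time and is sample-accurate. The channel count, 60-second length, and source position here are placeholders, not the mixer's render settings.

  // Sketch: render a panned source offline at 48 kHz and get back a stereo buffer.
  async function renderBinaural(buffer: AudioBuffer): Promise<AudioBuffer> {
    const offline = new OfflineAudioContext({ numberOfChannels: 2, length: 48000 * 60, sampleRate: 48000 });
    const source = new AudioBufferSourceNode(offline, { buffer });
    const panner = new PannerNode(offline, { panningModel: "HRTF", positionX: 1, positionZ: -2 });
    source.connect(panner).connect(offline.destination);
    source.start();
    return offline.startRendering();  // resolves with the rendered AudioBuffer
  }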
Latest Chrome, Edge, and Firefox perform best. Safari works but may need smaller projects due to AudioWorklet constraints.
Use 48 kHz for consistent HRTF filtering, cleaner resampling, and reliable Ambisonics export.
Related tools: Isochronic Tones · Binaural Beats · Downloads
Runs in your browser
No login. Local processing via Web Audio API.
Input formats
WAV, MP3, Opus, Ogg (multiple stems supported).
Spatial features
HRTF panning, early reflections, occlusion, per-track EQ.
Outputs
Stereo binaural; Ambisonics export for XR pipelines.
Use cases
Headphone mixes, mockups, XR, game audio, podcasts.