

Resonant Visions: A Reactive Audio Visualizer


Overview

Resonant Visions is an interactive audio-visual project that transforms sound into a dynamic visual experience using the p5.js and p5.sound libraries. Designed as a real-time visualizer, the project responds to live microphone input or preloaded audio, creating a synchronized display of waveforms, frequency spectrums, and rhythmic patterns. The visuals include a bass-reactive circular grid, fluid waveform representations, and symmetrical rectangles that respond dynamically to different sound frequencies.
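
To give a sense of how the pieces fit together, here is a minimal p5.js + p5.sound sketch of the same idea, assuming live microphone input: an FFT analyzes the signal, the bass energy drives a reactive circle, and the time-domain waveform is traced across the canvas. The variable names and mapping ranges are illustrative, not the project's actual source.

```javascript
// Minimal sketch: microphone -> FFT -> bass-reactive circle + waveform.
let mic, fft;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();      // live microphone input
  mic.start();
  fft = new p5.FFT(0.8, 1024); // smoothing, number of frequency bins
  fft.setInput(mic);
}

function draw() {
  background(0, 40);           // translucent background leaves soft trails

  fft.analyze();               // must run each frame before getEnergy()
  const bass = fft.getEnergy("bass"); // 0–255

  // Bass-reactive circle: louder low end -> larger diameter
  const diameter = map(bass, 0, 255, 100, min(width, height));
  noFill();
  stroke(255);
  ellipse(width / 2, height / 2, diameter);

  // Time-domain waveform traced across the screen
  const wave = fft.waveform();
  beginShape();
  for (let i = 0; i < wave.length; i++) {
    vertex(map(i, 0, wave.length, 0, width), height / 2 + wave[i] * 150);
  }
  endShape();
}

function mousePressed() {
  userStartAudio();            // browsers require a gesture to start audio
}
```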


Development Process

The project began with an exploration of audio visualization techniques, and I focused in particular on p5.FFT's getEnergy() function to analyze bass, midrange, and treble frequencies. Early experiments involved layering waveforms with varying colors and opacities to create visually compelling patterns. As the project evolved, I introduced additional elements, such as a pulsating grid, symmetrical frequency-based visuals, and reactive circular waveforms, to add movement and depth. Key milestones included refining the waveform layers and adjusting the frequency-based shape transformations.
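
A rough illustration of that band analysis, reusing the mic/FFT setup from the sketch above: each call to getEnergy() returns the energy (0–255) of a named band, which in this variant of draw() sets the opacity of its own waveform layer. The drawWaveLayer() helper, the colors, and the gain values are hypothetical, chosen only to show the layering idea.

```javascript
function draw() {
  background(0, 30);
  fft.analyze();

  const bass = fft.getEnergy("bass");
  const mid = fft.getEnergy("mid");
  const treble = fft.getEnergy("treble");

  // Layered waveforms: each band controls the opacity of one layer
  drawWaveLayer(color(255, 80, 80, map(bass, 0, 255, 40, 200)), 1.5);
  drawWaveLayer(color(80, 255, 160, map(mid, 0, 255, 40, 200)), 1.0);
  drawWaveLayer(color(120, 160, 255, map(treble, 0, 255, 40, 200)), 0.5);
}

// Hypothetical helper: draws one waveform layer scaled by `gain`
function drawWaveLayer(col, gain) {
  const wave = fft.waveform();
  noFill();
  stroke(col);
  beginShape();
  for (let i = 0; i < wave.length; i++) {
    vertex(map(i, 0, wave.length, 0, width),
           height / 2 + wave[i] * 150 * gain);
  }
  endShape();
}
```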


Feedback & Refinements

During the development process, user feedback played a crucial role in refining the visuals. Initial iterations of the circular grid were too dense and visually overwhelming, leading to a redesign that reduced the number of elements and adjusted opacity to create a more balanced aesthetic. These refinements allowed the main visual elements to stand out while maintaining a dynamic yet non-intrusive background.
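
One way to picture that refinement in code is to expose the grid's density and opacity as constants, so the background can be thinned out without touching the rest of the sketch; GRID_COUNT, GRID_ALPHA, and the pulse mapping below are illustrative values, not the project's actual numbers.

```javascript
const GRID_COUNT = 8;  // fewer rings than the denser early iterations
const GRID_ALPHA = 60; // low opacity keeps the grid in the background

// Draws a concentric circular grid that pulses subtly with the bass
function drawGrid(bassEnergy) {
  noFill();
  stroke(255, GRID_ALPHA);
  const pulse = map(bassEnergy, 0, 255, 0, 30);
  for (let i = 1; i <= GRID_COUNT; i++) {
    const d = (i / GRID_COUNT) * min(width, height) + pulse;
    ellipse(width / 2, height / 2, d);
  }
}
```

Calling drawGrid(fft.getEnergy("bass")) early in draw() would keep the pulse in sync with the low end while the brighter foreground elements are drawn on top of it.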


Future Enhancements

Looking ahead, Resonant Visions has the potential to integrate with Hydra Video Synth for more complex, layered visuals. Additional improvements could include gradual color shifts in response to sound dynamics, further enhancing the immersive quality of the experience.
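
As a sketch of how such gradual color shifts could work, the overall level (for example from mic.getLevel()) could ease the palette between two colors with lerpColor(); the shiftColor() helper, the color choices, and the easing factor are assumptions rather than part of the current project.

```javascript
let currentCol; // eased color carried across frames

// level: 0–1 overall loudness, e.g. from mic.getLevel()
function shiftColor(level) {
  const quiet = color(40, 60, 120);  // cool palette at low volume
  const loud = color(255, 120, 40);  // warm palette at high volume
  const target = lerpColor(quiet, loud, constrain(level * 4, 0, 1));
  // Ease toward the target so the shift is gradual, not a hard jump
  currentCol = currentCol ? lerpColor(currentCol, target, 0.05) : target;
  return currentCol;
}
```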


Audience Reception

The project was showcased at the Audio Pixel Collider event at the University of Texas at Austin in the fall of 2024, where it received positive feedback. The audience enjoyed the interplay between visuals and sound, particularly during a live DJ set. Moving forward, minor adjustments, such as fine-tuning audio input sensitivity, could further improve the balance between waveform density and overall clarity.


Resonant Visions successfully bridges the gap between music and visuals, creating an engaging, real-time interaction in which sound is not only heard but also seen.


