If the sampling frequency is 44100 Hz and the cepstrum shows a high peak at a quefrency of 100 samples, that peak indicates the presence of a pitch at 441 Hz (44100 / 100 = 441 Hz). The Bark scale is a psychoacoustic scale theorized by Eberhard Zwicker in 1961; it takes its name from Heinrich Barkhausen, who proposed the first subjective measurements of loudness. We recreated and displayed this scale by implementing the Bark conversion (in Traunmüller's formulation, Bark = 26.81·f / (1960 + f) − 0.53) applied to the frequency bands extracted from the FFT analysis via the Max pfft~ object. The scale corresponds to the first 24 critical bands of hearing. In music, the chromagram relates to the twelve different pitch classes and is used to analyze and capture the harmonic and melodic characteristics of a piece. For this analysis, we used Chromagram, an external object developed for Max that generates a list containing the chromatic values of the 12 tones. On this data we apply a threshold, obtaining a binary list that a custom JavaScript object converts into a decimal value used as an ID.
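As an illustration of the threshold-and-ID step and of the Bark conversion (the original object is JavaScript inside Max and is not reproduced here, so the threshold value, the bit order, and all names below are assumptions of this sketch):

```python
def chroma_to_id(chroma, threshold=0.5):
    """Threshold 12 chroma values into a binary list, then read the bits
    as a single 12-bit number used as an ID (MSB-first order is assumed)."""
    bits = [1 if v >= threshold else 0 for v in chroma]
    note_id = 0
    for b in bits:              # first chroma value becomes the most significant bit
        note_id = (note_id << 1) | b
    return bits, note_id

def hz_to_bark(f):
    """Traunmüller's approximation of the Bark scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

# Example: a C major triad (C, E and G above the threshold)
bits, note_id = chroma_to_id(
    [0.9, 0.1, 0.0, 0.2, 0.8, 0.1, 0.0, 0.7, 0.1, 0.0, 0.2, 0.1])
# bits -> [1,0,0,0,1,0,0,1,0,0,0,0], note_id -> 2192
```

With this (assumed) bit order, every combination of active pitch classes maps to a unique ID between 0 and 4095; 1000 Hz lands near 8.5 Bark, as expected for the scale.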
MAX MSP FFT PATCH
The distribution of the pixels on the 2D surface is obtained by dividing the initial texture into 32 parts (32x1 px) and placing the new textures along the Y-axis. Finally, the pixels are interpolated to bring the texture from 32x32 px up to 512x512 px. Natural harmonics are a succession of pure tones whose frequencies are integer multiples of a base note called the fundamental. Through the Fourier transform we can break a signal down into all of its sinusoidal components and visualize the harmonic structure of a sound. Using the FFT algorithm (an optimized way of computing the discrete Fourier transform), we created a spectrum analyzer that displays frequency values on the Y-axis and the evolution of the sound over time on the X-axis. In signal theory, the cepstrum is the result of a Fourier transform applied to the logarithm of the spectrum of a signal. Through cepstral analysis, it is therefore possible to build a patch that traces the fundamental note of a signal.
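The actual patch does this inside pfft~ in Max; purely to illustrate the underlying math, here is a dependency-free Python sketch of cepstral pitch tracking (the test signal, transform size, and quefrency search range are choices made for this sketch, not taken from the patch):

```python
import cmath
from math import log, pi, sin

def dft(x):
    """Naive O(N^2) discrete Fourier transform -- slow, but dependency-free."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * pi * k * n / N) for n in range(N))
            for k in range(N)]

def real_cepstrum(x):
    """Cepstrum: inverse transform of the log magnitude spectrum."""
    N = len(x)
    log_mag = [log(abs(X) + 1e-9) for X in dft(x)]  # small floor avoids log(0)
    # log_mag is real and even-symmetric, so its inverse DFT equals the
    # real part of a forward DFT divided by N
    return [c.real / N for c in dft(log_mag)]

SR, N = 44100, 500
F0 = 441.0                      # period = SR / F0 = 100 samples
signal = [sum(sin(2 * pi * F0 * h * n / SR) / h for h in range(1, 6))
          for n in range(N)]    # five decaying harmonics of 441 Hz

ceps = real_cepstrum(signal)
# search only quefrencies corresponding to pitches between ~294 and ~882 Hz
q = max(range(50, 151), key=lambda i: ceps[i])
pitch = SR / q                  # 44100 / 100 = 441.0 Hz
```

Restricting the quefrency search range is what rejects the spurious cepstral peaks at multiples of the true period; a patch would impose a similar bound on the expected pitch range.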
MAX MSP FFT HOW TO
This research aims at the study and experimentation of new audio-analysis techniques to improve the control and generation of real-time graphics through the development of custom patches. Digitization is the process that converts an analog signal into a digital one. In the sound field, this process takes place through sampling, an operation that consists of taking samples from the original signal at a constant rate (the sampling frequency). When the sound is sampled, each sample is assigned the amplitude value closest to the amplitude of the original wave; the higher the sampling frequency, the higher the quality of the reconstructed sound. In order to manipulate and translate the data extracted from the sound, we used Max/MSP by Cycling ’74. Our first installations used spectrum analysis, obtaining only the amplitudes and frequencies of the sound signal. Thanks to sampling, it was possible to achieve a higher level of precision and synchronization, analyzing even the small events and sound details that were lost in the spectrum analysis. One of the main goals of this research was to find new ways to extract data through sampling, beyond those already tested. The video shows an example of how the list containing the sampling information can be used, translating the value of each sample into pixels. In this case, we explored the possibility of creating two-dimensional maps from sound and of generating vector fields to control positions, speeds, forces, and so on. The first obstacle was understanding how to transform the list of samples into a two-dimensional representation; to do this, the sound is recorded in a buffer of 1024 samples, thus capturing 1024 values every 23.22 milliseconds. The data inside the buffer are then converted to grayscale and represented as pixels in a one-dimensional texture (1024x1 px). In our installations, we use the average of the samples to control different parameters related to the movement of the particles.
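The buffer-to-texture step described above can be sketched in a few lines of Python (the grayscale mapping and all names are illustrative assumptions; in the installation this happens on Max/Jitter matrices, and the 32-px stacking matches the texture-splitting step mentioned elsewhere in the article):

```python
from math import pi, sin

SR = 44100            # sampling frequency (Hz)
BUF = 1024            # buffer size in samples

# one buffer spans 1024 / 44100 s, i.e. about 23.22 ms, as stated above
frame_ms = BUF / SR * 1000

def to_grayscale(samples):
    """Map samples in [-1, 1] to 8-bit grayscale values (a 1024x1 texture)."""
    return [round((s + 1) / 2 * 255) for s in samples]

def stack_rows(pixels, width=32):
    """Cut the 1D texture into 32-px strips and stack them on the Y-axis."""
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

buf = [sin(2 * pi * n / BUF) for n in range(BUF)]  # one sine cycle as dummy audio
tex1d = to_grayscale(buf)      # 1024x1 grayscale texture
tex2d = stack_rows(tex1d)      # 32 rows of 32 px each
avg = sum(buf) / BUF           # per-buffer average used as a control parameter
```

Any per-buffer statistic (here the mean) can then be routed to particle parameters; the 2D texture is what gets interpolated up to 512x512 px for the vector field.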
The synergy between sound and image has always been a fundamental component of our projects, and for this reason we constantly explore different strategies for extracting data from the sound signal.