As a child, I always wondered how deeply one can look into things. We’re used to seeing the shape of everything from our own human perspective. A dog, a rock, a snowflake, the skin, the eyes… they all look familiar to us, but could you recognize them if you looked at them through a microscope?
My grandfather was a doctor and he used to take me to his laboratory quite often. One of my hobbies there was to take the small samples I was given and look at them through the microscope. Blood samples or bacteria looked like plain liquid, but the lenses revealed a whole different world. There were lots of circles of different sizes and colours that were supposed to mean something. My grandfather would count them and draw some kind of conclusion from the final number. Healthy, unhealthy or probably somewhere in between. That’s what it all looked like to me.
I also loved observing cities from an airplane. They looked so tiny and fragile, pretty much like a city from a children’s book, and I was a giant who could easily pick the houses up and move them around or scramble them. This is probably why I loved Google Earth when it came out. All the cities I had lived in could be seen from above, zooming in and out as their very shape evolved into something totally new and different. Then again I wondered: how far can we go? Are we someone else’s blood cells? Someone else’s bacteria?
Diving into the sea of sound
Can we look at sound through a microscope? Is there an equivalent? What can we find if we dive into sound waves with a magnifying glass?
Before computers started to make it all easier, Ιάννης Ξενάκης (Iannis Xenakis) began manipulating small particles of sound for musical purposes. He would take a music tape and slice it into very tiny pieces, rearranging the fragments and splicing them back together. This approach was very limited and time consuming. Fortunately, today we can rely on computers to take this game to the next level.
Granular synthesis, what is it?
Imagine you take a small fragment of a photographic film and magnify it a hundred times. Then, from that magnified image, you take a fragment the same size as the original film and focus your attention on it. You only see that magnified fragment and you don’t see anything else. How different is it from the original film? Could you tell it comes from the original? Does it have a different character?
Now imagine there is not one but thousands of photographic films, one after another, sorted as frames in a movie. From this motion picture, you select one time interval, take all the frames within it, rearrange them in a different order and play the whole sequence back at a different speed, faster or slower. The result is going to be completely different and will probably unveil a totally different set of emotions, telling a different story and standing as a unique piece of art in its own right.
The same happens with sound.
In a nutshell, granular synthesis is about taking small fragments of sound, in the range of 10 to 50 milliseconds, that can then be played back at different speeds or rearranged with other fragments to form new sounds¹. This technique has been widely used for time-stretching and pitch-shifting, but the aim of this article is rather to explore the creative possibilities of granular synthesis.
Samples, the raw material
This type of synthesis takes audio samples as its raw material. The sample is chopped into slices, called “grains”, that are reproduced at a certain speed. A good way of imagining it is to picture a tape recorder. The head of the tape recorder reads the first grain and then it has the ability to jump forward or backward. It can jump forward, to just past the beginning of the next grain (A); it can jump backward, to somewhere within the grain it just played (B); or it can jump backward a bit further, to before the beginning of the previously played grain (C). In each case we get a different effect:
A: Speed up effect
B: Slow down effect
C: Backwards playback effect
This jumping distance is called “shift speed”, the playback speed is called “sample rate” and the size of the grain is called (you guessed it) “grain size”². Each grain is normally faded in and out with an envelope to smooth out the clicks at the grain boundaries.
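To make the tape-head picture concrete, here is a minimal granular playback sketch in Python with NumPy. The function name `granulate` and its parameters are my own illustrative naming, not taken from any of the tools mentioned here; real engines overlap-add the grains rather than simply concatenating them.

```python
import numpy as np

def granulate(sample, grain_size, shift_speed):
    """Chop `sample` into grains and replay them with a jumping read head.

    shift_speed > grain_size     : head skips ahead -> sped-up result (A)
    0 < shift_speed < grain_size : head overlaps    -> slowed-down result (B)
    shift_speed < 0              : head walks back  -> backwards playback (C)
    """
    out = []
    # start at the end when walking backwards, at the beginning otherwise
    pos = len(sample) - grain_size if shift_speed < 0 else 0
    while 0 <= pos <= len(sample) - grain_size:
        grain = sample[pos:pos + grain_size].astype(float)
        grain *= np.hanning(grain_size)   # envelope: fade in/out, avoids clicks
        out.append(grain)
        pos += shift_speed
    return np.concatenate(out) if out else np.zeros(0)

# a one-second 440 Hz sine at 44.1 kHz as test material
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

grain = 1024                                # ~23 ms grains
slow = granulate(tone, grain, grain // 2)   # overlapping head -> longer output
fast = granulate(tone, grain, grain * 2)    # skipping head    -> shorter output
```

Playing `slow` back at the original sample rate stretches the tone in time without changing its pitch, which is exactly the trick behind time-stretching.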
Pushing the boundaries
Just by playing with the basic parameters we have mentioned, you can get very interesting results, very different from the original audio sample. The texture is sometimes recognizable, and that can be a good thing. However, more radically different results can be obtained by modulating those parameters with external low-frequency oscillators or any other signal. When the sample rate, grain size or shift speed change over time, the result is a more “vivid” sound, as if it were alive, organic.
Randomness can be added to the list of ingredients with very nice results. Instead of jumping from one grain to the adjacent one, the read head jumps to a randomly selected grain. Granular synthesis can also be used as a texture generator: the algorithm creates textures that can be played tuned to a desired note³, normally triggered with a MIDI keyboard controller.
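The random-jumping idea can be sketched in a few lines, again with hypothetical naming (standard Python/NumPy, not any particular tool’s API):

```python
import numpy as np

def granulate_random(sample, grain_size, n_grains, seed=0):
    """Instead of stepping to the adjacent grain, the read head jumps
    to a randomly chosen start position for every grain."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(sample) - grain_size, size=n_grains)
    window = np.hanning(grain_size)               # fade each grain in and out
    grains = [sample[s:s + grain_size] * window for s in starts]
    return np.concatenate(grains)

# any source works; here a plain 220 Hz sine
sr = 44100
t = np.arange(sr) / sr
source = np.sin(2 * np.pi * 220 * t)
texture = granulate_random(source, grain_size=2048, n_grains=100)
```

Resampling such a texture up or down by the right ratio is one simple way to tune it to a desired note before triggering it from a keyboard.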
From hardware to software, there are tons of tools out there for experimenting with granular synthesis. I will list some of my favorites, the ones I love playing with and the ones that get me the most interesting textures.
Bastl Instruments – microGranny/grandPA
These are very nice toys for playing with granular synthesis. Plus, you get the feeling of touching knobs and buttons with your hands. The microGranny is a standalone hardware synth that lets you do all the granular synthesis basics and also synchronize it with an external MIDI clock, meaning that whatever texture you’ve got with it can be triggered at a specific tempo (BPM), a very nice feature for making music. The sample rate can also be tuned to specific notes via MIDI and it also supports MIDI CC, so any parameter can be controlled externally. A very powerful tool.
The grandPA is microGranny’s modular counterpart. The modular implementation makes it a lot more powerful, considering all the infinite possibilities of modulating the parameters with CV signals (which can have any shape, frequency, etc.). Modular synthesis is a world on its own.
Max for Live – Granulator
This Max for Live instrument provides everything granular synthesis has to offer, with an amazing implementation of the algorithm. It’s my personal choice when creating textures tuned to specific notes triggered with a MIDI keyboard and, unlike the microGranny, it’s polyphonic, so you can also play chords. Last but not least, it’s free. This is my personal recommendation if you want to start experimenting with granular synthesis without diving deeper into algorithms or concepts. You’ll get results right away. Just select an audio sample and start tweaking it.
Max/MSP – 2D.wave~
This is a very quick way of implementing granular synthesis. This Max/MSP object takes one fragment of an audio sample and divides it into a number of rows. Then, one or more rows are used for playback at a certain speed. This “raw” processing can create very interesting textures that bear little resemblance to the original sample.
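This is not the actual Max patch, of course, but the row idea is easy to approximate outside Max. A sketch in Python/NumPy, with illustrative names, of splitting a fragment into equal rows and looping one of them:

```python
import numpy as np

def play_row(fragment, n_rows, row_index, repeats):
    """Split `fragment` into `n_rows` equal slices ("rows") and loop one
    of them, roughly how 2D.wave~ scans a single row of its buffer."""
    row_len = len(fragment) // n_rows
    start = row_index * row_len
    row = fragment[start:start + row_len]
    return np.tile(row, repeats)              # loop the selected row

fragment = np.linspace(-1.0, 1.0, 8000)       # stand-in for a slice of audio
loop = play_row(fragment, n_rows=8, row_index=3, repeats=4)
```

Sweeping `row_index` over time, by hand or with an LFO, is where the interesting textures start to appear.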
Stochastic synthesis – when “pure” and “concrete” sounds meet
I have to admit that I don’t totally understand what stochastic synthesis really means. It also works in the microsound universe, which makes it a sibling of granular synthesis. However, it applies stochastic processes to produce its sonorities. Here’s what I understand: there are many different waveforms (like the “grains” in granular synthesis, only called “GENDYs”; the name comes from génération dynamique) that are combined one after another. The order in which these GENDYs are concatenated comes from stochastic procedures (“numerical values of some system randomly changing over time, such as the growth of a bacterial population”⁴). The waveform of each GENDY can also change over time.
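One way to picture such a process in code, and this is my own toy approximation, not Xenakis’ actual GENDY algorithm: each cycle is a waveform drawn through a set of breakpoints, and the breakpoint amplitudes take a bounded random walk from one cycle to the next. All names and parameters here are illustrative:

```python
import numpy as np

def gendy_like(n_cycles, n_breakpoints=12, samples_per_segment=64,
               step=0.1, seed=1):
    """One waveform per cycle, defined by breakpoints whose amplitudes
    drift by a bounded random walk between cycles (a GENDY-like process)."""
    rng = np.random.default_rng(seed)
    amps = rng.uniform(-1, 1, n_breakpoints)  # initial breakpoint amplitudes
    out = []
    for _ in range(n_cycles):
        # random-walk the breakpoints, clipped so the signal stays in [-1, 1]
        amps = np.clip(amps + rng.uniform(-step, step, n_breakpoints), -1, 1)
        # draw the cycle: linear interpolation between consecutive breakpoints
        for a, b in zip(amps, np.roll(amps, -1)):
            out.append(np.linspace(a, b, samples_per_segment, endpoint=False))
    return np.concatenate(out)

wave = gendy_like(n_cycles=50)
```

Because the waveform mutates slowly instead of repeating exactly, the result sits somewhere between a tone and a noise, which is much of the appeal.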
Don’t worry if it sounds way too technical or theoretical. You don’t need to understand it deeply in order to use it. Some time ago, I used a Max/MSP object that implements Xenakis’ GENDY3 algorithm (called “sc.gendy3~”). The data I used for changing the parameters was Spanish statistical data from the 2011 crisis, but you can use anything. Plug in a guitar, a keyboard, CV signals, MIDI signals, an audio file, a video, all of them at once… Anything that provides data evolving over time is a great source. The rewards are high, so it’s worth trying.
Only when the “pure” electronic sounds were framed by other “concrete” sounds (…) could electronic music become really powerful. ⁵
There are two tendencies in electronic music nowadays, I would say. On one side there is the vintage tendency: people digging up old hardware and old sounds from the 80s, the 70s and beyond. On the other side there is the futuristic tendency: people developing new ways of making music. I stand with the latter group. Granular and stochastic synthesis have both been around for ages; it’s just that now they are accessible to anyone. I believe the future is more about communicating with sound than with music theory, and the microsound universe is yet to be explored and discovered even further. There’s a vast and rich world in there, so don’t be afraid to dig deep. You might find gold.
1: A more complete and precise description can be found here: http://granularsynthesis.com/guide.php
2: The names can change depending on the tool.
3: A full explanation on how pitch-shifting and time-stretching algorithms work can be found in this article:
5: Xenakis, Iannis. 1992. Formalized Music. (Rev. ed). Stuyvesant, NY: Pendragon Press
Available online here: