Sound transplant: taking the “DNA” of an audio clip and implanting it into another one.



Like most teenagers of my generation with a slight interest in music, I used to spend my afternoons watching music videos on MTV, and it was on one of those lazy afternoons that I first heard the catchy chorus of Daft Punk’s “Around the World”. “How do they do that?”, I wondered. It was like listening to a robot singing. “Are machines finally taking over the world?” This was several years before I even heard the word “vocoder” for the first time, and long before I got interested in electronic sounds.

Back then, I was more into rock music, but what I loved about vocoders was their ability to turn whatever sound they were applied to into a human-like tune. I loved wah-wah effects for the same reason. Unfortunately, vocoders and Auto-Tuned vocals have been so overused in recent years that the technique now sounds washed out, but if there’s one positive thing we can take from vocoders, it’s the idea of imprinting the characteristics of one sound onto another.

Transplantation – How does it work?
What vocoders basically do is take the frequency spectrum of the voice and multiply it by that of the instrument to which they’re applied. In other words, they just multiply two signals in the frequency domain. They do have additional features to cope with vowels and consonants, but the basic idea is just a multiplication.

It’s easy to perform this operation in Max/MSP. What is not easy is to predict the result. Some sounds, when combined, create something great and unique. Others just produce a useless mass of lo-fi noise. It’s up to you, of course, to separate the gold from the litter according to your own taste.

Multiplying the frequency spectrum of two audio samples in Max/MSP. Top: main patch. Bottom: pfft~ in detail.
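Outside of Max/MSP, the same bin-by-bin multiplication can be sketched offline with NumPy. This is a rough, whole-clip approximation of what the pfft~ subpatch does frame by frame with windowing; the function name and the toy signals below are my own, not part of the original patch.

```python
import numpy as np

def spectral_multiply(a, b):
    """Multiply the spectra of two clips and return to the time domain.

    A whole-clip sketch of the pfft~ idea: a real vocoder does this
    frame by frame with overlapping windows, which sounds far better.
    """
    n = min(len(a), len(b))
    spec_a = np.fft.rfft(a[:n])
    spec_b = np.fft.rfft(b[:n])
    return np.fft.irfft(spec_a * spec_b, n)

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)                    # stand-in "instrument"
noise = np.random.default_rng(0).standard_normal(sr)  # stand-in "texture"

hybrid = spectral_multiply(tone, noise)

# A silent clip has an all-zero spectrum, and zero times anything
# is zero -- silence in, silence out.
muted = spectral_multiply(tone, np.zeros(sr))
```

Note that the product of two spectra can get very loud or very quiet, so in practice you would normalize the output before listening.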

A few things to consider
Silence: when one sound is silent, the result is silence. It might sound obvious, but it’s important to remember: when you multiply something by zero, the result is zero, and zero means silence. So it’s advisable to align the beginnings of both audio clips beforehand.
Filtering: You may want to consider filtering one of the audio clips (or both). Filtering can remove some frequencies that might be ruining the result, so it’s a good idea to at least try it.

Multiplying the frequency spectrum of two audio samples that are previously filtered.
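The pre-filtering idea can be sketched the same way: zero out the unwanted FFT bins before the multiplication step. A real patch would use proper filters (biquads, analog pedals) with gentler slopes; this brick-wall version, with a function name of my own, is only meant to show the effect.

```python
import numpy as np

def brickwall_bandpass(x, lo_hz, hi_hz, sr):
    """Zero out FFT bins outside [lo_hz, hi_hz] -- a crude band-pass.

    Real filters roll off gradually; this hard cut is just to
    illustrate trimming frequencies before the spectra are multiplied.
    """
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0
    return np.fft.irfft(spec, len(x))

sr = 44100
t = np.arange(sr) / sr
# A clip with a 100 Hz rumble plus a 1 kHz tone.
clip = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)

# Keep 500 Hz - 5 kHz: the rumble is removed, the 1 kHz tone survives.
filtered = brickwall_bandpass(clip, 500, 5000, sr)
```

Filtering either clip (or both) like this before the multiplication removes the bins that would otherwise dominate the product.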

Example 1
Let’s get an idea of what it sounds like with a real example. Here we take a hi-hat sound whose texture we want to modify. The texture to be added comes from a noisy oscillator previously processed with some analog guitar pedals. The result no longer sounds like a hi-hat: it keeps the same dynamics, but the texture is now a combination of both the hi-hat and the noisy oscillator. Distortion and echo effects are then added.

Listen to the audio track “Example 1” in the playlist at the bottom of the page, which contains the following clips:
Audio Clip 1: First sound, a hi-hat.
Audio Clip 2: Second sound, a noisy oscillator.
Audio Clip 3: The spectra of both sounds (“Audio Clip 1” & “Audio Clip 2”) multiplied together.
Audio Clip 4: The same sample as “Audio Clip 3”, processed with distortion and echo effects.

Example 1 – Audio Clip 4, processing chain: Soundtoys’ Decapitator → EchoBoy

VCA – Transplanting rhythm
There’s another way of copying the characteristics of one sound in order to replicate them in another: modulation. In theory, every element of a signal can be modulated: amplitude, frequency, phase, etc. In practice, when working with audio samples, it’s not that simple. Amplitude modulation, however, is really easy to set up. In modular synthesis, there’s one module that gets used on a daily basis: the “VCA”.

VCA stands for “Voltage-Controlled Amplifier”, which means, in simple terms, that the volume of one audio signal is controlled by another signal. The control signal can be an audio sample or something else (e.g. a low-frequency oscillator). This is a good technique for replicating the “rhythm” of one audio clip in another that is completely different. For instance, say you have a sound with little dynamic variation, a bit dull and monotonous. By modulating its volume with the sound of a very dynamic hi-hat, you can add those dynamics to the original sound so that it becomes more vivid and enjoyable. This is what we do in the second example. Well, kind of…

Very basic VCA modulation using Reaktor Blocks. The volume of an oscillator is modulated with an external audio sample.
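In code form, the VCA amounts to an envelope follower driving a multiply: rectify the modulator, smooth it, and use the result as the carrier’s gain. Here is a minimal NumPy sketch under my own assumptions (a one-pole smoother and a synthetic tick pattern standing in for the hi-hat), not a transcription of the Reaktor patch:

```python
import numpy as np

def envelope_follow(x, sr, cutoff_hz=20.0):
    """Rectify a signal and smooth it with a one-pole low-pass,
    giving a control signal that tracks its loudness -- the
    envelope follower feeding the "VCA" multiply."""
    rectified = np.abs(x)
    a = np.exp(-2 * np.pi * cutoff_hz / sr)  # one-pole coefficient
    env = np.empty_like(rectified)
    acc = 0.0
    for i, v in enumerate(rectified):
        acc = a * acc + (1 - a) * v
        env[i] = acc
    return env

sr = 44100
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 220 * t)              # dull, steady tone
modulator = np.where((t % 0.25) < 0.02, 1.0, 0.0)  # hi-hat-like ticks

env = envelope_follow(modulator, sr)
vca_out = carrier * env  # the VCA: one signal scaling another's volume
```

The cutoff of the smoother sets how closely the gain hugs the modulator: lower values give a lazier, rounder envelope, higher values follow every transient.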

Using gates for controlling the volume
Sometimes, volume modulation doesn’t give good results when the dynamics of the modulating sound aren’t well defined. When this happens, I prefer to use thresholds. It works like this: instead of tracking the whole dynamic envelope, a threshold is set so that the audio clip (the one to be modulated) is muted whenever the modulating signal falls below that threshold. This can be done with a gate, using its “side-chain” feature. I personally prefer to program it in Max/MSP for a more “raw” effect, as shown in the image below. This gating technique is a bit more radical and is the one used in the second example.

Rhythm transplant in Max/MSP. The dynamics of one audio sample control the opening of a gate that is applied to another audio sample. This way, the “rhythm” is “transplanted” from one sound to another.
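As a sketch of that side-chain gate: compare the key signal against the threshold sample by sample and hard-mute the carrier whenever it falls below. Function and signal names here are illustrative, and a production gate would add attack/release ramps and hysteresis to avoid clicks; this raw version matches the “radical” flavor described above.

```python
import numpy as np

def sidechain_gate(carrier, key, threshold):
    """Mute the carrier wherever |key| drops below the threshold.

    Gain is 1 when the key is above the threshold and 0 otherwise --
    a hard gate with no attack/release smoothing.
    """
    gate = (np.abs(key) >= threshold).astype(float)
    return carrier * gate

sr = 44100
t = np.arange(sr) / sr
synth = np.sin(2 * np.pi * 110 * t)           # distorted-synth stand-in
hat = np.where((t % 0.25) < 0.05, 0.8, 0.05)  # loud ticks over a quiet floor

gated = sidechain_gate(synth, hat, threshold=0.5)
```

Lowering the threshold lets quieter parts of the key signal hold the gate open longer, which is exactly the variation heard in Audio Clip 4 of Example 2.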

Example 2
This example may sound similar to Example 1, but it’s a whole different process. Here, the sound comes only from the distorted synthesizer. The hi-hat just regulates the dynamics; it doesn’t transfer any texture at all, it simply acts as a gate.

Listen to the audio track “Example 2” in the playlist at the bottom of the page, which contains the following clips:
Audio Clip 1: Sound to be modulated, a distorted synthesizer.
Audio Clip 2: Modulating sound, a hi-hat.
Audio Clip 3: Modulated sound; the synthesizer now has the rhythm of the hi-hat.
Audio Clip 4: The same sample as “Audio Clip 3”, but this time the threshold of the gate was set lower.
Audio Clip 5: The same sample as “Audio Clip 3”, with a ping-pong delay on it.

Example 2 – Audio Clip 5, processing chain: Ping Pong Delay.

Example 3
Just to put the sounds in context, in this example we add a techno kick drum to the previous examples.

(Listen to the audio track “Example 3” in the playlist at the bottom of the page).

When to perform a transplant?
Anytime! It’s a good way of discovering new sounds, especially with a “spectrum transplant”. Imagine you’re a graphic designer cropping an image, but instead of using a regular shape, you take the outline of another object (an apple, for instance) and crop the image with that shape.

Rhythm! That’s another reason. By using VCA modulation, you can literally transplant the rhythm of one sample onto another, and the result is more “organic” than with a tremolo.

Just think in terms of features. Is there a feature of one sound that you would like to have in another? If the answer is yes, you may get it by performing any of the “transplant” techniques explained in this article. There’s no life at risk here, so don’t hesitate to experiment.
