How cutting tracks in reverse, then reversing those reversed tracks, will add zing to your mixes.
Hello and welcome to another Dojo! Since this issue is dedicated to all things acoustic, I thought I’d share a fun technique that I call “harmonic clouds.” It involves learning a section of your song backwards, recording it, reversing the new recording, and placing it back in the appropriate spot (or not!). I usually do this with acoustic guitars, but it can be applied with equal aplomb to electrics and can supercharge your creativity. Tighten up! The Dojo is now open.
We are all familiar with the sound of reverse delay. On the surface, you might be thinking, “I can do this already.” But you’d be missing out. The “harmonic clouds” technique offers many more possibilities and much more control than recording a guitar part with a reverse delay effect. In short, this technique is inspired more by the process and sounds of double tracking than using delays.
By the mid-’60s, it was standard practice for the Beatles to sing all their lead vocals (and some background vocals) twice to thicken up their voices. Heard together, the slight deviations between the two takes produced a natural chorusing effect, along with charming variations in word timing, dynamics, and timbre. The net result was that the vocals stood out more on the final recordings.
However, it was time-consuming. John Lennon, in particular, was always asking for a way to get the sound of “double tracking” without actually having to track the vocal twice. EMI’s brilliant studio engineer Ken Townsend devised an ingenious solution: He split the signal just after the recording head on a Studer J37 tape machine (at 15 ips) and routed it through both the recording and playback heads of the EMI BTR2 tape machine (at 30 ips). The sound from the BTR2 would then be heard at almost the same time as the sound from the Studer’s playback head [Fig. 1]. With a little more help from a Levell oscillator, Townsend could varispeed the BTR2 machine with greater control (see my March 2022 article about varispeed). Thus, ADT (artificial double tracking) was born. FYI, Waves makes the Reel ADT plug-in ($29 street) as part of their Abbey Road Collection. But I’m going to take you a bit further than that, because we’re going to create new tracks that increasingly differ from the original! Plus, you can always apply ADT to the new tracks later.
Fig. 1
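If you’re curious what that drifting-delay trick sounds like in principle, here’s a rough Python sketch that fakes ADT by mixing a dry track with a copy delayed by a slowly wandering 15–30 ms. It assumes numpy and soundfile are installed, the file name is hypothetical, and it’s a crude stand-in for the tape-machine rig, not a model of it (or of any plug-in).

```python
import numpy as np
import soundfile as sf

# Hypothetical input file; mono is assumed to keep the sketch short.
dry, sr = sf.read("vocal.wav")
if dry.ndim > 1:
    dry = dry.mean(axis=1)

n = len(dry)
t = np.arange(n) / sr

# Delay time drifts slowly between roughly 15 and 30 ms, loosely
# imitating the wow of a second, varisped tape machine.
delay_sec = 0.0225 + 0.0075 * np.sin(2 * np.pi * 0.3 * t)

# Read the dry signal at a position that lags by the drifting delay.
read_pos = np.clip(np.arange(n) - delay_sec * sr, 0, n - 1)
doubled = np.interp(read_pos, np.arange(n), dry)

# Blend the original and the fake "double," then normalize and save.
mix = 0.5 * (dry + doubled)
mix /= max(1e-9, np.abs(mix).max())
sf.write("vocal_adt.wav", mix, sr)
```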
Let’s get started. Here are the basic steps:
Step 1 - Learn the Progression Backwards
Take the chords from a particular section of your song (perhaps the chorus or the bridge) and learn the progression backwards, including the rhythms. For this example, I was working on the bridge of a song from my album Jacob’s Well, which will be released this fall. The way I do this is by writing a chart, then reversing the order and playing it until it feels natural.
Step 2 - Create a New Track
Create a new track and then record the “new” rhythm guitar part you just learned by muting all the other tracks and playing along with the click track.
Step 3 - Reverse the Track
Reverse the track you just recorded and listen to it. Depending on when you stopped recording, you may need to align it a bit before you unmute the other tracks and hear it in context. Feel free to experiment with placing the new track in different spots rhythmically and listen to how it changes the section. This initial track is the closest thing to a simple reverse delay, but it’s not one, because it’s an entirely different performance, and all those subtle timing and timbral differences are there in all their glory.
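By the way, if you ever want to flip a bounced file outside your DAW, or double-check what the reverse command is doing, the whole operation is just reading the samples back to front. Here’s a minimal Python sketch, assuming the soundfile library and hypothetical file names:

```python
import soundfile as sf

# Hypothetical file names; use whatever your DAW bounced.
audio, sr = sf.read("bridge_take_backwards.wav")

# Reversing audio is just reading the samples last-to-first.
# The slice works for mono and multichannel files alike.
reversed_audio = audio[::-1]

sf.write("bridge_take_forwards.wav", reversed_audio, sr)
```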
Step 4 - Explore Your Own Music
Now we’re ready to have some real fun. Create some new tracks and repeat steps one through three, but each time play the same reversed passage in a different part of the guitar (e.g., change the tuning, use a capo, use only power chords, add effects, etc.). As the versions pile up and you get used to the process, I think you’ll be really surprised by the results. Who knows, you might even start trying this with all kinds of instruments! Just remember to always serve the song and stay true to the emotional content you want these tracks to support. Most often for me, less is more.
Until next month, blessings, and keep sharing your gifts with the world. Namaste.
The entire world of ’verb—from traditional to extreme—really does lie at your fingertips. Here’s how to access it.
This article is for recording guitarists eager to make the most of reverb plug-ins. We’ll explore the various reverb types, decode the controls you’re likely to encounter, and conclude with some suggestions for cool and creative reverb effects.
This is not a buyer’s guide, though you’ll hear many different products. Our focus is common reverb plug-in parameters and how to use them. Nearly all modern DAWs come with good-sounding reverbs. You can also add superb third-party plug-ins. But there are also plenty of free and budget-priced reverbs—just google “free reverb plug-in.”
Reverb = delay. Reverb is merely a delay effect. Sounds traveling through air eventually encounter surfaces. Some sound bounces off these surfaces, producing a complex network of echoes, made even more complex when the initial reflections bounce off secondary surfaces.
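To make that concrete, here is about the crudest “reverb” you can write: a single delay line with feedback, sketched in Python (numpy and soundfile assumed, file names hypothetical). On its own it just produces a rhythmic echo, but pile up many such delays at different lengths, add filtering, and you’re on the road to an algorithmic reverb.

```python
import numpy as np
import soundfile as sf

dry, sr = sf.read("guitar.wav")      # hypothetical dry recording
if dry.ndim > 1:
    dry = dry.mean(axis=1)           # mono keeps the example simple

delay_samples = int(0.05 * sr)       # one "wall" 50 ms away
feedback = 0.6                       # how much of each echo feeds the next

# Every pass through the loop is another bounce off the imaginary wall.
# (A deliberately naive loop: slow, but easy to follow.)
wet = dry.copy()
for i in range(delay_samples, len(wet)):
    wet[i] += feedback * wet[i - delay_samples]

mix = 0.7 * dry + 0.3 * wet
mix /= max(1e-9, np.abs(mix).max())  # normalize to avoid clipping
sf.write("guitar_echoes.wav", mix, sr)
```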
The controls on reverb plug-ins define how the software mimics this process. Function names can be confusing, but remember, everything relates to acoustic phenomena that you already understand intuitively. For example:
- The space’s size. (The further a sound travels before hitting a surface, the later the echoes arrive.)
- The hardness of the reflective surfaces. (The harder the material, the louder, brighter, and more plentiful the echoes.)
- The relative angles of the reflective surfaces. (A square room sounds different than a round one, which sounds different than a trapezoidal one.)
- The presence of other objects. (Soft surfaces like carpets, cushions, and acoustic foam diminish the reverb, usually affecting some frequencies more than others.)
- The listener’s location. (The further an ear or microphone is from the sound source, the more reverberation is perceived.)
Understanding Reverb Types
By definition, all reverb plug-ins are digital. Most are either algorithmic or convolution-based. Algorithmic reverbs employ delay, feedback, and filters to mimic sounds bouncing around in space. Convolution reverb (also called impulse response or IR reverb) creates “snapshots” of actual sonic spaces and audio devices. In convolution, developers amplify a test tone in the targeted space (or through a target piece of audio gear) and record the results. The software compares the new recording to the dry test tone, and then it applies corresponding adjustments to any audio, making it sound as if it was recorded in the modeled space or through the modeled gear. (That’s how the speaker simulations work in most amp modelers.) Algorithmic and convolution reverbs often perform the same tasks, just via different methods.
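The convolution step itself is tiny in code. Here’s a minimal sketch using Python’s scipy, assuming you already have a dry track and an impulse-response file at the same sample rate (both file names are placeholders):

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_guitar.wav")       # hypothetical dry track
ir, ir_sr = sf.read("stairwell_ir.wav")   # hypothetical impulse response
assert sr == ir_sr, "resample one file so the sample rates match"

# Mono keeps the example readable.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Convolution smears every sample of the dry signal across the IR's
# echo pattern; that's the whole trick behind IR reverb.
wet = fftconvolve(dry, ir)
wet /= max(1e-9, np.abs(wet).max())       # tame the level

# Blend to taste; trimming the tail keeps the file its original length.
mix = 0.6 * dry + 0.4 * wet[: len(dry)]
mix /= max(1e-9, np.abs(mix).max())
sf.write("guitar_in_stairwell.wav", mix, sr)
```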
But when we make musical choices, we rarely think, “This should be algorithmic and that should be convolution.” We’re usually trying to evoke a particular sound: a place, an old analog device, a freaky sound not found in nature. So, let’s take a whirlwind tour of reverb history, with thoughts about obtaining those sounds via plug-ins.
A Haul-Ass Reverb History
Real spaces. Before the 20th century the only reverbs were actual acoustic environments: caves, castles, temples, tombs. It wasn’t till the 18th century that people began constructing spaces specifically for their sonic properties—the roots of the modern concert hall.
Convolution reverbs excel at conjuring specific places. Most IR reverbs include libraries of such sounds. Some evoke iconic spaces and famed studios. IRs can also mimic small spaces, like a closet or compact car.
Clip 1
In Clip 1, you hear the same acoustic guitar snippet through IRs captured inside the Great Pyramid of Giza, the isolation block at Alcatraz prison, Chartres cathedral, and the interior of a VW Beetle, all using Audio Ease’s Altiverb library. (For demo purposes, reverb is applied rather heavily in all audio examples.)
Echo chambers were the earliest form of artificial reverb, though they aren’t all that artificial. The chamber is usually a room with hard, reflective surfaces. A loudspeaker in the chamber amplifies dry recordings, and a distant microphone records the results. It’s still “real reverb,” only it can be added and controlled independently from the original recording. This process evolved during the 1930s and ’40s. The first popular recording to use the effect was 1947’s “Peg o’ My Heart” by the Harmonicats, produced by audio genius Bill Putnam.
“Peg o’ My Heart” by the Harmonicats (1947)
During a recent recording session at Hollywood’s Sunset Sound, I shot Video 1 in the famed Studio A echo chamber, thanks to house engineer George Janho. You’ve heard this very room countless times. The Doors and Van Halen made most of their records here. You also hear this reverb on “Whole Lotta Love,” the vocal tracks on the Stones’ “Gimme Shelter,” Prince’s 1999 and Purple Rain, and countless other famous recordings.
Video 1: Sunset Sound Chamber
Echo chambers are well represented in most IR reverb libraries. Most algorithmic reverbs do chambers as well, replicating the general effect without modeling a particular space. You can even find plug-ins dedicated to a specific chamber, like Universal Audio’s Capitol Chambers, which models the Hollywood chamber famously used by Frank Sinatra.
Spring reverbs. These were the first truly artificial reverbs. They initially appeared in pre-WWII Hammond organs, and by 1960 or so they had migrated to guitar amps. Fender wasn’t the first company to make reverb-equipped amps, but their early-’60s reverb units still define the effect for many guitarists.
The reverb effect is produced by routing the dry signal through actual springs, with a microphone capturing the clangorous results and blending them with the original tone. Springs generally sound splashy, trashy, and lo-fi, often in glorious ways. It’s an anarchic sound, best captured in a plug-in via IRs. Most of the spring reverb sounds in guitar modelers are IR-based. Meanwhile, reverb stompboxes—usually algorithmic—mimic the sound with varying degrees of success.
Plate reverb appeared in the late 1950s, initially in the Elektromesstechnik EMT-140, which remains a sonic gold standard. Plate reverb works similarly to spring reverb, but a massive metal sheet replaces the springs. It’s generally a smooth, sensuous sound relative to a spring.
Clip 2
In Clip 2, you hear the same acoustic guitar snippet through impulse responses of a Fender spring reverb unit and a vintage EMT-140 plate.
There are countless plate clones among today’s reverb plug-ins. Some are convolutions based on analog gear. But algorithmic reverbs also excel at faux-plate sounds. In fact, one of the initial goals of early digital reverb was to replace cumbersome mechanical plates. Speaking of which…
Digital reverb (the algorithmic kind) arrived in 1976 via the EMT-250, also from Elektromesstechnik. Lexicon and AMS produced popular rivals. They focused largely on mimicking rooms, chambers, and plates. Sound quality has improved over the decades thanks to increased processing power and clever programming.
Today you can get far “better” algorithmic reverb from plug-ins. But ironically, those primitive digital ’verbs are trendy again in pop production. You can find precise clones of retro-digital hardware in plug-in form.
Convolution reverb debuted at the end of the century, popularized by Sony’s DRE S777 unit. Convolution reverbs often have fewer controls than their algorithmic cousins because most of the process is baked into the impulse response.
Most convolution reverbs have similar sound quality. The free ones can sound as good as the pricey ones. Higher prices are often based on the size and quality of the included IR libraries. Google “free reverb impulse responses” for gratis goodies.
Recent wrinkles. There are always interesting new reverb developments. For example, Things — Texture from AudioThing and Silo from Unfiltered Audio are anarchic granular reverbs that loop and manipulate tiny slices of the reverb signal to create otherworldly effects ranging from the brutal to the beautiful.
Clip 3
Clip 3 includes several granular reverb examples.
Image 2: Zynaptiq’s innovative Adaptiverb generates reverb via pitch-tracking oscillators rather than delays and feedback loops.
Some newer reverbs employ artificial intelligence to modify the effect in real time based on the audio input. iZotope’s Neoverb automatically filters out frequencies that can muddy your mix or add unwanted artifacts. And Zynaptiq’s Adaptiverb generates reverb in a novel way: Instead of echoing the dry signal, it employs pitch-tracking oscillators that generate reverb tails based on the dry signal. It, too, excels at radical reverbs suitable for sound design.
Clip 4
Clip 4 demonstrates a few of its possibilities.
Common Reverb Plug-in Controls
The knob names on a reverb plug-in can get confusing, but remember that they control variables that you already understand intuitively. Also, not all controls are equally important. The most essential ones are the wet/dry balance and the reverb decay time (how long it continues to sound). By all means learn the subtler functions, but don’t be surprised if you use them only rarely.
Video 2 walks you through most of the controls you’re likely to encounter on an algorithmic reverb plug-in. I used ChromaVerb from Apple’s Logic Pro DAW for the demo, but you’ll encounter similar parameters on most algorithmic reverb plug-ins.
Video 2: Digital Reverb Walkthrough
Creative Reverb Ideas
Spring things. The single reverb knob on vintage amps is simply a wet/dry blend control. Some spring reverbs add a dwell control, which sets how hard the dry signal drives the springs. Higher settings mean louder, longer reverberation.
But in the digital realm, you can deploy old-fashioned spring reverb in newfangled ways. For example:
- Pan the dry signal and spring sound apart for a broad stereo effect. (Traditional spring reverb is strictly mono.)
- Add predelay, inserting space between the dry and wet signals. (If the plug-in has no predelay control, just add the effect to an effect bus with a 100 percent wet, no-feedback delay upstream.)
- Route a guitar signal to two different spring reverb sounds, panned apart.
- Assign the reverb to an effect send, add a compressor to the effect channel, and then sidechain the compressor to the dry guitar sound. That way, the reverb is ducked when the guitar is loud, but swells to full volume during quiet passages. (There’s a rough code sketch of this idea just after this list.)
- Apply digital modulation to the wet signal for detuned or pulsating effects.
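Outside the DAW, that ducking trick boils down to an envelope follower on the dry guitar controlling the level of the wet signal. Here’s a rough offline Python sketch of the concept (numpy and soundfile assumed, file names hypothetical, and not a model of any particular compressor):

```python
import numpy as np
import soundfile as sf

dry, sr = sf.read("guitar_dry.wav")      # hypothetical dry track
wet, _ = sf.read("guitar_spring.wav")    # hypothetical 100%-wet reverb bounce
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if wet.ndim > 1:
    wet = wet.mean(axis=1)
n = min(len(dry), len(wet))
dry, wet = dry[:n], wet[:n]

# One-pole envelope follower: fast attack, slow release.
attack, release = 0.01, 0.25             # seconds
a_coef = np.exp(-1.0 / (attack * sr))
r_coef = np.exp(-1.0 / (release * sr))
env = np.zeros(n)
level = 0.0
for i in range(n):
    x = abs(dry[i])
    coef = a_coef if x > level else r_coef
    level = coef * level + (1.0 - coef) * x
    env[i] = level

# The louder the guitar, the quieter the reverb; it swells back in the gaps.
depth = 0.8
gain = 1.0 - depth * (env / max(1e-9, env.max()))
mix = dry + 0.5 * (wet * gain)
mix /= max(1e-9, np.abs(mix).max())
sf.write("guitar_ducked_verb.wav", mix, sr)
```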
Clip 5
Clip 5 starts with a straightforward spring sound before demonstrating the above options in order.
Fender-style reverb is so ubiquitous that simply using less familiar spring sounds can be startling.
Clip 6
Clip 6 is a smorgasbord of relatively obscure spring sounds from AudioThing’s Springs and Amp Designer, Logic Pro’s amp modeler.
Finally, it can be exciting to use springs on tracks that don’t usually get processed that way. For example, spring reverb is often considered too quirky and lo-fi to use on acoustic guitar or vocals.
Clip 7
But Clip 7 shows how attractive springs can sound on voice and acoustic. (You hear the dry sounds first.)
Unclean plates. In contrast to a spring’s lo-fi clank, simulated plate reverb is smooth and warm. Even if your track already has spring reverb, you might apply some plate ’verb to integrate it into a mix.
One creative avenue is deploying smooth plate reverb in relatively lo-fi ways. For example:
- Try placing the reverb before an amp modeler on a track to mimic a reverb stompbox. That way, the reverb is colored by both amp and speaker.
- Imagine a guitar amp with a huge metal plate inside instead of springs. If your amp modeler lets you use pure amp sounds without speaker modeling and vice-versa, try sandwiching a plate sound between two instances of the amp modeler on the same track. Turn off the speaker sound on the first amp sim and use only the speaker sound on the second one. This way, only the speaker colors the reverb.
- Plate reverb also sounds great panned separately from the dry sound.
Clip 8
Clip 8 starts with a conventional plate sound before demoing the above ideas.
Liquid reverb. Reverb plug-ins have one big advantage over hardware: Everything can be automated within your DAW.
Video 3: Automated Reverb
In Video 3 I’ve written automation for both the decay time and reverb damping for an evolving effect that would have been difficult on hardware.
Oh, the places you’ll go. Convolution reverbs usually have fewer controls than their algorithmic cousins. You might do no more than adjust the wet/dry or fine-tune the decay time. But IR reverbs don’t have to be “plug and play”—especially if you create your own reverbs. It’s a surprisingly simple process. (Some IR reverbs, like Altiverb and Logic Pro’s Space Designer, come with an app to generate the needed signals and process the recordings for use.)
Image 3: You can get cool, if unpredictable, results by dropping random audio files into an impulse response reverb like Logic Pro’s Space Designer.
Theoretically, you need a hi-fi PA system to amplify the needed tones in the target space, and good microphones to capture the results. But not always! I’ve captured cool IRs in my travels with nothing more than an iPhone and a spring-loaded clipboard in lieu of the traditional starter pistol. I’ve even obtained decent results by clacking a couple of stones together.
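If you go the phone-and-clipboard route, the raw capture usually needs a little housekeeping before a convolution plug-in will treat it as an IR: trim everything before the snap, fade out the noisy tail, and normalize. Here’s a rough cleanup sketch in Python (numpy and soundfile assumed; clipboard_snap.wav is a hypothetical field recording):

```python
import numpy as np
import soundfile as sf

raw, sr = sf.read("clipboard_snap.wav")   # hypothetical field recording
if raw.ndim > 1:
    raw = raw.mean(axis=1)

# Find the impulse: the first sample that pokes above a threshold.
threshold = 0.1 * np.abs(raw).max()
start = int(np.argmax(np.abs(raw) > threshold))

# Keep a couple of seconds from the impulse onward.
ir = raw[start : start + int(2.0 * sr)].copy()

# Fade the last 20 percent to silence so the tail doesn't end in a click,
# then normalize so the plug-in sees a healthy level.
fade_len = max(1, int(0.2 * len(ir)))
ir[-fade_len:] *= np.linspace(1.0, 0.0, fade_len)
ir /= max(1e-9, np.abs(ir).max())

sf.write("clipboard_ir.wav", ir, sr)      # load this into your IR reverb
```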
Clip 9
Clip 9 includes quick and dirty IRs that I captured at a Neolithic cave painting site in France, a thousand-year-old Anasazi ball court in Arizona, an ancient Greek stone quarry, a 19th-century limestone kiln in Death Valley, and the inside of an acoustic guitar.
You can also get interesting, if unpredictable, results loading random audio files into the IR reverb.
Clip 10
Clip 10 features a dry guitar snippet, followed by bizarre reverb effects generated by drum loops, synth tones, and noises.
New sounds, new spaces. Using reverb plug-ins can be incredibly simple. Often it’s just a matter of scrolling through factory presets, or making basic balance and decay time adjustments. You can also use them in endlessly creative ways. Whatever your goals, I hope this article helps you find exactly the sounds you seek.
Three steps to exploring the wonders of tempo shifting.
Hello and welcome back to another Dojo. This time I’m going to be talking about the joy of using varispeed in your productions to give your music a distinctive timbral shift and open up some very creative possibilities.
Varispeed is essentially a way of controlling pitch by adjusting playback speed. In pre-digital days, turntables and tape machines used different speeds for both recording and playback. Turntables had three speeds: 78, 45, and 33 1/3 rpm, and pro tape machines had three standard choices for starters: 7 1/2, 15, and 30 ips. In essence, if you record a passage at a slow speed and then play it back at normal speed, the pitch and tempo both go up. We’ve all heard the chipmunk effect—high-pitched, helium-tinged vocals achieved by recording at a slow speed and playing back at normal speed. But there are more interesting and subtle ways to use varispeed.
My three favorite examples are Les Paul’s “Caravan” (on 1950’s The New Sound), the piano solo played by George Martin on the Beatles’ “In My Life” (Rubber Soul), and the Beatles’ “Rain,” the B-side of “Paperback Writer” (which is my favorite single the Fab Four released). The first two examples use varispeed on various tracks within a normal-speed mix. With “Rain,” however, the entire mix was shifted down in pitch (and tempo) after it was recorded at a faster tape speed! It was also the first Beatles song to feature reversed vocals, which occur at the end. For fun, try singing along with this song and you’ll feel like you’re in audio quicksand. It’s almost impossible to match Lennon’s words exactly because all your consonants will have to be slower than normal.
I want to make a distinction here: It’s important to know the difference between time stretching (changing the duration or speed of an audio signal without affecting its pitch) and pitch shifting (changing the pitch without affecting the speed). With old-school varispeed, pitch and speed (transients and tempo) are tied together. This means the transients, formants, and overtones of all recorded material (an instrument, a vocal, or even a mix) are shifted, which leads to an intriguingly unnatural sound that’s not possible in the real world. How can we do this in our DAW? For starters, make sure your DAW of choice has a varispeed function or setting. I’m going to show you how I do this in Universal Audio’s LUNA (which is free with an interface hardware purchase).
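Before we get to the DAW, here’s the underlying idea in code: old-school varispeed is just resampling, so pitch and tempo have to move together. A minimal Python sketch (numpy and soundfile assumed, file name hypothetical), not how LUNA implements it:

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("mix.wav")        # hypothetical bounce
if audio.ndim > 1:
    audio = audio.mean(axis=1)

semitones = -3                        # shift down a minor third, "Rain"-style
ratio = 2 ** (semitones / 12)         # playback-speed ratio (< 1 slows it down)

# Resample by reading the original at the new rate: pitch AND tempo
# change together, exactly like slowing down a tape machine.
new_len = int(round(len(audio) / ratio))
old_idx = np.arange(len(audio))
new_idx = np.linspace(0, len(audio) - 1, new_len)
varisped = np.interp(new_idx, old_idx, audio)

sf.write("mix_varispeed.wav", varisped, sr)
```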
We need to do some prep work to start. Let’s assume you are recording a guitar/vocal at 100 bpm in the key of E (try singing and playing a 16th note palm-muted rhythm part on your guitar). Now, do the same thing again, but make a “varispeed” version of it by speeding the tempo up and playing/recording it in a new key. You can compare the differences when done. That should help your ears adjust to the concept.
Before you begin, calculate the transposition-to-tempo ratio. I use a great app on my phone called musicMath ($5.99 street) to do this. For this example, to transpose up a minor third (from E to G), the new tempo is 118.92 bpm [Fig.1].
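If you’d rather not reach for an app, the math is a one-liner: multiply the old tempo by 2 raised to (semitones ÷ 12). Here’s a tiny Python check of the numbers above (the function name is just for illustration):

```python
def varispeed_tempo(bpm: float, semitones: float) -> float:
    """New tempo after shifting pitch by the given number of semitones."""
    return bpm * 2 ** (semitones / 12)

# Up a minor third (3 semitones) from 100 bpm:
print(round(varispeed_tempo(100, 3), 2))   # 118.92
```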
Next, change the tempo in your DAW to 118.92 bpm, and then play/sing it again in the key of G (up a minor third) [Fig.2]. If you’re not sure where the chords are in the new key, use a capo at the 3rd fret and play the same chords you’ve been playing. Personally, I like playing without a capo because the voicings are different and the sound will be as well. More fun!
Now, render/bounce the new performance and import it back into your DAW session. Next—following the cue of “Rain”—enable the varispeed function in your DAW [Fig.3] and change the tempo to 100 bpm. If you look at what I’ve circled, you’ll see that the mode is set to “tempo” and the warp is set to “varispeed.” Your particular DAW may be different, so make sure your speed/tempo and pitch are linked. Otherwise, when you slow the tempo back down to 100 bpm, the recording will still be in the key of G, but slower. As usual, I invite you to come to bryanclarkmusic.com to watch this technique in action. Have fun and try this on everything! Until next month, namaste.