Our columnist shares the benefits of recording those moments where you’re just improvising and experimenting with ideas. If you make a practice of it, you’re more likely to strike gold.
Welcome back to another Dojo. To date, I’ve somehow managed to write more than 50 of these articles without ever once addressing the importance of recording your experimentations and early rehearsals in the studio (and, of course, your live performances as well). Mea culpa!
This time, I’d like to pay homage to one of my greatest teachers and espouse the joy of recording the unedited, “warts-and-all,” part of the creative process. Don’t worry, you’re still beautiful!
Many times, early in the experimental development of riffs and songs, there are episodes where you simply play something that’s magical or particularly ear-catching—all without effort or forethought. It’s those moments when your ego has somehow dozed off in the backseat and your “higher power” takes over (for a moment, a minute, or more) before the ego jerks the wheel back and lets out a white-knuckled scream of sheer terror.
These are the “What was that?!” time gaps that you often wish you had been recording, because it’s usually these moments we frantically chase down by memory so we can capture them again—often with diluted results, where we’re left with a pallid approximation of what occurred.
Here’s another common scenario. As you work your way through developing rhythms and melodies, there are many gems that fall by the wayside because they don’t exactly fit the prevailing emotional ethos at the time. Without recording them in real time, these nuggets may be forever lost in the creative cosmos.
Both examples come from the same sacred place, where we give ourselves permission to try new things and step outside our ingrained, habitual patterns of composing and playing.
“It’s usually these moments we frantically chase down by memory so we can capture them again—often with diluted results.”
For several years I had the good fortune to study with one of the great maestros of jazz guitar, Joe Diorio. Simply put, he was the Yoda of jazz guitar for me and influenced many great players over the years through his virtuosity, creativity, and mystical improvisations.
One of the things we used to do on a regular basis was what he called “gestural playing,” meaning we would try to copy the rhythmic and melodic contour of musical passages we’d never heard before. Often, it wasn’t jazz, but world music, where the goal was to condense a symphonic work down to something playable on solo guitar (Stravinsky’s The Rite of Spring, Lutosławski’s Symphony No. 1, etc.). The point wasn’t note accuracy, but gestural similarity and committing to the emotion it evoked. Inevitably, it led both of us to play something unplanned and jump-started our creativity; we’d stumble upon diamonds in the rough just waiting to be polished and cut.
There were always “Oh, that was cool! What was that?!” moments, and as we were recording a lesson, we could stop and play back the licks to investigate further. These examinations, in turn, led to other licks, and before we knew it, we had pages full of new melodic material to digest that all started from simple gestures.
To hear this process in action, listen to the bridge section of my song “Making the Faith,” into the guitar solo starting around 2:22. There are lots of odd meters and modulations that lead into a very gesturally inspired solo. Just to pique your interest even further, the chorus’ words are also gestural, and they form an acrostic puzzle that reveals a hidden message I’ll leave you to figure out.
What I’d really like to do is encourage you to try this the next time you’re feeling creative, and, hopefully, on your next recording. With computer storage ever expanding and hard-drive prices continually falling, there’s no excuse not to try the following:
1. Open your DAW and get a drum groove going.
2. Create a guitar track and allow yourself to simply improvise and make gestures for an open-ended period of time.
3. Afterwards, go back and listen.
4. Highlight the moments that pique your interest.
5. Compile these moments into a new track by mixing them up into edited “mini gestures.”
6. Listen to the results.
This type of experimentation will definitely lead you into new musical territory. From there, you can start to add harmonic implications and refine things along the way.
Until next time, namaste.
Learning the ins and outs of reverb can help you access a more creative approach to your mixes.
Hello, and welcome to another Dojo. This month I want to give you some creative ideas for using the oldest natural effect we have—reverb.
Reverb is a fundamental tool in every audio engineer’s arsenal, often employed to create depth, space, and ambiance. We use it to simulate the natural reflections of sound in physical spaces, like rooms, halls, or chambers, but the creative possibilities of reverb can extend far beyond that when used as part of a larger effects grouping, and can enable you to sculpt some downright captivating soundscapes.
One of the most common creative uses of reverb is to manipulate the perceived spatial dimensions of a reverberant sound. By adjusting parameters such as decay time, pre-delay, and diffusion, we can alter the size and character of the virtual space in which a sound appears to exist—like a gritty spring reverb imbuing a guitar riff with vintage charm or a shimmering granular reverb enveloping a synth pad in sparkling, crystalline reflections.
But what if we experiment with it in more creative ways by warping naturally occurring physical properties, or playing with pitch, or even side-chaining various parameters? Tighten up your belts, the Dojo is now open.
Reverb itself is the last of three basic events:
1. Direct sound: sound that reaches the listener’s ears directly, without reflecting off any surface.
2. Early reflections: the first set of reflections that reach the listener’s ears shortly after the direct sound, typically within the first 50 milliseconds. They contribute to the perception of spaciousness and localization (whether a sound seems to come from the left or the right), and help establish the size and character of the virtual space. The timing, directionality, and intensity of early reflections depend on room geometry, surface materials, and the positions of the sound source and listener.
3. Reverb: homogenized late reflections that make up the prolonged decay of reverberant sound following the initial onset of reflections. Its length is measured as RT60 (reverberation time 60). Think of RT60 as the amount of time it takes for the reverb to decrease by 60 dB or match the inherent noise floor of the space (whichever comes first). For example, most great concert halls have an RT60 of around 2.4 seconds before the hall is “silent” again. The reverb tail is characterized by a gradual decrease in intensity from the complex interplay of overlapping reflections. Its shape, density, and duration are influenced by factors such as room size, surface materials, and acoustic treatment.

The beauty of digital reverbs is that we can adjust these parameters in ways that simply cannot exist in the physical world. Some plugins, like Waves’ TrueVerb ($29 street), will let you push these parameters to unnatural proportions.

“The creative possibilities of reverb can enable you to sculpt some downright captivating soundscapes.”
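If you’re curious how that 60 dB figure gets measured in practice, here’s a minimal Python sketch (assuming numpy is available) of one common approach: integrate the energy of a recorded impulse response, fit the decay slope, and extrapolate it to a full 60 dB drop. The function name and the synthetic test signal are mine, purely for illustration.

```python
# A minimal sketch (not a calibrated measurement tool) of estimating RT60 from
# a recorded impulse response: Schroeder backward integration, then fitting the
# decay slope and extrapolating it to a full 60 dB drop.
import numpy as np

def estimate_rt60(impulse_response, sample_rate):
    energy = np.asarray(impulse_response, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])         # normalize to 0 dB at t = 0

    t = np.arange(len(edc_db)) / sample_rate
    usable = (edc_db <= -5.0) & (edc_db >= -35.0)  # fit between -5 and -35 dB
    slope, _ = np.polyfit(t[usable], edc_db[usable], 1)  # dB per second
    return -60.0 / slope                           # seconds to fall 60 dB

# Quick sanity check with a synthetic decay of roughly 2.4 seconds,
# the concert-hall figure mentioned above.
sr = 48_000
t = np.arange(3 * sr) / sr
fake_ir = np.random.randn(len(t)) * 10.0 ** (-60.0 * t / 2.4 / 20.0)
print(f"Estimated RT60: {estimate_rt60(fake_ir, sr):.2f} seconds")
```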
Now, let’s try using reverb paired with other effects rather than as an end result by itself. Put a short reverb (RT60 of less than 1.5 seconds) on any audio track, then follow it with a reverse delay with a small amount of feedback (around 30 percent) and a delay time of around 1 to 2 seconds. Hear how the reverb feeds into the reverse delay? Adjust to taste and experiment.
We can also start to modulate the reverb. On an aux bus, place a pitch-shifter like Soundtoys Little AlterBoy ($49 street), set the transpose to +12 semitones (up an octave), and set the mix to 100 percent wet. Adjust the formant to make it sound even stranger. Follow this with a reverb of your choice, also set to 100 percent wet. Now, route a selected audio track (perhaps a vocal or a lead guitar solo) to the aux bus and adjust your aux send level. You now have a pitch-shifted reverb to add some octave sparkle.
Next, add in a tempo-synced tremolo or panner after the reverb and enjoy the results! I like doing things this way because you can easily switch the order of any effect and save the effect chain. For added bliss, try applying a high-pass filter to remove low-frequency mud, allowing the reverb to sit more transparently in the mix without clouding the low end.
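If you’d like to prototype the octave-up idea outside your DAW, here’s a rough sketch using Spotify’s open-source pedalboard library. The file names, send level, and reverb settings are placeholders I’ve chosen for illustration, and the library’s pitch shifter stands in for Little AlterBoy (it has no formant control); the point is the routing: pitch up an octave into a fully wet reverb, then blend that return under the dry track like an aux send.

```python
# A rough sketch of the octave-up "shimmer" send described above, using the
# open-source pedalboard library. File names and settings are placeholders.
from pedalboard import Pedalboard, PitchShift, Reverb
from pedalboard.io import AudioFile

with AudioFile("lead_guitar.wav") as f:           # hypothetical dry track
    dry = f.read(f.frames)
    sample_rate = f.samplerate

# The "aux bus": pitch up 12 semitones, then a 100-percent-wet reverb.
shimmer_bus = Pedalboard([
    PitchShift(semitones=12),
    Reverb(room_size=0.8, wet_level=1.0, dry_level=0.0),
])
wet = shimmer_bus(dry, sample_rate)

# The "aux send level": blend a touch of shimmer under the dry signal.
# (Watch your levels here; a hot send can clip the sum.)
send_level = 0.3
mix = dry + send_level * wet

with AudioFile("lead_guitar_shimmer.wav", "w", sample_rate, mix.shape[0]) as out:
    out.write(mix)
```

From there, you can append more plugins to the same list (a high-pass filter, for instance) and reorder them freely, much like saving and shuffling an effect chain in your DAW.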
To hear this in action, I invite you to listen to my new single “Making the Faith” (Rainfeather Records), especially the bridge section of the song along with my guitar solo. Until next time, namaste.
Streaming platforms each have their own volume standards for uploaded audio, and if you don’t tailor your mixes to each, you risk losing some dynamic range.
Here’s the scenario: You’ve finished your latest masterpiece, and now it’s time to start considering how your mixes and their loudness levels will be perceived across all digital platforms (Apple Music, Spotify, Amazon, etc.). In addition, you might also want to make sure your music adheres to the strict audio broadcast standards used in film, TV, podcasts, video games, and immersive audio formats like Dolby Atmos.
These considerations, among many others, are typically reserved for mastering engineers. However, you may not have the budget for a mastering engineer, so in the meantime I’d like to give you some expert advice on making sure your loudness levels are in check before you release your music into the wild. Tighten up your belts, the Dojo is now open.
Hail LUFS Metering!
LUFS (Loudness Units relative to Full Scale) is unique in that it attempts to measure how the human brain perceives loudness. It does this by using a K-weighted scale with 400 ms “momentary” measurement windows (each overlapping the next by 75 percent), resulting in super smooth and accurate readings. This momentary method also allows for additional short-term and long-term LUFS readings (Fig. 1), and it is this last measurement, long-term LUFS (aka integrated LUFS), that the digital music platforms pay the most attention to. For those who are curious, the K-weighted scale places less emphasis on bass frequencies and more on frequencies above 2 kHz, a refined emulation of how humans perceive sound. It is not a musical scale like C harmonic minor, but rather a weighting curve applied when measuring loudness.
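To make that windowing concrete, here’s a stripped-down Python sketch of the momentary measurement described above. It deliberately omits the K-weighting filters and gating from the real ITU-R BS.1770 procedure, so treat it as an illustration of the 400 ms, 75-percent-overlap idea rather than an actual meter.

```python
# A stripped-down illustration of "momentary" loudness windows: 400 ms blocks,
# overlapping by 75 percent (a new reading every 100 ms). A real LUFS meter
# (ITU-R BS.1770) applies K-weighting filters and gating first; this does not.
import numpy as np

def momentary_readings(samples, sample_rate):
    window = int(0.400 * sample_rate)   # 400 ms analysis window
    hop = int(0.100 * sample_rate)      # 75 percent overlap = 100 ms hop
    readings = []
    for start in range(0, len(samples) - window + 1, hop):
        block = samples[start:start + window]
        mean_square = np.mean(block ** 2) + 1e-12       # avoid log of zero
        readings.append(-0.691 + 10.0 * np.log10(mean_square))
    return readings   # one (unweighted) loudness estimate per 100 ms

# The long-term "integrated" figure the platforms care about is built by
# gating and averaging these momentary blocks over the whole song.
```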
The Wild West of dBs
Less than 10 years ago, there was no loudness standard for any of the audio-streaming platforms. In 2021, the Audio Engineering Society (AES) issued its guidelines for loudness of internet audio streaming and on-demand distribution in a document named AESTD1008.1.21-9, which recommends the following:
News/talk: -18 dB LUFS
Pop music: -16 dB LUFS
Mixed format: -17 dB LUFS
Sports: -17 dB LUFS
Drama: -18 dB LUFS
However, most services have their own loudness standards for music submission.
“We adjust tracks to -14 dB LUFS, according to the ITU 1770 (International Telecommunication Union) standard. We normalize an entire album at the same time, so gain compensation doesn’t change between tracks.” —Spotify
They are not alone; YouTube, Tidal, and Amazon also use this measurement. Deezer uses -15 dB LUFS and Apple Music has chosen -16 dB LUFS, while SoundCloud has no measurement at all.
To make things more confusing, some services automatically normalize songs to match their predefined LUFS target. Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade. This ensures that the listener never needs to reach for the volume knob from song to song.
“Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade.”
What does that mean for your music? If you upload a song to Spotify above -14 dB LUFS, they will turn it down and you’ll lose dynamic range. If the song is below -14 dB LUFS, they will normalize it, or in other words, turn it up to match all the songs on the platform (listeners can switch normalization off in their settings if they choose), but you’ll still suffer some dynamic-range loss.
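To put numbers on that, here’s a toy sketch (not any platform’s actual algorithm) of what normalization boils down to: measure the song’s integrated LUFS, then apply one static gain offset toward the target.

```python
# A toy illustration (not any platform's actual algorithm) of loudness
# normalization: one static gain offset toward the service's LUFS target.
def normalization_gain_db(track_lufs, target_lufs=-14.0, turns_up_quiet=True):
    gain = target_lufs - track_lufs
    if gain > 0 and not turns_up_quiet:
        return 0.0          # some services leave quieter songs alone
    return gain

print(normalization_gain_db(-10.0))                        # loud master: turned down 4 dB
print(normalization_gain_db(-18.0))                        # quiet master: turned up 4 dB
print(normalization_gain_db(-18.0, turns_up_quiet=False))  # quiet master left alone: 0 dB
```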
However, that same quiet song on YouTube will not be turned up, even though YouTube uses the same -14 dB LUFS target. Apple Music also normalizes: it will turn up quieter songs relative to their peak levels, and it uses both track and album normalization. Deezer and Pandora always use normalization, but only on a per-track basis, while Tidal uses album normalization. Confusing, right? So how can we make our mixes sound their very best, and perhaps get an idea of what they will sound like on the various platforms?
1. Before you use any type of plugin (compression, limiting, EQ) on your stereo bus, make sure your dynamic range within the song itself is intact, and nothing peaks over 0 dBFS on your meters—no little red lights should be triggered.
2. Use an LUFS metering plugin like Waves’ WLM ($29), FabFilter’s Pro-L 2 ($169), or iZotope’s Insight ($199).
3. Set your true peak limiter to -1 dB and your long-term LUFS to -14 dB, and you’ll be in the sweet spot.
4. Play your song from beginning to end, look at the readings, and adjust gain accordingly.
Next month, I’ll be showing you some creative ways to use reverb! Until then, namaste.