A few small organizational tricks can set your digital workspace up for success.
Hi, and welcome to another Dojo. This time, I’m going to give you ways to cut the clutter from your sessions and help make your recording process more efficient—in short, more kaizen. This compound Japanese word is usually translated as “good change” but has morphed over the years to mean something closer to “continual improvement.” The concept is applied in multiple industries from auto manufacturing to healthcare, and it can certainly be effectively applied on an individual level.
The idea is that multiple small improvements over time will produce big results. Legendary British cycling coach Dave Brailsford called this "the aggregation of marginal gains." His strategy was simple: Focus on getting one percent better in every area related to riding a bike. Within 10 years, the British cycling team went on to win 178 world championship titles, more than 60 Olympic gold medals, and five Tour de France victories. Kaizen, indeed! I'm still amazed when I get sessions from other engineers whose tracks aren't color-coded, whose sessions are haphazardly organized internally, and who haven't saved multiple versions. These are three problems that are easily solved with a bit of kaizen. Tighten up your belts, the Dojo is now open.
Color differentiation reduces your cognitive load and allows for faster, more efficient recording, editing, mixing, and overall session management.
Diversify Your Color Palette
Color-coding recording session tracks is a powerful tool for visual organization. It’s an essential, non-technical practice that can significantly enhance workflow efficiency and track management. In a typical modern recording session, there can be between 30 and 100 tracks, each representing different instruments, vocals, effects, and other elements. Without a clear organizational strategy, navigating through these tracks can become overwhelming and time-consuming.
By assigning specific colors to different types of tracks, producers and engineers can quickly identify and locate the tracks they need to work on. So establish a consistent color scheme for each type of instrument.
Here’s mine:
• Drums are always slate blue.
• Guitars are various shades of green because they’re made from trees (of course, almost everything else is, too, but both guitar and green share the same first letter).
• Bass instruments are always brown (because they’re powerful and can make you brown your trousers).
• Synths and keys are various hues of purple (I think of Prince and “Purple Rain”).
• Vocals are always yellow because when you get lost in the stifling dark caverns of your mix and can’t find your way out, focus on the vocals—they will lead you toward the light.
An example of our columnist’s strict session color coding in his DAW.
Regardless of your choices, color differentiation reduces your cognitive load and allows for faster, more efficient recording, editing, mixing, and overall session management. Moreover, color coding helps in identifying groups of tracks that need to be processed together, such as a drum bus or background vocals, thus making it easier to apply group processing and adjustments.
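If your DAW can be scripted (or you simply want the scheme written down somewhere unambiguous), the whole palette can live in one place as data. Here is a minimal Python sketch, assuming a made-up mapping from track-name prefixes to colors; the prefixes and the color_for helper are illustrative, not any particular DAW's API.

```python
# Hypothetical color map: track-name prefix -> color, mirroring the scheme above.
TRACK_COLORS = {
    "drum":  "slate blue",
    "gtr":   "green",
    "bass":  "brown",
    "synth": "purple",
    "keys":  "purple",
    "vox":   "yellow",
}

def color_for(track_name: str) -> str:
    """Pick a color from the track-name prefix; anything unrecognized gets gray."""
    name = track_name.lower()
    for prefix, color in TRACK_COLORS.items():
        if name.startswith(prefix):
            return color
    return "gray"

print(color_for("Vox Lead"))   # -> yellow
```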
Your layout of a recording session is another critical factor for maintaining organized and productive workflows. A well-structured session layout ensures that all elements of the recording are easily accessible and logically arranged. My tracks have a consistent order: drums at the top, followed by bass, guitars, keyboards, vocals, and effects. There’s no right way to do this, but whatever you do, be consistent.
“I have an existential map. It has 'You are here' written all over it.” – Steven Wright
Consistency not only helps individual producers and engineers work more efficiently, it also facilitates collaboration with others. When multiple people are involved in a project, establish a standardized layout that allows everyone to quickly understand the session structure, find specific tracks, and contribute without confusion. A clear layout also helps minimize mistakes during recording, editing, and mixing, such as overlooking important tracks or processing the wrong ones.
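To make that ordering concrete, here is a minimal sketch that sorts track names into a fixed drums-bass-guitars-keys-vocals-effects order. The name prefixes and the section_rank helper are assumptions for illustration, not a feature of any DAW.

```python
# Canonical section order: drums, bass, guitars, keys, vocals, effects.
SECTION_ORDER = ["drum", "bass", "gtr", "keys", "vox", "fx"]

def section_rank(track_name: str) -> int:
    """Rank a track by its name prefix; unknown tracks sink to the bottom."""
    name = track_name.lower()
    for rank, prefix in enumerate(SECTION_ORDER):
        if name.startswith(prefix):
            return rank
    return len(SECTION_ORDER)

tracks = ["Vox Lead", "FX Throw", "Gtr Rhythm", "Drum OH L", "Bass DI"]
print(sorted(tracks, key=section_rank))
# -> ['Drum OH L', 'Bass DI', 'Gtr Rhythm', 'Vox Lead', 'FX Throw']
```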
“Waste Not, Want Not”
One of the most important things to remember is to immediately save a new version the very first time you open a project or session. That way, if something happens (and eventually it will; I've even had session data get corrupted on a specific sector of a hard drive), you've left the original session untouched. Every time you work on the song or project, save a new version. This practice safeguards the process and ensures project security.
This is also important during the creative phase when trying out different ideas and arrangements. If a new idea doesn't work out, it's easy to revert to a previous version without losing valuable progress. Furthermore, saving versions at critical milestones—such as after recording, editing, and mixing—provides fallback options in case of technical issues or unexpected problems. And lastly, saving versions creates a chronological historical record of the session's development, which is invaluable for reviewing the evolution of the track, project, or entire record!
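Most DAWs have a Save As or Save New Version command that handles this for you, but for anyone versioning bounces, stems, or sessions by hand, here is a minimal Python sketch of the idea. The _v### naming convention and the save_new_version helper are hypothetical.

```python
import re
import shutil
from pathlib import Path

def save_new_version(session: str) -> Path:
    """Copy the session file to the next _v### name, leaving the original untouched."""
    src = Path(session)
    stem = re.sub(r"_v\d+$", "", src.stem)              # strip any existing version suffix
    existing = src.parent.glob(f"{stem}_v*{src.suffix}")
    numbers = [int(m.group(1)) for p in existing
               if (m := re.search(r"_v(\d+)$", p.stem))]
    next_version = max(numbers, default=0) + 1
    dst = src.with_name(f"{stem}_v{next_version:03d}{src.suffix}")
    shutil.copy2(src, dst)                               # original session is never overwritten
    return dst

# save_new_version("MySong.ptx")  # -> MySong_v001.ptx, then _v002.ptx, and so on
```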
Learning the ins and outs of reverb can help you access a more creative approach to your mixes.
Hello, and welcome to another Dojo. This month I want to give you some creative ideas for using the oldest natural effect we have—reverb.
Reverb is a fundamental tool in every audio engineer’s arsenal, often employed to create depth, space, and ambiance. We use it to simulate the natural reflections of sound in physical spaces, like rooms, halls, or chambers, but the creative possibilities of reverb can extend far beyond that when used as part of a larger effects grouping, and can enable you to sculpt some downright captivating soundscapes.
One of the most common creative uses of reverb is to manipulate the perceived spatial dimensions of a reverberant sound. By adjusting parameters such as decay time, pre-delay, and diffusion, we can alter the size and character of the virtual space in which a sound appears to exist—like a gritty spring reverb imbuing a guitar riff with vintage charm or a shimmering granular reverb enveloping a synth pad in sparkling, crystalline reflections.
But what if we experiment with it in more creative ways by warping naturally occurring physical properties, or playing with pitch, or even side-chaining various parameters? Tighten up your belts, the Dojo is now open.
Reverb itself is the last of three basic events:
1. Direct sound: Sound that reaches the listener's ears directly, without reflecting off any surface.
2. Early reflections: The first set of reflections to reach the listener's ears shortly after the direct sound, typically within the first 50 milliseconds. They contribute to the perception of spaciousness and localization (whether a sound comes from the left or the right), and they help establish the size and character of the virtual space. The timing, directionality, and intensity of early reflections depend on room geometry, surface materials, and the positions of the sound source and listener.
3. Reverb: Homogenized late reflections that make up the prolonged decay of reverberant sound following the initial onset of reflections. Its length is measured as RT60 (Reverberation Time 60). Think of RT60 as the amount of time it takes for the reverb to decrease by 60 dB or to match the inherent noise floor of the space, whichever comes first. For example, most great concert halls have an RT60 of around 2.4 seconds before the hall is "silent" again. The reverb tail is characterized by a gradual decrease in intensity from the complex interplay of overlapping reflections, and its shape, density, and duration are influenced by factors such as room size, surface materials, and acoustic treatment.

The beauty of digital reverbs is that we can adjust these parameters in ways that simply cannot exist in the physical world. Some plugins, like Waves' TrueVerb ($29 street), will let you push them to unnatural proportions.

"The creative possibilities of reverb can enable you to sculpt some downright captivating soundscapes."
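For the curious, you can estimate RT60 from a measured impulse response. Here is a minimal Python/NumPy sketch using Schroeder backward integration; since real rooms rarely give you a clean 60 dB of decay above the noise floor, it fits the -5 dB to -35 dB portion of the decay curve and extrapolates (a standard "T30" estimate). This is a textbook method, not a tool from the plugins mentioned here.

```python
import numpy as np

def estimate_rt60(impulse_response, sample_rate):
    """Estimate RT60 from a mono impulse response (float array)."""
    # Schroeder energy decay curve: reversed cumulative sum of squared IR, in dB
    energy = np.cumsum(impulse_response[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(energy / energy.max())

    # Find where the decay crosses -5 dB and -35 dB
    t = np.arange(len(impulse_response)) / sample_rate
    i5 = np.argmax(edc_db <= -5.0)
    i35 = np.argmax(edc_db <= -35.0)

    # Fit a line (dB per second) to that slice and extrapolate to a 60 dB drop
    slope, _ = np.polyfit(t[i5:i35], edc_db[i5:i35], 1)
    return -60.0 / slope
```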
Now, let's try using reverb paired with other effects rather than as an end result by itself. Put a short reverb (RT60 of less than 1.5 seconds) on any audio track, then follow it with a reverse delay with a small amount of feedback (around 30 percent) and a delay time of around 1 to 2 seconds. Hear how the reverb feeds into the reverse delay? Adjust to taste and experiment.
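To hear the idea outside a DAW, here is a rough Python/NumPy sketch of that chain: a toy convolution reverb (exponentially decaying noise as the impulse response) feeding a reverse delay, built here by running a plain feedback delay on the time-reversed signal and flipping it back. It approximates the chain described above; it is not a stand-in for your actual plugins.

```python
import numpy as np
from scipy.signal import fftconvolve

def toy_reverb(x, sample_rate, rt60=1.2):
    """Short reverb: convolve with exponentially decaying noise (-60 dB at t = rt60)."""
    t = np.arange(int(rt60 * sample_rate)) / sample_rate
    ir = np.random.randn(len(t)) * 10.0 ** (-3.0 * t / rt60)
    ir /= np.sqrt(np.sum(ir ** 2))                 # normalize the IR energy
    return fftconvolve(x, ir)

def reverse_delay(x, sample_rate, delay_seconds=1.5, feedback=0.3, repeats=4):
    """Reverse delay: feedback delay applied to the reversed signal, then flipped back."""
    rev = x[::-1]
    d = int(delay_seconds * sample_rate)
    out = np.zeros(len(rev) + repeats * d)
    out[: len(rev)] += rev                          # the (reversed) input itself
    for k in range(1, repeats + 1):                 # decaying repeats, ~30 percent feedback
        out[k * d : k * d + len(rev)] += (feedback ** k) * rev
    return out[::-1]                                # flip back: the echoes now swell in

# Usage: dry is a mono float array at sample_rate (e.g., loaded with soundfile).
# wet = reverse_delay(toy_reverb(dry, sample_rate), sample_rate)
# The result is longer than the input (pre-swells plus tail); blend it under the dry track to taste.
```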
We can also start to modulate the reverb. On an aux bus, place a pitch-shifter like Soundtoys Little AlterBoy ($49 street), set the transpose to +12 semitones (up an octave), and mix to 100 percent wet. Adjust the formant to make it sound even stranger. Follow this with a reverb of your choice, also set to 100 percent wet. Now, route a selected audio track (perhaps a vocal or a lead guitar solo) to the aux bus and adjust your aux send level. You now have a pitch-shifted reverb that adds some octave sparkle.
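Here is a minimal sketch of that aux-bus routing using Spotify's open-source pedalboard library. It models only the octave shift and the fully wet reverb (pedalboard has no formant control), and the send level is just a scalar on the wet return; the file name and settings are placeholders.

```python
from pedalboard import Pedalboard, PitchShift, Reverb
from pedalboard.io import AudioFile

with AudioFile("vocal.wav") as f:              # any source track you want to send
    dry = f.read(f.frames)
    sample_rate = f.samplerate

# The "aux bus": pitch shift up an octave, then a 100 percent wet reverb
aux_bus = Pedalboard([
    PitchShift(semitones=12),
    Reverb(room_size=0.6, wet_level=1.0, dry_level=0.0),
])

send_level = 0.35                              # your aux send amount
wet_return = aux_bus(dry, sample_rate)
mix = dry + send_level * wet_return            # dry track plus the octave-shimmer return
```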
Next, add in a tempo-synced tremolo or panner after the reverb and enjoy the results! I like doing things this way because you can easily switch the order of any effect and save the effect chain. For added bliss, try applying a high-pass filter to remove low-frequency mud, allowing the reverb to sit more transparently in the mix without clouding the low end.
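Continuing the hypothetical sketch above, a high-pass filter cleans the low end out of the reverb return and a hand-rolled sine LFO stands in for the tempo-synced tremolo, since pedalboard doesn't ship one. The tempo and filter settings are placeholders.

```python
import numpy as np
from pedalboard import Pedalboard, HighpassFilter

# High-pass the wet return so the reverb doesn't cloud the low end
wet_return = Pedalboard([HighpassFilter(cutoff_frequency_hz=250)])(wet_return, sample_rate)

# Tempo-synced tremolo: eighth notes at 120 bpm -> 4 Hz amplitude LFO
bpm, beats_per_cycle = 120.0, 0.5
rate_hz = (bpm / 60.0) / beats_per_cycle
t = np.arange(wet_return.shape[-1]) / sample_rate
lfo = 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))   # sweeps between 0 and 1

mix = dry + send_level * wet_return * lfo               # filtered, tremolo'd reverb return
```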
To hear this in action, I invite you to listen to my new single “Making the Faith” (Rainfeather Records), especially the bridge section of the song along with my guitar solo. Until next time, namaste.
Streaming platforms each have their own volume standards for uploaded audio, and if you don't tailor your mixes to each, you risk losing some dynamic range.
Here's the scenario: You've finished your latest masterpiece, and now it's time to start considering how your mixes and their loudness levels will be perceived across all digital platforms (Apple Music, Spotify, Amazon, etc.). You may also need to make sure your music adheres to the strict audio broadcast standards used in film, TV, podcasts, video games, and immersive audio formats like Dolby Atmos.
These considerations, among many others, are typically reserved for mastering engineers. However, you may not have the budget for a mastering engineer, so in the meantime I’d like to give you some expert advice on making sure your loudness levels are in check before you release your music into the wild. Tighten up your belts, the Dojo is now open.
Hail LUFS Metering!
LUFS (Loudness Units relative to Full Scale) is unique in that it attempts to measure how the human brain perceives loudness. It does so with a K-weighted scale and 400 ms "momentary" measurement windows (each overlapping the next by 75 percent), resulting in super smooth and accurate readings. This momentary method also allows for additional LUFS short-term and long-term readings (Fig. 1), and it is this latter measurement, LUFS long-term (aka LUFS integrated), that all of the digital music platforms pay the most attention to. For those who are curious, the K-weighted scale places less emphasis on bass frequencies and more on higher frequencies above 2 kHz, a refined emulation of how humans perceive sound. It is not a musical scale like C harmonic minor, but rather a weighting curve used when measuring loudness across frequencies.
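If you want to check a bounce without opening your DAW, the open-source pyloudnorm package implements the same BS.1770 K-weighting and gated 400 ms measurement blocks described above. A minimal sketch, with the file name as a placeholder:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_mix.wav")            # your stereo bounce
meter = pyln.Meter(rate)                      # BS.1770 meter: K-weighting plus gating
integrated_lufs = meter.integrated_loudness(data)
print(f"Integrated (long-term) loudness: {integrated_lufs:.1f} LUFS")
```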
The Wild West of dBs
Less than 10 years ago, there was no loudness standard for any of the audio-streaming platforms. In 2021, the Audio Engineering Society (AES) issued their guidelines for loudness of internet audio-streaming and on-demand distribution in a document named AESTD1008.1.21-9, which recommends the following:
News/talk: -18 dB LUFS
Pop music: -16 dB LUFS
Mixed format: -17 dB LUFS
Sports: -17 dB LUFS
Drama: -18 dB LUFS
However, most services have their own loudness standards for music submission.
“We adjust tracks to -14 dB LUFS, according to the ITU 1770 (International Telecommunication Union) standard. We normalize an entire album at the same time, so gain compensation doesn’t change between tracks.” —Spotify
They are not alone; YouTube, Tidal, and Amazon also use this measurement. Deezer uses -15 dB LUFS and Apple Music has chosen -16 dB LUFS, while SoundCloud has no measurement at all.
To make things more confusing, some services automatically normalize songs to match their predefined LUFS target. Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade. This ensures that the listening end-user will never have the need to adjust their volume knob from song to song.
“Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade.”
What does that mean for your music? If you upload a song to Spotify above -14 dB LUFS, they will turn it down and you'll lose dynamic range. If the song is below -14 dB LUFS, they will normalize it, or in other words, turn it up to match all the songs on the platform (listeners can switch normalization off in their settings if they choose), but you'll still suffer some dynamic-range loss.
However, that same quiet song on YouTube will not be turned up, even though YouTube uses the same -14 dB LUFS target. Apple Music also normalizes; it will turn up quieter songs relative to their peak levels and uses both track and album normalization. Deezer and Pandora always use normalization, but only on a per-track basis, while Tidal uses album normalization. Confusing, right? So, how can we make our mixes sound their very best and perhaps get an idea of what they will sound like on various platforms?
1. Before you use any type of plugin (compression, limiting, EQ) on your stereo bus, make sure your dynamic range within the song itself is intact, and nothing peaks over 0 dBFS on your meters—no little red lights should be triggered.
2. Use an LUFS metering plugin like Waves' WLM ($29), FabFilter's Pro-L 2 ($169), or iZotope's Insight ($199).
3. Set your true peak limiter to -1 dB and your long-term LUFS to -14 dB, and you’ll be in the sweet spot.
4. Play your song from beginning to end, look at the readings, and adjust gain accordingly. (A scripted version of this check is sketched below.)
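Here is a minimal Python sketch of steps 2 through 4: measure the integrated LUFS of a bounce with pyloudnorm, then see roughly how far each platform's published target would pull it up or down. The targets are the figures quoted in this column and can change at any time, and the peak shown is a plain sample peak, not a true-peak reading (which requires oversampling).

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

TARGETS = {                                   # LUFS targets quoted above
    "Spotify / YouTube / Tidal / Amazon": -14.0,
    "Deezer": -15.0,
    "Apple Music": -16.0,
}

data, rate = sf.read("my_mix.wav")                         # your stereo bounce
measured = pyln.Meter(rate).integrated_loudness(data)      # long-term (integrated) LUFS
sample_peak_db = 20.0 * np.log10(np.max(np.abs(data)))     # sample peak, not true peak

print(f"Measured: {measured:.1f} LUFS, sample peak {sample_peak_db:.1f} dBFS")
for platform, target in TARGETS.items():
    print(f"{platform}: would shift the track by {target - measured:+.1f} dB")
```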
Next month, I’ll be showing you some creative ways to use reverb! Until then, namaste.