Dojo Columnist Bryan Clark at the helm.
A few small organizational tricks can set your digital workspace up for success.
Hi, and welcome to another Dojo. This time, I’m going to give you ways to cut the clutter from your sessions and help make your recording process more efficient—in short, more kaizen. This compound Japanese word is usually translated as “good change” but has morphed over the years to mean something closer to “continual improvement.” The concept is applied in multiple industries from auto manufacturing to healthcare, and it can certainly be effectively applied on an individual level.
The idea is that multiple small improvements over time will produce big results. Legendary British cycling coach Dave Brailsford called this “the aggregation of marginal gains.” His strategy was simple: Focus on getting one percent better in every area related to riding a bike. Within 10 years, the British cycling team had won 178 World Championship titles, five Tour de France victories, and over 60 Olympic gold medals. Kaizen, indeed! I’m still amazed when I get sessions from other engineers who haven’t color-coded their tracks, have organized the session itself haphazardly, and haven’t saved multiple versions. These are three problems that are easily solved with a bit of kaizen. Tighten up your belts, the Dojo is now open.
Color differentiation reduces your cognitive load and allows for faster, more efficient recording, editing, mixing, and overall session management.
Diversify Your Color Palette
Color-coding recording session tracks is a powerful tool for visual organization. It’s an essential, non-technical practice that can significantly enhance workflow efficiency and track management. In a typical modern recording session, there can be between 30 and 100 tracks, each representing different instruments, vocals, effects, and other elements. Without a clear organizational strategy, navigating through these tracks can become overwhelming and time-consuming.
By assigning specific colors to different types of tracks, producers and engineers can quickly identify and locate the tracks they need to work on. So establish a consistent color scheme for each type of instrument.
Here’s mine:
• Drums are always slate blue.
• Guitars are various shades of green because they’re made from trees (of course, almost everything else is, too, but both guitar and green share the same first letter).
• Bass instruments are always brown (because they’re powerful and can make you brown your trousers).
• Synths and keys are various hues of purple (I think of Prince and “Purple Rain”).
• Vocals are always yellow because when you get lost in the stifling dark caverns of your mix and can’t find your way out, focus on the vocals—they will lead you toward the light.
An example of our columnist’s strict session color coding in his DAW.
Regardless of your choices, color differentiation reduces your cognitive load and allows for faster, more efficient recording, editing, mixing, and overall session management. Moreover, color coding helps in identifying groups of tracks that need to be processed together, such as a drum bus or background vocals, thus making it easier to apply group processing and adjustments.
Your layout of a recording session is another critical factor for maintaining organized and productive workflows. A well-structured session layout ensures that all elements of the recording are easily accessible and logically arranged. My tracks have a consistent order: drums at the top, followed by bass, guitars, keyboards, vocals, and effects. There’s no right way to do this, but whatever you do, be consistent.
“I have an existential map. It has 'You are here' written all over it.” – Steven Wright
Consistency not only helps individual producers and engineers work more efficiently, but also facilitates collaboration with others. When multiple people are involved in a project, establishing a standardized layout allows everyone to quickly understand the session structure, find specific tracks, and contribute without confusion. A clear layout also helps minimize mistakes during recording, editing, and mixing, such as overlooking important tracks or processing the wrong ones.
“Waste Not, Want Not”
One of the most important things to remember is to immediately save a new version the very first time you open a project or session. That way, if something happens, and eventually it will (I’ve even had session data get corrupted on a specific sector of a hard drive), you’ve left the original session untouched. Every time you work on the song or project, save a new version. This practice safeguards the process and ensures project security.
This is also important during the creative phase when trying out different ideas and arrangements. If a new idea doesn't work out, it's easy to revert to a previous version without losing valuable progress. Furthermore, saving versions at critical milestones—such as after recording, editing, and mixing—provides fallback options in case of technical issues or unexpected problems. And lastly, saving versions creates a chronological historical record of the session's development, which is invaluable for reviewing the evolution of the track, project, or entire record!
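If you want an extra layer of insurance beyond your DAW’s “save as” habit, here’s a minimal Python sketch, assuming your sessions live as single files on disk; the filename below is just a placeholder for whatever format your DAW uses. It copies the session to a time-stamped version before you open it.

```python
# A minimal sketch: copy a session file to a time-stamped version before opening it.
# "MySong.ptx" is a placeholder; substitute your own session file and format.
import shutil
from datetime import datetime
from pathlib import Path

session = Path("MySong.ptx")  # placeholder session file
stamp = datetime.now().strftime("%Y%m%d_%H%M")
backup = session.with_name(f"{session.stem}_{stamp}{session.suffix}")

shutil.copy2(session, backup)  # copy2 preserves file timestamps/metadata
print(f"Saved version: {backup.name}")
```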
LUFS offers three different readings, with LUFS long-term, or integrated, being the one digital streaming platforms are paying the most attention to.
Streaming platforms each have their own volume standards for uploaded audio, and if you don’t tailor your mixes to each, you risk losing some dynamic range.
Here’s the scenario: You’ve finished your latest masterpiece, and now it’s time to start considering how your mixes and their loudness levels will be perceived across all digital platforms (Apple Music, Spotify, Amazon, etc.). In addition, you might also make sure your music adheres to the strict audio broadcast standards used in film, TV, podcasts, video games, and immersive audio formats like Dolby Atmos.
These considerations, among many others, are typically reserved for mastering engineers. However, you may not have the budget for a mastering engineer, so in the meantime I’d like to give you some expert advice on making sure your loudness levels are in check before you release your music into the wild. Tighten up your belts, the Dojo is now open.
Hail LUFS Metering!
LUFS (Loudness Units Full Scale) is unique in that it attempts to measure how the human brain perceives loudness. It does this by using a K-weighted scale with 400 ms “momentary” measurement windows (each overlapping the next by 75 percent), resulting in super smooth and accurate readings. This momentary method also allows for additional LUFS short-term and long-term readings (Fig. 1), and it is this latter measurement, LUFS long-term (aka LUFS integrated), that all of the digital music platforms pay the most attention to. For those who are curious, the K-weighted scale places less emphasis on bass frequencies and more on higher frequencies above 2 kHz, and is a refined emulation of how humans perceive sound. It is not a musical scale like C harmonic minor, but rather a weighting curve applied to the audio before its loudness is measured.
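If you’d like to peek under the hood, here’s a minimal sketch using the open-source pyloudnorm library, a Python implementation of the ITU-R BS.1770 measurement that LUFS is built on. The filename is a placeholder; any LUFS metering plugin will give you the same integrated reading.

```python
# A minimal sketch: measure the integrated (long-term) LUFS of a bounced mix.
# Assumes the pyloudnorm and soundfile packages are installed;
# "my_mix.wav" is a placeholder for your own bounce.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_mix.wav")  # float samples and sample rate

# The meter defaults to K-weighting with 400 ms gating blocks,
# mirroring the measurement described above.
meter = pyln.Meter(rate)
integrated_lufs = meter.integrated_loudness(data)

print(f"Integrated loudness: {integrated_lufs:.1f} LUFS")
```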
The Wild West of dBs
Less than 10 years ago, there was no loudness standard for any of the audio-streaming platforms. In 2021, the Audio Engineering Society (AES) issued their guidelines for loudness of internet audio-streaming and on-demand distribution in a document named AESTD1008.1.21-9, which recommends the following:
• News/talk: -18 dB LUFS
• Pop music: -16 dB LUFS
• Mixed format: -17 dB LUFS
• Sports: -17 dB LUFS
• Drama: -18 dB LUFS
However, most services have their own loudness standards for music submission.
“We adjust tracks to -14 dB LUFS, according to the ITU 1770 (International Telecommunication Union) standard. We normalize an entire album at the same time, so gain compensation doesn’t change between tracks.” —Spotify
They are not alone; YouTube, Tidal, and Amazon also use this measurement. Deezer uses -15 dB LUFS and Apple Music has chosen -16 dB LUFS, while SoundCloud has no measurement at all.
To make things more confusing, some services automatically normalize songs to match their predefined LUFS target. Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade. This ensures that the listener never needs to adjust their volume knob from song to song.
“Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade.”
What does that mean for your music? If you upload a song to Spotify above -14 dB LUFS, they will turn it down and you’ll lose dynamic range. If the song is below -14 dB LUFS, they will normalize it, or in other words, turn it up to match all the songs on the platform (listeners can switch normalization off in their settings if they choose), but you’ll still suffer some dynamic-range loss.
However, that same quiet song on YouTube will not be turned up, even though they use the same -14 dB LUFS target. Apple Music normalizes as well, turning up quieter songs relative to their peak levels and using both track and album normalization. Deezer and Pandora always use normalization, but only on a per-track basis, while Tidal uses album normalization. Confusing, right? So, how can we make our mixes sound their very best and perhaps get an idea of what they’ll sound like on various platforms?
1. Before you use any type of plugin (compression, limiting, EQ) on your stereo bus, make sure your dynamic range within the song itself is intact, and nothing peaks over 0 dBFS on your meters—no little red lights should be triggered.
2. Use an LUFS metering plugin like Waves’ WLM ($29), FabFilter’s Pro-L 2 ($169), or iZotope’s Insight ($199).
3. Set your true peak limiter to -1 dB and your long-term LUFS to -14 dB, and you’ll be in the sweet spot.
4. Play your song from beginning to end, look at the readings, and adjust gain accordingly (the sketch after this list shows a rough way to estimate each platform’s offset).
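To get a rough feel for what each platform will do to your master, compare your measured integrated LUFS to the targets quoted above. Here’s a minimal sketch of that gain math, assuming the platform targets cited in this column; it only estimates the playback gain offset, not the limiting or peak handling each service applies.

```python
# Estimate the playback gain offset each platform would apply to your track.
# Positive = turned up, negative = turned down.
# Replace measured_lufs with the integrated reading from your meter.
measured_lufs = -11.2  # hypothetical mix that's louder than most targets

platform_targets = {
    "Spotify / YouTube / Tidal / Amazon": -14.0,
    "Deezer": -15.0,
    "Apple Music": -16.0,
}

for platform, target in platform_targets.items():
    offset = target - measured_lufs
    direction = "turned up" if offset > 0 else "turned down"
    print(f"{platform}: {offset:+.1f} dB ({direction})")
```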
Next month, I’ll be showing you some creative ways to use reverb! Until then, namaste.
In Universal Audio’s LUNA DAW, “trim” is offered as an additional automation option.
In the modern world of immersive audio capabilities, knowing how to automate mix parameters is essential.
Let me focus on the paradigm shift in the mixing world—immersive audio. It’s been coming quietly for a long time, and I believe it might just survive the bleached-bone-littered landscape of previous multi-channel mixing technology incarnations that were left for dead and never destined for success, like Quad and 5.1 surround sound.
Unless it’s a live recording and you’re being true to the original audience experience, I’ve never really been that enthralled with the “static mixing” mindset—where once the instruments are placed in the stereo field, they never move—as has been the case on the vast majority of records over the last several decades. Especially when one considers immersive audio and the vast amount of possibilities to place and move musical elements of a song in space over time, listening to static mixes seems, well … boring. Granted, my attention span is shorter than a ferret’s on espresso, but c’mon folks, we’re 20-plus years into the new millennium. Onward!
The Good News
With the ever-evolving immersive audio environment and renderers, and breakthroughs in HRTF (head-related transfer function) technology, now more than ever we are able to experience decoded, folded-down 7.1.4 spatial audio mixes in a binaural audio format through a regular pair of headphones (or earbuds). Finally, we’re making progress.
With this in mind, your automation skills need to be on point in order to take full advantage of these new possibilities. This time, I’d like to highlight core types of automation for you to start employing (regardless of your DAW) to add some new dimension within your mixes. Tighten up your belts, the Dojo is now open.
All Hands on Deck
I suppose you could say automation has been around and available to mixing engineers since the first time multiple pairs of hands were on a console and engineers were choreographing fader rides as the mix printed. One of my favorite, extreme examples of this is the classic, smash hit “I’m Not in Love” by 10cc, released in 1975. Remember all those gorgeous pads? Those chords were created by having the group sing “ah” multiple times, which created a 48-voice “choir” for each one of the 12 notes of the chromatic scale. With the tape machine looping the 12 tracks of “ah”s, the band rode the console’s volume faders for each track to create the appropriate chord progressions.
By the end of the decade, Brad Plunkett and Dave Harrison’s Flying Faders came online and allowed installed motorized faders to be automated by a dedicated computer. We still use this technology on our Neve 8078 console here at Blackbird Studios.
By the early ’90s, DAWs offered comprehensive automation capabilities within the program itself that spanned from volume and panning to console settings, MIDI data, and now, plugin settings for spatial audio parameters.
Latch or Touch
Let’s start with top-level volume automation choices. These are perhaps the most important to your overall mix, and there are various ways of writing volume automation. DAWs vary in the number of options they offer, but most feature the following five choices: off, write, read, touch, and latch. The first three are very intuitive—don’t play back the automation, write it, or play it back. But what is the difference between “touch” and “latch”? It’s important to know, especially since these modes apply to every kind of automation parameter, from effects sends and MIDI data to plugin controls that allow every parameter to be automated. I use “touch” for highly nuanced fader rides and “latch” for more general maneuvers.
“Now more than ever, we are able to experience decoded, folded down 7.1.4 spatial audio mixes in a binaural audio format through a regular pair of headphones.”
After your initial “write” pass, “touch” automation plays back any previously written automation and only writes over it while you’re touching or moving the fader. Upon release, it immediately goes back to reading the previous automation.
In contrast, “latch” reads and writes automation in the same way, but once the fader is released, it stays (or “latches”) at the level where you let go and keeps overwriting any previous automation from that point on. This can be useful if you need certain sections higher or lower in volume, or are riding effects sends. But remember, as soon as you let go of the fader, it’s going to keep overwriting all previous automation!
Universal Audio’s LUNA DAW offers another level of fine control with its “trim” option, which lets you raise or lower the overall level of an automation pass while still preserving the underlying moves. This is helpful when you need to do stem bounces, vocal-up/vocal-down mixes, etc.
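If it helps to see these behaviors side by side, here’s a toy model in Python (a conceptual sketch, not real DAW code) that applies touch, latch, and trim passes to a short list of fader values.

```python
# A toy model of touch, latch, and trim automation passes.
# "existing" is previously written fader automation (in dB) at equal time steps;
# "moves" is a new pass, where None means "fader not touched" at that step.

def touch_pass(existing, moves):
    # Overwrite only while the fader is touched; otherwise read the old data.
    return [m if m is not None else e for e, m in zip(existing, moves)]

def latch_pass(existing, moves):
    # Overwrite while touched, then hold (latch) the last value and keep
    # overwriting everything after the point of release.
    out, held = [], None
    for e, m in zip(existing, moves):
        if m is not None:
            held = m
        out.append(held if held is not None else e)
    return out

def trim_pass(existing, offset_db):
    # Raise or lower the whole pass while preserving the underlying moves.
    return [e + offset_db for e in existing]

existing = [-10, -10, -10, -10, -10, -10]
moves    = [None, None, -6, -4, None, None]   # fader grabbed at steps 3-4

print(touch_pass(existing, moves))   # [-10, -10, -6, -4, -10, -10]
print(latch_pass(existing, moves))   # [-10, -10, -6, -4, -4, -4]
print(trim_pass(existing, 2))        # [-8, -8, -8, -8, -8, -8]
```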
Now that you know the main differences between these write modes, you can let your imagination go wild by experimenting with automating every possible parameter available in your DAW—from MIDI to soft synths to all your plugins. Until next time, namaste.