In the modern world of immersive audio capabilities, knowing how to automate mix parameters is essential.
Let me focus on the paradigm shift in the mixing world—immersive audio. It’s been coming quietly for a long time, and I believe it might just survive the bleached-bone-littered landscape where previous multi-channel incarnations like Quad and 5.1 surround sound were left for dead.
Unless it’s a live recording and you’re being true to the original audience experience, I’ve never really been that enthralled with the “static mixing” mindset that has dominated the vast majority of records over the last several decades—where once the instruments are placed in the stereo field, they never move. Especially when you consider immersive audio and the vast range of possibilities for placing and moving the musical elements of a song in space over time, listening to static mixes seems, well … boring. Granted, my attention span is shorter than a ferret’s on espresso, but c’mon folks, we’re 20-plus years into the new millennium. Onward!
The Good News
With the ever-evolving immersive audio environment and renderers, and breakthroughs in HRTF (head-related transfer function) technology, now more than ever we are able to experience decoded, folded-down 7.1.4 spatial audio mixes in a binaural audio format through a regular pair of headphones (or earbuds). Finally, we’re making progress.
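If you’re curious what that fold-down actually involves, here is the idea reduced to a single, simplified equation (a static-head sketch; real renderers add head tracking, room modeling, and more). Each of the 12 speaker feeds in the 7.1.4 bed is convolved with a pair of head-related impulse responses, one per ear, and the results are summed into the left and right headphone signals:

$$y_L(t) = \sum_{c=1}^{12} (x_c * h_{c,L})(t), \qquad y_R(t) = \sum_{c=1}^{12} (x_c * h_{c,R})(t)$$

Here, $x_c$ is the signal feeding speaker channel $c$, and $h_{c,L}$ and $h_{c,R}$ are the impulse responses from that speaker’s position to your left and right ears. Twelve speaker feeds in, two headphone feeds out.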
With this in mind, your automation skills need to be on point in order to take full advantage of these new possibilities. This time, I’d like to highlight core types of automation for you to start employing (regardless of your DAW) to add some new dimension within your mixes. Tighten up your belts, the dojo is now open.
All Hands on Deck
I suppose you could say automation has been around and available to mixing engineers since the first time multiple pairs of hands were on a console and engineers were choreographing fader rides as the mix printed. One of my favorite, extreme examples of this is the classic, smash hit “I’m Not in Love” by 10cc, released in 1975. Remember all those gorgeous pads? Those chords were built by having the group sing “ah” over and over, creating a 48-voice “choir” for each of the 12 notes of the chromatic scale. With the tape machine looping the 12 tracks of “ah”s, the band rode the console’s volume faders for each track to create the appropriate chord progressions.
By the end of the decade, Brad Plunkett and Dave Harrison’s Flying Faders came online and allowed installed motorized faders to be automated by a dedicated computer. We still use this technology on our Neve 8078 console here at Blackbird Studios.
By the early ’90s, DAWs offered comprehensive automation capabilities within the program itself, spanning volume, panning, console settings, and MIDI data. Today that extends to plugin settings for spatial audio parameters.
Latch or Touch
Let’s start with top-level volume automation choices. These are perhaps the most important to your overall mix, and there are various ways of writing volume automation. DAWs vary in the number of options they offer, but most feature the following five: off, write, read, touch, and latch. The first three are very intuitive—don’t play back the automation, write it, or play it back. But what’s the difference between “touch” and “latch”? It’s important to know, especially since these modes apply to every automatable parameter, from effect sends and MIDI data to plugin controls. I use “touch” for highly nuanced fader rides and “latch” for more general maneuvers.
After your initial “write” pass, “touch” automation plays back any previously written automation and only writes over it while you touch or move the fader. Upon release, it immediately goes back to reading the previous automation.
In contrast, “latch” reads and writes automation similarly, but once a fader is released, it overwrites any previous automation and stays (or “latches”) at the point where the fader was released. This can be useful if you need to have certain sections higher or lower in volume, or are using effects sends. But remember, as soon as you let go of the fader, it’s going to keep overwriting all previous automation!
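If it helps to see the difference spelled out, here is a minimal, purely conceptual sketch in code (no DAW is implemented this way, and write_pass is just a made-up helper for illustration). “Touch” falls back to the existing automation the instant you let go; “latch” holds the last value and keeps overwriting from the release point onward.

```python
# Conceptual sketch of "touch" vs. "latch" on a single automation lane,
# stored as one value per timeline step. Not any DAW's actual implementation.

def write_pass(existing, fader_moves, mode):
    """existing: previously written values; fader_moves: step -> value while held."""
    lane = list(existing)
    held_value = None
    for step in range(len(lane)):
        if step in fader_moves:                      # fader is being touched/moved
            lane[step] = held_value = fader_moves[step]
        elif mode == "latch" and held_value is not None:
            lane[step] = held_value                  # latch: keep overwriting after release
        # touch: fall back to the existing automation as soon as you let go
    return lane

previous = [0.0] * 8                                 # flat initial "write" pass (dB)
ride = {2: -3.0, 3: -4.5}                            # fader held during steps 2 and 3
print(write_pass(previous, ride, "touch"))  # [0.0, 0.0, -3.0, -4.5, 0.0, 0.0, 0.0, 0.0]
print(write_pass(previous, ride, "latch"))  # [0.0, 0.0, -3.0, -4.5, -4.5, -4.5, -4.5, -4.5]
```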
Universal Audio’s LUNA DAW adds another level of fine control with its “trim” option, which allows you to reduce or increase the overall level of an automation pass while still preserving the underlying automation. This is helpful when you need to do stem bounces, vocal up/vocal down mixes, etc.
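In the same hypothetical terms, “trim” keeps the shape of the ride you already wrote and simply offsets the whole pass by a fixed amount:

```python
# "Trim" sketch: offset an entire automation pass while keeping the ride's shape.
underlying = [0.0, 0.0, -3.0, -4.5, 0.0, 0.0, 0.0, 0.0]   # previously written fader ride (dB)
trim_db = -2.0                                            # e.g., a quick "vocal down" pass
print([v + trim_db for v in underlying])                  # [-2.0, -2.0, -5.0, -6.5, -2.0, ...]
```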
Now you know the main differences between the automation writing modes and can let your imagination go wild by experimenting with automating every possible parameter available in your DAW—from MIDI to soft synths to all your plugins. Until next time, namaste.

Your favorite stomps are real-time, tactile sound processors. Plug them in and expand your DAW’s options.
Welcome to another Dojo. This time I want to help supercharge your creative process by advocating for a hybrid approach to effects processing. Specifically, I want you to embrace using stomp pedals as real-time, tactile effects processors and combine them with your favorite DAW effects and plugins.
You should be deeply familiar with how to insert plugins directly on your DAW tracks (for serial processing, like compression and EQ) or set them up on aux buses fed by sends (for parallel processing with modulation and time-based effects, like reverbs and delays). But what about guitar pedals? Yes, they’re typically used in live performance settings and get a lot of abuse on the stage and studio floor, but with the explosion of modern, programmable, MIDI-capable pedals on the market and their ever-increasing processing power, “lowly” stompboxes are long overdue to be elevated to the same level as rack effects and kept within arm’s reach on your mixing desk.
Turning Knobs vs. Mouse-Clicking
When it comes to tracking and mixing, turning knobs on a physical, controllable surface, such as a pedal, mixing console, or MIDI controller, can provide a more tactile and intuitive way to make adjustments to your sound, compared to using a mouse to click and drag on virtual knobs and sliders within your DAW. Let’s face it, in the heat of a session this can be tedious—especially if you run out of trackpad or mousepad space while recording or mixing.
But what about exploring some hybrid approaches that take advantage of both formats? After all, plugins provide flexibility, precision, consistency, automation, portability (nothing to lug around), cost-effectiveness (cheaper than outboard gear), and compatibility (the same plugins can work on multiple DAWs). Pedals can add analog warmth, hands-on control, and modularity, are easy on your computer’s RAM and CPU resources, never break with an OS update, and retain their value (how much are original Klons going for now?!). They can also help you achieve a unique and personalized sound that can be difficult to replicate with digital plugins and their stock presets. Many modern foot pedals can also handle both line-level and instrument-level inputs.
Builders—Strymon, Eventide, Boss, EarthQuaker, Empress, Meris, Chase Bliss, and many more—have a wide range of pedals that are MIDI capable and, quite frankly, have processing power that far surpasses many classic rack effects units.
So, I’d like to offer some creative ways to use pedals, in addition to your regular plugins, in your next session or mix:
Getting Ready
Before starting, be aware that some pedals expect instrument-level rather than line-level inputs (the latter is what your interface typically outputs). You can find helpful info by reading my September 2022 Dojo column, “What You Should Know Before Using Guitar Pedals with Other Instruments.”
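For a rough sense of why the mismatch matters, here is a quick back-of-the-envelope comparison. The +4 dBu figure is the standard professional line level; instrument level varies widely from guitar to guitar, so treat -20 dBu as a ballpark assumption rather than a spec (dbu_to_volts is just a throwaway helper):

```python
# Rough level comparison: pro line level vs. a ballpark instrument level.
def dbu_to_volts(dbu):
    return 0.775 * 10 ** (dbu / 20)   # 0 dBu is defined as 0.775 V RMS

line_level_dbu = 4.0                  # typical "+4 dBu" interface line output
instrument_dbu = -20.0                # ballpark guitar output; varies widely
print(f"Line level:       {dbu_to_volts(line_level_dbu):.2f} V RMS")
print(f"Instrument level: {dbu_to_volts(instrument_dbu):.3f} V RMS")
print(f"Line level runs about {line_level_dbu - instrument_dbu:.0f} dB hotter")
```

In other words, a line-level signal can easily be 20-plus dB hotter than what a pedal’s input stage was designed for, which is exactly why some pedals distort or clip when fed straight from an interface output.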
To start, duplicate the track(s) you want to process with your effects pedal(s) in your DAW and route the output of those tracks to one or two of your line outputs on your interface. Depending on the pedals in question, you may have options for mono in and out, mono in/stereo out, or stereo in and out. Connect all the relevant cables, running the output of the last pedal back into an input on your interface. If your incoming signal is low, switch from line to mic on your interface for each input.
Next, in your DAW, create one mono or one stereo track, depending upon how you are going to return the processed signal from your interface, and record-enable the track(s). Now you’re ready to record new, processed material (from one pedal or your entire pedal board!) in real-time and take advantage of every parameter on each pedal.
You can now use your pedals to adjust distortion levels, reverb, and delay times in real-time (with all the glorious artifacts, glitches, and smears), as well as adjust tremolo rates and chorus depths on the fly. Get creative! Take chances and invite any and all happy accidents!
One particular approach I love is throwing loop pedals into this equation, after all the other pedals, for some wild, abstract processing. My signal flow usually goes from overdrive to mod-based effects (chorus, phaser, tremolo) to time-based effects (delays and reverbs) followed by a looper. At present, my favorite looper pedal for this by far is Habit by Chase Bliss ($399 street). It has three minutes of loop time and can take user-definable snippets of your loop, play them back asynchronously, feed that back into the loop itself, and record all modifications as well (and this is just scratching the surface). Highly recommended!
Combine this “out-of-the-box” technique along with your normal “in-the-box” workflow and you should be creating some pretty amazing sounds. Let me know if you find a cool approach! I’ll share it in the Dojo channel.
Until next time, blessings, and continue to share your gifts with the world. It matters, and you matter!
Free your microphone placement and gain structure, and your EQ and compression will follow.
Hello everyone, and welcome back to another Dojo! In the last two columns, I’ve focused on bus mixing techniques to get your recordings more on point—and I hope that was helpful. This time, I’d like to shift focus in the other direction and give you three tips to capture your best recorded tones yet.
In my experience, the best way to get great recordings begins with getting in tune with your inner ear and the tones you are hearing in your head. This understanding will act as a catalyst for the first important tip: choice and placement of microphones. As simple as this is, we run the risk of listening with our eyes instead of our ears, because we are creatures of habit. How many times have you placed the same mic in the same place on the same amp (or same place at the guitar, for acoustic players)? Did you really explore the possibilities, or was this the best solution at the time and now it has become ingrained? Maybe it’s time to re-think the process and try something new?
Regular Dojo readers are already familiar with the three most common microphones used in recording: condenser, ribbon, and dynamic. Regardless of what mics you have, use your ears and listen to the source you want to record. For example, listen not only to where the amp sounds the best at the speaker, but also in the room. For acoustic guitar, placing the mics near the 14th fret in addition to other locations can yield a wide variety of tones. If you are recording by yourself, make several different short recordings and document the mic placement for each, listen, and then make decisions. The idea here is that you want to get the sound you’re looking for without using any EQ. In short, if you don’t like the sound you’re getting, move the mics until you do!
Once the decision has been made, the second tip for making better recordings is to pay careful attention to your gain structure (aka recording level) and give yourself plenty of headroom. The best way to do this is to set the recording track’s fader in your DAW to unity (zero), and then adjust your preamp’s gain level until the signal meters between -15 and -5 dBFS in most DAWs (check your specific DAW to see what type of metering it uses). If you’re somewhere in this range, you’ll have a good signal-to-noise ratio and ample headroom for loud passages, like when you kick in the overdrive channel for the chorus and solo sections.
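If you like to see the numbers behind that advice, here is a quick sketch of what those meter readings mean. In dBFS, 0 is digital full scale (clipping), so your peak reading is also how much headroom you have left:

```python
# Peak level in dBFS vs. remaining headroom before clipping at 0 dBFS.
def dbfs_to_fraction(dbfs):
    return 10 ** (dbfs / 20)          # fraction of digital full scale

for peak_dbfs in (-15, -10, -5):
    print(f"Peak {peak_dbfs} dBFS = {dbfs_to_fraction(peak_dbfs):.2f} of full scale, "
          f"{-peak_dbfs} dB of headroom left")
```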
A scenario like Fig. 1 has bad news written all over it. The track faders are pushed near the top of their range and the master bus has already peaked. This can happen quicker than you think if you didn’t set your input levels properly to begin with. If you find yourself in this predicament, you’ll need to recalibrate your gain structure for every track for the entire mix. Ouch!
The final tip is focused on signal processing and preserving the efforts of the first two tips. Once your tracking is completed, don’t be too quick to start adding copious amounts of EQ and compression. The reason for the first two tips was to mitigate the need for EQ and preserve the natural dynamic range of your tracks. Now, when you need to use EQ and compression, you can use it with subtlety and not out of necessity to fix a poorly recorded track.
As always, if you have any questions you can reach me at recordingdojo@premierguitar.com, and I also want to invite you to check out my new single “Christian Graffiti” on your favorite music platform to hear all of these tips in action. Until next time, namaste.