Streaming platforms each have their own volume standards for uploaded audio, and if you don’t tailor your mixes to each, you risk losing some dynamic range.
Here’s the scenario: You’ve finished your latest masterpiece, and now it’s time to start considering how your mixes and their loudness levels will be perceived across all digital platforms (Apple Music, Spotify, Amazon, etc.). In addition, you might also want to make sure your music adheres to the strict audio broadcast standards used in film, TV, podcasts, video games, and immersive audio formats like Dolby Atmos.
These considerations, among many others, are typically reserved for mastering engineers. However, you may not have the budget for a mastering engineer, so in the meantime I’d like to give you some expert advice on making sure your loudness levels are in check before you release your music into the wild. Tighten up your belts, the Dojo is now open.
Hail LUFS Metering!
LUFS (Loudness Units Full Scale) is unique in that it attempts to measure how the human brain perceives loudness, which is accomplished by using a K-weighted scale with 400 ms “momentary” measurement windows (each overlapping the other by 75 percent), resulting in super smooth and accurate readings. This momentary method also allows for additional LUFS short-term and long-term readings (Fig. 1), and it is this latter measurement, LUFS long-term (aka LUFS integrated), that all of the digital music platforms will be placing their utmost attention upon. For those who are curious, the K-weighted audio scale places less emphasis on bass frequencies and more on higher frequencies above 2 kHz—and is a refined emulation of how humans perceive sound. It is not a musical scale like C harmonic minor, but rather a scaled algorithm for measuring frequencies.
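If you’re curious what that windowing scheme looks like under the hood, here’s a rough Python sketch. To be clear, this is a toy illustration and not a spec-complete meter: it skips the K-weighting filter and the gating used for the integrated measurement, and the function name is my own.

```python
import math

def momentary_loudness(samples, sample_rate):
    """Sketch of BS.1770-style momentary loudness: 400 ms windows
    with 75% overlap (a 100 ms hop). K-weighting and gating omitted."""
    win = int(0.400 * sample_rate)   # 400 ms window
    hop = int(0.100 * sample_rate)   # 75% overlap -> 100 ms hop
    readings = []
    for start in range(0, len(samples) - win + 1, hop):
        window = samples[start:start + win]
        mean_square = sum(s * s for s in window) / win
        if mean_square > 0:
            # -0.691 dB offset per ITU-R BS.1770
            readings.append(-0.691 + 10 * math.log10(mean_square))
    return readings

# One second of a full-scale 1 kHz sine at 48 kHz. With K-weighting
# omitted this reads about -3.7; a real meter lands close, since
# K-weighting is roughly flat near 1 kHz.
sr = 48000
sine = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
print(momentary_loudness(sine, sr)[:3])
```

Each successive reading shares 300 ms of audio with the previous one, which is why momentary meters move so smoothly.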
The Wild West of dBs
Less than 10 years ago, there was no loudness standard for any of the audio-streaming platforms. In 2021, the Audio Engineering Society (AES) issued their guidelines for loudness of internet audio-streaming and on-demand distribution in a document named AESTD1008.1.21-9, which recommends the following:
News/talk: -18 dB LUFS
Pop music: -16 dB LUFS
Mixed format: -17 dB LUFS
Sports: -17 dB LUFS
Drama: -18 dB LUFS
However, most services have their own loudness standards for music submission.
“We adjust tracks to -14 dB LUFS, according to the ITU 1770 (International Telecommunication Union) standard. We normalize an entire album at the same time, so gain compensation doesn’t change between tracks.” —Spotify
They are not alone; YouTube, Tidal, and Amazon also use this measurement. Deezer uses -15 dB LUFS and Apple Music has chosen -16 dB LUFS, while SoundCloud has no loudness standard at all.
To make things more confusing, some services automatically normalize songs to match their predefined LUFS target. Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade. This ensures that listeners never need to reach for the volume knob from song to song.
“Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade.”
What does that mean for your music? If you upload a song to Spotify above -14 dB LUFS, they will turn it down and you’ll lose dynamic range. If the song is below -14 dB LUFS, they will normalize it, or in other words, turn it up to match the other songs on the platform (listeners can turn normalization off if they choose), but you’ll still suffer some dynamic-range loss.
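The arithmetic behind that turn-down (or turn-up) is simple: the service measures your song’s integrated LUFS and applies the difference between that and its target as a gain offset. A quick Python sketch (the -14 LUFS default matches Spotify’s stated target; the function names are mine):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain (in dB) a streaming service would apply to hit its
    loudness target. Negative = turned down, positive = turned up."""
    return target_lufs - measured_lufs

def db_to_linear(db):
    """Convert a dB gain change into a linear amplitude multiplier."""
    return 10 ** (db / 20)

# A hot master at -9 LUFS gets turned down 5 dB on a -14 LUFS platform:
gain = normalization_gain_db(-9.0)   # -5.0 dB
print(gain, db_to_linear(gain))      # roughly 0.562x amplitude
```

A quiet master at -16 LUFS would get +2 dB by the same math, subject to each platform’s rules about turning material up.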
However, that same quiet song on YouTube will not be turned up, even though YouTube uses the same -14 dB LUFS target. Apple Music normalizes as well, turning quieter songs up relative to their peak levels, and uses both track and album normalization. Deezer and Pandora always use normalization, but only on a per-track basis, while Tidal uses album normalization. Confusing, right? So, how can we make our mixes sound their very best and perhaps get an idea of what they will sound like on various platforms?
1. Before you use any type of plugin (compression, limiting, EQ) on your stereo bus, make sure your dynamic range within the song itself is intact, and nothing peaks over 0 dBFS on your meters—no little red lights should be triggered.
2. Use an LUFS metering plugin like Waves’ WLM ($29), FabFilter’s Pro-L 2 ($169), or iZotope’s Insight ($199).
3. Set your true-peak limiter to -1 dB and your long-term (integrated) LUFS to -14 dB, and you’ll be in the sweet spot.
4. Play your song from beginning to end, look at the readings, and adjust gain accordingly.
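For step 1, the “no red lights” rule is just a peak measurement. Here’s a minimal Python sketch of a sample-peak reading (note that a true-peak meter goes further, oversampling the signal to catch inter-sample overs, which is why step 3 leaves a -1 dB margin):

```python
import math

def peak_dbfs(samples):
    """Sample peak in dBFS, where 0 dBFS = digital full scale."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# A signal peaking at half of full scale reads about -6 dBFS:
print(peak_dbfs([0.5, -0.25, 0.1]))  # ~ -6.02
```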
Next month, I’ll be showing you some creative ways to use reverb! Until then, namaste.
Using a contact mic on your acoustic guitar has many advantages—and can open the door to some adventurous experimentation.
For example, during a chamber music concert, I placed a contact mic under the chess board as we reenacted, move for move, the legendary Game 6 of the 1972 World Chess Championship between Bobby Fischer and Boris Spassky, while rice grains were dropped on the board and the rest of the ensemble made an ongoing soundtrack. (I highly recommend watching HBO’s 2011 documentary, Bobby Fischer Against the World.) In short, it’s my go-to initial technique for making totally new sounds, textures, timbres, samples, and sound design that I incorporate into my music. Tighten up your belts, the Dojo is now open.
Vibration Positive
Before we start, let’s look at the many benefits of using a contact microphone. It can pick up sounds that are not audible to the human ear. For example, if you attach the microphone to a metal surface and strike it with a mallet, you will hear not only the sound of the mallet hitting the metal, but also the vibrations of the metal itself. That’s exactly how Ben Burtt got the blaster sound effects for Star Wars—by hitting a radio tower’s support wire (guy wire) in the Mojave Desert.
“It’s my go-to initial technique for making totally new sounds, textures, timbres, samples, and sound design that I incorporate into my music.”
Recently, I showed our students at the Blackbird Academy how to create new samples and sounds by attaching a contact mic to the outside of a 5-gallon water jug, then pouring water inside and hitting the side of the jug while gently swirling the water. We eventually ended up with an entire “water jug” drum kit.
Another benefit of using a contact microphone is that it can eliminate unwanted background noise. Because the microphone is only picking up vibrations from the surface it is attached to, it is less likely to pick up ambient noise in the room. However, because it is sensitive to vibrations, it may pick up unwanted sounds from handling or movement. Also, it may not capture the full range of frequencies that a traditional microphone would capture.
Lastly, contact mics really come in handy for vintage acoustic instruments that you may want to leave in their original state, giving you the flexibility to mic from any position without harming them.
Um … How Do I?
To use a contact microphone, you need to attach the microphone to the surface you want to capture the sound from. I only use Loctite Fun-Tak Mounting Putty because it is non-permanent, leaves no residue, and is non-tarnishing, malleable, and non-toxic. I simply place a tab of the Fun-Tak on the back of my contact mic and then mount it to whatever I want to record.
Check out Fig. 1. You can see I’ve attached my Zeppelin Labs Cortado MkIII mic ($159 street) to the headstock of my National Estralita Deluxe. This gives me that piezo-electric sound that I can in turn reamp or process with plugins, etc.
Be sure to experiment with different placements all over the instrument to find the sound you are looking for. Ever wonder what it might sound like inside your slide when playing slide guitar? Tape the mic on the top of your slide and play away. But don’t stop there! You could also place it on electronic kids’ toys that make noise (toy pianos, baby shakers, celeste, handheld electronic games), or pitched percussion, like kalimbas, log drums, vibraphones, and even cymbals. Or, think way outside the box—literally. Mount it on all kinds of cups, glasses, bowls, buckets, doors, and windows. Or on glass shower doors (outside the shower of course!), or the inside of your car windshield the next time you wash your car or it rains, flagpoles on windy days, park slides, merry-go-rounds, swing sets, and basically anything else you can imagine.
After you get some great source sounds, head back to the studio, keep what you like and process the sounds with reckless abandon. Until next time, namaste.
Using templates when recording makes a big difference in streamlining your workflow, and will leave you more time to get creative.
Hello and welcome to another Dojo! This time I’d like to focus on the benefits of using templates in your recording and mixing process. I’ll also show you some ways in which you can increase your productivity by using customized templates for your particular workflow regardless of what DAW(s) you use. Whether you’re recording a live band or a solo artist, you can create templates that include the necessary tracks, processing, and routing setups to meet your unique requirements. Tighten up, the Dojo is now open.
Over the last 30 years, digital audio workstations (DAWs) have revolutionized the way music is produced and recorded, making it easier to create high-quality recordings from the comfort of your own home. With so many options now available, it can be challenging to streamline the recording process and maintain consistency across multiple sessions. This is where templates—pre-configured session setups that can be customized and reused to simplify the recording process—come in.
The main point here is to create a template that works for you. I have found that the more specialized the template, the less flexible it becomes for use in other scenarios. For example, a 48-channel mixing template with specific plugins, buses, and other routing assignments won’t be a first choice when recording a power trio. I think the important thing is to recognize the type(s) of work you do and make different levels of templates accordingly. By creating various kinds of templates that include all the necessary tracks, plugins, and settings, you can ensure that each recording or mix session starts with a consistent foundation, allowing you to focus on the creative process rather than technical setup.
“By sharing templates, you can ensure that everyone is working with the same setup and settings, making it easier to collaborate and share ideas.”
Saving Time
Creating a new tracking session in your DAW from scratch can be a time-consuming process, especially if you’re working with a large number of tracks or complex routing setups. Using templates allows you to quickly set up your session and get to work, without having to waste time configuring settings or searching for the right plugins. I find this particularly useful when starting a new project that involves recording multiple songs with the same artist or band.
Typically, I create the session’s tracks and buses; assign, route, and organize my signal flow, in-the-box or outboard (Fig. 1); and get sound levels from each musician by making adjustments at the mic first, then adding EQ and compression as needed. Once all that is done, I save the session as a “tracking template” with the artist/band name and date. When we’re ready to move on to the next song, I pull up the “tracking template” and save it as a “new session”! Now I have the same organization of track count, routing, etc., and I am able to repeat the process for each song moving forward.
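Inside a DAW this workflow is just “Save As,” but the same idea is easy to script for any session folder on disk. Here’s a minimal, hypothetical Python sketch (the folder layout and naming scheme are mine, not tied to any particular DAW):

```python
import shutil
from datetime import date
from pathlib import Path

def new_session_from_template(template_dir, artist, song):
    """Copy a tracking-template folder to a fresh, dated session folder,
    so every song starts from the same track count and routing."""
    template = Path(template_dir)
    dest = template.parent / f"{artist} - {song} - {date.today():%Y-%m-%d}"
    shutil.copytree(template, dest)  # duplicates the whole session folder
    return dest
```

Call it once per song and every session starts from the same foundation, with the template itself left untouched.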
Mixing It Up
The same logic applies when moving to the mixing stage. I’ll create a new template focused on advanced signal routing and incorporate things like console and tape emulation (if it wasn’t tracked through a console), side-chain options, routing folders, and instrument groups specific to that project. I’ve found that one-size-fits-all, highly specialized mixing templates end up being overbuilt: I waste time parsing out only what is necessary and making sure the session isn’t draining my RAM and CPU resources.
Collaboration
Using templates can also be beneficial when collaborating with other musicians or engineers. By sharing templates, you can ensure that everyone is working with the same setup and settings, making it easier to share ideas and tracks. This can be especially important when working remotely, as it can help ensure that everyone is on the same page, even if they are not in the same physical location.
Creating templates can also help future-proof your recording process, ensuring that your recordings remain consistent and of high quality as your needs change over time. By creating templates that can be easily updated or modified, you can adapt to new recording technologies or workflows without having to start from scratch, which helps you stay ahead of the curve.
Finally, you can create templates that use console emulation on every channel, aux, and mix bus. There’s Universal Audio’s LUNA API Vision Console Emulation Bundle ($559 street), Neve and API summing plugins ($149 street), and many other possibilities, like Waves’ NLS and Slate Digital’s Virtual Console Collection ($149).
Regardless of the DAW you use, taking the time to create some different types of templates will save you time and help keep you and everyone involved in the creative state of mind. Until next time, keep creating! Namaste.
Learning the differences between various cables can greatly improve the quality of your recordings.
Hello, and welcome to another Dojo session! This time I’d like to drill down to some audio bedrock and unearth the differences between balanced and unbalanced cables. I want to help you understand the differences and give you some strategies to greatly reduce noise (hums, buzzes, and static) in your recordings. Tighten up, the Dojo is now open.
There are many different connection types and gauges of balanced and unbalanced audio cables, and both are used to transmit audio signals from one device to another. However, they differ in their construction and performance, and understanding these differences is essential for achieving optimal audio quality.
Tipping the Scales
Unbalanced audio cables are the most common type of cable used in consumer audio equipment. This includes our beloved 1/4" TS (tip-sleeve) instrument and speaker cables, RCA cables, and TRS (tip-ring-sleeve) 3.5 mm and 1/4" headphone cables. The first two kinds consist of two wires (a signal wire and a ground wire), while the headphone cables are stereo, with three wires: left, right, and ground. Signal wires carry the audio signal, while the ground wire acts as a reference point. At the cable’s end, the tip of the plug (and, in the case of headphone cables, the ring) carries the signal, while the sleeve is the ground connection.
Unbalanced cables are very limited in the distance they can transmit audio signals cleanly (preferably less than 20 feet). The longer the cable, the more high frequencies you lose, and the more susceptible it is to noise and interference from external sources—like electromagnetic fields created by other electronic devices nearby (amps, synths, drum machines, outboard gear, cell phones, computers, televisions, etc.) and radio frequency interference.
Balancing the Scales
Balanced audio cables, on the other hand, which include XLR and balanced 1/4" TRS types of connectors, are designed to reduce interference and improve audio quality. They always consist of three wires—two signal wires and a ground wire. Note that while some unbalanced cables (like stereo headphone cables) also have three wires, the two signal wires in a balanced cable carry the same audio signal, with one copy flipped 180 degrees out of phase: balanced mono as opposed to unbalanced stereo. Balanced cables are ideal for use in recording studios and live sound because they are capable of transmitting audio signals over longer distances (several hundred feet) without introducing noise or hum.
How? Without getting too technical: the audio signal is split into two identical paths across the two signal wires, with one copy flipped out of phase. At the receiving end, the flipped copy is inverted back and the two are summed, so the signal doubles in level, while any noise picked up along the cable (which is identical on both wires) is canceled out. This includes 60 Hz buzz, hum (ground loops), white noise (thermal noise), digital clock jitter, and more.
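You can model the whole trick in a few lines of Python. This is an idealized sketch (real-world rejection depends on how well matched the two conductors are), with function names of my own:

```python
def balanced_receive(hot, cold):
    """Receiving end of a balanced line: re-invert the cold leg and
    sum it with the hot leg. Signal doubles; common noise cancels."""
    return [h - c for h, c in zip(hot, cold)]

signal = [0.5, -0.3, 0.8]
noise = [0.1, 0.1, -0.2]   # interference hits both wires identically

hot = [s + n for s, n in zip(signal, noise)]     # signal + noise
cold = [-s + n for s, n in zip(signal, noise)]   # inverted signal + noise

# Twice the signal, with the noise completely gone:
print(balanced_receive(hot, cold))
```

On paper the math is (s + n) - (-s + n) = 2s, which is exactly what the output shows.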
They Look the Same, but Are They?
One mistake that’s easy to make is confusing an unbalanced stereo headphone cable with a balanced mono TRS cable; they look the same and both have three wires. But if you connect the unbalanced stereo output of your smartphone or tablet to a balanced input on a mixer, anything in the center of the stereo field (most likely the main vocals, kick, snare, and bass instruments) will be canceled. The balanced input flips one leg and sums the left and right channels from the stereo cable, so anything common to both ends up 180 degrees out of phase with itself. Essentially, the balanced input treats the center image as “noise” and removes it.
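The same math explains the vanishing vocal. Here’s a simplified Python sketch that treats the center and side content as mono sample lists:

```python
def balanced_input_sum(left, right):
    """A balanced input treats the two conductors as hot and cold:
    it flips one and sums, which works out to left minus right."""
    return [l - r for l, r in zip(left, right)]

center = [0.4, 0.4, 0.4]       # vocals, kick, snare, bass live here
left_only = [0.2, 0.0, 0.1]    # content panned hard left
right_only = [0.0, 0.3, 0.0]   # content panned hard right

left = [c + l for c, l in zip(center, left_only)]
right = [c + r for c, r in zip(center, right_only)]

# Everything common to both channels (the center image) cancels;
# only the side content survives:
print(balanced_input_sum(left, right))
```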
Can I Convert Balanced Into Unbalanced and Vice Versa?
Yes, you can, and that’s exactly what DI (direct injection) boxes and reamp boxes do. A DI box will convert unbalanced instrument level signals to balanced line level signals and reamp boxes do the opposite—balanced line levels to unbalanced instrument levels. If you’re unfamiliar with these devices and how they work, check out my Dojo video on how to reamp your guitar.
Until next time, namaste and keep making your music!
Your favorite stomps are real-time, tactile sound processors. Plug them in and expand your DAW’s options.
Welcome to another Dojo. This time I want to help supercharge your creative process by advocating for a hybrid approach to effects processing. Specifically, I want you to embrace using stomp pedals as real-time, tactile effects processors and combine them with your favorite DAW effects and plugins.
You should be deeply familiar with how to insert plugins directly on your DAW tracks’ aux sends (for serial processing with modulation and time-based effects, like reverbs and delays) or aux buses (for parallel processing, like compressors). But what about guitar pedals? Yes, they’re typically used in live performance settings and get a lot of abuse on the stage and studio floor, but with the explosion of modern, programmable, MIDI-capable pedals on the market and their ever-increasing processing power, “lowly” stompboxes are long overdue to be elevated to the same level as rack effects and kept within arm’s reach on your mixing desk.
Pedals can add analog warmth, hands-on control, and modularity, are easy on your computer’s RAM and CPU resources, are always OS compliant, and retain their value.
Turning Knobs vs. Mouse-Clicking
When it comes to tracking and mixing, turning knobs on a physical, controllable surface, such as a pedal, mixing console, or MIDI controller, can provide a more tactile and intuitive way to make adjustments to your sound, compared to using a mouse to click and drag on virtual knobs and sliders within your DAW. Let’s face it, in the heat of a session this can be tedious—especially if you run out of trackpad or mousepad space while recording or mixing.
But what about exploring some hybrid approaches that take advantage of both formats? After all, plugins provide flexibility, precision, consistency, automation, portability (nothing to lug around), cost-effectiveness (cheaper than outboard gear), and compatibility (the same plugins can work on multiple DAWs). Pedals can add analog warmth, hands-on control, and modularity, are easy on your computer’s RAM and CPU resources, are always OS compliant, and retain their value (how much are original Klons going for now?!). They can also help you achieve a unique and personalized sound that can be difficult to replicate with digital plugins and their stock presets. Many modern foot pedals can also handle both line level and instrument level inputs.
Builders—Strymon, Eventide, Boss, EarthQuaker, Empress, Meris, Chase Bliss, and many more—have a wide range of pedals that are MIDI capable and, quite frankly, have processing power that far surpasses many classic rack effects units.
So, I’d like to offer some creative ways to use pedals, in addition to your regular plugins, in your next session or mix:
Getting Ready
Before starting, be aware that some pedals are looking for instrument-level and not line-level inputs (the latter is what typically is output from your interface). You can find helpful info in my September 2022 Dojo column, “What You Should Know Before Using Guitar Pedals with Other Instruments.”
To start, duplicate the track(s) you want to process with your effects pedal(s) in your DAW and route the output of those tracks to one or two of your line outputs on your interface. Depending on the pedals in question, you may have options for mono in and out, mono in/stereo out, or stereo in and out. Connect all relevant cables and connect the output of the last pedal to the input of your interface. If your incoming signal is low, switch from line to mic on your interface for each input.
Next, in your DAW, create one mono or one stereo track, depending upon how you are going to return the processed signal from your interface, and record-enable the track(s). Now you’re ready to record new, processed material (from one pedal or your entire pedalboard!) in real time and take advantage of every parameter on each pedal.
You can now use your pedals to adjust distortion levels, reverb, and delay times in real-time (with all the glorious artifacts, glitches, and smears), as well as adjust tremolo rates and chorus depths on the fly. Get creative! Take chances and invite any and all happy accidents!
One particular approach I love is throwing loop pedals into this equation, after all the other pedals, for some wild, abstract processing. My signal flow usually goes from overdrive to mod-based effects (chorus, phaser, tremolo) to time-based effects (delays and reverbs) followed by a looper. At present, my favorite looper pedal for this by far is Habit by Chase Bliss ($399 street). It has three minutes of loop time and can take user-definable snippets of your loop, play them back asynchronously, feed that back into the loop itself, and record all modifications as well (and this is just scratching the surface). Highly recommended!
Combine this “out-of-the-box” technique along with your normal “in-the-box” workflow and you should be creating some pretty amazing sounds. Let me know if you find a cool approach! I’ll share it in the Dojo channel.
Until next time, blessings, and continue to share your gifts with the world. It matters, and you matter!