Streaming platforms each have their own volume standards for uploaded audio, and if you don’t tailor your mixes to them, you risk losing some dynamic range.
Here’s the scenario: You’ve finished your latest masterpiece, and now it’s time to start considering how your mixes and their loudness levels will be perceived across all digital platforms (Apple Music, Spotify, Amazon, etc.). You might also want to make sure your music adheres to the strict broadcast audio standards used in film, TV, podcasts, video games, and immersive audio formats like Dolby Atmos.
These considerations, among many others, are typically reserved for mastering engineers. However, you may not have the budget for a mastering engineer, so in the meantime I’d like to give you some expert advice on making sure your loudness levels are in check before you release your music into the wild. Tighten up your belts, the Dojo is now open.
Hail LUFS Metering!
LUFS (Loudness Units relative to Full Scale) is unique in that it attempts to measure how the human brain perceives loudness. It does this by using a K-weighted scale with 400 ms “momentary” measurement windows (each overlapping the next by 75 percent), resulting in super smooth and accurate readings. This momentary method also allows for additional short-term and long-term LUFS readings (Fig. 1), and it is this latter measurement, long-term LUFS (aka integrated LUFS), that all of the digital music platforms pay the closest attention to. For those who are curious, the K-weighting curve places less emphasis on bass frequencies and more emphasis on frequencies above 2 kHz, and is a refined emulation of how humans perceive sound. It is not a musical scale like C harmonic minor, but rather a frequency weighting applied to the audio before its loudness is measured.
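If that windowing description feels abstract, here’s a minimal Python sketch of the measurement logic. It assumes a mono signal that has already been K-weighted (the filter stage is omitted entirely), so treat it as an illustration of the momentary and integrated calculations rather than a spec-complete meter.

```python
# Minimal sketch of the momentary/integrated measurement described above.
# Assumes `x` is a mono, ALREADY K-weighted signal; the K-weighting filter
# itself is omitted for brevity.
import numpy as np

def momentary_loudness(x, rate):
    """400 ms windows with 75 percent overlap (100 ms hop), each in LUFS."""
    block = int(0.400 * rate)   # 400 ms measurement window
    hop = int(0.100 * rate)     # 75 percent overlap = 100 ms steps
    readings = []
    for start in range(0, len(x) - block + 1, hop):
        mean_square = np.mean(x[start:start + block] ** 2)
        readings.append(-0.691 + 10 * np.log10(mean_square + 1e-12))
    return np.array(readings)

def integrated_loudness(x, rate):
    """Long-term ("integrated") loudness: a gated average of the momentary blocks."""
    blocks = momentary_loudness(x, rate)
    blocks = blocks[blocks > -70.0]  # drop near-silence (absolute gate)
    mean_power = np.mean(10 ** ((blocks + 0.691) / 10))
    relative_gate = -0.691 + 10 * np.log10(mean_power) - 10.0
    gated = blocks[blocks > relative_gate]  # drop quiet passages (relative gate)
    return -0.691 + 10 * np.log10(np.mean(10 ** ((gated + 0.691) / 10)))
```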
The Wild West of dBs
Less than 10 years ago, there was no loudness standard for any of the audio-streaming platforms. In 2021, the Audio Engineering Society (AES) issued their guidelines for loudness of internet audio-streaming and on-demand distribution in a document named AESTD1008.1.21-9, which recommends the following:
News/talk: -18 dB LUFS
Pop music: -16 dB LUFS
Mixed format: -17 dB LUFS
Sports: -17 dB LUFS
Drama: -18 dB LUFS
However, most services have their own loudness standards for music submission.
“We adjust tracks to -14 dB LUFS, according to the ITU 1770 (International Telecommunication Union) standard. We normalize an entire album at the same time, so gain compensation doesn’t change between tracks.” —Spotify
They are not alone; YouTube, Tidal, and Amazon also target -14 dB LUFS. Deezer uses -15 dB LUFS and Apple Music has chosen -16 dB LUFS, while SoundCloud has no loudness target at all.
To make things more confusing, some services automatically normalize songs to match their predefined LUFS target. Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade. This ensures that the listener never has to reach for the volume knob from song to song.
“Think of normalization as a way of dynamically homogenizing all audio on their platform to the same volume level, regardless of genre or decade.”
What does that mean for your music? If you upload a song to Spotify above -14 dB LUFS, they will turn it down, which means the dynamic range you squashed to get that extra loudness was sacrificed for nothing. If the song is below -14 dB LUFS, they will normalize it, or in other words, turn it up to match all the songs on the platform (listeners can switch normalization off if they choose), and because a limiter may be applied to keep the louder signal from clipping, you can still suffer some dynamic-range loss.
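To make that concrete, the normalization itself is just gain math against the platform’s target. Here’s a rough sketch with made-up numbers; the -14 LUFS target and -1 dBTP ceiling are only illustrative, and each service handles the turn-up case in its own way.

```python
# Rough sketch of platform-style loudness normalization (illustrative numbers only).
TARGET_LUFS = -14.0  # a Spotify/YouTube-style target

def normalization_gain(integrated_lufs, true_peak_dbtp, ceiling_dbtp=-1.0):
    """Gain (in dB) a platform might apply to hit its loudness target."""
    gain_db = TARGET_LUFS - integrated_lufs
    if gain_db > 0:
        # Turning a quiet track up is limited by its true-peak headroom,
        # unless the service chooses to limit the signal instead.
        gain_db = min(gain_db, ceiling_dbtp - true_peak_dbtp)
    return gain_db

print(normalization_gain(-9.5, -0.2))   # loud master: about -4.5 dB of turn-down
print(normalization_gain(-18.0, -6.0))  # quiet master: up to +4.0 dB of turn-up
```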
However, that same quiet song on YouTube will not be turned up even though they use the same -14 dB LUFS target. Apple Music normalizes as well, turning quieter songs up relative to their peak levels and using both track and album normalization. Deezer and Pandora always use normalization, but only on a per-track basis, while Tidal uses album normalization. Confusing, right? So, how can we make our mixes sound their very best and perhaps get an idea of what they will sound like on various platforms?
1. Before you use any type of plugin (compression, limiting, EQ) on your stereo bus, make sure your dynamic range within the song itself is intact, and nothing peaks over 0 dBFS on your meters—no little red lights should be triggered.
2. Use an LUFS metering plugin like Waves’ WLM ($29), FabFilter’s Pro-L 2 ($169), or iZotope’s Insight ($199). (If you’re comfortable with a little code, there’s also an offline way to spot-check your numbers; see the sketch after this list.)
3. Set your true-peak limiter ceiling to -1 dBTP and aim for a long-term (integrated) reading of -14 dB LUFS, and you’ll be in the sweet spot.
4. Play your song from beginning to end, look at the readings, and adjust gain accordingly.
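If you’d like to double-check those readings outside your DAW (see step 2), a few lines of Python will report your bounce’s integrated loudness and peak level. This is only a sketch: it assumes the third-party soundfile and pyloudnorm packages are installed, and "my_mix.wav" is a placeholder filename.

```python
# Offline spot-check of a bounced mix (pip install soundfile pyloudnorm).
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_mix.wav")  # "my_mix.wav" is a placeholder path

meter = pyln.Meter(rate)  # BS.1770-style meter
integrated = meter.integrated_loudness(data)
peak_db = 20 * np.log10(np.max(np.abs(data)))  # sample peak, not true peak

print(f"Integrated loudness: {integrated:.1f} LUFS (the streaming sweet spot is around -14)")
print(f"Sample peak: {peak_db:.1f} dBFS (leave roughly 1 dB of true-peak headroom)")
```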
Next month, I’ll be showing you some creative ways to use reverb! Until then, namaste.
The music industry is leaving brilliant artists high and dry. What do we stand to lose?
Great jazz drummer Milford Graves was an innovator in every sense of the word. The definition of a polymath, he did so many things, from botany to computer science, at such a high level that it was hard for those in the know to think of him as any one thing. However, one little-known thing is that young Milford was also an early pioneer of independent records, meaning he was one of the first musicians to record, press, and release his own albums. Even less well known is that he was responsible for introducing John Coltrane, one of the biggest of the jazz names within the major-label pantheon, to this idea near the end of Coltrane’s life.
At the time, most artists struggled for control and more equitable treatment by record labels, who routinely signed them to predatory deals while controlling almost every aspect of their careers. And of course, at the end of the day, these labels owned the lucrative masters, ensuring they’d get the lion’s share of any profits across multiple generations. In fact, many major labels were built by exploiting such jazz deals.
Record labels, and in fact the entire industry, are byproducts of the technological innovations in sound recording made at the end of the 1800s. By the time Milford met Coltrane, record players in every household, and record enthusiasts who filled their collections with their favorite artists, had become cultural bedrock. Recording artists made a small percentage, a royalty, on every record sale. For artists such as Coltrane, those royalties accumulated to make him one of the biggest earners in jazz toward the end of his life.
“Just like internet providers, they sold the idea to the public that information—music—was free, but the pipeline that supplied it—their networks—wasn’t.”
When CDs arrived in the 1980s, they were incorporated into the existing model. Though they eventually replaced vinyl, CDs actually injected even more cash into the major labels, which reissued their back catalogs and convinced most people to repurchase their entire collections. Just as everything changed with the invention of the phonograph, it changed again with the invention of the MP3. MPEG-1 Audio Layer 3 is a data-compression format that took the large digital files on CDs and turned them into something comparatively tiny that could be stored on an iPod or transmitted across the internet. Unlike vinyl, cassettes, or CDs, however, MP3s did not inject more cash into the now extremely large and very prosperous music industry, at least not at first. Instead, they undercut record sales, as collectors “ripped” their entire CD libraries and shared them for free via peer-to-peer file-sharing applications like Napster, LimeWire, and Kazaa. If vinyl had originally established music in the minds of listeners as a tangible product that could be bought and sold, the MP3 did the exact opposite: Music was now intangible, and potentially free.
In early attempts to monetize sharing, streaming services such as Pandora and Spotify began offering massive collections of illegitimately procured MP3s to stream for a fixed monthly subscription, which exacerbated the problems labels were already facing by doing away with record collections altogether. Just like internet providers, they essentially sold the idea to the public that information—music—was free, but the pipeline that supplied it—their networks—wasn’t. This approach allowed them to make billions in a very short period of time, while sharing none of this profit with the artists and record companies who owned the rights.
What followed was a series of futile attempts by major labels to shut down streaming. They eventually realized that this would never work because the nature of the product had already shifted in the mind of the consumer. Listeners would never go back to owning a handful of records purchased from a closed network of highly curated stores at a premium. People now expected access to the entirety of recording history from their smartphone, at a moment’s notice. What ensued were backroom negotiations in which record labels agreed to grant streaming companies licenses to their massive catalogs in return for a cut of the subscription revenues.
Such deals were a temporary fix with one major flaw: Record companies didn’t advocate for their artists, who actually made the music. Since the licensing deals between labels and streaming companies were opaque, artists now had no way of knowing how much labels made from their music, and predictably, their royalties continued to vanish before their eyes.
All of this eventually brought us the current unsustainable scenario: A major artist might accrue five million streams of their hit song over a year, yielding just $11,900, according to Spotify’s rate calculator. (For reference, the federal poverty level for an individual currently stands at $15,060.) For the average artist, who may limp to 2,000 streams per month, that royalty becomes just $4.76, or a loaf of bread!
The huge inequity this demonstrates has become one of the major hurdles that both the music industry and the music-rights community must solve if they wish to continue to have jobs. In this scenario, successful recording artists like Coltrane might never have been able to afford to become musicians in the first place, and Graves might have stuck to botany!