AI, which generated this image in seconds, can obviously do amazing things. But can it actually replace human creativity?
Technology has always disrupted the music biz, but we’ve never seen anything like this.
AI has me thinking deeply: Is the guitar (or any instrument) still valid? Are musicians still valid? I don’t think the answer is as obvious as I’d like it to be.
As a professional musician, I’ve spent the vast majority of my days immersed in the tones of tube amps, the resistance of steel strings under my fingers, and the endless pursuit of musical expression. Each day, I strive to tap into the Source, channel something new into the world (however small), and share it. Yet, lately, a new presence has entered the room—artificial intelligence. It is an interloper unlike any I’ve ever encountered. If you’re thinking that AI is something off in the “not-too-distant future,” you’re exponentially wrong. So, this month I’m going to ask that we sit and meditate on this technology, and hopefully gain some insight into how we are just beginning to use it.
AI: Friend or Foe?
In the last 12 months, I’ve heard quite a bit of AI-generated music. Algorithms can now “compose,” “perform” (with vocals of your choosing), and “produce” entire songs in minutes, with prompts as flippant as, “Write a song about ___ in the style of ___.” AI never misses a note and can mimic the finer details of almost any genre with unnerving precision. For those who are merely curious about music, or those easily distracted by novelty, this might seem exciting … a shortcut to creating “professional”-sounding music without years of practice. But for those of us who are deeply passionate about music, it raises some profound existential questions.
When you play an instrument, you engage in something deeply human. Each musician carries their life experiences into their playing: the pain of heartbreak, the joy of new beginnings, the struggle to find a voice in an increasingly noisy and artificial online world dominated by algorithms. Sweat, tears, and calluses develop from your efforts and repetition. Your mistakes can lead to new creative vistas and shape the evolution of your style.
Emotions shape the music we create. An algorithm can only infer and assign a “value” to the vast variety of our experiences, yet it is ruthlessly proficient at analyzing the entire recorded corpus of human existence, and at cataloging every known human behavioral action and response, in mere fractions of a second.
Pardon the Disruption
Technology has always disrupted the music industry. The invention of musical notation provided unprecedented access to compositions. The advent of records allowed performances of music to be captured and shared. When radio brought music into every home, there was fear that no one would buy records. Television added visual spectacle, sparking fears that it would kill live performance. MIDI revolutionized music production but raised concerns about replacing human players. The internet, paired with the MP3 format, democratized music distribution, shattered traditional revenue models, and shifted power from labels to artists. Each of these innovations was met with resistance and uncertainty, but ultimately, they expanded the ways music could be created, shared, and experienced.
Every revolution in art and technology forces us to rediscover what is uniquely human about creativity. To me, though, this is different. AI isn’t a tool that requires a significant amount of human input in order to work. It’s already analyzed the minutiae of all of humanity’s greatest creations, from the most esoteric to the ubiquitous, and it is wholly capable of creating entire works of art that are as commercially competitive as anything you’ve ever heard. This will force us to recalibrate our definition of art and push us to dig deeper into our personal truths.
Advantage: Humans
What if we don’t want to, though? In an age where performed perfection is casually synthesized into existence, does our human expression still hold value? Especially if the average listener can’t tell the difference?
Of course, the answer is still emphatically “Yes!” But caveat emptor. I believe that the value of the tool depends entirely on the way in which it is used—and this one in particular is a very, very powerful tool. We all need to read the manual and handle with care.
AI cannot replicate the experience of creating music in the moment. It cannot capture the energy of a living room jam session with friends or the adrenaline of playing a less-than-perfect set in front of a crowd who cheers because they feel your passion. It cannot replace the personal journey you take each time you push through frustration to master a riff that once seemed impossible. So, my fellow musicians, I say this: Your music is valid. Your guitar is valid. What you create with your hands and heart will always stand apart from what an algorithm can generate.
Our audience, on the other hand, is quite a different matter. And that’s the subject for next month’s Dojo. Until then, namaste.
Bryan in a presidential pose before some of the boards at Blackbird Studio.
Take it from English cycling coach Sir Dave Brailsford: With an all-encompassing approach to improving the marginal aspects of your methods, you can get quite the payoff in the quality of your endeavors. And that goes for recording, too.
Technology is a strange bedfellow in the arts. We’re either dazzled or disenchanted, love it or hate it, and the drive behind it all is a relentless need to gain a slight competitive edge on our own creativity—at least that’s how I think of it. Last month I wrote about the benefits of using a modeling microphone on a single source. This month, I want to expand that to a larger format.
Recently, I did a live recording and mixing masterclass with Universal Audio, Guitar Center Pro, and the Blackbird Academy back in Dallas, Texas (my hometown). The format: Record a live performance of a band including acoustic, electric, bass, and drums, plus vocals with additional synth tracks, and then immediately pivot to mixing in the box, all in front of a live audience. I also wanted to do something very different: use modeling mics to record the drum kit while simultaneously using them without modeling for the live performance. My hope was that later, during mixing, I could compare and contrast to see if I could get more of a “studio” sound.
There are many modeling microphone choices on the market today, mostly made by Slate, Antelope, and Universal Audio, ranging in list price from $129 to $1,500. For this masterclass, I used UA’s Standard Microphones with Hemisphere Modeling (starting at $129).
Live vs. Studio
Now, for those of you who read my Dojo offerings regularly, you know I always emphasize mic placement, as well as using as little EQ and dynamics processing as possible. In short, always start by taking as much time as you can to adjust the mic to get the best sound possible before reaching for the EQ knobs on any sound source. If you have more than one mic to choose from, switch mics and listen. Are you getting closer to the sound(s) you want?
After making sure the band was totally happy with their monitor mix and things sounded good in the house, the show began. To ensure that the tracks would be as clean as possible, I recorded the performances into my DAW with no modeling, EQ, or dynamics on the drums (or for the rest of the band). I did use some EQ and a little bit of dynamic control for the live show to keep the vocals out in front of the band.
The drum layout was as follows:
• Overheads: two SP-1s (spaced pair)
• Two rack toms and floor tom: three SD-7s
• Kick drum: one SD-5
• Snare: one SD-3
Marginal Gains
Once I got the drum kit balanced in volume, I brought in a pair of Neumann KH 310 monitors so the masterclass participants could hear what the tracks sounded like in a more “studio” mix environment. I cycled through the various modeled mic profiles to hear the differences until we all reached a consensus as to which model worked best for each specific drum in the kit. (My picks: Neumann KM 54 for overheads, cream-colored Sennheiser MD 421 for toms, AKG D12 for kick, and SM57 for snare.) I could then toggle all the profiles on and off at once and hear a completely modeled-mic drum kit as opposed to the “natural” one. The results definitely raised some eyebrows and proved the efficacy of the “aggregation of marginal gains.” This term was coined by Sir Dave Brailsford, who catapulted British Cycling to legendary achievements and wins by choosing not to focus on big gains in a single area, but rather on highly detail-oriented marginal gains in many areas (“The 1-Percent Factor”). Thus, with seven modeled mics on the kit, the composite result was noticeably more flattering than the unmodeled one, and a more polished “studio” sound was achieved.
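A quick back-of-the-envelope illustration (my arithmetic here, not Brailsford’s): if each of n small refinements improves the result by roughly 1 percent, the compounded gain is 1.01^n. For the seven modeled mics on this kit, that works out to 1.01^7 ≈ 1.07, about a 7 percent improvement overall. Modest per channel, but audible in aggregate.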
You may be asking, “Did the mics sound good in the house without any modeling?” Yes! I found them to be on par with the standard “live mic” stalwarts we all know. Now, this isn’t a review of the microphones so much as a demonstration of what I mentioned at the beginning: that technology can offer us unique possibilities if we start thinking outside conventional norms and use products beyond their primary design. Look around your studio right now, or think about the gear and instruments you have. Can you challenge your creativity and try something new? Can you embrace the 1-Percent Factor?
As for me, my next recorded live gig will very likely be with modeling mics!
Until next time, namaste.
Boss unveils a professional 500-watt bass amp head with advanced onboard technology and a companion two-way speaker cabinet.
Katana-500 Bass Head
The compact Katana-500 Bass Head brings serious bassists a next-generation sound experience backed by decades of BOSS R&D. The amp delivers incredibly clear, punchy, and responsive sound, driven by 500 watts of carefully tuned Class D power. Cab Resonance takes things further, using high-tech calibration to internally fine-tune the power output section for the user’s preferred speaker cabinet.
With a simple button press, the Cab Resonance feature in the Katana-500 Bass Head calibrates the amp’s reactive output stage circuitry to match the unique impedance and frequency properties of the connected cabinet. This proprietary BOSS process analyzes numerous elements, including the lowest frequency the cabinet can produce, high-end resonances, and many other factors. With this information, advanced processing dynamically enhances the output to provide superior feel and response, powerful lows, and high-definition overall sound.
The Katana-500 Bass Head offers a deep selection of controls to hone the primary sound. There’s a four-band active EQ with three adjustable frequency options for both the low-mid and high-mid controls. A Bottom knob tunes the low end for different stages, while three Hi Cut settings smooth the sound as needed. With the Shape button, users can instantly revoice the amp with mid-scoop, bright, and wide-range curves.
The Katana-500 Bass Head also includes sophisticated tools to refine the fundamental sound. Via the Amp Feel switch, the player can apply Modern or Vintage tonal characteristics for different styles. Comp and Drive types are available to control dynamics and introduce grit and aggression, while the FX section provides three bass-tuned effects for color and inspiration. There’s also a Blend knob to dial in some direct bass sound for additional clarity.
The Katana-500 Bass Head integrates a versatile range of connectivity into its compact design. Two locking-style speaker outputs support high-current operation, and the XLR line output can be used to send a direct, pre, or post signal to a house PA. Players can shape the line output voice from the front panel with three mic’d speaker emulation presets and three custom settings. There’s also a dual-function output for headphone practice or analog recording, plus USB for capturing mix-ready tracks in computer music software.
With BOSS Tone Studio for Windows and macOS, players can edit amp parameters and access over 60 BOSS effects to swap in alternate types for the Comp, Drive, and FX sections. Plugging in a GA-FC or GA-FC EX foot controller provides remote operation of numerous amp functions, including the front-panel user memory and two additional memories. The optional Bluetooth® Audio MIDI Dual Adaptor allows users to stream music from a mobile device and wirelessly shape tones via the BTS editor app for iOS and Android.
Katana Cabinet 112 Bass
The Katana Cabinet 112 Bass delivers big, punchy, and articulate sound with any professional bass amp. It’s an ideal match for the Katana-500 Bass Head, providing full support for the amp’s 500-watt output in a compact, space-saving footprint.
The Katana Cabinet 112 Bass features a two-way design with a 12-inch woofer and a high-frequency tweeter. It comes loaded with an Eminence Neodymium Series woofer, a highly regarded driver with powerful sound and reduced weight. Switches on the rear panel allow the tweeter to be bypassed or set to two preset levels. The cabinet features locking-style connectors on the primary speaker input and the link output for connecting a second cabinet.
You can purchase the new BOSS Katana-500 Bass Head and the Katana Cabinet 112 Bass at authorized US BOSS retailers in May for $799.99 and $699.99, respectively.
To learn more about the Katana-500 Bass Head and Katana Cabinet 112 Bass, visit www.boss.info.