Helmuts Bems - How AI is Changing Music Production

Interview with Helmuts Bems of Sonarworks

As AI reshapes creative industries, Sonarworks CEO Helmuts Bems offers a clear-eyed look at its role in music production. In this Q&A, he reflects on technological disruption, shifting business models, and how AI-assisted tools are unlocking new possibilities for artists, producers, and platforms navigating an increasingly fast-paced digital landscape. 

Your career intersects music, technology, and innovation in remarkable ways. Can you walk us through the key milestones that shaped your path to becoming a leading voice in AI and music?

I first studied for a business degree, and I went on to study physics. I chose business because I thought it would help my career, and physics because I was passionate about it. I started my first company while still in university (studying physics), and it eventually took off, requiring me to drop out. That first company was a success. Shortly after, I started a second one, which failed miserably and quickly. After that reality check, I went to work for the first venture capital fund in Latvia (2007–2010). That was an eye-opening experience, and I learned just how important technology is in everything.

In 2012, we started Sonarworks — a pure technology play. Initially, it was all about data, software, and research. As Sonarworks evolved, we began developing technologies for the consumer electronics segment. Back in 2021, we saw the need to start experimenting with machine learning algorithms, so we built internal capacity in that area. A few years later, LLMs exploded, and we were naturally inspired by that. It quickly became clear that this technology would be the next big disruptor for the industry.

We formed an internal think tank to explore the impact AI would have on our industry (music production). In parallel, we also began developing our AI-assisted product line. Our mission is to identify problems in the industry that we can address meaningfully with technology — ultimately to help creatives.

What first drew you to explore the possibilities of AI within music creation, and how has your vision evolved as the technology has matured?

Initially, we formed an internal think tank to assess AI technology. This grew into a broader internal market and technology research project. The technology was new and developing so quickly that we felt no one could fully grasp its impact. It quickly became clear that this was the next big thing — and that we either commit and jump on board, or risk becoming irrelevant.

When we started our think tank, LLMs were just beginning to emerge as the powerhouses they are today. It was clear that AI’s capability to generate music would eventually evolve; however, none of us initially thought it would happen this fast. Now, the trajectory for AI in music is much clearer, which allows us to start assessing its true impact. The speed of development is just unbelievable.

As a leader navigating a fast-changing industry, what personal principles or habits have helped you stay ahead of the curve and drive meaningful change?

I am a strategy-first and execution-second type of personality. That’s just my natural way of being. It’s an advantage in some environments and a disadvantage in others. We have a strategy in place for every function of the company.

I have a high risk tolerance personally and have spent most of my professional life in very high-risk environments. This has helped me develop the skills to cope with and navigate risk and uncertainty. Strategy is one of the best tools for that. Regardless of the environment, I always have a plan.

Sonarworks is a small company, and our size is definitely an advantage in such a disruptive environment. There’s a famous quote from the movie Margin Call: “There are three ways to make money: be first, be smarter, or cheat.” Outsmarting the industry is very difficult, and we do not cheat. So being first is the best strategy.

AI-generated and AI-assisted music is redefining creativity. What are the biggest disruptions you’re seeing today, and how are creators and platforms responding to them?

We believe that producers and composers are poised to be the major beneficiaries of the AI era. Thanks to AI, they’ll be able to generate more content independently, without relying heavily on collaborators to complete their projects. While professional musicians working in areas like background scores and advertising may find fewer opportunities, hobbyists and independent artists will gain new creative freedoms — no longer needing costly equipment or extensive technical skills.

This broadening of access lowers the barriers to music creation, opening the door for many more people to participate. But it also saturates the market, making it harder for individual artists to gain visibility or earn a stable income. In this evolving environment, raw creativity isn’t enough; success will require artists to also act as curators, strategists, and technologists.

On a broader scale, AI will significantly reduce the cost of content production and lead to an explosion of new material. Humans will still play a role, but expectations for productivity will climb sharply. We see the real threat to musicians not in AI-assisted creation, but in fully AI-generated content. Ironically, it’s the AI tools designed to assist humans that offer the strongest counterbalance to this trend.

Economically, the advantage will go to those who adapt — producers, composers, and creators who leverage AI to maximize their productivity and creative output. However, it also suggests that a growing share of revenue from streaming, licensing, and royalties may shift from artists to platforms and AI technology companies.

In terms of response, I think artists and streamers are both experimenting. Labels do what they always do: try to defend what they have.

Many in the industry are debating ownership, authenticity, and monetization in AI-generated music. How do you see business models evolving to address these complex challenges?

This is very difficult to predict. People tend to think linearly, trying to forecast the future based on experience. However, major technology shifts almost always disrupt existing business models — and usually in ways that are hard to anticipate. I’m quite confident that the models will change. We can look at some of the main factors influencing that change:

  1. I believe the share of cash flow tied to ownership rights will decrease. In turn, the share of revenues going to technology and computing power providers will increase. The point at which these lines are drawn will be determined by legal frameworks, which makes it hard to predict.
  2. Another crucial factor is whether end users will be willing to pay more or less for AI-created music. My linear-thinking gut says that, similar to how downloads disrupted CDs, users will pay less at first — but more in the long run.

What role do you believe real-time AI music will play in areas like gaming, live performance, or immersive experiences—and how can businesses prepare for this shift?

My personal vision is that real-time generation and augmentation will become a major aspect in the long term. At the moment, it’s held back by technology; however, I do believe this will be solved, and real-time or near real-time will eventually dominate.

There are historical analogies: think of royalty in the past. They had musicians available to entertain them while they were eating — and a king could ask them to change the tone if he desired. This would be the same, but for everyone.

Real-time AI processing capability is a major disruptor to how the industry operates today. The current recording industry is, in essence, built on post-processing — and moving to real-time processing would disrupt everything. I’m confident this will eventually happen, but it’s very difficult to put a timeframe on it. My “shoot from the hip” estimate would be 10–15 years from now. If it happens within that timeframe, it would be very disruptive.

I think we will get there, because the human brain can do it — a trained musician can participate in the creation of music in real time, so why couldn’t an advanced machine? This would essentially mean having a personal musician always at your disposal.

I want to stress again: this is just speculation, and we are not currently at that capability.

Looking ahead, what are you most excited about when it comes to AI’s role in music, and how should leaders across industries be thinking about the convergence of creativity and machine intelligence?

I see a massive explosion in creativity in the near term. AI will enable talented artists to create much more than they could have before. It will also encourage many new people to try creating music. This means more content at lower cost — which translates into greater efficiency and productivity. When I think about the AI era, I think about productivity first.

Executive Profile

Helmuts Bems is the CEO and Co-Founder of Sonarworks, a pioneering audio technology company based in Latvia. With a background in economics, physics, and venture capital, Bems has spent his career launching and scaling innovative ventures at the intersection of sound, software, and emerging technologies.
