Interview with Orfeas Boteas, CEO of Krotos 

Orfeas Boteas, a sound designer turned entrepreneur, grew frustrated with the slow, rigid tools available to audio professionals. In this Q&A, the Krotos CEO shares how that frustration led to building industry-shaping solutions — and why AI-powered tools should empower, not replace, the creatives working in post-production. 

What first sparked your interest in sound design, and how did that passion evolve into founding Krotos? 

I’ve always loved music and technology, so studying Music Technology felt like a natural path. After graduating, I worked in post-production for a few years and got my first taste of sound design while working on a short film. I realised how much power sound has to add emotion and impact to a visual story, and I was hooked. 

That led me to Scotland to pursue a Master’s in Sound Design. While working on a video game, I needed to create monster sounds and found the process incredibly time-consuming—layering plugins, editing sounds, and manually combining elements. For my final project, I built a piece of software that let me create those monster sounds in real time. That became Dehumaniser. 

Initially, I gave it away for free and was surprised when a few thousand people downloaded it. That’s when I realised there was a real opportunity to improve how people work with sound. So I founded Krotos to help sound designers and creators be more creative, faster, and with fewer barriers.  

As a leader in a rapidly evolving creative tech space, how do you balance innovation with maintaining a clear creative vision for your team and your products? 

For me, innovation has to serve a purpose. We focus on innovating to improve workflows, not just for the sake of doing something new. The goal is always to help people work faster, be more creative, and maintain the highest quality. 

We constantly push ourselves to break boundaries, but we stay anchored to our core vision: to change the way people design and perform sound by removing barriers between ideas and execution. Whether we’re developing AI tools or refining user interfaces, the question we always ask is: Will this genuinely help creators do their job better and with more creative freedom? 

Internally, I encourage the team to question why we’re building something, not just what we’re building. That keeps us aligned and ensures we’re innovating with intention rather than chasing trends.  

How have you seen the role of sound designers change with the introduction of AI-assisted tools, particularly in post-production workflows?  

AI is shifting the role of sound designers toward the truly creative parts of the job. Instead of spending hours editing, searching through endless libraries, or manually processing files, they can focus on shaping the sound and telling stories. 

Sound designers will increasingly spend more time crafting the unique emotional and artistic aspects of a project, while AI handles the repetitive, time-consuming tasks in the background. The creative vision remains human—it’s just supported by tools that reduce the barriers to realising ideas quickly. 

In what ways is AI transforming traditional approaches to Foley and ambient sound in film, television, and gaming?  

AI lowers the barrier to entry, allowing people to create sounds in completely new ways and expanding what’s possible.

Traditionally, Foley and ambient sound are painstakingly crafted through physical recording sessions and meticulous editing. That’s still an important art form. But AI is opening up complementary workflows that can generate high-quality results far more quickly. 

For instance, AI can analyse visuals and suggest relevant soundscapes. Or it can generate variations on a Foley effect—say, footsteps on different surfaces—so designers aren’t stuck repeating the same samples. One example of this is an AI-powered tool that interprets images or text prompts to suggest appropriate ambient soundscapes—helping streamline the process of building backgrounds without manually layering individual sound files. 

But it’s not about replacing the craft. It’s about giving professionals new options and freeing them from time-consuming tasks so they can focus on the creative details that make a scene believable and emotionally impactful. 

What are some of the biggest misconceptions about using AI in sound design and how do you address concerns about creative authenticity?

The biggest misconception is that AI will do all the creative work and leave humans redundant. That’s simply not true—nor should it be. 

AI can’t replicate human taste, intuition, or creative vision. It can generate raw material, suggest possibilities, or handle tedious tasks, but it’s the sound designer who shapes those results into something meaningful and emotionally resonant. 

Another misconception is that AI somehow makes all sound design feel the same, stripping away uniqueness. That only happens if creators use AI outputs blindly, without curating or refining the results. The same tool in two different designers’ hands will produce completely different outcomes because it’s driven by human choices. 

In my view, AI tools should remain firmly under the user’s control—serving as creative collaborators rather than taking over the process. The goal isn’t to replace the artist, but to remove friction from their workflow and support their creative decisions.  

How do you see the relationship between human creativity and AI evolving in professional sound design environments? 

I see AI and human creativity becoming more collaborative and fluid. The future isn’t about either/or; it’s about human+AI.

AI will increasingly handle the groundwork: creating drafts, suggesting ideas, automating file management, and removing bottlenecks in the workflow. That means creatives will spend more time making high-level artistic decisions rather than wrestling with technical minutiae. 

In the same way that digital audio workstations revolutionised editing and mixing, AI will become another essential tool in the sound designer’s toolbox. But the spark, the artistry, will always come from humans.

I’m excited by the idea that AI could help people who’ve never worked in sound before start exploring it creatively. One of my own motivations has been making sound design more accessible — not just for professionals, but for anyone with a story to tell.

Looking ahead, how might emerging technologies reshape the way sound is created, edited, and integrated into immersive media experiences? 

Emerging tech is going to change sound design in ways we’re only beginning to imagine. Generative AI, interactive experiences, and mixed reality demand new approaches to how sound is created and delivered. 

I believe AI will become even more integrated with visuals, capable of dynamically generating soundscapes in response to real-time stimuli, whether that’s gameplay, VR experiences, or interactive films. 

We’ll also see more intelligent tools that understand narrative context, emotional tone, and even audience reactions, allowing sound designers to fine-tune experiences on the fly. 

Ultimately, sound is becoming more than just an accompaniment to visuals; it’s becoming an equal storytelling partner. The future of immersive media will belong to those who can harness technology without losing the human touch that makes sound so powerful.

Executive Profile

Orfeas Boteas is the Founder and CEO of Krotos, a world leader in AI audio technology used in major productions like Avengers and Game of Thrones. A Royal Society of Edinburgh Fellow and two-time Edge Award winner, he’s now spearheading Krotos Studio to democratize cinematic-quality sound creation for content creators worldwide.
