Interview with Petr Malyukov of dTelecom
Enterprise technology is shifting toward autonomous systems and intelligent infrastructure. In this interview, Petr Malyukov explains why agentic AI represents a fundamental change in how organisations design digital architecture. Discover how decentralized infrastructure, real-time communication, and traceable decision-making can help build scalable and trustworthy AI operations.
Can you share your professional journey and the key experiences that have shaped your perspective on technology, artificial intelligence, and digital infrastructure?
My perspective is built on 17 years’ worth of navigating the intersection of telecommunications and emerging tech. Early in my career, I focused on high-load systems at companies like Connect.Club, which gave me a good foundation for building digital architectures.
But the real turning point came in 2022. At the time, I was building an AI-powered translation startup, YOUS, and our team had hit a wall. We realized that the real-time communication (RTC) infrastructure of the last decade was never designed for the “AI era.” Centralized clouds were too slow, too expensive, and too opaque.
This realization led to the creation of dTelecom. We didn’t just want to build another app; we set out to redefine the infrastructure itself using DePIN (Decentralized Physical Infrastructure Networks).
In short, my journey has been a transition from managing traditional tools to governing decentralized ecosystems where the users themselves actually own the rails they communicate on.
How have you seen the role of technology leadership evolve over the past decade, and what insights have most influenced your thinking today?
Broadly speaking, technology leadership has shifted from “managing systems” to “orchestrating intelligence.” A decade ago, a CTO’s job was about ensuring 99.9% uptime for human users. Today, we are increasingly building systems for a digital workforce of AI agents rather than humans.
The most influential shift in my eyes has been the move toward the “Revenue Era” of Web3. That’s when the industry as a whole moved past the hype of “blockchain for blockchain’s sake.” From that point on, most projects were no longer built just for experimentation or short-term, hype-driven gains.
Now that we’re in 2026, leadership – to me, at least – is all about operational readiness and measurable ROI. If your decentralized stack doesn’t provide a clear cost advantage, it’s just a science project instead of a practical solution. The real goal today is to build for stable real-world use and lasting trust, not just for the next token generation event.
What does agentic AI really mean in an enterprise context, and why does it represent a strategic shift rather than a technical upgrade?
For enterprises, agentic AI means a shift from systems that focus on information retrieval to those capable of intentional action.
Here’s a simple example: typical AI tools like chatbots can tell you that your flight was delayed, and that’s about the extent of it. Agentic AI, meanwhile, goes further: it recognizes the delay, rebooks your ticket, sends an update (via message or voice call), and adjusts your calendar. And that entire process is done autonomously, with no input from you as the user.
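The loop behind such an agent can be sketched in a few lines: observe an event, map it to a plan of actions, and execute them without waiting for the user. This is an illustrative sketch only; the handlers (`rebook_ticket`, `notify_user`, and so on) are hypothetical stand-ins, not a real API.

```python
# Minimal agentic loop for the flight-delay scenario: the agent
# observes an event, selects actions from a playbook, and runs them
# autonomously. All handlers are hypothetical stand-ins.

def rebook_ticket(flight):
    return f"rebooked {flight} to next departure"

def update_calendar(flight):
    return f"calendar shifted for {flight}"

def notify_user(message):
    return f"sent: {message}"

# Policy: map an observed event to a plan of actions.
PLAYBOOK = {
    "flight_delayed": [rebook_ticket, update_calendar],
}

def run_agent(event, flight):
    actions = PLAYBOOK.get(event, [])
    results = [act(flight) for act in actions]
    # Close the loop: report back without requiring user input.
    results.append(notify_user(f"{event} on {flight} handled"))
    return results

print(run_agent("flight_delayed", "LH123"))
```

The key property is that the user appears nowhere in the loop: the event itself triggers the plan.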
This change is strategic in nature because the operational bottlenecks shift from human capacity to the quality of the infrastructure itself. If an agent takes 3 seconds to “hear” your input and “think” before responding in a voice workflow, the agentic loop breaks. The interaction feels unnatural and awkward.
Organizations that are serious about deploying AI agents at scale already realize that to do so effectively, they need to own and control the real-time communication layer that connects those agents to the rest of the world.
As AI systems become more autonomous, how are organisations redefining decision-making, accountability, and risk ownership within their operations?
We are moving from the “human-in-the-loop” model to what can be called a “human-ON-the-loop” one.
Organizations are now treating AI operations like high-frequency trading: you don’t approve every single transaction, but you govern the algorithm. Or, in other words, the AI’s decisions happen automatically, but humans still monitor the system.
This redefines accountability, bringing it to what I call the “Replayability Test.” If an autonomous agent commits a firm to a contract or makes a medical recommendation, the organization must be able to “replay” that decision and reconstruct exactly how it was made.
They need to know which model version was used, what data was behind the AI’s output, and what logic was applied. It’s all about traceability. Without that “proof spine,” organizations cannot take ownership of the risks associated with autonomous AI.
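A minimal version of such a decision record simply captures every field needed to replay the outcome: model version, a hash of the input data, and the logic applied. The field names and schema below are illustrative, not a standard.

```python
import hashlib
import json

def record_decision(model_version, input_data, logic, output):
    """Log every field needed to reconstruct an autonomous decision."""
    return {
        "model_version": model_version,   # which model decided
        "input_hash": hashlib.sha256(     # what data it saw
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "logic": logic,                   # which rule/prompt was applied
        "output": output,                 # what it decided
    }

rec = record_decision(
    model_version="agent-v2.1",
    input_data={"case_id": 42, "symptom": "fever"},
    logic="triage_rules_v7",
    output="refer_to_gp",
)
# Replaying means re-running the same model version on the same
# (hash-verified) input with the same logic and comparing outputs.
print(rec["model_version"], rec["input_hash"][:12])
```

Hashing the input (with `sort_keys=True` for a canonical serialization) lets an auditor verify which data was used without the log itself storing sensitive payloads.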
What are the key challenges in ensuring that agentic AI systems remain compliant, explainable, and trustworthy under the EU AI Act?
The biggest challenge is that many centralized AI providers still treat their systems as “black boxes.”
Under Articles 50 and 73 of the EU AI Act, high-risk AI systems must be transparent and traceable. But if you rely on a proprietary, centralized API, you often don’t have the evidence you need to show regulators. You can’t demonstrate openly and transparently how AI decisions are being made in your work.
At dTelecom, we address this problem by embedding compliance into the infrastructure. By using a decentralized architecture on Solana, we can provide verifiable logs of AI-agent interactions without compromising privacy.
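One common way to make a log verifiable without exposing its contents is a hash chain: each entry commits to the hash of the previous one, so any after-the-fact edit is detectable. The sketch below is generic Python for illustration, not dTelecom’s actual on-chain implementation.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(chain, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "entry": entry, "hash": digest})
    return chain

def verify(chain):
    """Recompute every link; any edited entry breaks the chain."""
    prev = GENESIS
    for link in chain:
        body = json.dumps(link["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if link["prev"] != prev or link["hash"] != expected:
            return False
        prev = link["hash"]
    return True

log = []
append_entry(log, {"agent": "a1", "action": "call_started"})
append_entry(log, {"agent": "a1", "action": "call_ended"})
print(verify(log))  # True; editing any logged field makes it False
```

Anchoring the latest hash on a public chain (rather than the entries themselves) is what allows verification without publishing private content.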
Compliance in 2026 isn’t just a legal checklist; it’s tied directly to technical capabilities. If a company can’t reconstruct an AI decision with evidence, it effectively cannot operate in Europe.
How can European enterprises leverage advanced AI capabilities to remain globally competitive while still maintaining strong regulatory standards and ethical responsibility?
They have to embrace Sovereign AI Infrastructure. To be competitive on a global scale, companies need to control their costs and avoid overdependence on external providers. For example, many voice AI services delivered today through U.S.-centric cloud providers can eat 30-50% of your operating margins. That’s an expense few organizations can afford.
Decentralized infrastructure like DePIN offers an alternative by enabling localized, high-performance compute that stays within European jurisdictional boundaries. This allows firms to meet the EU’s high ethical standards – such as data residency and explainability – while significantly reducing operating costs. By some estimates, it can be around 12 times cheaper than relying on traditional platforms like Vapi.
In that context, sovereign AI is a technological and economic necessity for any business that intends to compete on the global stage.
Looking ahead, how do you see agentic AI transforming enterprise strategy and infrastructure over the next five to ten years, and what should business leaders prepare for today?
In the next five years, I fully expect that AI agents will become the primary users of the internet.
More and more, we will see Machine-to-Machine communication happening in real time. Personal AI agents will be “calling” a business’s AI agent to negotiate services or resolve issues in milliseconds. The current infrastructure, built primarily for human communication and slow human reaction times, will buckle under that load.
Business leaders should start preparing now by reducing their dependence on vendors and their “black box” models. Instead, they should focus on building “agent-native” tech stacks that prioritize low latency (<200ms), verifiable decision-making, and decentralized ownership. Infrastructure like that will become necessary to support large-scale AI interactions.
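In practice, a latency target like that is kept honest by budgeting each stage of the agent pipeline and failing loudly when the budget is blown. The 200 ms figure comes from the text; the pipeline stages and names below are a hypothetical sketch.

```python
import time

BUDGET_MS = 200  # end-to-end target for a voice agent turn

def timed(stage_fn, arg):
    """Run one pipeline stage and measure its wall-clock time in ms."""
    start = time.perf_counter()
    result = stage_fn(arg)
    return result, (time.perf_counter() - start) * 1000

def respond(audio_in):
    # Hypothetical stages: transcribe -> think -> synthesize.
    total = 0.0
    text, ms = timed(lambda a: f"heard:{a}", audio_in)
    total += ms
    reply, ms = timed(lambda t: f"reply-to:{t}", text)
    total += ms
    audio, ms = timed(lambda r: f"audio:{r}", reply)
    total += ms
    if total > BUDGET_MS:
        raise RuntimeError(f"latency budget blown: {total:.1f} ms")
    return audio, total

out, elapsed = respond("hello")
print(out, f"{elapsed:.2f} ms")
```

Measuring per stage, rather than only end to end, shows which hop (network, model, or synthesis) to move closer to the user when the budget is exceeded.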
In the end, the winners won’t be those with the biggest models, but those who can run them most efficiently and transparently.