By Jacques Bughin
The third wave of AI is shifting focus from generative outputs to agentic orchestration. This article explains how the real competitive edge lies in building AI operating systems that coordinate autonomous agents, workflows, and tools. Control of this orchestration layer will determine which platforms lead the future of enterprise AI.
We are entering the third wave of artificial intelligence. The first wave was predictive, driven by pattern recognition and analytics: although confined to data scientists and those who had mastered ML techniques, it proved that AI could deliver strong value. The second, generative AI, dazzled many of us with its ability to produce human-like text, code, and imagery; its killer app has been making coding and software development a near-commodity.
But the third wave, now gathering momentum, is agentic AI: systems that don’t just generate, but plan, act autonomously, and reflect. What makes them transformative is not just intelligence, but orchestration: the ability to coordinate goals, tools, workflows, and learning loops across complex digital environments.
While agentic systems are still early in development, agentic augmentation has leapfrogged raw model upgrades in less than a year. Agentic systems are being deployed in production, showing strong executional advantages. Moveworks, acquired by ServiceNow in 2025 for $2.9B, uses agents to resolve IT tickets, HR queries, and access requests. In one municipal deployment, over 3,000 hours of human work were offloaded. Their agentic enterprise search tool reduced lookup time by over one hour per employee per day. Success rates exceed 95% for core workflows. Other agents, like OpenAI’s “Operator” and “Deep Research,” show real-world task execution: browsing websites, booking meetings, summarizing reports, and citing live sources.
In this context, and as with every new platform before it (the PC, mobile, and others), the new war is over who will control the AI-native OS (operating system): not merely the models, but the substrate that governs agency. Orchestration, both technically and strategically, is the defining axis of this battle.
Agentic AI needs an OS
Unlike predictive or generative AI, agentic systems pursue goals with minimal supervision. They break objectives into steps, select and invoke tools, execute actions, and reflect on outcomes. These systems are inherently asynchronous, interleaved, multi-agent, and multi-modal. As such, they require a dedicated orchestration layer to manage:
- Workflow memory and context
- Tool access and chaining
- Secure action execution
- Role separation between agents
- Compliance and guardrails
This orchestration requirement distinguishes the AI OS from traditional operating systems. It is more akin to a cloud-native runtime or a distributed coordination protocol. Without this layer, agents are brittle, untrustworthy, or confined to isolated domains. With it, they become scalable, enterprise-grade workhorses.
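To make the orchestration responsibilities above concrete, here is a minimal sketch of such a layer in Python. All names (`Orchestrator`, the roles, the ticket tool) are illustrative assumptions, not any vendor's API; a production system would add asynchronous execution, persistence, and richer guardrails.

```python
# Minimal sketch of an agent orchestration layer (illustrative, not a vendor API).
# It covers four of the duties listed above: workflow memory, tool access,
# guarded action execution, and role separation between agents.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    permissions: dict[str, set[str]] = field(default_factory=dict)   # role -> allowed tools
    memory: list[tuple[str, str, str]] = field(default_factory=list)  # (role, tool, result)

    def register_tool(self, name: str, fn: Callable[[str], str], roles: list[str]) -> None:
        """Expose a tool and declare which agent roles may invoke it."""
        self.tools[name] = fn
        for role in roles:
            self.permissions.setdefault(role, set()).add(name)

    def invoke(self, role: str, tool: str, payload: str) -> str:
        """Guardrail check, then execute and record the action in workflow memory."""
        if tool not in self.permissions.get(role, set()):
            raise PermissionError(f"role {role!r} may not invoke {tool!r}")
        result = self.tools[tool](payload)
        self.memory.append((role, tool, result))
        return result

# Usage: an IT agent may resolve tickets; a search agent may not.
orch = Orchestrator()
orch.register_tool("resolve_ticket", lambda t: f"resolved: {t}", roles=["it_agent"])
print(orch.invoke("it_agent", "resolve_ticket", "VPN down"))
```

The point of the sketch is that the guardrail and the memory live in the orchestrator, not in any individual agent, which is what makes the layer a candidate "OS" rather than a feature of one model.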
Orchestration also underpins economic control. Drawing on platform economics, the AI OS acts as a multi-sided platform: coordinating agents (supply), enterprise users (demand), data and workflow providers (complementors), and infrastructure (foundation). Whoever controls the OS controls pricing, access, monetization, and feedback loops.
The race is on
In this new universe, LLMs may become commoditized. The differentiation would lie in the orchestration stack—how agents are chained, how tools are invoked, and how memory is structured. The OS owner will set the standards, extract value, and control distribution across the ecosystem. The OS also might become the trusted intermediary for sensitive data, workflows, and compliance.
The race for the (agentic) AI platform is on. Microsoft is in it: with Copilot+ now integrated into Windows and M365, it controls the agentic layer for over 400 million enterprise users, and its Graph API and Semantic Kernel are orchestration initiatives. Through ChatGPT Team/Enterprise and the Operator browser, OpenAI is building a Chromium-based OS for agents, complete with memory, an app store, and execution capability, while Perplexity’s Comet is building a vertical agentic stack focused on search and information tasks.
ServiceNow is the most mature enterprise agentic platform, embedding AI OS logic across ITSM workflows. And while not an OS vendor, Nvidia’s control of the orchestration runtime (GPUs plus microservices) makes it foundational: it launched NeMo Retriever, NIM microservices, and the AI Workbench SDK to let developers orchestrate agents across devices and clouds. Google Vertex AI Extensions now support tool use, agent scheduling, and dynamic memory.
A Call-Out from the Road: Driverless Cars as a Case of OS Deployment
To illustrate the stakes and logic of orchestration, consider the battle for control in autonomous vehicles. The race is not just about whose AI sees pedestrians better. It’s about who orchestrates decision-making, safety layers, navigation, and compliance in real time. Tesla, Waymo, and NVIDIA aren’t just shipping hardware—they are building autonomous operating systems like Tesla’s Dojo or Waymo’s Chauffeur. These AI OS layers integrate real-time sensor fusion, traffic-aware planning, edge computing, and failover strategies. They turn intelligence into coordinated, accountable action.
That orchestration logic is what enterprise AI now faces. Like AVs, agents in business must integrate signals, invoke APIs, adapt in real time, and remain compliant. Whoever owns this AI OS stack—whether Microsoft with Copilot, OpenAI’s Operator, or Perplexity’s vertical browser stack—controls not just intelligence but execution. The AV sector teaches that the biggest prize is not perception, but control of the logic layer where risk, data, and performance converge. The same is unfolding in the AI software stack.
Sorting out winners from losers
The battle for Agentic AI OS dominance must take into account more than firm assets. This should include the effects of:
- Open source: From Hugging Face to Warmwind OS, a wave of open-source and cloud-native platforms is challenging closed ecosystems, promising transparency and customization. LangChain’s 90k+ GitHub stars (as of July 2025), 500+ available plugins, and developer traction indicate that open composability is outpacing closed agent stacks in early adoption.
- Geopolitics: Through Huawei, China’s HarmonyOS Next is a strategic move to build a homegrown, sovereign digital ecosystem, while Europe pushes for open frameworks and trusted execution environments. Gaia-X, EU AI Act, and TEEs (Trusted Execution Environments) show a strong preference for auditable, privacy-preserving OS layers. Sovereign LLM efforts (Mistral, Aleph Alpha, Luminous) are all building toward agent readiness.
- Legal and ethical battles: As AI OSs become central, legal disputes (such as OpenAI’s recent trademark and IP controversies) and regulatory scrutiny will likely intensify.
Lessons from the past
If history is any guide—especially the PC, mobile, and cloud eras—it teaches a few lessons.
Control of the orchestration layer consistently decides platform dominance. During the PC era, Microsoft didn’t just build an OS; it orchestrated an entire ecosystem. Windows provided standardized APIs, development tools, backward compatibility, and distribution agreements with hardware partners, making it the default platform for developers and fueling network effects that strengthened its position. The mobile war of the 2000s saw iOS and Android reach dominance through platform orchestration: Apple used vertical integration (hardware, OS, App Store, and SDKs) to guarantee performance, security, and quality, while Android leveraged openness and broad adoption across device manufacturers (Samsung, Huawei, Xiaomi). Platform scholarship emphasizes this dual model: “open enough” to scale, “controlled enough” to monetize, exactly as platforms like Uber or Airbnb balance openness with control. During the cloud era, cloud platforms moved orchestration into the data center: Amazon Web Services, Microsoft Azure, and Google Cloud converged on offering not only virtual machines but also dev toolchains, APIs, and serverless runtimes. This “programmable infrastructure” became the orchestration layer of the cloud era.
Ecosystems—not features—drive platform lock-in. The most successful platforms created massive developer flywheels. Apple did this through its iOS SDK and App Store, offering developers monetization, distribution, and quality control in a single stack. Android scaled globally by opening its OS to device manufacturers while anchoring control through Google Play Services. These ecosystems created positive feedback loops: more developers meant more apps, which attracted more users, which drew in even more developers. In the age of agentic AI, SDKs for building agents, marketplaces for composable tools, and developer-facing orchestration libraries will be the new engine rooms of platform lock-in.
Openness and control must be carefully balanced. Platforms that were too open often failed to capture value, while those too closed risked stagnation. Android succeeded because it was open enough to drive adoption by OEMs, yet retained control through proprietary services and APIs. Kubernetes, an open orchestration framework, became dominant only after managed services by cloud vendors (like GKE or EKS) wrapped it in enterprise-grade compliance and support. The agentic AI OS must offer composability and extensibility while securing monetization layers such as memory state management, compliance APIs, and runtime governance.
Open source shapes the stack but rarely captures the profit. The rise of Linux, PyTorch, and TensorFlow illustrates how open frameworks often define developer standards. However, value capture shifted to those who offered hosted infrastructure, tooling, and compliance. Red Hat, AWS, and Microsoft Azure monetized these ecosystems more effectively than the communities that created them. In the agentic AI context, LangChain, Hugging Face, and LangGraph are winning early adoption, but unless they wrap their offerings in enterprise-grade orchestration and compliance, they risk becoming commoditized.
Regulation is both a constraint and an accelerator of platform consolidation. Past platform giants faced significant regulatory hurdles: Microsoft endured antitrust litigation, Facebook faced data privacy crackdowns, and the GDPR redefined platform responsibility in Europe. In the agentic AI era, regulation will go even further. The EU AI Act classifies agentic systems as “high-risk,” requiring explainability, override mechanisms, and auditable memory. Compliance will not be optional. The platforms that embed safety, audit, and governance into their orchestration layers will gain both trust and a competitive moat.
The futures
These five strategic learnings also lead to three important tensions for the future of the agentic AI OS. The first tension is between centralization and decentralization. Orchestration layers tend to centralize over time due to network effects, but open source and geopolitical forces may resist this. The second tension lies in regulatory burden: platforms may need to slow down or redesign systems to satisfy compliance requirements, or they may embed governance so effectively that regulation becomes a moat. The third tension is the modularity of agentic systems: if agents are portable and composable, they may run across platforms; if not, vertical stacks may emerge.
Crossing those tension lines, three scenarios emerge.
- “Power of the few.” Orchestration is bundled into enterprise stacks by a few dominant players. Here, Microsoft and Nvidia extend their lead: Microsoft integrates agent orchestration into every Office workflow, into Azure, and into its developer tools, while Nvidia supplies the runtime SDKs, model-deployment frameworks, and infrastructure to host the entire lifecycle. This scenario is marked by tight vertical integration and high lock-in. Innovation continues, but within controlled environments. It is the natural continuation of what worked in the cloud and productivity eras.
- “Open federations.” Open-source tools and frameworks like LangGraph, Hugging Face, and LangChain converge to form a standard for portable agents and composable toolchains. Agentic orchestration becomes like Kubernetes: modular, standardized, and wrapped in enterprise offerings by vendors. This scenario reflects the success of Linux. Here, no one controls everything, but value accrues to those who provide the best wrappers, managed services, or domain-specific platforms.
- “Localized sovereignty.” This is a future defined by political fragmentation and regional regulatory divergence. In this world, China advances its closed HarmonyOS Next stack; Europe mandates sovereign AI stacks that comply with Gaia-X, local data residency, and explainability laws. The US becomes a dual-track ecosystem, with Big Tech controlling commercial agent systems and a parallel open-source movement serving developers.
Making sense of those futures
While the outcome across these scenarios may vary dramatically, they offer a few important constants for CEOs.
The first lesson for CEOs is to move fast. The battle for the AI OS means that big players are doubling down on agentic AI innovation. As a consequence, agentic AI is evolving at a rapid pace, with new tools and features rolling out every few months. Companies that start early will build up valuable experience and know-how, making it much harder for slower competitors to catch up. Early adoption means your team learns how to automate, adapt, and improve processes, while waiting means you’ll need to spend more time and resources just to close the gap.
The second lesson is that control lies in how you organize and manage work, not in the tasks themselves. The real power is in setting up the flow of work—deciding how agents interact, what data they use, and who checks their work. If you let outside vendors control these rules, you risk losing oversight and flexibility. By designing your own rules and keeping a grip on how agents work together, you can switch tools more easily, protect your data, and stay in charge of your business processes.
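One way to keep those rules in your own hands is to put a thin, vendor-neutral interface between your workflows and any agent provider. The sketch below assumes two hypothetical vendor backends (`VendorA`, `VendorB`) and an optional review step; none of these names correspond to a real product API.

```python
# Hedged sketch: orchestration rules (data flow, review steps) stay in-house,
# behind a thin interface, so vendor agents can be swapped without rewriting
# workflows. VendorA/VendorB are hypothetical stand-ins, not real APIs.
from typing import Callable, Optional, Protocol

class AgentBackend(Protocol):
    def run(self, task: str) -> str: ...

class VendorA:
    def run(self, task: str) -> str:
        return f"[vendor-a] {task}"

class VendorB:
    def run(self, task: str) -> str:
        return f"[vendor-b] {task}"

def run_workflow(task: str, backend: AgentBackend,
                 reviewer: Optional[Callable[[str], str]] = None) -> str:
    """Your rules live here: what the agent sees and who checks its work."""
    result = backend.run(task)
    if reviewer is not None:  # the review step is defined by you, not the vendor
        result = reviewer(result)
    return result

# Swapping vendors requires no change to the workflow rules:
out_a = run_workflow("draft reply", VendorA())
out_b = run_workflow("draft reply", VendorB(), reviewer=str.upper)
```

The design choice this illustrates: the workflow function, not the vendor SDK, is where oversight and data-handling rules are encoded, which is exactly the control the lesson argues you should not outsource.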
The third lesson is that the new way to compete is by using agents well, not just by having the best technology. Companies with libraries of reusable agent workflows can solve problems faster and adapt to change more easily. Each time you use an agent, you learn and improve, building up a base of knowledge that keeps you ahead. The real advantage comes from making agents that can be reused and improved, encouraging teams to share what works, and tracking how much of your work is being handled by agents.
In this new environment, you should review your current tools to see if they help you control workflows or if they take control away from you. Assign someone to lead your efforts in building and improving agent workflows. Start with small tests, learn quickly, and expand what works. Set clear rules for managing and checking agents, and regularly measure your progress.
Agentic AI is not just another tool—it’s a new way to run your business. Move fast, keep control, and focus on building flexible, reusable systems to stay ahead.

Jacques Bughin