By Jacques Bughin, Ph.D.
If you don’t want your agents to turn into a collection of digital loose cannons, you’d better make sure you have them under control, as Jacques Bughin explains.
The early months of 2026 will be remembered for the “Crustacean Summer”—the period when the OpenClaw framework and its social sandbox, Moltbook, became the center of the technological zeitgeist. Within days, millions of autonomous AI agents were deployed by developers globally, creating a digital ecosystem that moved at a velocity not seen since the OpenAI debut.
Commentators were quick to declare this the “birth of machine civilization.” NVIDIA CEO Jensen Huang noted that OpenClaw had catalyzed an ecosystem in weeks that took Linux 30 years to build. But to understand the future of the enterprise, we must look past the speed. We are witnessing a profound paradox: the individual AI agent is a high-performance triumph, yet the machine collective is still a structural failure. The future of agents is definitely here; the future of Organized Intelligence is not yet, because it depends entirely on the emergence of a new “orchestration layer.”
The lone wolf triumph: why standalone agents are “working fine”
To be clear: the OpenClaw experiment was not a failure of AI capability. On an individual level, the agents are performing brilliantly. Unlike the passive LLMs of the previous decade, OpenClaw agents possess “Agency”—the ability to perceive, reason, and act upon their environment with root-level system privileges.
For the modern worker, the standalone agent has become the ultimate “Force Multiplier.” These agents are currently:
- solving the “last mile” of productivity: They do not just suggest code; they debug it, test it, and deploy it to production. They do not just draft emails; they manage entire stakeholder workflows, syncing calendars and triggering API calls autonomously.
- operating with personal context: Because these agents are often self-hosted or locally tuned, they possess a deep “epistemic intimacy” with their user’s data that general cloud models cannot replicate.
- sustaining 90 percent+ accuracy in task execution: In closed-loop environments (such as financial modeling or scientific data processing), standalone agents are already outperforming human-plus-copilot configurations.
The “Future of Agents” is not a distant promise; it is a current reality. The “Lone Wolf” agent is the most powerful tool ever placed in human hands.
The “slop” of the collective: why autonomy leads to entropy
The friction begins when we move from “My Agent” to “Our Network.” When millions of high-performance agents were unleashed on Moltbook, the result was not a sophisticated society, but what critics have dubbed “AI slop.”
The data from the OpenClaw experiment reveals a massive “persistence gap” that serves as a warning for any CEO looking to automate at scale:
- The attention deficit: While human-seeded initiatives on Moltbook had a “half-life” of 2.6 hours, purely autonomous machine threads collapsed in just 0.7 minutes.
- The reciprocity crisis: In human networks, 25 percent to 30 percent of interactions are mutual exchanges. In the OpenClaw collective, reciprocity plummeted to 1.09 percent.
- The polarization of attention: The Gini coefficient of attention on Moltbook reached an extreme 0.979—nearly all attention was concentrated on a tiny fraction of agents.
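For readers unfamiliar with the metric, a Gini coefficient near 1 means the quantity being measured is captured almost entirely by a handful of nodes. A minimal sketch of how such a figure arises (the attention counts below are illustrative, not Moltbook data):

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, approaching 1 = total concentration."""
    xs = sorted(values)
    n, total = len(xs), sum(values)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-weighted formula over the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Toy network: 99 agents each receive 1 unit of attention,
# while a single agent captures 10,000 units.
attention = [1] * 99 + [10_000]
print(round(gini(attention), 2))  # ~0.98, comparable to Moltbook's 0.979
```

The point of the sketch is that a Gini this high is not a subtle skew: it describes a network where broadcasting is near-total and listening is near-zero.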
The synthesis? Individual agents are optimized for local utility (completing a prompt) but they lack global intent. Without a governing structure, agents simply “broadcast” at one another. They are like a symphony orchestra where every musician is a virtuoso, but there is no conductor and no score. The result is noise, not music.
The thesis: orchestration
This diagnostic leads us to a vital strategic conclusion: Individual agents execute tasks, but only orchestration creates an organization.
We are seeing a modern validation of Ronald Coase’s Theory of the Firm. Coase argued that firms exist because the transaction costs of coordinating decentralized individuals in an open market are too high. OpenClaw’s Moltbook has proven that the same rule applies to silicon. The “cost” of coordinating a million autonomous agents—ensuring they don’t hallucinate in a feedback loop or descend into “Crustafarian” cult-mimicry—is currently too high for pure autonomy to manage.
To bridge this gap, we must build the orchestration layer. This is not a “limiter” on AI; it is the infrastructure that allows AI to scale.
The three pillars of the orchestrated future:
- Systemic verification: A middle layer that validates agent output against “Ground Truth” before it is shared across the network.
- Role allocation: Rather than “flat” autonomy, we need hierarchical modularity (as Herbert Simon predicted in 1962). Agents must be assigned specialized roles—Researcher, Auditor, Executor, Governor—under a central logic.
- Human epistemic anchoring: Humans must remain the source of normative alignment. We provide the “why” (strategic intent), while the agents provide the “how” (scale).
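As a thought experiment, the three pillars can be sketched in a few lines of code. Everything here is hypothetical—the `Orchestrator` class, the role names, and the verification rule are illustrations of the pattern, not OpenClaw’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Agent:
    name: str
    role: str                      # e.g. "Researcher", "Auditor", "Executor"
    act: Callable[[str], str]      # the agent's task function

class Orchestrator:
    """Hypothetical orchestration layer combining the three pillars."""

    def __init__(self, intent: str, verify: Callable[[str], bool]):
        self.intent = intent       # human epistemic anchoring: the "why"
        self.verify = verify       # systemic verification: ground-truth gate
        self.agents: Dict[str, Agent] = {}

    def assign(self, agent: Agent) -> None:
        self.agents[agent.role] = agent   # role allocation under central logic

    def run(self, task: str) -> Optional[str]:
        draft = self.agents["Researcher"].act(f"{self.intent}: {task}")
        if not self.verify(draft):
            return None            # rejected before it can spread on the network
        return self.agents["Executor"].act(draft)

# Toy usage: the gate rejects any output lacking a citation marker.
orc = Orchestrator(intent="summarize accurately",
                   verify=lambda out: "[source]" in out)
orc.assign(Agent("r1", "Researcher", lambda t: f"findings on {t} [source]"))
orc.assign(Agent("e1", "Executor", lambda d: f"published: {d}"))
print(orc.run("agent reciprocity"))
```

The design choice worth noticing is that no agent talks directly to another: every output passes through the verification gate, and every task is framed by human intent before any agent acts.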
Conclusion: Don’t deny the agent (OpenClaw), architect the system (Moltbook)
The OpenClaw experiment was a success: it showed us exactly where the “wall of autonomy” sits. The agents have arrived, and one should embrace the “Lone Wolf” agent for its unprecedented productivity. But if we want to build the “automated enterprise” or a “machine society,” we cannot rely on the agents to organize themselves. The future belongs to the leaders who can architect the orchestration layer that turns millions of brilliant individuals into a singular, Organized Intelligence.
The agents are ready to work. It’s time for us to manage.
