By Fahed Bizzari
AI will not equalise competition in 2026. As models commoditise, advantage shifts to organisations that redesign workflows, embed governance, and build repeatable AI capability. Reliable execution, not deployment volume, drives EBIT. Leaders who own cross-functional standards compound gains; others experience a quiet re-ranking by customers and markets over time.
A comforting idea is spreading in executive conversations: if everyone has access to the same AI models, any advantage should evaporate.
That comfort is misplaced.
Commoditisation rarely removes advantage. It changes where advantage lives. When a capability becomes widely available, customers stop rewarding novelty. They start rewarding reliability.
That “quiet re-ranking” doesn’t happen in a headline moment. It happens through small, repeated differences in response quality, turnaround and follow-through.
And those micro-differences do translate into market outcomes. Faster, clearer responses shorten buying cycles. Consistent delivery reduces friction and escalations. Fewer errors reduce rework and client anxiety. Over time, that shifts conversion, renewal and price tolerance, even if nobody describes it as “AI advantage.”
The EBIT lever is workflow redesign, not deployment
Most organisations still talk about AI progress in the language of rollout. Seats enabled. Platforms approved. Usage rising.
That may be necessary, but it’s not the source of advantage, because rollout measures activity, not operating change.
If you want a hard signal on what drives enterprise impact, look at whether work has been redesigned around AI.
McKinsey’s 2025 research is unusually clear on what matters most: “Out of 25 attributes tested … the redesign of workflows has the biggest effect on an organization’s ability to see EBIT impact from its use of gen AI.”
EBIT (earnings before interest and taxes) is a blunt proxy, but it’s useful here because it forces the conversation away from anecdotes and toward operating reality.
In practice, workflow redesign means changing the default path by which work moves from input to output. AI is placed deliberately, not sprinkled opportunistically. Quality gates become explicit. Verification is designed into the flow, not left to individual caution. Exceptions are anticipated, not discovered in front of a client.
The result is that AI stops being a productivity perk for individuals and starts becoming a dependable organisational advantage.
The non-commoditised asset is organisational AI capability
If tools are increasingly shared, why do results still diverge? Because what separates organisations is not access. It is whether they have built the capability to use AI reliably in real work.
In a widely cited academic framing, Mikalef and Gupta (2021) treat AI capability as a measurable organisational construct and examine its relationship with creativity and firm performance. That matters because it keeps leaders from collapsing the question into “Which platform?” or “Which model?” and points them toward the actual competitive asset.
AI capability shows up as repeatability under real conditions: unclear context, delivery pressure, cross-team handoffs, sensitive information and edge cases where the model is least reliable.
Two firms can use similar tools and still feel very different. One is crisp and coherent across teams. The other is uneven: strong in pockets and brittle under pressure.
Commoditisation doesn’t erase that gap. It often makes it easier to see, because “good enough drafting” becomes normal and customers start noticing who stays consistent across touchpoints.
Leadership ownership decides whether capability compounds or stays trapped
There is a common mistake at this stage. Organisations treat AI as an IT deployment.
IT involvement is essential for safe access, identity controls and platform decisions. But advantage does not emerge from access alone. Advantage emerges from changed work: how tasks are performed, checked, approved and reinforced through management.
Deloitte’s executive research makes the ownership point bluntly: AI efforts succeed when ownership sits with a cross-functional leadership group rather than being treated as a technology programme owned by IT alone, and executive alignment is often the limiter.
This is where “quiet re-ranking” becomes a leadership problem.
AI touches marketing, sales, legal, procurement, operations and leadership decision-making. If ownership is concentrated in one function, standards fragment. Teams improvise local defaults. Managers reinforce inconsistent norms. Learning stays trapped in pockets.
Cross-functional ownership doesn’t mean more committees. It means clear leadership decisions about priorities, standards and reinforcement so capability spreads and holds.
Vendors can ship ingredients. They cannot ship the recipe.
The skeptic objection returns: “Even if this matters now, vendors will productise best practice and close the gap.”
Vendors can productise features, templates and guardrails. They can make tools easier to use. They cannot install the operating discipline inside your organisation.
As Deloitte’s Bill Briggs put it in 2025, organisations are obsessing over the “ingredients” while ignoring the “recipe,” which includes the culture, workflow and training required to make the technology work.
That “recipe” is what leadership ownership is for. It is how the organisation behaves day to day:
- Do people know when to trust and when to verify?
- Is disclosure normal, or politically risky?
- Do managers model good practice, or treat AI as “something others do”?
- Is learning shared, or hoarded?
- Do standards hold under delivery pressure?
Vendors can support parts of this. They cannot substitute for leadership, management habits and workflow discipline. That is why commoditisation does not automatically equalise outcomes.
A decision rule that prevents quiet re-ranking
If commoditisation shifts advantage into operating capability, the leadership move becomes simpler. Stop asking, “Where can we use AI?” Start asking where AI-assisted work must become reliable.
A practical decision rule is four questions:
- Which three workflows most shape customer experience or margin? Pick them explicitly. McKinsey’s 2025 work on enterprise impact is a useful forcing function here: focus on redesigning the few workflows that matter most.
- In those workflows, what does “good AI-assisted work” mean? Define it in behavioural terms: what must be verified, where ownership sits and what triggers escalation, because workflow redesign only works when quality gates are explicit.
- How will people learn this in real work, not just in training? The “recipe” Briggs describes (culture, workflow and training) only becomes real when practice is reinforced in the flow of delivery.
- Who owns it cross-functionally, so standards don’t fragment into pockets? Deloitte’s 2025 framing is the simplest reminder: cross-functional ownership is what prevents AI from becoming fragmented local practice.
Commoditisation is coming. The question is what it reveals.
Will it flatten everyone to the same baseline?
Or will it expose who has built real operating capability – and who has not?