By David Stokes
Agentic AI is moving from pilots into core business operations faster than many anticipated, and a growing proportion of the workforce does not want it. The challenge is forming now, and leadership needs to respond.
Something significant is happening in organisations across Europe, and most leadership teams are navigating it without a map. Agentic AI – systems capable of autonomous decision-making and action, not merely generating text or summarising data – is moving from pilots into core business operations faster than many anticipated. And as it does, a human challenge is emerging alongside the technical one: a growing proportion of the workforce does not want it.
Reuters reports that 71% of people fear AI will erase their jobs entirely. A separate survey found that 45% of CEOs already feel active resistance from their staff yet are proceeding with implementation regardless. That gap, between leadership intent and workforce sentiment, is where the real risk lives. And right now, most organisations are not adequately prepared to close it.
This piece does not offer a definitive playbook. The truth is that we are all still learning. What it does offer are some of the questions and frames of thinking that leaders need to be wrestling with before that gap widens further.
How Are We Thinking About Communication?
The instinct of many leaders when facing resistance is to communicate more: more town halls, more memos, more reassurances. But the question worth asking is not how much we are communicating, but whether what we are saying is addressing what people fear.
Employee anxiety about AI is not primarily a communications problem. It is a certainty problem. People want to know what will happen to their role, their value, and their livelihood. Until leaders can answer those questions with genuine specificity, no amount of messaging will fully close the gap. The rumour mill, as anyone who has worked through a major organisational change will know, fills every vacuum that leadership leaves open.
The more useful question is: at what point in the AI journey are we bringing people in? In many European markets, this is not purely a cultural consideration. The EU AI Act and established works council frameworks in countries such as Germany, France, and the Netherlands create real obligations around consultation and transparency. Forward-thinking organisations are treating these not as compliance requirements to be managed, but as a genuine forcing function for the kind of early, honest dialogue that builds trust over time.
Are We Investing in People at the Same Rate as Technology?
Forrester reports that 67% of decision-makers plan to increase AI investment across their organisations. The more uncomfortable question is what proportion of that investment is directed at the technology itself, and what proportion at the people expected to work alongside it.
Reskilling and upskilling are spoken about frequently in boardrooms, but the reality of implementation often lags significantly. Early data on agentic AI suggests the potential for a 40% increase in employee productivity, but that figure depends entirely on a workforce that understands the tools, trusts them, and knows how to leverage them. Without serious, sustained investment in capability building, the productivity gains will not materialise and the resistance will deepen.
The more nuanced challenge is that upskilling for an AI-augmented workplace is not simply a matter of technical training. It requires helping people reimagine what their role is for, what unique value they bring that AI cannot replicate, and how their working identity evolves in an environment where many of their current tasks are automated away. That is a profound shift, and few organisations are yet approaching it with the depth it deserves.
But Can the Right Technology Decisions Help?
There is a deeper question beneath all of this that most organisations have not yet surfaced clearly enough: what does the workforce of the future look like, and are we building the right foundations for it?
The evidence is increasingly pointing in one direction. The future operating model will not be humans on one side and AI on the other. It will be humans and agents working together, fluidly and continuously, as integrated parts of the same workforce: agents handling data processing, customer interactions, and routine decisions; humans providing higher-level judgement, creativity, oversight, and relationships. Neither replaces the other; each does what it does best, noting that what each does best will itself evolve over time.
That future is closer than most organisations are prepared for. And one of the most consequential decisions a leadership team will make in the next few years is not which AI tools to buy, but what architecture to build around them. The organisations that get this right will create operating models where agents, humans, data, and customers come together seamlessly. The organisations that get it wrong will end up with fragmented systems, frustrated employees who cannot work effectively alongside the technology, and customers who feel the friction.
This is not primarily a technology question, though technology will shape the answer. It is a design question. What does a workflow look like when an agent handles the first layer and a human takes the second? How does customer data move between systems in a way that serves the interaction rather than complicating it? Who is accountable when an agent makes a decision? These are operating model questions, and they need to be on the boardroom agenda now, not after the architecture has already been set.
Most organisations are making these choices incrementally, tool by tool, without a coherent view of the whole. That is understandable at this stage of the technology’s development, but it carries real risk. The cost of rebuilding an architecture that was not designed for human-agent collaboration from the start is significant, in time, money, and the human disruption that comes with it.
What Does “People First” Actually Mean in Practice?
It is easy to say people come first. It is considerably harder to demonstrate it when you are simultaneously under pressure to drive efficiency, reduce costs, and show results from a significant technology investment. The tension is real, and leaders who pretend otherwise will quickly lose credibility with the very people they are trying to bring along.
The questions worth sitting with are: When employees raise concerns about AI, is leadership genuinely listening, or managing? Are the fears being addressed on their own terms, or reframed into a narrative of opportunity that serves the organisation’s agenda more than the individual’s? Are the people most affected by automation being given a meaningful voice in how it is implemented?
Artificial intelligence may be the most powerful tool the modern organisation has access to. But it is still a tool, and tools do not deliver results on their own. The organisations that will gain the most from agentic AI are, almost certainly, those that approach it as a human challenge as much as a technical one.
How Do We Build Momentum From the Inside Out?
One of the more encouraging patterns emerging from early adopters is the power of internal advocacy. When employees who were initially sceptical become genuinely enthusiastic about what AI enables them to do, that shift is contagious in a way that top-down messaging simply is not. Research bears this out: 53% of employees report they learn more from peers than from management.
The practical implication is that identifying, supporting, and giving visibility to those internal champions may be one of the highest-leverage activities a leadership team can undertake. The question is whether your organisation is creating the conditions for that advocacy to emerge, or whether the implementation is being driven in a way that leaves little room for organic enthusiasm to develop.
The wins most worth highlighting are rarely the dramatic ones. They tend to be the quiet eliminations of the work people disliked most: the repetitive, the administrative, the draining. When AI removes those burdens, people begin to experience it as something working for them rather than against them. That shift in perception, once it takes hold, changes the conversation entirely.
We are at an early and genuinely uncertain moment. The organisations that come through it well will not necessarily be those with the most sophisticated technology or the most aggressive implementation timelines. They will be those that took the human side of this transition as seriously as the technical side, asked hard questions before they had comfortable answers, and led with honesty rather than optimism alone.
The challenge is still forming. The leadership response to it needs to begin now.

