By Chris Tamdjidi
AI can boost productivity, but without intentional design it erodes trust, meaning and culture. Here are five principles for navigating the AI wave
Let’s start where enthusiasm is justified. The productivity gains from generative AI are real. A Harvard Business School/BCG study found that consultants using AI worked 25% faster with 40% higher quality. Lower-skilled workers benefit most, narrowing the gap with top performers by up to 43%, while burnout drops as repetitive work is removed. Accessibility, personalised learning, inclusion for neurodivergent colleagues – all genuine wins. Any honest conversation should start here.
Leaders who’ve lived through earlier tech waves should recognise the pattern. Social media connected billions – and fuelled a teen mental health crisis and polarisation. Blockchain promised financial inclusion – and delivered $17 billion in crypto fraud in 2025 alone. Both worked. Both did damage we’re still untangling.
AI is different in one critical way. Those earlier costs were borne at a distance – by users, voters, investors. With workplace AI, the externalities land inside the building. The trust damage, the wellbeing impact, the cultural erosion – these all sit with your people, your teams and your institutional capacity.
The visible dangers are already here
The immediate harms are clear. Research by BCG and HBR finds that 14% of AI users report “brain fry” – cognitive overload from constantly supervising AI output – linked to more errors, decision fatigue and intent to quit. Stanford University and BetterUp Labs coined “workslop”: AI-generated content that looks polished but lacks substance, shifting the real effort to the receiver at an estimated $9 million annual cost per 10,000-person firm. These dangers are real, but they are only the surface.
The deeper erosion: meaning, belonging, trust
Underneath brain fry lies something harder to measure: the slow erosion of how work feels. Consider Peter, a mid-career developer, pushed to build the AI that could replace him – or reduce his role to reviewing “coding for dummies”. He asked: “How is nobody else freaking out about this?” What’s slipping isn’t just job security – it’s his felt sense of efficacy and meaning, the embodied satisfaction of mastery built over decades.
Meanwhile, Sarah, a senior leader at a fast-scaling firm, is juggling a house move, two young children and a CEO who has gone, in her words, “AI-crazy”. Having built his trust over years, she now finds herself edged out – replaced by leaders building AI agents to do parts of her role. A new culture is taking hold in which people stop reaching out because “AI can give you the answer”. She, too, is losing her felt sense of belonging and trust.
The longer view: what differentiates when everyone has AI?
Now project forward. Within three to five years, every competitor will have the same models and tools. What will differentiate firms? The very things at risk now: culture, trust, judgement, engagement – and the capacity to sense-make under pressure. These are the real moats of the AI era. The tragedy would be to destroy them while building the very stack that was supposed to make them matter most.
The investment mismatch
MIT’s 2025 study of 300 enterprise AI initiatives found that 95% deliver no measurable return, despite $30–40 billion in spend. The issue isn’t the tech – it’s the human side: integration, learning, adoption. Yet according to Deloitte, over 90% of investment still goes to technology, with little left for human capability and culture. That’s short-sighted. For every euro spent on tools, ask what’s going into people – sense-making, skills, trust and leadership. Rebuilding those later costs far more.
If these are the risks – erosion of meaning, trust and judgement – then the task for leaders is clear: design AI in a way that protects them. Here are five principles to ensure it doesn’t quietly hollow out culture and wellbeing.
Five principles for a human-centric AI culture
- Protect the effort-reward loop. This may feel counterintuitive – but making everything easy doesn’t make it meaningful. Keep humans in the hard parts of satisfying work. Effortless work is empty. Use AI as a co-pilot, not autopilot, wherever judgement matters. Three months after deployment, ask: did anyone feel genuinely challenged and satisfied this week?
- Honour the social contract. Social and emotional signals at work exist inside an implicit contract of mutual care. The moment you use them to predict, optimise or sort people, the contract breaks — and people feel it. Any insight derived from an employee’s behaviour must reach that employee first, framed as support, not surveillance.
- Make wellbeing a goal, not an afterthought. Assess every AI deployment, before and after, for its effect on felt efficacy, connection and meaning – much as environmental impact assessments do. Track change fatigue and psychological safety on the same dashboard as revenue. It is not hard to do. It is just hard to take seriously.
- Measure trust as a form of wealth. Trust is the operating system for collective intelligence. When making cost decisions, require a clear trust impact alongside the financial case. Companies that trade trust for efficiency are selling the asset that underpins everything else. We already assess financial, quality, safety and CSR risks — this is no different.
- Keep purpose within two links. People must feel their contribution to real human outcomes within a short chain of action and impact – no more than a link or two between what they do and who it helps. If an employee cannot name, concretely, who benefited from their week’s work, the purpose connection is broken. Design impact loops. Maintain frontline exposure.
This isn’t a brake on AI – it’s the discipline that makes it sustainable. Avoid becoming the next Klarna: cutting roles, watching quality fall, then rehiring 18 months later. The body knows first – when work is meaningful or hollow, when trust is real or performed. Leaders who listen will build companies people want to work for – which will also be the ones still standing when the hype fades.