By Matthew Geyman
AI is accelerating an asymmetric cyber threat landscape. Attackers need only one opening, while defenders require organisation-wide, machine-speed readiness. Resilience must be embedded in culture, with cybersecurity treated as a board-level priority. AI brings both opportunity and heightened risk. Intersys Managing Director Matthew Geyman takes a deep dive.
Cybersecurity has always been a cat-and-mouse game, but as we move into 2026, the rules of that game are being rewritten by artificial intelligence. Threat actors are not just human adversaries working with limited time and resources. They are now increasingly supported by systems that can automate reconnaissance, tailor attacks with frightening precision, and operate at machine speed.
Regulators are paying close attention. In its latest supervisory priorities, for instance, the UK Prudential Regulation Authority (PRA) makes clear that cyber risk remains elevated and that firms need robust capabilities both to prevent breaches and to detect, respond to, and recover critical services within their impact tolerances. Operational resilience must be woven into the underlying risk culture, while advances in AI are seen as both an opportunity and a source of novel risks, amplifying issues like inaccurate data, third-party reliance, and cyber threats. That framing is exactly right: AI is not simply adding another layer of complexity to cybersecurity; it is fundamentally changing the threat landscape itself.
The rise of hyper-personalised social engineering
For years, phishing was largely a numbers game: send enough generic but plausible emails and someone will click. AI has turned that blunt instrument into a scalpel. Attackers can now generate highly convincing, context-rich messages tailored to individuals, drawing on scraped data from social media, breached datasets and even corporate disclosures. The result is hyper-personalised social engineering that feels authentic, timely, and almost impossible to distinguish from legitimate communication.
Deepfake audio and video add another dimension. Fraudulent “CEO calls” or synthetic customer requests are becoming more sophisticated, eroding trust in the most basic verification mechanisms organisations rely on. In 2026, the biggest danger may not be the obviously malicious email, but the perfectly plausible one.
Automated attack chains
AI is also accelerating the industrialisation of cybercrime. We are moving rapidly towards automated attack chains: systems that can identify vulnerabilities, exploit them, escalate privileges, and move laterally across networks with minimal human input.
The implication is stark. Defenders are still operating with models built around human-paced threats: detection rules, manual triage, and delayed patch cycles. Meanwhile, attackers are compressing the timeline from intrusion to impact from days to minutes. Traditional security operations centres were not designed for adversaries that never sleep, never slow down, and can adapt in real time.
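One way to confront that gap is to measure it. Below is a minimal sketch against a hypothetical incident log; the field names (`first_seen`, `detected_at`, `contained_at`) are illustrative assumptions, not a real SIEM schema. It computes median dwell time and time-to-contain, so a team can see whether its response is tracking days or minutes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Incident:
    first_seen: datetime    # earliest attacker activity found in forensics
    detected_at: datetime   # when the SOC raised an alert
    contained_at: datetime  # when the affected asset was isolated

def response_gap(incidents: list[Incident]) -> dict[str, timedelta]:
    """Median dwell time and time-to-contain across past incidents."""
    dwell = [i.detected_at - i.first_seen for i in incidents]
    contain = [i.contained_at - i.detected_at for i in incidents]
    return {
        "median_dwell": median(dwell),
        "median_time_to_contain": median(contain),
    }
```

If the numbers come back in hours or days while adversaries operate in minutes, the human-paced operating model is the thing to fix, not the individual analysts.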
Where businesses are most exposed
The organisations most at risk in 2026 are not necessarily those with the weakest security budgets. They are the ones where complexity, legacy infrastructure, and third-party dependency collide. The PRA explicitly highlights the obsolescence of legacy technology as a resilience issue, particularly as firms undergo transformation programmes and adopt cloud-based solutions.
This is a critical point. Many firms are trying to modernise while simultaneously keeping critical services running, grappling with legacy systems that cannot easily be patched and cloud migrations that introduce new misconfigurations. At the same time, outsourced providers are expanding the attack surface, while AI tools are being adopted faster than governance frameworks can keep up.
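To make the misconfiguration point concrete: here is a minimal sketch, using the real AWS boto3 SDK (and assuming credentials are already configured), that flags S3 buckets with no bucket-level public-access block at all. A production check would cover far more than this single control; the point is that these gaps are discoverable programmatically.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block() -> list[str]:
    """Flag buckets that have no PublicAccessBlock configuration."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"no public-access block: {name}")
```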
The weakest link is rarely the technology itself. It is the unmanaged interaction between systems, suppliers, and decision-making structures. Indeed, perhaps the most dangerous aspect of AI-driven cyber risk is that it is still being underestimated.
Many boards and senior leaders view AI as a productivity tool rather than a threat multiplier. But the PRA is clear that advanced technologies present novel risks, amplifying existing issues such as inaccurate data, reliance on third-party providers and cyber risks.
In other words, AI does not create entirely new categories of risk; it supercharges the ones firms already struggle with. Poor data governance becomes more damaging when AI models depend on that data. Third-party reliance becomes more dangerous when vendors embed opaque AI capabilities into core services. Cyber threats become harder to detect when malicious activity blends into automated noise.
Practical steps organisations must take now
So what does staying ahead look like in 2026? First, organisations need to stop thinking purely in terms of prevention. Breaches are inevitable; resilience is the differentiator. Firms must be able to detect attacks quickly, respond effectively, and recover critical services within defined tolerances.
Second, operational resilience must be tested realistically. That means severe but plausible scenarios, including those involving third-party disruption. Too many firms still treat resilience as a compliance exercise rather than a strategic discipline. The ‘Zero Trust’ principle of ‘Assume Breach’ is a clarion call to review operational resilience and recovery frameworks.
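"Severe but plausible" can be rehearsed in code as well as in workshops. The skeleton below is a hedged sketch: `simulate_third_party_outage` and `failover_to_standby` are hypothetical stand-ins for a real traffic cut-over and a real runbook, and the two-hour tolerance is invented for illustration. The shape of the test is what matters: inject the disruption, run the recovery, assert it finished within tolerance.

```python
import time

def simulate_third_party_outage(provider: str) -> None:
    """Stub: a real rehearsal would actually cut traffic to the provider."""
    print(f"[scenario] {provider} is now unreachable")

def failover_to_standby(service: str) -> None:
    """Stub: a real rehearsal would execute the actual runbook steps."""
    print(f"[runbook] failing {service} over to standby")

def test_payments_survives_provider_outage() -> None:
    MAX_RECOVERY_SECONDS = 2 * 60 * 60  # illustrative two-hour impact tolerance
    simulate_third_party_outage("card-processor")
    start = time.monotonic()
    failover_to_standby("payments")
    elapsed = time.monotonic() - start
    assert elapsed <= MAX_RECOVERY_SECONDS, "recovery exceeded impact tolerance"
```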
Third, AI governance cannot be an afterthought. Businesses adopting AI must ask, and record answers to, four questions (a sketch of one way to record them follows this list):
- What data is this model trained on?
- What decisions does it influence?
- What happens if it is manipulated or produces errors?
- Who is accountable?
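Recording the answers need not be heavyweight. Here is a minimal sketch of a governance register entry whose fields simply mirror the four questions above and which refuses to be created with any of them left blank; the example values are hypothetical, and nothing here is a standard or a formal framework:

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class AIGovernanceRecord:
    model_name: str
    training_data: str         # what data is this model trained on?
    decisions_influenced: str  # what decisions does it influence?
    failure_impact: str        # what happens if it is manipulated or errs?
    accountable_owner: str     # who is accountable?

    def __post_init__(self):
        for f in fields(self):
            if not getattr(self, f.name).strip():
                raise ValueError(f"unanswered governance question: {f.name}")

record = AIGovernanceRecord(
    model_name="invoice-triage",  # hypothetical internal model
    training_data="12 months of vendor invoices, PII removed",
    decisions_influenced="routing and approval thresholds",
    failure_impact="mis-routed payments; manual review catches large sums",
    accountable_owner="Head of Finance Operations",
)
```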
Finally, cyber defence itself must become more automated. Human-only response models will not scale against machine-speed adversaries. Security teams need AI-assisted monitoring, faster containment playbooks, and crisis rehearsals that assume acceleration, not stability.
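What a "faster containment playbook" can mean in practice: the sketch below auto-contains on a high-confidence alert and notifies a human, rather than waiting for one. `isolate_host` and `notify_oncall` are placeholders for whatever EDR and paging tools a team actually runs, and the threshold is an assumption to be tuned against tolerance for false isolation.

```python
CONFIDENCE_THRESHOLD = 0.95  # illustrative; tune to your false-positive appetite

def isolate_host(host_id: str) -> None:
    """Placeholder for an EDR network-isolation API call."""
    print(f"[containment] isolating {host_id}")

def notify_oncall(message: str) -> None:
    """Placeholder for a paging or chat integration."""
    print(f"[page] {message}")

def handle_alert(alert: dict) -> None:
    """Contain first at machine speed; investigate at human speed."""
    if alert["confidence"] >= CONFIDENCE_THRESHOLD:
        isolate_host(alert["host_id"])
        notify_oncall(f"auto-contained {alert['host_id']}: {alert['rule']}")
    else:
        notify_oncall(f"triage needed: {alert['rule']} on {alert['host_id']}")

handle_alert({"host_id": "ws-042", "rule": "credential-dumping", "confidence": 0.98})
```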
We are in the middle of a convergence: regulators demanding stronger resilience, organisations racing to innovate with AI, and threat actors exploiting the same tools with fewer constraints.
The pace of the cat-and-mouse game is accelerating, and the game itself is asymmetric. Attackers need only one AI-enabled opening; defenders need machine-speed readiness across the entire organisation. Resilience must be truly embedded in organisational culture, AI treated as both an opportunity and a risk, and cybersecurity recognised as a board-level strategic priority, not an IT problem that can be patched later.