AI in cybersecurity


By Thomas Drohan

Liz Kendall’s warning underscores a critical shift: in the age of AI, organisations must look beyond cybersecurity alone as risks increasingly emerge through trusted processes, people and decisions rather than just systems.

In a recent letter to British business leaders, Liz Kendall, Secretary of State for Science, Innovation and Technology, highlighted a growing concern: AI is changing the risk landscape faster than many organisations are adapting. As she warned, the risks organisations face in cyberspace are evolving, and familiar approaches are no longer keeping pace.

Advanced AI tools are accelerating the discovery and exploitation of weaknesses, allowing threats to move more quickly, scale more broadly and remain concealed within legitimate activity. This is not simply an escalation of traditional cyber threats. It represents a shift in where and how harm occurs: not through obvious system breaches, but through routine processes such as automated approval workflows, AI-assisted decision-making and the misuse of trusted access by people and systems that appear authorised to act.

This might involve an AI-generated email that impersonates a senior executive to approve a payment or an AI tool that is legitimately integrated into operations but trained or prompted in ways that expose sensitive information.

For business leaders, this creates a more fundamental challenge than improving technical defences. AI‑enabled threats are no longer contained within the IT or security function; they directly affect the organisation’s resilience, decision‑making and ability to operate with confidence. Put simply, cybersecurity capabilities on their own, even when technically strong, are no longer sufficient to manage AI‑driven risk. Security failures now arise as much from misplaced trust, fragmented ownership and slow institutional response as from unpatched systems.

Leaders can therefore no longer afford to treat security as a discrete technical problem or rely on incremental improvements to controls. Addressing AI‑driven risk requires a top‑down reassessment of how security is understood, governed and embedded across the organisation, before speed, scale and subtlety overwhelm existing safeguards.

Why traditional defensive controls are falling behind

For years, organisational security strategies were designed to protect networks and systems from external intrusion. Controls focused on monitoring endpoints, analysing logs and blocking suspicious traffic – an approach well suited to threats that were noisy, perimeter‑based and relatively predictable.

That model is now breaking down. Advanced AI tools are accelerating the discovery and exploitation of vulnerabilities, dramatically shortening the window between exposure and impact. Capabilities such as automated reconnaissance and AI‑assisted analysis allow weaknesses to be surfaced at scale, often faster than organisations can assess or respond to them.

In response, detection capabilities have expanded rapidly. Security teams now enjoy unprecedented visibility into potential weaknesses across their environments. Yet this abundance of signal brings a new problem. As discovery scales, teams are confronted with an overwhelming volume of findings, increasing the likelihood that remediation becomes reactive and superficial – addressing exposed flaws quickly while leaving deeper structural issues unresolved.

This dynamic favours speed over judgement. Controls are optimised to detect and fix discrete issues, not to assess how vulnerabilities interact with processes, incentives or trust relationships. As a result, organisations may appear highly responsive while remaining structurally exposed.

At the same time, the operating environment has become more complex. AI‑enabled adversaries can operate with greater coordination, automation and efficiency, often across borders and supply chains that stretch far beyond an organisation’s direct control. These conditions strain defensive models built for bounded systems and clearly defined perimeters.

The result is a growing gap between what defensive controls can see and what they can meaningfully manage. Security teams are reacting faster than ever, but increasingly to symptoms rather than sources of risk.

How today’s threats go beyond technology

The limits of defensive security point to a broader shift in how harm now occurs. Many of today’s most serious risks do not arise from broken systems, but from the way organisations operate.

Modern threats increasingly exploit trust rather than bypass controls. Fraud, data misuse and intellectual property theft often unfold through routine business activity – approvals, payments, hiring decisions, supplier interactions – where actions appear legitimate and behaviour conforms to expectations. These failures do not register as anomalies; they are “normal by design.”

AI intensifies this dynamic. Deception can be personalised, convincing and scalable, enabling hostile actors to impersonate colleagues, executives or partners with a high degree of credibility. Communications look routine, decisions appear reasonable, and harm becomes visible only after the fact.

This shift pushes risk well beyond the boundaries of technology. Complex supply chains, outsourced services and delegated authority extend exposure across organisations with limited visibility or clear ownership, creating opportunities for fraud, data leakage or misuse of authority to occur through everyday interactions rather than technical failure. Accountability for these risks rarely sits with security teams alone and often falls between functions altogether.

In this environment, AI‑driven risk is not simply a cybersecurity issue. It is an organisational one, shaped by governance, incentives, decision rights and culture. Managing it requires leaders to understand how technology interacts with process and trust – and how AI accelerates weaknesses at those intersections.

Without this broader lens, organisations may succeed in defending their infrastructure while remaining deeply exposed to risks that move through people, judgement and routine activity rather than systems alone.

The shift to intelligence-driven, investigation-led security

These changes are forcing a reassessment of what effective security looks like. Simply detecting more issues or patching faster is not enough. In response, many organisations are moving towards an intelligence-driven, investigation-led approach that prioritises understanding how harm actually occurs, rather than reacting to ever-growing volumes of alerts.

This approach is less about adopting new frameworks or automating intelligence and more about developing consistent habits. What matters in practice is how information is gathered, connected and acted upon, often under conditions of uncertainty.

Rather than treating every vulnerability or anomaly as equal, intelligence-led security builds a shared picture of risk by connecting inputs from people, identities, suppliers, sites and systems. It encourages early reporting so emerging issues can be seen before they escalate.

Decisions are made in context. Using intelligence-led triage, organisations can judge when to monitor, when to intervene and when to escalate. Structured investigation is reserved for repeated patterns, threats to sensitive assets, and indicators of criminality or of safety and legal risk.

Crucially, evidence is captured as decisions are taken. Recording what was checked, why actions were chosen and how issues were resolved supports accountability and enables follow-on action by security, HR, regulators or law enforcement.
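The triage habit described above – judging whether to monitor, intervene or escalate, and recording the rationale as the decision is taken – can be sketched in code. The Python sketch below is purely illustrative: the `Signal` fields, thresholds and decision labels are assumptions for the example, not part of any product or standard referenced in this article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Signal:
    """A hypothetical risk signal about a person, supplier, site or system."""
    source: str            # e.g. "payments", "HR", "supplier-portal"
    severity: int          # 1 (low) .. 5 (high) -- illustrative scale
    repeated: bool         # part of a recurring pattern?
    sensitive_asset: bool  # touches a sensitive asset?
    legal_or_safety: bool  # indicator of criminality, safety or legal risk?

@dataclass
class TriageRecord:
    """Captures the decision and its rationale at the moment it is taken."""
    decision: str
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def triage(signal: Signal) -> TriageRecord:
    """Decide whether to monitor, intervene or escalate, and record why."""
    if signal.legal_or_safety or signal.sensitive_asset:
        return TriageRecord("escalate",
                            "sensitive asset or legal/safety indicator")
    if signal.repeated or signal.severity >= 4:
        return TriageRecord("intervene",
                            "repeated pattern or high severity")
    return TriageRecord("monitor", "low severity, isolated signal")

record = triage(Signal("payments", severity=2, repeated=True,
                       sensitive_asset=False, legal_or_safety=False))
print(record.decision)  # intervene
```

The point of the sketch is the pattern, not the thresholds: every decision path produces an auditable record, so the evidence trail exists by construction rather than being reconstructed after the fact.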

Over time, organisations that consistently collect intelligence, assess risk in context and investigate selectively develop a clearer understanding of their real exposure and can respond with greater confidence and less disruption.

Cybersecurity alone no longer protects organisations

AI is accelerating risk faster than traditional security models can cope. As threats increasingly exploit trust, routine behaviour and organisational complexity, security can no longer be treated as a narrow technical function.

This gap between how risk now materialises and how organisations are structured to manage it is increasingly recognised at leadership level. Liz Kendall's recent warning to business leaders highlighted that cyber risk is evolving faster than established approaches are adapting, exposing the limits of models built primarily around technical control.

Organisations that succeed will treat resilience as an organisation-wide capability: embedding intelligence gathering, understanding how harm actually arises and responding in proportion to the real risk.

About the Author

Thomas Drohan is Co-founder and Chief Strategy Officer at Clue Software. He is a senior leader with a background in technology and deep experience shaping intelligence and investigation software alongside real-world users. Thomas works closely with customers and the market to understand evolving challenges, ensuring Clue’s products and services remain practical, trusted, and effective in complex, high‑stakes environments.
