By Ty Greenhalgh
Hospitals are rapidly adopting AI to improve care, but this shift brings urgent cybersecurity risks. Ty Greenhalgh warns that healthcare organisations must act now to secure their AI infrastructure. With high vulnerability rates and evolving threats like prompt injection, the sector cannot afford to wait for regulation to catch up.
Just like all other sectors, hospitals and healthcare providers are accelerating the adoption of artificial intelligence to improve patient outcomes and streamline operations.
However, they are also inadvertently expanding the susceptibility to cybersecurity risks. AI systems introduce new vulnerabilities that traditional healthcare security strategies are not equipped to handle. With the regulatory landscape still evolving and threat actors moving quickly to exploit emerging gaps, now is the time for healthcare organisations to take proactive steps to secure their AI infrastructure, before they’re forced to by compliance mandates. Or worse, learn the hard way through a breach.
How AI is changing the cybersecurity landscape in hospitals
AI is gradually transforming how hospitals operate, from improving diagnostic accuracy to streamlining administrative workflows. But alongside these benefits, it’s also introducing entirely new categories of cyber risk that many in the sector aren’t yet prepared to manage.
Quite frankly, the level of understanding around this technology is alarmingly low. AI tools, especially generative AI, are very intuitive and accessible, but most users don’t know what’s going on under the hood. This means users are less likely to notice when their tools are misfiring or hallucinating, and it also means they likely don’t understand the cyber risks around them.
One of the most concerning developments I’m seeing is the rise of prompt injection. This is a relatively new type of attack, but it has dangerous potential. It’s similar in concept to SQL injection, where an attacker manipulates a database query, but in this case, they manipulate the inputs to a large language model (LLM) to change its behaviour. In a clinical setting, that could mean influencing an AI system to generate false or misleading recommendations, or to reveal sensitive data it shouldn’t have access to.
Research has uncovered a “zero-click” vulnerability in Microsoft 365 Copilot, dubbed EchoLeak, that can expose confidential data from emails, spreadsheets, and chats with nothing more than a cleverly crafted email quietly read by the AI assistant. Hackers could send an email containing hidden instructions (a type of prompt injection), which Copilot would process automatically, leading to unauthorised access and sharing of internal data. No phishing links or malware were needed. The AI’s own background scanning was enough to trigger the breach.
Prompt Injection is just one example. There’s also the risk of model poisoning, where bad actors tamper with training data or adversarial prompts designed to manipulate decision outputs. All of this creates a layer of confusion and complexity where the integrity of AI models can’t be taken for granted.
The reality is that AI is being layered onto existing hospital networks that are already highly vulnerable, at a time when most healthcare environments are already exposed to elevated cyber risk.
We conducted in-depth research of 351 healthcare organisations and found that a near-universal 99% had connected systems with at least one known exploitable vulnerability (KEV). If those systems form the backbone of your AI infrastructure, you’re stacking advanced technology on a very fragile foundation. It’s a huge risk healthcare can’t afford to ignore.
Healthcare’s unique vulnerabilities to AI risk
Every business embracing AI needs to stop and think about the risks, but healthcare environments are uniquely vulnerable because of a few intersecting challenges. First, the pace of AI adoption is often outstripping our ability to implement the governance structures needed to secure it.
Imagine someone setting out on a journey with a flat tire, and then trying to fix it without stopping the car. That’s the situation we keep find ourselves in when new technology is introduced into live environments without fully understanding the risks and challenges.
Hospitals are introducing AI systems, everything from diagnostic algorithms to documentation assistants, without always having a clear view of where these tools are deployed or how they’re operating within the broader network.
We’ve been here before. With the rollout of electronic health records (EHRs), we saw what happens when new technologies are rushed into critical environments without sufficient safeguards. In the pursuit of improving patient care, we made the most valuable record in the world accessible to hackers.
One of the biggest gaps I see here is the lack of a comprehensive AI asset inventory. You can’t secure what you can’t see, and right now, many organisations don’t know which systems are leveraging AI, how those systems were trained, or what data they’re accessing. That creates massive blind spots, especially when AI is embedded into existing clinical workflows or integrated with older infrastructure.
The importance of regulation in improving AI security in healthcare
Healthcare is rightfully a tightly regulated space, and regulation absolutely has a role to play with AI tool. But it shouldn’t be the reason we act. If we wait for legislation to catch up, we’ll always be on the back foot. When patient safety is on the line, that’s just not acceptable.
So looking at the regulatory landscape right now, the EU AI Act is a solid framework. It classifies healthcare AI as “high risk” and sets out clear obligations around transparency, oversight, and risk management. That’s important because it acknowledges the critical nature of the decisions these systems are influencing. But we also know that the implementation process will take time, and that the level of enforcement will likely vary across member states.
In the UK, the regulatory approach is more decentralised, and approach that has been described as “pro-innovation.” While that can allow for flexibility, it also creates inconsistency. Right now, there’s a real risk that healthcare AI systems will operate without the same scrutiny we’d expect for other clinical technologies.
Regardless of geography, the takeaway is the same: hospitals can’t wait for regulation to tell them what to do. The principles behind these frameworks – understanding your systems, managing risk, and ensuring accountability – are steps we can and should take today. Compliance will follow, but resilience needs to come first.
The most important practical steps for any healthcare organisation
The first and most important step is visibility. You need to know where AI exists within your environment, whether it’s a standalone tool, embedded in a medical device, or integrated into your documentation system. Start by building an inventory of AI-enabled assets and mapping the data flows between them.
From there, it’s critical to integrate AI oversight into your broader asset protection strategy. AI isn’t separate from your infrastructure; it typically rides on top of it. That means it inherits all the risks we’re already seeing in healthcare, such as outdated operating systems, insecure network protocols, and poor segmentation. If you’re already managing exposure across your cyber-physical systems, your AI should be included in that same framework.
We recommend a five-step approach: discover what you have, validate what matters, scope the risk, prioritise remediation, and mobilise your resources. That model works especially well for AI, because it encourages ongoing assessment and action rather than one-off audits.
Finally, real-time monitoring is essential. AI systems evolve fast, and in a way that most of us don’t really understand. They learn, drift, and change. If you’re not watching for anomalous behaviour, you could miss the early signs of model degradation or manipulation by external threat actors. So, technical controls need to be combined with cross-functional oversight from cybersecurity, IT, and clinical leadership. Then you can ensure AI delivers on its promise without becoming a liability.








