AI and Healthcare Cybersecurity: New Capabilities, New Risks


Healthcare cybersecurity is a growing concern. Health services hold highly sensitive data, often on legacy systems, and staff face constant phishing attempts. Ransomware incidents have hit Australian health services multiple times.

AI intersects with cybersecurity in two ways: as a defensive tool and as a source of new vulnerabilities. Both deserve attention.

AI for Cyber Defence

AI-powered cybersecurity tools are maturing:

Threat detection. AI systems analyse network traffic, user behaviour, and system logs to identify anomalies that might indicate attacks. Traditional rule-based systems catch known threats; AI can potentially identify novel attack patterns.
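
To make the anomaly-detection idea concrete, here is a minimal Python sketch using scikit-learn's IsolationForest on synthetic session features. The features and contamination rate are illustrative assumptions, not a production detector.

    # Anomaly-detection sketch: flag unusual network sessions.
    # Assumes scikit-learn is installed; features are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Synthetic "normal" sessions: [bytes_out_mb, duration_min, failed_logins]
    normal = rng.normal(loc=[5, 10, 0.2], scale=[2, 4, 0.5], size=(500, 3))
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # An exfiltration-like session: large outbound transfer, many failures.
    suspicious = np.array([[500, 3, 6]])
    print(model.predict(suspicious))  # -1 means "anomalous"

The point is that nothing here matches a known signature; the model flags the session simply because it sits far outside learned normal behaviour.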

Phishing detection. AI analysis of email content, sender behaviour, and link destinations improves phishing detection beyond simple keyword filtering. Given how many successful attacks start with phishing, this matters.
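
As a toy illustration of scoring multiple signals rather than keywords alone, here is a sketch in Python. The features, weights, and domain names are invented for the example and are not a real detection model.

    # Toy phishing scorer: combines several weak signals.
    # Feature names, weights, and domains are illustrative assumptions.
    import re
    from urllib.parse import urlparse

    def phishing_signals(sender: str, display_name: str,
                         body: str, links: list[str]) -> float:
        score = 0.0
        # Display name claims IT support but sender domain doesn't match.
        if "helpdesk" in display_name.lower() \
                and not sender.endswith("@hospital.example.org"):
            score += 0.4
        # Urgency language is a weak signal on its own, so low weight.
        if re.search(r"\b(urgent|immediately|suspended)\b", body, re.I):
            score += 0.2
        # Links pointing outside the organisation's domain.
        for link in links:
            host = urlparse(link).hostname or ""
            if host and not host.endswith("hospital.example.org"):
                score += 0.3
        return min(score, 1.0)

    print(phishing_signals("it@h0spital-login.example.com", "IT Helpdesk",
                           "Your account will be suspended immediately.",
                           ["https://h0spital-login.example.com/reset"]))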

User behaviour analytics. AI learns normal user behaviour patterns and flags deviations—a user accessing files they’ve never accessed, at times they don’t normally work, from unusual locations. These anomalies might indicate compromised credentials.
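
Here is a deliberately simple sketch of the baseline-and-deviation idea: learn a per-user norm for records accessed per shift, then flag large departures. The three-standard-deviation threshold is an illustrative assumption.

    # User-behaviour sketch: per-user baseline, flag large deviations.
    # The 3-standard-deviation threshold is an illustrative assumption.
    from statistics import mean, stdev

    history = [42, 38, 51, 45, 40, 47, 39, 44]  # records accessed per shift
    baseline, spread = mean(history), stdev(history)

    def is_anomalous(todays_count: int, threshold: float = 3.0) -> bool:
        return abs(todays_count - baseline) > threshold * spread

    print(is_anomalous(46))   # False: within this user's normal range
    print(is_anomalous(400))  # True: possible compromised credentials

Real products model far richer behaviour (locations, times, file types), but the underlying logic is the same: a baseline per user, and alerts on deviation.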

Automated response. AI can trigger automatic responses to detected threats—isolating affected systems, blocking suspicious connections, alerting security teams—faster than any human could react.
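
A minimal sketch of a tiered response, assuming high-confidence detections act first and everything escalates to humans. The isolate_host() and page_team() functions are hypothetical placeholders for whatever your security tooling actually exposes.

    # Tiered automated-response sketch. isolate_host() and page_team()
    # are hypothetical stand-ins for real EDR/SOAR integrations.
    def isolate_host(host: str) -> None:
        print(f"[action] network-isolating {host}")

    def page_team(message: str) -> None:
        print(f"[alert] {message}")

    def respond(alert: dict) -> None:
        # Act automatically only on high-confidence detections;
        # always escalate to the security team regardless.
        if alert["confidence"] >= 0.9:
            isolate_host(alert["host"])
        page_team(f"{alert['type']} on {alert['host']} "
                  f"(confidence {alert['confidence']:.0%})")

    respond({"type": "ransomware-like encryption burst",
             "host": "ward-pc-17", "confidence": 0.95})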

For healthcare specifically, AI can help address the challenge of protecting environments that include diverse devices (medical equipment, IoT devices, legacy systems) that traditional security tools struggle to cover.

The New Vulnerabilities

AI also creates new security challenges:

AI system compromise. Clinical AI systems are attack targets. Compromising a diagnostic AI could cause patient harm. Manipulating a treatment recommendation system could have serious consequences.

Adversarial attacks—inputs designed to make AI systems behave incorrectly—are well documented in research. Subtly modifying a radiograph so that an AI misreads it is, at least in principle, feasible.
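
To show the mechanism, here is a fast-gradient-sign (FGSM-style) sketch on a toy linear classifier: the input is nudged in the direction that most increases the model's loss. The weights and inputs are arbitrary stand-ins, not a real diagnostic model.

    # FGSM-style sketch: a small, targeted perturbation flips a prediction.
    # Toy logistic model; weights and inputs are arbitrary stand-ins.
    import numpy as np

    w, b = np.array([2.0, -1.5, 0.5]), 0.1   # toy classifier parameters

    def predict(x):
        return 1 / (1 + np.exp(-(w @ x + b)))  # probability of class 1

    x = np.array([0.4, 0.1, 0.3])     # a benign input, true class 1
    y = 1.0
    grad_x = (predict(x) - y) * w     # d(cross-entropy)/dx for this model
    x_adv = x + 0.25 * np.sign(grad_x)  # epsilon = 0.25, illustrative
    print(predict(x), predict(x_adv))   # true-class probability drops below 0.5

The perturbation is small and structured, which is exactly why such attacks can be hard to spot by eye in an image.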

Data poisoning. AI systems that learn from new data can be compromised by introducing malicious training data. If attackers can inject false patterns into training data, they can influence AI behaviour.
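
A small synthetic illustration of label-flipping poisoning: a handful of mislabelled training points visibly shifts a simple classifier's behaviour on borderline cases. The data and the number of flipped labels are invented for the example.

    # Label-flipping sketch: poisoned training data shifts the model.
    # Synthetic data; assumes scikit-learn is installed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    clean = LogisticRegression().fit(X, y)

    y_poisoned = y.copy()
    y_poisoned[:15] = 1              # attacker flips 15 class-0 labels
    poisoned = LogisticRegression().fit(X, y_poisoned)

    probe = np.array([[2.0, 2.0]])   # a borderline case
    # The poisoned model assigns borderline cases a higher class-1 probability.
    print(clean.predict_proba(probe), poisoned.predict_proba(probe))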

Model theft. AI models trained on clinical data contain information about that data. Extracting a model could expose patient information or proprietary clinical insights.

API vulnerabilities. Cloud-based AI expands the attack surface through its APIs. If an AI service is compromised, every organisation using it is potentially affected.

Supply chain risks. AI often involves multiple vendors and dependencies. A vulnerability in any component can affect the overall system.

Healthcare-Specific Concerns

Several factors make healthcare AI security particularly challenging:

Medical device complexity. AI embedded in medical devices inherits medical device security challenges—long lifecycles, limited update mechanisms, safety considerations that complicate patching.

Integration requirements. AI systems need access to clinical data, which means connections to core clinical systems. Each integration point is a potential vulnerability.

Regulatory constraints. AI registered with the Therapeutic Goods Administration (TGA) can’t be arbitrarily modified for security updates without considering the regulatory implications.

Limited security expertise. Healthcare organisations often lack the in-house cybersecurity expertise to assess AI-specific risks.

Risk Mitigation Strategies

For organisations implementing clinical AI:

Vendor security assessment. Before implementing AI, assess the vendor’s security practices. SOC 2 compliance, penetration testing, security certifications—these matter.

Network segmentation. AI systems shouldn’t have unnecessary access to other systems. Segment networks to limit the blast radius if an AI system is compromised.

Access controls. The principle of least privilege applies to AI. Systems should have the minimum access to data and other systems needed to do their job.
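
One simple way to express this is an explicit allowlist for the AI service account, with everything else denied by default. A sketch follows; the system and action names are hypothetical.

    # Least-privilege sketch: explicit allowlist of (system, action) pairs
    # for an AI service account. System and action names are hypothetical.
    ALLOWED = {
        ("radiology-pacs", "read"),   # fetch images to analyse
        ("results-queue", "write"),   # post findings for clinician review
    }

    def authorise(system: str, action: str) -> bool:
        return (system, action) in ALLOWED

    print(authorise("radiology-pacs", "read"))   # True
    print(authorise("patient-master", "read"))   # False: not needed, so denied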

Monitoring. AI systems should be monitored for anomalous behaviour just like other critical systems. Unusual query patterns, unexpected outputs, or performance changes might indicate compromise.
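
As one example of an output-level check, here is a sketch that compares the AI's recent positive-finding rate against a historical baseline and flags large shifts. The baseline rate and tolerance are invented for illustration; a sudden shift could indicate compromise, upstream data problems, or drift.

    # Output-monitoring sketch: alert on large shifts in the positive rate.
    # Baseline rate and tolerance are illustrative assumptions.
    from statistics import mean

    baseline_rate = 0.12              # historical share of positive findings

    def check_drift(recent: list[int], tolerance: float = 0.05) -> bool:
        # recent: 1 for a positive finding, 0 otherwise
        return abs(mean(recent) - baseline_rate) > tolerance

    print(check_drift([0, 0, 0, 1, 0, 0, 0, 0, 0, 0]))  # False: 0.10, normal
    print(check_drift([1, 1, 1, 1, 1, 0, 0, 0, 0, 0]))  # True: 0.50, investigate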

Update processes. Establish processes for security updates to AI systems, including understanding regulatory implications for TGA-registered devices.

Incident response planning. Include AI system compromise in your incident response plans. Know what you’d do if a clinical AI were suspected of being compromised.

Staff training. Staff interacting with AI systems need security awareness training relevant to AI risks.

The Governance Intersection

AI security governance should connect to both cybersecurity governance and clinical AI governance:

  • Security teams need to understand clinical AI risks
  • Clinical informatics teams need to understand security requirements
  • Incident response should involve both perspectives

Siloed governance creates gaps. A clinical AI incident is both a security incident and a clinical safety incident.

Looking Forward

Some developments I’m watching:

AI security standards. Industry and regulatory standards for AI security are emerging. These will provide clearer guidance for healthcare organisations.

AI-specific security tools. Products specifically designed to protect AI systems—monitoring for adversarial inputs, detecting model manipulation—are emerging.

Regulatory requirements. The TGA and international regulators are giving more explicit consideration to cybersecurity requirements for AI medical devices.

Threat evolution. As healthcare AI becomes more common, it becomes a more attractive target. Threat actors are likely developing AI-specific attack capabilities.

Practical First Steps

If you’re implementing clinical AI and want to improve security:

  1. Include security in vendor evaluation. Add security criteria to your AI vendor assessment process.

  2. Involve security teams early. Don’t treat AI implementation as purely a clinical informatics project.

  3. Document your AI inventory. Know what AI systems you have, where they’re deployed, and what data they access (a minimal inventory sketch follows this list).

  4. Assess integration risks. Understand how AI systems connect to clinical systems and what vulnerabilities that creates.

  5. Plan for incidents. Develop response plans for AI system compromise scenarios.
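
For step 3, a structured record per system is a reasonable starting point. The fields below are suggestions only, and the example entry is entirely hypothetical.

    # Minimal AI-inventory record; fields are suggestions, entry hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str
        vendor: str
        deployment: str                          # e.g. "on-prem", "vendor cloud"
        data_accessed: list[str] = field(default_factory=list)
        integrations: list[str] = field(default_factory=list)
        tga_registered: bool = False

    inventory = [
        AISystemRecord(
            name="chest-xray-triage",
            vendor="ExampleVendor",              # hypothetical
            deployment="vendor cloud",
            data_accessed=["radiology images", "order metadata"],
            integrations=["PACS", "RIS"],
            tga_registered=True,
        )
    ]
    print(inventory[0])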

AI in healthcare creates genuine security challenges. But they’re manageable with appropriate attention. The organisations that address AI security proactively will be better positioned than those that discover problems through incidents.


Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.