Patient Consent for AI-Assisted Care: What Australian Health Services Need to Get Right


When should patients be told that AI is involved in their care? When should they be able to refuse? These questions are becoming increasingly urgent as clinical AI deployment expands.

I’ve been part of several governance committees grappling with these issues. The conversations reveal how little consensus exists—and how much organisations are making it up as they go.

The Current Situation

Australian law doesn’t have specific consent requirements for AI-assisted clinical care. General consent principles apply, but how they apply to AI specifically is ambiguous. AI consulting firms in Sydney and elsewhere report that this is one of the most common governance questions they receive from healthcare clients.

The NHMRC guidelines on general clinical consent require that patients receive information material to their treatment decisions. But is the involvement of AI in a diagnosis or treatment recommendation “material”?

Different organisations are reaching different conclusions.

The Spectrum of Current Practice

Across Australian health services, I see the full range:

No disclosure. AI is used in clinical workflows without any patient notification. The view is that AI is a clinical tool like any other—patients aren’t told about every software system or imaging protocol used in their care.

Passive disclosure. Information about AI use is included in general consent forms or patient information materials. Patients who read carefully might notice; most probably don’t.

Active disclosure. Clinicians inform patients when AI is specifically used in their care, usually at the time of the relevant clinical interaction.

Explicit consent. Patients are asked to agree to AI-assisted care, with the option to receive care without AI involvement.

Each approach has defenders. Each has problems.

The Case for Full Transparency

Strong arguments support comprehensive disclosure and consent:

Patient autonomy. Patients have a right to know how clinical decisions affecting them are made. AI involvement is material information for many patients.

Trust maintenance. Discovering AI was used without their knowledge can erode patient trust. Transparency prevents this.

Error accountability. If an AI-influenced decision leads to harm, patients should know AI was involved. This affects how they understand what happened.

Research ethics parallel. We require consent for research participation. AI that’s still being evaluated has research-like characteristics.

Future-proofing. As AI becomes more prevalent and public awareness grows, retrospective discovery that AI was used without consent will look bad.

The Case for Limited Disclosure

Counter-arguments have weight too:

Practical constraints. In busy clinical environments, adding consent discussions for every AI-touched decision may not be feasible. Where does it end?

Anxiety induction. For some patients, knowing AI is involved may increase rather than decrease anxiety, even when AI improves care quality.

Consent fatigue. Patients are already asked to work through extensive consent processes. Adding AI consent to every interaction may dilute the meaningfulness of consent overall.

Misleading distinctions. Non-AI clinical decision support tools, protocols, and guidelines also influence clinical decisions. Singling out AI for consent may imply a difference that isn’t meaningful.

Care refusal consequences. If patients refuse AI-assisted care, they may receive worse care. Is that ethical?

What I Recommend

After years of thinking about this, here’s where I’ve landed:

Transparency by Default

Organisations should be transparent about AI use in clinical care. This doesn’t mean individual consent for every AI-touched interaction, but it means:

  • Patient-facing information about which AI systems are used and for what purposes
  • Website, intake materials, and prominent signage communicating AI use
  • Staff trained to answer patient questions about AI
  • Clear pathways for patients who want to know more

The consent approach should match how AI is used:

Administrative/workflow AI (e.g., scheduling, documentation). General disclosure sufficient. No individual consent required.

Clinical decision support (e.g., alerts, risk scores, image pre-screening). Disclosed in consent materials. Patients informed that AI tools support clinical decisions. No individual opt-out, but transparency about use.

Significant diagnostic or treatment influence (e.g., AI as primary screen, AI-driven treatment recommendations). More explicit disclosure at time of care. Patient should know AI is specifically informing their care at that moment.

Experimental or emerging AI (e.g., pilot implementations, novel applications). Explicit consent, including option to receive care without AI. This parallels research ethics requirements.
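To make the tiers concrete, here is a minimal sketch of how a governance team might encode them in a policy module. The category names, tier labels, and the required_consent helper are my own illustrative assumptions, not terms from NHMRC or ADHA guidance.

```python
# Illustrative only: category names, tier labels, and policy wording are
# assumptions made for this sketch, not drawn from any regulation or standard.
from enum import Enum, auto


class AIUseCategory(Enum):
    ADMINISTRATIVE = auto()          # scheduling, documentation
    DECISION_SUPPORT = auto()        # alerts, risk scores, image pre-screening
    SIGNIFICANT_INFLUENCE = auto()   # AI as primary screen, AI-driven recommendations
    EXPERIMENTAL = auto()            # pilots, novel applications


# Each use category maps to the disclosure/consent approach described above.
CONSENT_POLICY = {
    AIUseCategory.ADMINISTRATIVE: "General disclosure; no individual consent",
    AIUseCategory.DECISION_SUPPORT: "Disclosed in consent materials; no individual opt-out",
    AIUseCategory.SIGNIFICANT_INFLUENCE: "Explicit disclosure at the time of care",
    AIUseCategory.EXPERIMENTAL: "Explicit consent, with the option of care without AI",
}


def required_consent(category: AIUseCategory) -> str:
    """Return the consent approach for a given AI use category."""
    return CONSENT_POLICY[category]


if __name__ == "__main__":
    print(required_consent(AIUseCategory.EXPERIMENTAL))
```

The value of writing it down this plainly is that every new AI system has to be assigned a category, which forces the classification conversation to happen before deployment rather than after.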

Always Allow Questioning

Regardless of formal consent requirements, patients should always be able to ask about AI involvement and receive honest answers. Staff need training and authority to respond.

Document Your Approach

Whatever approach you take, document your reasoning. Governance decisions should be recorded, including:

  • What was considered
  • What was decided and why
  • Who made the decision
  • When it will be reviewed

This protects the organisation and demonstrates good faith.
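For what it’s worth, here is one possible shape for such a record, assuming a simple structured template is enough. Every field name and example value is an assumption on my part; the point is only that the four items above map naturally onto a small, reviewable data structure.

```python
# A possible governance decision record; field names and the example values
# are assumptions for illustration, not a mandated template.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class GovernanceDecision:
    topic: str                               # what was considered
    decision: str                            # what was decided
    rationale: str                           # why it was decided
    decided_by: str                          # who made the decision
    decided_on: date
    review_due: date                         # when it will be reviewed
    options_considered: list[str] = field(default_factory=list)


# Hypothetical example entry.
example = GovernanceDecision(
    topic="Consent approach for imaging pre-screening AI",
    decision="Disclosure in imaging consent materials; no individual opt-out",
    rationale="Classified as clinical decision support; clinician retains the final read",
    decided_by="Clinical AI governance committee",
    decided_on=date(2025, 1, 1),
    review_due=date(2026, 1, 1),
    options_considered=["No disclosure", "Active disclosure at time of scan"],
)
```

The format matters far less than the discipline: each of the four items becomes a field someone has to fill in, and that record is exactly the documentation that protects you later.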

Implementation Challenges

Several challenges make this difficult in practice:

Knowing where AI is. Many organisations lack complete inventories of AI used in clinical workflows. Hard to disclose what you don’t know about.

Boundary definitions. What counts as “AI” for disclosure purposes? Machine learning algorithms clearly qualify. What about simpler decision rules? Statistical risk scores? The line is blurry.

Staff knowledge. Clinicians often don’t know when AI is influencing their workflows. Systems run in the background. Training and visibility are needed.

Patient comprehension. Even with disclosure, do patients understand what AI involvement means? Meaningful consent requires comprehension, not just information provision.

Opt-out logistics. If patients can refuse AI-assisted care, how is that operationalised? Do clinicians know? What’s the alternative pathway?
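Several of these challenges point at the same gap: the absence of a basic inventory. Here is a rough sketch of what one entry in a clinical AI inventory might capture, covering where AI sits, how it is categorised, how it is disclosed, and what the opt-out pathway is. The fields and the example system are hypothetical, not a template any body has endorsed.

```python
# A rough inventory entry; every field and the example system are hypothetical.
from dataclasses import dataclass


@dataclass
class AISystemEntry:
    name: str                  # system or vendor product name
    clinical_purpose: str      # what it does in the workflow
    use_category: str          # administrative / decision support / significant / experimental
    disclosure_approach: str   # how patients are told about it
    opt_out_pathway: str       # alternative pathway if a patient refuses, or "none"
    accountable_owner: str     # who answers for the system


inventory = [
    AISystemEntry(
        name="Inpatient deterioration risk score",   # hypothetical system
        clinical_purpose="Flags deteriorating inpatients for senior review",
        use_category="decision support",
        disclosure_approach="Described in admission consent materials",
        opt_out_pathway="none",
        accountable_owner="Director of Clinical Informatics",
    ),
]
```

Even a spreadsheet with these columns answers the first question (where is AI in our workflows?) and makes the boundary and opt-out questions explicit rather than implicit.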

The ADHA Perspective

The Australian Digital Health Agency has addressed some of these issues in guidance about My Health Record and data use, but specific AI consent guidance remains limited. I expect more detailed national guidance to emerge as AI deployment increases.

For now, organisations are largely on their own in determining appropriate approaches.

What I See Going Wrong

Common problems I encounter:

No governance attention. AI gets deployed without consent issues being addressed. Nobody asked the question.

Legal minimalism. “We’re not legally required to disclose” becomes the operating principle, ignoring ethical and trust considerations.

Impractical requirements. Consent requirements that can’t actually be implemented in clinical workflows, leading to either non-compliance or abandoned AI.

Consumer advocate exclusion. Decisions made without consumer input. Health services think they know what patients want without asking.

When organisations are working through these questions, bringing in external perspectives helps. AI consultants in Melbourne and health informatics advisors often have experience from multiple organisations that helps benchmark approaches. Consumer representatives are essential.

Looking Forward

I expect patient consent for AI-assisted care to become more standardised over time. Regulatory guidance will clarify requirements. Professional bodies will issue standards. Case law may develop.

Organisations that develop thoughtful, documented approaches now will be positioned well for whatever emerges. Those that ignore the question may face retrospective challenges.

My advice: don’t wait for someone else to solve this. Engage your governance structures, involve consumer representatives, and develop an approach you can defend.

Your patients will increasingly ask. Have a good answer ready.


Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.