The Ethics of Clinical Decision Support AI: Where I Think We're Getting It Wrong
I’ve been thinking about AI ethics in healthcare for most of my career. And honestly, I’m worried we’re focusing on the wrong things.
The current conversation about ethical AI in healthcare is dominated by abstract principles: fairness, transparency, accountability. These matter. But they’re often discussed in ways that don’t connect to actual implementation decisions.
Let me share where I think we’re getting it wrong—and what we might do differently.
We Talk About Bias Without Measuring It
Every healthcare AI discussion mentions algorithmic bias. Training data that underrepresents certain populations. Models that perform worse for some demographic groups. Real problems, well documented.
But here’s what I’ve observed: most healthcare organisations implementing AI have no plan to measure whether bias exists in their specific deployment.
I was reviewing a clinical AI implementation plan last month. It included a section on “ethical considerations” that acknowledged bias as a potential issue. The mitigation strategy? “The vendor has assured us their model was trained on diverse data.”
That’s not a mitigation strategy. That’s hope.
What would actual bias measurement look like? Tracking model performance across demographic groups in your patient population. Comparing outcomes for patients where AI influenced clinical decisions versus those where it didn’t. Looking for systematic differences in how AI flags cases across populations.
This requires infrastructure for measurement. It requires someone with the skills to analyse the data. And it requires willingness to act on what you find—including potentially discontinuing AI use if significant bias emerges.
I don’t see many organisations building this infrastructure. We’re deploying AI systems with theoretical commitments to fairness but no practical capacity to verify it.
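For illustration, here is a minimal sketch of what that measurement capacity could look like, written in Python against a hypothetical table with one row per AI-reviewed case. The column names and the structure are assumptions for the example, not a standard; a real implementation would also need confidence intervals and careful handling of small subgroups.

```python
# Minimal sketch: per-group performance monitoring for a deployed clinical AI.
# Column names and table structure are illustrative assumptions.
import pandas as pd

def per_group_performance(df: pd.DataFrame,
                          group_col: str = "demographic_group",
                          pred_col: str = "ai_flagged",
                          outcome_col: str = "confirmed_finding") -> pd.DataFrame:
    """Sensitivity and false positive rate of the AI, broken down by group."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub[outcome_col] == 1]  # cases where the finding was confirmed
        negatives = sub[sub[outcome_col] == 0]  # cases where it was ruled out
        rows.append({
            "group": group,
            "n": len(sub),
            # Sensitivity: of the confirmed findings, how many did the AI flag?
            "sensitivity": (positives[pred_col] == 1).mean() if len(positives) else float("nan"),
            # False positive rate: of the ruled-out cases, how many did it still flag?
            "false_positive_rate": (negatives[pred_col] == 1).mean() if len(negatives) else float("nan"),
        })
    return pd.DataFrame(rows)

def sensitivity_gap(report: pd.DataFrame) -> float:
    """Largest difference in sensitivity between any two groups."""
    return float(report["sensitivity"].max() - report["sensitivity"].min())
```

The point is not the code. The point is that someone in the organisation has to own this report, run it continuously, and be empowered to act when the gap gets too large.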
Transparency Isn’t Just Documentation
When clinicians ask “how does this AI work?” they’re not asking for a technical explanation of neural network architectures. They’re asking whether they can trust it.
True clinical transparency means:
Uncertainty communication. AI systems should convey confidence levels, not just recommendations. “High probability of pneumonia” is more useful than “pneumonia detected.” Some systems do this well. Many don’t.
Edge case honesty. Every AI system has situations where it performs poorly. Clinicians need to know what those are. If your chest X-ray AI struggles with certain patient positions or equipment types, that should be clearly communicated.
Failure mode visibility. When AI makes mistakes, what kind of mistakes are they? False negatives (missing findings that exist) versus false positives (flagging findings that aren’t there) have different clinical implications.
I’ve seen vendor documentation that runs to hundreds of pages without clearly communicating any of this. Length isn’t transparency.
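To make the uncertainty point concrete, here is a minimal sketch of the difference between a binary output and a graded one. The probability cut-offs and wording are illustrative assumptions only; real thresholds would need a properly calibrated model and clinical validation.

```python
def communicate_finding(probability: float, finding: str = "pneumonia") -> str:
    """Turn a raw model probability into a graded message with an explicit
    uncertain band, rather than a flat 'detected / not detected'.
    Cut-offs are illustrative, not clinically validated."""
    if probability >= 0.90:
        return f"High probability of {finding} ({probability:.0%}). Review recommended."
    if probability >= 0.60:
        return f"Possible {finding} ({probability:.0%}). Clinical correlation advised."
    if probability >= 0.30:
        return f"Indeterminate for {finding} ({probability:.0%}). Model confidence is low."
    return f"{finding.capitalize()} unlikely ({probability:.0%})."

# A borderline case is surfaced as uncertainty, not a verdict:
print(communicate_finding(0.47))
# Indeterminate for pneumonia (47%). Model confidence is low.
```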
The Consent Question We’re Avoiding
Here’s an ethical issue I don’t see discussed enough: do patients have a right to know when AI is involved in their care?
Currently, the answer in most Australian health services is effectively “no.” AI operates in the background. Radiologists use AI-assisted interpretation. Triage systems use AI scoring. Pathology labs use AI image analysis. Patients are rarely informed.
The typical argument is that AI is just a tool—like any other diagnostic technology. Patients aren't asked for informed consent about which brand of MRI scanner is used, so why would they need consent for AI?
I’m not entirely convinced by this argument.
There’s something qualitatively different about AI systems that make clinical recommendations. They’re not just measuring or imaging—they’re interpreting. And if that interpretation influences clinical decisions, patients might reasonably want to know.
I’m not saying we need formal consent for every AI-assisted clinical interaction. That would be unworkable. But I do think we need better disclosure. A line in patient admission information. Signage in radiology departments. Something that acknowledges AI’s role.
The Australian Digital Health Agency (ADHA) has started thinking about this in the context of My Health Record, but that conversation is still at an early stage.
Accountability Gaps in Practice
On paper, clinical responsibility for AI-assisted decisions remains with human clinicians. The AI advises; the human decides.
In practice, this is more complicated.
Consider a busy emergency department. An AI triage system flags a patient as low urgency. The patient waits. Their condition deteriorates. Who’s responsible?
The textbook answer: the clinician who accepted the triage recommendation. They should have exercised independent clinical judgment.
The realistic answer: in a system processing hundreds of patients with limited clinician time, AI recommendations shape what gets attention. The human oversight we assume exists may be perfunctory.
This isn’t a critique of clinicians. It’s a recognition that the accountability framework we’ve designed assumes clinical review that may not happen in practice. We’re deploying AI in contexts where human oversight is a formality.
If we’re serious about accountability, we need to either:
- Ensure genuine human oversight (which has resource implications), or
- Develop frameworks where AI systems themselves carry some form of accountability (which has legal and regulatory implications)
Neither is happening. We’re in an awkward middle ground where theoretical accountability doesn’t match operational reality.
What Should We Actually Do?
I want to end with concrete suggestions, not just critique.
Build measurement infrastructure. Before you deploy clinical AI, build the capacity to measure its performance across your patient population. Not once—continuously. Budget for this.
Require clinical uncertainty communication. Don’t accept AI systems that give binary outputs. Demand confidence levels and clear communication of limitations.
Move toward disclosure. Start informing patients that AI is part of their care. This doesn’t need to be burdensome—a standard disclosure in admission paperwork would be a start.
Honest audit of oversight. Examine whether human clinical oversight of AI recommendations is actually happening. If it's not, either change the operational model or acknowledge the implications. A sketch of one possible audit metric follows these suggestions.
Ethics as a function, not a document. Ethics review shouldn’t happen once before implementation and then never again. Build ongoing ethical review into your AI governance structure.
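As one example of what that oversight audit could measure, the sketch below computes how often clinicians actually change an AI triage recommendation, and how quickly they sign off on it. The column names are assumptions for the example. A very low override rate isn't proof of rubber-stamping (it could also reflect a well-calibrated model), but it is the kind of number that tells you whether oversight is plausibly real.

```python
# Minimal sketch of oversight audit metrics. Column names are
# illustrative assumptions about how triage decisions are logged.
import pandas as pd

def override_rate(df: pd.DataFrame,
                  ai_col: str = "ai_triage_category",
                  final_col: str = "final_triage_category") -> float:
    """Fraction of cases where the final triage category differs from the AI's."""
    return float((df[ai_col] != df[final_col]).mean())

def review_time_summary(df: pd.DataFrame,
                        seconds_col: str = "seconds_to_sign_off") -> pd.Series:
    """Distribution of time between the AI recommendation appearing and
    clinician sign-off. Near-instant acceptance across the board is a
    signal worth investigating, not a conclusion."""
    return df[seconds_col].describe()
```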
None of this is easy. But I think it’s necessary.
We have an opportunity to get healthcare AI implementation right. That means being honest about the ethical challenges, not just acknowledging them in principle documents that no one reads after approval.
The technology isn’t the hard part anymore. The governance is.
Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.