AI in Medical Imaging: Moving Beyond Detection to Prediction and Planning
When people think of AI in medical imaging, they usually think detection—finding a nodule on a CT scan, identifying a fracture on an X-ray, spotting a tumour on an MRI. Detection AI has been the focus of most development and deployment.
But the next wave of imaging AI goes further: predicting disease progression, quantifying treatment response, and supporting treatment planning. This is where things get interesting.
Beyond Detection: What’s Emerging
Risk prediction. AI that analyses imaging to predict future disease risk, not just identify current abnormalities. Examples:
- Coronary CT analysis predicting heart attack risk beyond standard calcium scoring
- Chest X-ray analysis predicting lung cancer risk in screening populations
- Brain MRI analysis predicting dementia progression
These systems don’t just tell you what’s there; they estimate what’s likely to happen.
Quantification. AI that measures things precisely rather than just detecting them:
- Tumour volume measurement over time to assess treatment response
- Cardiac function quantification from imaging (ejection fraction, wall motion)
- Brain volume measurement for neurodegenerative disease monitoring
Manual measurement is time-consuming and variable. AI quantification is faster and more reproducible.
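To make the quantification point concrete, here is a minimal sketch of what these measurements typically reduce to: counting segmented voxels and scaling by voxel spacing to get a volume, and combining two volumes into a derived index such as ejection fraction. The mask, spacing, and cardiac volumes below are made-up placeholders, not output from any particular product.

```python
import numpy as np

# Hypothetical binary segmentation mask (1 = tumour voxel) from an AI model,
# plus the scan's voxel spacing in millimetres (x, y, z).
mask = np.zeros((512, 512, 120), dtype=np.uint8)
mask[200:230, 240:265, 50:70] = 1          # stand-in for a segmented lesion
voxel_spacing_mm = (0.70, 0.70, 1.25)

# Tumour volume: voxel count x volume of a single voxel, reported in millilitres.
voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0
tumour_volume_ml = mask.sum() * voxel_volume_ml

# Ejection fraction from AI-segmented cardiac volumes (hypothetical values, in mL).
end_diastolic_volume_ml = 142.0
end_systolic_volume_ml = 61.0
ejection_fraction_pct = 100.0 * (end_diastolic_volume_ml - end_systolic_volume_ml) / end_diastolic_volume_ml

print(f"Tumour volume: {tumour_volume_ml:.1f} mL")
print(f"Ejection fraction: {ejection_fraction_pct:.0f}%")
```

Given the same segmentation, the downstream calculation is deterministic; the variability in quantification sits almost entirely in the delineation step, which is where AI consistency helps.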
Treatment planning support. AI that assists with planning interventions:
- Radiation therapy planning from imaging data
- Surgical planning with 3D reconstruction and anatomy identification
- Intervention guidance during procedures
These applications use imaging data to inform subsequent treatment, not just diagnosis.
Image enhancement. AI that improves image quality:
- Lower-radiation-dose CT, with AI reconstruction recovering image quality
- Faster MRI scans, with AI reconstructing images from undersampled (incomplete) acquisitions (see the sketch below)
- Image standardisation across different scanner types
These applications aren’t diagnostic but enable better diagnostics.
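To make "undersampled" concrete for the MRI example above: accelerated scans acquire only a fraction of k-space, and reconstruction has to recover the rest. The toy sketch below retrospectively undersamples a synthetic image and reconstructs it with plain zero-filling; an AI reconstruction model would be trained to map that degraded input back towards the fully sampled image. The phantom and sampling pattern are invented for illustration.

```python
import numpy as np

# Toy illustration of "faster MRI via incomplete data": sample only a fraction
# of k-space lines and reconstruct. In practice an AI model would fill in the
# missing information; here we simply zero-fill to show what is lost.
image = np.zeros((128, 128))
image[40:90, 30:100] = 1.0                    # hypothetical anatomy (a bright block)

kspace = np.fft.fftshift(np.fft.fft2(image))   # fully sampled k-space

mask = np.zeros_like(kspace, dtype=bool)
mask[::4, :] = True                            # keep every 4th phase-encode line (~4x faster scan)
mask[56:72, :] = True                          # always keep the low-frequency centre

undersampled = np.where(mask, kspace, 0)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

# The aliased, blurred zero-filled image is the input an AI reconstruction
# model would be trained to restore towards the fully sampled image.
error = np.abs(zero_filled - image).mean()
print(f"Sampled {mask.mean():.0%} of k-space; mean reconstruction error {error:.3f}")
```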
The Evidence Landscape
Detection AI has accumulated substantial evidence. Hundreds of studies document that AI can detect specific abnormalities with accuracy approaching or exceeding that of specialists.
For these newer applications, evidence is earlier:
Risk prediction evidence exists but is mostly from retrospective studies. Prospective validation—showing that predictions lead to better outcomes—is limited. The question isn’t whether AI can predict risk; it’s whether those predictions improve patient care.
Quantification evidence is stronger because measurement consistency is relatively easy to validate. AI measurement is often more reproducible than human measurement. Whether that reproducibility translates to better patient outcomes is a separate question.
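As a sketch of why this is comparatively easy to validate: agreement and repeatability statistics can be computed directly from paired reads. The arrays below are made-up values, and the coefficient-of-variation formula is one simple repeatability measure among several, not a prescribed standard.

```python
import numpy as np

# Hypothetical paired tumour-volume measurements (mL) of the same lesions:
# two repeated manual reads and two repeated AI reads. Made-up values.
manual_read_1 = np.array([12.1, 8.4, 25.3, 5.9, 17.2])
manual_read_2 = np.array([13.0, 7.6, 23.8, 6.8, 18.5])
ai_read_1     = np.array([12.4, 8.1, 24.9, 6.1, 17.6])
ai_read_2     = np.array([12.5, 8.1, 25.0, 6.0, 17.7])

def repeatability_cv(read_a, read_b):
    """Within-subject coefficient of variation for two repeated measurements (%)."""
    means = (read_a + read_b) / 2
    within_sd = np.abs(read_a - read_b) / np.sqrt(2)
    return 100 * np.mean(within_sd / means)

print(f"Manual repeatability CV: {repeatability_cv(manual_read_1, manual_read_2):.1f}%")
print(f"AI repeatability CV:     {repeatability_cv(ai_read_1, ai_read_2):.1f}%")

# Bland-Altman agreement between AI and manual (first reads): bias and 95% limits.
diff = ai_read_1 - manual_read_1
bias = diff.mean()
limits = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"AI vs manual bias: {bias:.2f} mL, limits of agreement: {limits[0]:.2f} to {limits[1]:.2f} mL")
```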
Treatment planning evidence is emerging. Early studies show AI can match expert planners and sometimes identify better approaches. But treatment planning is complex and context-dependent; broad claims should be viewed cautiously.
Clinical Implications
For healthcare organisations, these emerging applications create opportunities and challenges:
Opportunity: Enhanced clinical value. Imaging that provides risk prediction, precise quantification, and planning support is more valuable than imaging that only detects. This could justify investment in AI beyond efficiency gains.
Opportunity: Better patient outcomes. If risk prediction enables earlier intervention, and quantification enables better treatment monitoring, patients benefit. That’s ultimately why this matters.
Challenge: Integration complexity. Detection AI has a simple workflow—AI flags findings, radiologist reviews. Prediction and planning AI require more complex integration with clinical pathways. A risk prediction that doesn’t trigger appropriate action has no value.
Challenge: Clinical interpretation. What does a “35% five-year risk of cardiac event” mean for clinical care? How should clinicians communicate these probabilities to patients? We don’t have well-established frameworks for acting on AI risk predictions.
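There is no settled framework, but one common risk-communication technique is to restate the probability as a natural frequency. A minimal sketch, reusing the 35% figure from the example above; the wording is illustrative rather than a validated patient-communication script.

```python
# Restating a predicted probability as a natural frequency for a patient conversation.
predicted_risk = 0.35  # AI-predicted five-year risk of a cardiac event (example figure)

events_per_100 = round(predicted_risk * 100)
statement = (
    f"Of 100 people with imaging results like yours, about {events_per_100} would be "
    f"expected to have a cardiac event within five years; about {100 - events_per_100} would not."
)
print(statement)
```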
Challenge: Validation requirements. Newer applications need validation in Australian clinical contexts. Evidence from US populations and healthcare systems doesn’t automatically translate.
Regulatory Considerations
TGA classification for these applications can be complex:
Detection AI typically fits well-established medical device categories with clear regulatory pathways.
Prediction AI raises questions about clinical claims. Predicting disease progression is a stronger claim than detecting current abnormalities. Regulatory scrutiny is appropriately higher.
Treatment planning AI may fall under different categories depending on how directly it influences treatment. AI that suggests radiation doses faces different scrutiny than AI that provides measurements for human planning.
Ask vendors to demonstrate a clear TGA pathway before you invest in evaluation.
Implementation Readiness
If you’re interested in these emerging applications, consider readiness factors:
Detection AI first. Most organisations should establish detection AI before moving to more complex applications. Build capability progressively.
Clinical pathway integration. Think about how predictions would be used. If a risk prediction doesn’t trigger a defined clinical response, deployment is premature.
Evidence requirements. Be appropriately sceptical about vendor claims. Ask for evidence from similar populations and clinical contexts.
Clinical champion identification. These applications need clinical champions who understand both the technology and clinical implications. Radiologists are obvious partners, but treatment planning AI might involve oncologists, surgeons, or other specialists.
What I’m Watching
Several developments to track:
Large-scale prospective trials. We need studies showing that AI prediction improves outcomes, not just that predictions are accurate.
Regulatory clarity. TGA and international regulators are developing frameworks for AI that predicts and plans. Clearer guidance will help.
Integration standards. How AI predictions are communicated to clinical systems and clinicians will shape adoption.
Cost-effectiveness evidence. Prediction and planning AI may be more expensive than detection AI. Evidence of cost-effectiveness will determine sustainable adoption.
The Bigger Picture
Medical imaging is moving from description to prediction to prescription. AI accelerates this trajectory.
The value of imaging increases when it doesn’t just tell you what exists, but what’s likely to happen and what to do about it. That’s a fundamental shift in imaging’s role in clinical care.
Whether these capabilities improve patient outcomes depends on how well we integrate them into clinical pathways, how we train clinicians to use them, and how we govern their deployment.
The technology is developing. The clinical and organisational capabilities to use it effectively are what will determine impact.
Dr. Rebecca Liu is a health informatics specialist and former Chief Clinical Information Officer. She advises healthcare organisations on clinical AI strategy and implementation.