This research brief explores the intersection of clinical intuition and algorithmic dependence through the lens of a Consultant Radiologist. As AI moves from a theoretical tool to a functional teammate in diagnostic imaging, the psychological shift from active searcher to passive verifier creates a new landscape for potential error and miscalibrated trust.
Dr. Peter Johnson, MBBS, DM, FRCR, Consultant Radiologist: Dr. Peter Johnson is a distinguished Jamaican radiologist with over 30 years of clinical and academic experience. A Fellow of the Royal College of Radiologists (UK) and former Head of Radiology at the University of the West Indies (UWI), Mona, he is a pioneer in Caribbean medical imaging, notably establishing the region's first double-screening mammography service.
He is currently a Consultant at West Indies Radiology Outsourcing Limited (WIROL) and the University Hospital of the West Indies (UHWI), and his work focuses on the intersection of diagnostic accuracy and emerging technology.
1. On Observer Laziness
I asked whether AI might make radiologists lazy observers who stop hunting for anomalies the software hasn't flagged.
"It depends on how the AI is adopted: "If the AI is trusted as a stand-alone triage solution for separating 'normals' from 'abnormals...i.e. flagging an abnormality', and the radiologist won't look at the 'normals', potentially resulting in misses by the AI. Here, the understanding is that the radiologist only looks at the abnormals.....this won't cause 'laziness' but can result in significant problems if the AI is missing things.
"The other scenario is that the radiologist prioritizes the flagged 'abnormals', reporting those first. He/she then goes through the 'normals' to verify the AI's findings. In this scenario, if the radiologist finds that the AI is getting it right all of the time, he/she might just not bother to formally re-evaluate these images and essentially 'rubber stamp' the results, becoming a 'lazy' (i.e. dangerous) observer."
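To make the two adoption modes concrete, here is a minimal toy simulation. It is a sketch under stated assumptions, not a model of any real triage product: the names (Study, standalone_triage, prioritise_then_verify) and the prevalence, sensitivity, and verification-rate figures are all hypothetical, and the AI is assumed to produce no false positives for simplicity.

```python
# Toy simulation of the two adoption workflows described above.
# All names and rates are hypothetical; the AI is assumed to have
# perfect specificity (no false positives) to keep the sketch small.
import random
from dataclasses import dataclass

random.seed(42)

@dataclass
class Study:
    truly_abnormal: bool
    ai_flagged: bool

def make_worklist(n=1000, prevalence=0.05, ai_sensitivity=0.95):
    """Generate a worklist in which the AI misses ~5% of true abnormals."""
    worklist = []
    for _ in range(n):
        abnormal = random.random() < prevalence
        flagged = abnormal and random.random() < ai_sensitivity
        worklist.append(Study(abnormal, flagged))
    return worklist

def standalone_triage(worklist):
    """Scenario 1: only AI-flagged studies are read, so every study the
    AI misses in the 'normals' pile goes permanently unreviewed."""
    return sum(1 for s in worklist if s.truly_abnormal and not s.ai_flagged)

def prioritise_then_verify(worklist, verification_rate):
    """Scenario 2: flagged studies are reported first, then the 'normals'
    are verified. As trust grows and verification_rate drifts toward 0
    ('rubber stamping'), this converges on Scenario 1."""
    return sum(1 for s in worklist
               if s.truly_abnormal and not s.ai_flagged
               and random.random() >= verification_rate)

worklist = make_worklist()
print("Scenario 1, unreviewed AI misses:", standalone_triage(worklist))
print("Scenario 2, diligent verifier:  ", prioritise_then_verify(worklist, 1.0))
print("Scenario 2, rubber stamper:     ", prioritise_then_verify(worklist, 0.1))
```

The point the sketch illustrates is Dr. Johnson's: in Scenario 1 every AI miss is structurally invisible, while in Scenario 2 the same outcome emerges gradually as the radiologist's verification rate drifts toward zero.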
2. On Explainable AI
I asked whether, when the AI points to a "finding" but can't explain its logic, the radiologist is actually diagnosing the patient or just trust-falling into a black box.
"Inherently the 'logic' AI uses isn't explainable in most cases. If it is reliably and consistently getting the answers correct, we will inevitably have to trust the process."
3. On Algorithm Aversion
I asked whether trust can be recovered after the AI makes a stupid mistake that even a first-year resident wouldn't make:
"Yes. These mistakes are typically the result of training. If wrong data is introduced in the training process, mistakes will happen. The important thing is to pick these up early and re-evaluate the training data and make the necessary corrections."