Dr. Foote’s insights provide a raw look at how oncology works in a regional unit, where global data has to be grounded in local reality. She is clear that AI should be a high-sensitivity partner for catching details, but it can never replace the human presence that patients actually rely on for trust. Her perspective is that while the math might come from a machine, the accountability and the final plan have to stay with the doctor.
Question: The Gut Check: If an AI recommends a "perfect" treatment that the local data says is best, but your clinical intuition tells you it will harm that specific patient, which one do you go with, or do you weigh both?
"I view AI as a powerful decision-support partner rather than a replacement for clinical judgment. If an algorithm suggested a treatment that my experience flagged as potentially harmful, I would pause to re-examine both the model and my own reasoning. In our region, clinical judgment is a non-negotiable filter that grounds global data into a plan that is safe and realistic for the individual patient. I want AI at the table, but the final decision must rest with the clinician."
This is a critical point about trust calibration. Instead of just picking a side, she uses the conflict as a trigger to audit both her own thinking and the model. It shows that in a regional setting, you can't just plug in global data and hope for the best; the clinician has to act as the final filter to make sure the plan is actually safe and realistic for the person sitting in front of them.
Question: Do you think a patient is more likely to trust a diagnosis if they know a machine helped you find it, or does it make them feel like they aren't getting your full attention?
"The doctor-patient bond is anchored in presence. If a computer becomes the 'face' of the encounter, patients may feel processed rather than cared for. There is also a generational layer; while younger patients may find reassurance in data, older patients often feel excluded if the human element fades. In our culture, where rapport is the currency of trust, AI should remain a silent partner while the clinician remains the visible, accountable author of the treatment plan."
Dr. Foote is highlighting a major cultural barrier here. If the technology becomes too visible, it risks eroding the rapport that actually drives patient trust in the Caribbean. She makes it clear that while data might reassure some patients, the human element is what prevents them from feeling like they're just being processed by a machine.
Question: Would you prefer an AI that catches small details you might miss in a scan, or an AI that just suggests the best treatment plan?
"I would choose an AI that catches small details over one that suggests treatment plans. In haematology and oncology, a missed lesion or a subtle change on a peripheral smear can have significant clinical implications. Having a high-sensitivity “second set of eyes” to counter human bias and fatigue is invaluable. However, catching a detail is only the first step. As a clinician, I must then weigh the risks and benefits of acting on that finding, considering everything from the invasiveness of a biopsy to the potential for overtreatment. Constructing the plan is my job, and I would rather use AI to sharpen my detection while keeping the final synthesis and accountability with the clinician."
This is a purely practical choice. She wants the AI to handle the superhuman task of spotting tiny details and offsetting human fatigue, but she is strictly against letting it dictate the treatment path. For her, the "job" is the synthesis: weighing the risks, the biopsies, and the potential for overtreatment. She wants the tech to sharpen her vision, not to take over the decision-making.
Sonrisa Watts // Emotion Encoded // 2026