🤖 AI in Healthcare: Insights from Dr. Al Pierre, Co-Founder of GoDocta

The digital health revolution is transforming healthcare access, efficiency, and patient outcomes. At the forefront of this movement is Artificial Intelligence (AI), but adoption is not just about technology. It is about human psychology, culture, and trust.

We spoke with Dr. Al Pierre, co-founder of GoDocta, a Caribbean telehealth platform, about integrating AI into healthcare systems, overcoming psychological barriers, and the ethical responsibilities of AI in medicine.

AI in Practice: Assisted Intelligence, Not Replacement

Q: How do you see AI being integrated into healthcare systems more broadly?

Dr. Al Pierre: Our experience with GoDocta’s telehealth platform has shown that digital tools can dramatically improve healthcare access and efficiency. By connecting patients on one island to specialists on another, we reduced costs by over 80 percent simply by eliminating travel, without compromising quality. Our plan is to roll out this kind of digital health innovation across the Caribbean and beyond.

AI is becoming a natural next step in this digital health revolution. We integrate AI as what I like to call ‘Assisted Intelligence’ — intelligent systems that support doctors and nurses rather than replace them. For example, we’ve developed proprietary AI algorithms to assist with accurate diagnostics, flagging potential conditions or analyzing medical images as an aid to the clinician. This helps doctors on small islands get decision support on par with the best hospitals, leveling the playing field.

At Urgent Care SKN, we use AI to power a Nurse-Led model: nurses manage non-complex cases from start to finish, which empowers them while filling key healthcare gaps. Within GoDocta’s EHR, AI handles data-heavy tasks like reviewing scans or crunching patient data so clinicians can focus on patient interaction and nuanced decision-making. AI works best as a partner to the medical team.


Algorithm Aversion: Trusting Machines

Q: Many doctors and patients hesitate to trust AI after mistakes. How does this affect adoption?

Dr. Al Pierre: Algorithm aversion is real. People distrust algorithmic advice even when it outperforms humans; a single visible AI error can outweigh dozens of correct recommendations. Clinicians may abandon AI after one mistake, but no doctor is 100 percent error-free either. We’re more forgiving of humans than machines, largely due to psychology.

Overcoming this requires transparency and education. At GoDocta, we introduce AI in assistive roles, pilot it in low-risk settings, and always ensure human supervision. Clinicians have the final say, which mitigates fears of the machine hallucinating or running wild.
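The "clinicians have the final say" principle can be made concrete in software. Below is a minimal, hypothetical sketch (not GoDocta's actual implementation) of a human-in-the-loop record: the AI's output is stored as advisory, and nothing is finalized without an explicit clinician decision, which always wins.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An advisory AI output: a candidate condition plus model confidence."""
    condition: str
    confidence: float

def finalize(suggestion: Suggestion, clinician_decision: str) -> dict:
    """Record a case outcome. The AI suggestion is kept for auditing,
    but the clinician's decision is always the one that stands."""
    return {
        "ai_suggestion": suggestion.condition,
        "ai_confidence": suggestion.confidence,
        "final": clinician_decision,  # clinician has the final say
        "overridden": clinician_decision != suggestion.condition,
    }

# The clinician agrees with the AI here, so nothing is overridden.
rec = finalize(Suggestion("otitis media", 0.91), "otitis media")
```

Keeping both the suggestion and the human decision in the record is what later makes error analysis and supervision audits possible.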


Radical Skepticism: Cultural and Emotional Barriers

Q: Some professionals doubt AI’s reliability in sensitive healthcare contexts. Is this justified?

Dr. Al Pierre: Healthy skepticism is good. We make important decisions about people’s lives and should always question new tools. Concerns about AI failing unpredictably, data privacy, job displacement, or erosion of the human touch are valid.

That said, outright refusal to consider AI often stems from emotion and culture rather than facts. Historically, every major healthcare innovation faced resistance — stethoscopes, X-rays, computers, even vaccines. Over time, as tools proved their worth and education increased, cultural resistance faded. I suspect AI is on a similar trajectory. Reliability is improving, and many AI systems already outperform human benchmarks in narrow tasks, but they augment clinicians rather than replace human oversight. We should channel skepticism into constructive scrutiny and deeper partnership, pushing developers to meet high standards and healthcare systems to implement AI thoughtfully.


Algorithms and Injustice

Q: Healthcare data reflects existing inequalities. How serious is the risk that AI codifies injustice in small states like St. Kitts?

Dr. Al Pierre: This is a serious concern, especially in vulnerable regions like the Caribbean. If AI is trained on biased data, it will perpetuate and even amplify those biases. In the US, an algorithm used by insurers recommended less care for Black patients because historical spending on their care was lower, leading it to falsely conclude they were healthier.

In small Caribbean states, local datasets are limited and may not capture population diversity. Imported AI might not understand our reality. Even day-to-day treatment regimens often rely on studies conducted on populations very different from ours. Historical inequalities along socio-economic or geographic lines can also be baked into the data.

At GoDocta, AI must be developed with the communities it serves. We train algorithms on diverse data and check for biased outcomes. Regional cooperation is essential so we can co-create AI models that fit local needs and reduce disparities.
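A basic form of the outcome check Dr. Pierre describes is to compare how often a model flags patients across demographic groups. This is an illustrative sketch with made-up data, not GoDocta's pipeline; a large gap between groups is a signal to investigate the training data, not proof of bias on its own.

```python
from collections import defaultdict

def audit_outcome_rates(records, group_key="group", flagged_key="flagged"):
    """Return, per group, the fraction of patients the model flagged.
    Disparate rates across groups warrant a closer look at the data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for r in records:
        counts[r[group_key]][1] += 1
        if r[flagged_key]:
            counts[r[group_key]][0] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Hypothetical log: whether a triage model flagged each patient for follow-up.
records = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": False},
    {"group": "B", "flagged": False},
]

rates = audit_outcome_rates(records)
gap = max(rates.values()) - min(rates.values())
```

In practice the same comparison would be run on richer outcome measures (false-negative rates, severity at diagnosis), but the principle is the same: measure per-group outcomes routinely rather than assuming the model is neutral.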


Status Quo Bias: Resistance to Change

Q: Healthcare tends to stick with traditional practices. Will status quo bias slow AI adoption?

Dr. Al Pierre: Healthcare is conservative, valuing precedent and proven techniques. This can translate into ‘better the devil we know.’ Some practitioners say, ‘I’ve treated patients this way for 30 years, and it works, so why change?’

However, research suggests many clinicians are open to decision support technology if it increases efficiency, improves accuracy, and reduces burnout. Demonstrating consistent value, like earlier detection or saved time, encourages adoption. Framing AI as a tool to bolster self-sufficiency and improve equity in the Caribbean also helps. Status quo bias can be overcome with evidence, training, and time, which is why I am bullish on AI.


Automation Bias: Guarding Against Over-Reliance

Q: How should healthcare systems prevent over-reliance on AI?

Dr. Al Pierre: Automation bias is the flip side of algorithm aversion. Doctors might accept an AI recommendation without critical thinking. To guard against this, human oversight is essential. AI should assist decisions, not run them autonomously.

Systems must be transparent so clinicians understand why a recommendation is made. Providers should be trained on AI limitations and practice within their qualifications. Monitoring and auditing AI decisions is critical. At GoDocta, AI is like a second pair of eyes, supporting clinicians and maintaining transparency with patients.
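One way to monitor for both failure modes at once is to track the clinician override rate: near-zero overrides can signal automation bias (rubber-stamping), while a surge in overrides can signal model drift. The sketch below is purely illustrative; the thresholds and the `override_alerts` helper are assumptions for this example, not clinical guidance or GoDocta's system.

```python
def override_alerts(decisions, low=0.02, high=0.30):
    """Inspect a log of (ai_recommendation, clinician_final) pairs.
    Returns a status plus the observed override rate:
      - almost no overrides  -> possible automation bias
      - many overrides       -> possible model drift or quality issue
    Thresholds are illustrative placeholders."""
    if not decisions:
        return None
    overrides = sum(1 for ai, final in decisions if ai != final)
    rate = overrides / len(decisions)
    if rate < low:
        return ("automation-bias-risk", rate)
    if rate > high:
        return ("drift-risk", rate)
    return ("ok", rate)

# Hypothetical log: 48 agreements and 2 overrides -> a healthy 4% rate.
log = [("refer", "refer")] * 48 + [("refer", "discharge")] * 2
status, rate = override_alerts(log)
```

A dashboard built on a metric like this gives auditors a cheap, continuous signal without requiring access to the model's internals.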


Future-Oriented: AI as a Companion

Q: What is the most responsible role for AI in healthcare?

Dr. Al Pierre: AI should be an indispensable partner, amplifying human capabilities. It should handle data-heavy tasks, pattern recognition, and routine monitoring while leaving empathy, critical thinking, and ethical judgment to humans.

This ‘AI Companion’ operates under the strict guidance of medical professionals. At GoDocta, we use the term ‘Assisted Intelligence’ because the AI is designed to support clinicians: it reduces provider burnout, improves the precision and personalization of care, catches errors, and ensures no patient falls through the cracks. Implemented this way, AI can help small regions adopt preventive and equitable healthcare approaches previously thought impossible.

#AIinHealthcare #DigitalHealth #Telehealth #AssistedIntelligence #HealthcareInnovation #CaribbeanHealth #MedicalEthics #AlgorithmAversion #HealthEquity #FutureOfMedicine #HealthcareTechnology #PatientCare