[Image: Tech specialist with AI visuals]

🧠 Emotion Encoded’s Interview with a Tech Specialist from St. Kitts

Do We Trust AI with Our Money More Than Our Health?

In a conversation with a technology specialist from St. Kitts, several key themes emerged around the future of artificial intelligence in high-stakes fields such as law, medicine, and finance. Her responses shed light on the challenges of trust, fairness, and adoption, and reveal why human oversight remains essential even as AI grows more powerful.

Bias in Data and the Limits of “Accuracy”

She was quick to point out that “near-perfect accuracy” in AI is not as straightforward as it sounds. Law and medicine both rely on decades of historical data, but much of that record is biased. Legal decisions have long reflected racial prejudice, while medical research has often been conducted disproportionately on a narrow range of skin tones and demographic groups. This means that even if an AI system appears highly accurate, the reliability of its decisions depends on the quality and inclusivity of the data behind it. For her, this makes human oversight indispensable. AI can contribute to decisions, but it cannot be trusted as the sole authority in contexts where historical biases remain deeply embedded.

Adoption as the Greater Challenge

When asked whether it will be harder to build accurate AI or to get people to use it, she emphasized adoption. While developing AI is undeniably resource-intensive, requiring complex algorithms and massive computing power, she sees the cultural and psychological barriers to adoption as far more difficult. Building trust, changing habits, and overcoming fear of automation are challenges that cannot be solved by technical progress alone.

The Risks of Reliance

The greatest danger, she argues, is that relying too heavily on AI risks amplifying existing racial and social inequalities. If decisions in law or medicine are handed over to systems trained on flawed data, the result may be unfair and unreasonable outcomes that entrench bias instead of eliminating it. For her, this risk underlines why human involvement is not just desirable but necessary.

Explainability and Trust

She strongly believes that explanations matter. While some argue that AI explanations merely “sound convincing,” she insists that they are central to building trust between professionals and the systems they use. In fields like medicine, explanations allow practitioners to test AI-generated diagnoses against their own expertise. Rather than replacing judgment, explainability supports accountability and positions AI as a diagnostic or decision-support tool. Transparency, in this sense, is less about convincing the user and more about enabling them to responsibly integrate AI into their own decision-making.

Personal Confidence in AI

On a personal level, her trust in AI varies by domain. She is most confident in its use in finance, where systems operate on structured data within established legal frameworks. She is more cautious in law, seeing value in AI’s ability to process large volumes of case law but remaining hesitant about full reliance. In healthcare, her confidence is lowest: she is reluctant to entrust life-changing medical decisions to AI at this stage. That said, she notes that her trust is gradually increasing “day by day” as the technology evolves.

Key Takeaways