
Artificial intelligence is no longer a distant concept. It is shaping decisions in medicine, law, and finance, with real consequences for human lives. To understand how practitioners view these risks and responsibilities, Emotion Encoded interviewed Mr. Enoete Inanga, a technology professional with sharp insight into the promises and pitfalls of AI. His responses reveal the fears, tensions, and pragmatic concerns shaping the future of human-AI collaboration.
Status Quo Bias: Why Organisations Resist Change
When asked why organisations still cling to their old systems, Mr. Inanga pointed to fear as the driving force.
“The fear of having to learn something new. Adapting new tech and methods requires a change in company culture, habits, etc. This can be really scary for persons, even for some who are tech savvy.”
Resistance is not just about technical debt or messy data. It is about the human difficulty of leaving behind the familiar.
Algorithm Aversion: Who Gets Blamed?
What happens when an AI makes a harmful mistake in a hospital or courtroom? According to Mr. Inanga, blame does not fall on the system alone.
“The person who gets blamed is the person who chose to put 100% faith in a system that had NO guardrails to ensure the validity of the data. The fear of liability does play a very HIGH influence in the rejection of AI and tech. If AI can mess up with the simplest of situations, there is no telling the consequences of messing up when the stakes are higher.”
Fear of liability, he argues, is one of the biggest barriers to adoption.
Automation Bias: Preventing Blind Reliance
How can professionals avoid over-relying on AI tools? Mr. Inanga proposed a striking solution.
“Hmm. I think it’s quite simple. Before committing to any document, solution, platform, have AI show similar projects, similar finished products that resulted in mistakes made with dire consequences just because the guard rails were not put in place. It’s almost the same as scaring teens into abstinence by showing them pictures of body parts infected with sexually transmitted infections.”
In other words, show the risks vividly so professionals never forget the cost of blind trust.
Illusion of Explainability: Simpler or Smarter?
When given the choice between a black-box model and a simpler, transparent one, his answer was clear.
“Simpler is better. Once you know for sure how the food is made, what ingredients went into it, then troubleshooting becomes a lot easier. Being able to tinker and tamper becomes easier too.”
Transparency, he suggests, outweighs complexity when human oversight is at stake.
Algorithms Codifying Injustice: Fix or Govern?
Bias in training data is one of the greatest threats to fairness in AI. Mr. Inanga emphasized that technical fixes are not enough.
“For sure governance and community review. Us humans are supposed to be the guardians of the last frontier. Overreliance on technical fixes again places the control and trust in the machines, which makes NO sense, seeing that’s why we ended up ‘here’ in the first place!!”
To him, the solution lies in human responsibility, not algorithmic quick fixes.
Personal Verdict: Where Would You Trust AI First?
Finally, he offered a personal view on where AI belongs today.
“I’ll trust AI first in Finance mainly because this is a numbers game and data is hardly ever subjective when it comes to finance. At the end of the day, it’s either you have the money or you don’t!! ☺”
His perspective highlights a broader reality: some fields are inherently better suited to AI, while others demand greater caution.
Mr. Inanga’s insights cut through technical jargon to expose the raw human challenges of AI adoption. His answers underline a central theme of Emotion Encoded: the future of AI will not be shaped by algorithms alone, but by the people who choose when to trust them, when to question them, and when to push back.