Interview

What is Explainable AI and Why it Matters: Expert’s Perspective

When AI makes decisions that affect people’s lives, from approving a loan to detecting a disease, one question inevitably arises: “Why did it decide that?” That’s where Explainable AI (XAI) steps in. It’s not just a buzzword or a “nice to have”; it’s a must-have for trust, adoption, and real-world impact.

In this interview, we talked with Taras Firman about:

  • why even the most accurate model can end up unused,
  • the tools that make black-box models understandable,
  • how to build AI with people, not for them,
  • and why trust often matters more than precision.
Meet the interviewee
Taras Firman
Data Science Competency Manager

Background & experience:

  • More than a decade of experience spanning computer vision engineering, mathematical modelling, data science, and consulting.
  • Focuses on Explainable AI, addressing the critical challenge of building trust in AI models for both academic and business environments.
  • Mathematician by training with a Ph.D. in Mathematical Sciences and extensive cross-industry experience across retail, logistics, banking, healthcare, and bioinformatics.

We often see situations where a model is built, the metrics look good, but it never gets implemented. Why do you think that happens?

Taras Firman: Because a model isn't just about accuracy; it's about trust and understanding. A good example: we built a demand forecasting model for an e-commerce chain. It worked well, but the team said, "This doesn't match Google Trends, we don't trust it." And the problem wasn't in the math; it was in the communication. We didn't explain why the model gave that particular forecast.

A model that isn’t trusted is dead. An explainable model is a bridge between complex algorithms and real-world action.

And how do you explain it? After all, models aren't always intuitive.

TF: That's where Explainable AI comes in. We use tools like:

  • SHAP: shows how each feature contributes to the model's prediction. For instance, in a credit risk model: "No collateral, low income, history of late payments – these are the key drivers." (See the sketch after this list.)
  • LIME: helpful for local explanations, like "Why was this transaction flagged as suspicious?"
  • Decision trees and rule-based models: easy to explain, e.g. "If X < 5 and Y > 3, then risk is high."
  • And of course, causal models: they help us understand not just correlations but why things happen. That's especially important in bioinformatics.
  • Complex compound analysis: a set of models that helps us understand the impact of different features on the predicted variables.
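
To make the SHAP idea concrete, here is a minimal sketch of feature attribution for a credit-risk classifier. The dataset, the feature names (income, has_collateral, late_payments), and the model choice are illustrative assumptions for this article, not the actual project described in the interview.

```python
# Hypothetical credit-risk example: which features drive each prediction?
# Data and feature names are synthetic stand-ins, not a real project.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 1_000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "has_collateral": rng.integers(0, 2, n),
    "late_payments": rng.poisson(1.5, n),
})
# Synthetic target: risk rises with low income, no collateral, late payments
score = -0.00005 * X["income"] + 1.5 * (1 - X["has_collateral"]) + 0.6 * X["late_payments"]
y = (score + rng.normal(0, 1, n) > score.median()).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-applicant explanation: which features pushed this prediction up or down
i = 0
for feature, value in sorted(zip(X.columns, shap_values[i]),
                             key=lambda t: -abs(t[1])):
    print(f"{feature}: {value:+.3f}")
```

The output is exactly the kind of sentence stakeholders need: for this applicant, these specific factors pushed the risk score up or down, and by how much.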

It’s not just about picking the right tool; it’s also about how you communicate the result. A great model is one that people understand, trust, and actually use.

We often follow a strategy like this:
  • start with a simple model,
  • then gradually increase complexity, always explaining why we’re doing it.

Because if you drop in a black box from day one, even with stellar accuracy, you might still face resistance.
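
As a hypothetical illustration of that progression, the sketch below fits a shallow decision tree first, prints its human-readable rules, and only then brings in a stronger model, making the accuracy-for-transparency trade-off explicit. The dataset and model choices are stand-ins, not taken from a real engagement.

```python
# "Start simple, then add complexity" - an illustrative workflow.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: a shallow tree whose rules can be read out loud to stakeholders
simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(simple, feature_names=[f"x{i}" for i in range(5)]))
print(f"simple tree accuracy:   {simple.score(X_test, y_test):.3f}")

# Step 2: introduce a stronger model only if the gain justifies the loss
# of transparency, and explain that trade-off explicitly
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"boosted model accuracy: {complex_model.score(X_test, y_test):.3f}")
```

The point is that stakeholders see and discuss the baseline's logic before any black box enters the picture.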


Have you ever had to abandon a working model because of trust or explainability issues?

TF: Yes, several times. In one logistics case, we optimised delivery routes, but drivers refused to follow them: “It’s too complex, doesn’t make sense.”

Another example: a churn prediction model for a bank. The model was accurate, but the recommendations were not intuitive to managers. So instead of forcing it, we stepped back, reviewed the model, visualised the drivers, and co-created the explanation with the users.


So, Explainable AI is not just about “models for people,” but really about models with people. How do you apply that in your projects?

TF: We use a co-creation approach. Instead of building a model “behind closed doors,” we involve the stakeholders from the start:

  • selecting features,
  • testing hypotheses,
  • discussing model logic,
  • creating validation pipelines and fallback mechanisms.

Yes, it takes more time. But the results are more useful and sustainable. Plus, it helps detect biases or mistakes early on, which is critical in domains like healthcare or finance.


What core principles guide you as a data scientist?

TF: Well, I’d say:

  1. Responsibility – if a model affects people, we must know how and why.
  2. Transparency – even complex things should be explainable in plain language.
  3. Humility – a model is a hypothesis, not a fact. It can be wrong.
  4. Collaboration – a data scientist shouldn’t be a “guru in the clouds,” but a partner in decision-making.


What would you advise someone just getting started with machine learning in a company?

TF: Start with simple, human questions:

  • What is the real problem we are solving?
  • Who will use this model and how?
  • Will they understand what it’s telling them?

Then, try building the simplest possible model with great visualisations and clear explanations. Because if your first AI project builds trust and shows value, people will come back for more.


FAQs

What is an explainable AI example?

Explainable AI (XAI) refers to AI systems that can explain how they make decisions. Instead of being ‘black boxes’ that provide answers without explanation, XAI helps us understand how and why an AI model reaches a certain conclusion. A credit-scoring model that reports which factors, such as income, collateral, and payment history, drove its decision is one example.

