Description

Explainable Artificial Intelligence deals with how AI models can be explained and understood by humans, in order to improve interaction, usability and trust.

Within TNO, we work on two different aspects of explainable AI (XAI): ‘technical’ explainability and ‘communicative’ explainability. Technical explainability focuses on techniques that offer insights into the inner workings of (black-box) algorithms, so that a human observer can understand how the machine produced its outputs. TNO uses techniques such as contrastive explanations and counterfactual fairness to achieve this. Communicative explainability, on the other hand, focuses on the role of AI as a facilitator in transferring information to people in a communicative process (such as a dialogue). It does not aim to explain AI itself, but to use AI to explain other phenomena. Conversational agents are one way to realize this communicative aspect of AI.

Conversational agents are addressed in a growing number of TNO projects, for stakeholders such as the Dutch National Police, insurance companies, and health and governmental organizations.

Contrastive explanations

Many recent XAI methods visualize or rank how much individual data elements contribute to an outcome. When there are many contributing data elements or features, this kind of explanation becomes difficult to comprehend. Humans address this issue by concentrating on the main points and giving the simplest explanation for the outcome that is consistent with the data. Contrastive explanations achieve this simplicity and conciseness by giving only the information that causes a data point to be classified as one class instead of another. A contrastive explanation explains the cause of the actual outcome (the fact) relative to some counterfactual outcome that was not predicted (the foil).
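As a minimal illustration of this fact-versus-foil idea (a sketch only, not a TNO implementation; the dataset, model and variable names below are assumptions chosen for brevity), the following Python snippet decomposes a linear classifier’s evidence for the fact relative to a chosen foil and reports only the features that favour the fact:

```python
# Illustrative sketch of a contrastive (fact-vs-foil) explanation
# for a linear model; the toy setup is an assumption, not TNO's method.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
X, y = data.data, data.target
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
fact = int(clf.predict([x])[0])  # the predicted class (the fact)
foil = (fact + 1) % 3            # an alternative class of interest (the foil)

# For a linear model, the evidence a feature contributes to choosing
# the fact over the foil is the coefficient difference times its value.
contrast = (clf.coef_[fact] - clf.coef_[foil]) * x

# Report only the features that push towards the fact; omitting the
# rest is what keeps the contrastive explanation short.
for i in np.argsort(contrast)[::-1]:
    if contrast[i] > 0:
        print(f"{data.feature_names[i]}: favours class {fact} "
              f"over class {foil} ({contrast[i]:+.2f})")
```

Leaving out the features that do not favour the fact over the foil is precisely what keeps a contrastive explanation concise.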

What does TNO offer on XAI?

Generally, TNO offers various methods and tooling that provide insights into black-box models. TNO also offers knowledge of, and implementations of, conversational agents.

With a focus on contrastive explanations, TNO offers a perceptual-cognitive explanation (PeCoX) framework that addresses both the perceptual and cognitive foundations of an agent’s behaviour, distinguishing between explanation generation, communication and reception. TNO additionally offers a method that utilizes locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact.
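The snippet below gives a hedged sketch of this one-versus-all idea (the dataset, tree depth and naming are illustrative assumptions; it only extracts the decision rules along one instance’s path through the foil tree, and omits the comparison against the nearest foil leaf that yields the full disjoint rule set):

```python
# Sketch of a locally trained one-versus-all "foil" tree; the setup
# is illustrative and simplified relative to the method described above.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
X, y = data.data, data.target
foil = 2  # the counterfactual class we contrast against (assumption)

# A shallow one-versus-all tree: "foil" versus everything else.
foil_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y == foil)

def rule_path(tree, x, feature_names):
    """Collect the decision rules along x's path through the tree."""
    t = tree.tree_
    node, rules = 0, []
    while t.children_left[node] != -1:  # -1 marks a leaf node
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            rules.append(f"{feature_names[f]} <= {thr:.2f}")
            node = t.children_left[node]
        else:
            rules.append(f"{feature_names[f]} > {thr:.2f}")
            node = t.children_right[node]
    return rules

x = X[0]  # an instance the tree does not classify as the foil
print("Rules routing this instance away from the foil class:")
print(rule_path(foil_tree, x, data.feature_names))
```

In the full method, the rules on this path would be contrasted with those leading to the nearest leaf that does predict the foil, and only the differing rules would be reported as the explanation.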

Contact

  • Jasper van der Waa, Scientist Specialist, e-mail: jasper.vanderwaa@tno.nl