Recent methods developed for explainable Artificial Intelligence (XAI) visualize or rank how much each input element contributes to an outcome. When there are many contributing data elements or features, this kind of explanation can become difficult to comprehend. Humans address this issue by concentrating on the main points, giving the simplest explanation of the outcome that is consistent with the data. Contrastive explanations achieve this simplicity and conciseness by providing only the information that causes a data point to be classified as one class instead of another. A contrastive explanation explains the cause of the actual outcome (the fact) relative to some other counterfactual outcome that was not predicted (the foil).
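The fact/foil idea can be illustrated with a minimal sketch. This is a hypothetical toy example (the loan scenario, feature names, and thresholds are invented for illustration, not TNO's implementation): given a point classified as the fact, the explanation lists only the foil-region rules the point violates, rather than the contribution of every feature.

```python
# Minimal sketch of a contrastive ("fact vs. foil") explanation.
# The classifier, features, and thresholds below are hypothetical.

def classify(income, debt):
    """Toy loan classifier: its prediction is the 'fact'."""
    if income >= 50 and debt < 20:
        return "approve"
    return "reject"

def contrastive_explanation(point, foil_rules):
    """Return only the foil-region conditions that the point violates.

    foil_rules: list of (feature, test, threshold) triples describing
    the region the classifier labels as the foil class.
    """
    violated = []
    for feature, test, threshold in foil_rules:
        value = point[feature]
        holds = value >= threshold if test == ">=" else value < threshold
        if not holds:
            violated.append(f"{feature} {test} {threshold} (actual: {value})")
    return violated

# A rejected applicant (the fact) asks: "why not 'approve' (the foil)?"
applicant = {"income": 45, "debt": 25}
foil_rules = [("income", ">=", 50), ("debt", "<", 20)]

assert classify(**applicant) == "reject"
print(contrastive_explanation(applicant, foil_rules))
# Prints only the rules separating the fact from the foil:
# ['income >= 50 (actual: 45)', 'debt < 20 (actual: 25)']
```

The explanation stays short regardless of how many features the model uses, because it mentions only the conditions that distinguish the fact from the chosen foil.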
WHAT DOES TNO OFFER ON CONTRASTIVE EXPLANATIONS?
- TNO offers a perceptual-cognitive explanation (PeCoX) framework that addresses both the perceptual and cognitive foundations of an agent’s behaviour, distinguishing between explanation generation, communication and reception.
- TNO offers a method that uses locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil rather than the fact.
- TNO EPCE paper “Using Perceptual and Cognitive Explanations for Enhanced Human-Agent Team Performance” (https://doi.org/10.1007/978-3-319-91122-9_18)
- TNO WHI 2018 paper “Contrastive Explanations with Local Foil Trees” (https://arxiv.org/abs/1806.07470)
- Jasper van der Waa, Scientist Specialist, e-mail: email@example.com
- Mark Neerincx, Principal Scientist, e-mail: firstname.lastname@example.org