Recent methods for explainable Artificial Intelligence (XAI) visualize or rank how much each input feature contributes to an outcome. When there are many contributing features, such explanations quickly become difficult to comprehend. Humans address this issue by concentrating on the main points, giving the simplest explanation of the outcome that is consistent with the data. Contrastive explanations achieve this simplicity and conciseness by giving only the information that causes a data point to be classified as one class instead of another: a contrastive explanation explains the cause of the actual outcome (the fact) relative to some other outcome that was not predicted (the foil).


  • TNO offers the perceptual-cognitive explanation (PeCoX) framework, which addresses both the perceptual and cognitive foundations of an agent’s behaviour and distinguishes between explanation generation, communication and reception.
  • TNO offers a method that uses locally trained one-versus-all decision trees to identify the disjoint set of rules that cause the tree to classify data points as the foil rather than as the fact.
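The second method above can be sketched as follows. This is a minimal illustrative variant, not TNO's implementation: it assumes a scikit-learn black-box classifier, samples points around the instance to be explained, labels them foil-versus-rest with the black box, fits a small surrogate decision tree on those labels, and reads off the rule path the surrogate applies to the instance (i.e. the rules under which it is not classified as the foil). The function name `contrastive_rules` and the sampling scheme are assumptions for the sketch.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def contrastive_rules(model, x, foil, feature_scale, n_samples=2000, seed=0):
    """Sketch of a one-versus-all surrogate-tree explanation (hypothetical helper).

    Returns the (feature, operator, threshold) rules along the path the
    surrogate tree takes for x when trained to separate foil from non-foil.
    """
    rng = np.random.default_rng(seed)
    # Sample points around x and label them with the black-box model.
    X_local = x + rng.normal(0.0, feature_scale, size=(n_samples, x.shape[0]))
    y_local = (model.predict(X_local) == foil).astype(int)  # one-vs-all: foil or not
    # Fit a small, interpretable surrogate tree on the foil-vs-rest labels.
    tree = DecisionTreeClassifier(max_depth=3, random_state=seed)
    tree.fit(X_local, y_local)
    t = tree.tree_
    # Walk x down the tree, recording the rule tested at each split.
    node, rules = 0, []
    while t.children_left[node] != -1:  # -1 marks a leaf in sklearn trees
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            rules.append((int(f), "<=", float(thr)))
            node = t.children_left[node]
        else:
            rules.append((int(f), ">", float(thr)))
            node = t.children_right[node]
    return rules

# Example on iris: explain why the first sample is its predicted class
# (the fact) and not some other class (the foil).
iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)
x = iris.data[0]
fact = int(model.predict(x.reshape(1, -1))[0])
foil = (fact + 1) % 3  # pick another class as the foil for illustration
rules = contrastive_rules(model, x, foil, feature_scale=iris.data.std(axis=0))
for f, op, thr in rules:
    print(f"{iris.feature_names[f]} {op} {thr:.2f}")
```

The surrogate tree is deliberately shallow (`max_depth=3`) so that the resulting rule list stays short, which is the point of a contrastive explanation: a handful of rules rather than a contribution score for every feature.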



  • Jasper van der Waa, Scientist Specialist, e-mail:
  • Mark Neerincx, Principal Scientist, e-mail: