Description

In this project, we designed the ASCENT (Ai System use Case Explanation oNTology) framework for annotating XAI solutions.

Problem Context

The field of eXplainable Artificial Intelligence (XAI) is rapidly progressing towards a large variety of technologies that generate human-interpretable explanations of AI systems. This momentum is boosted by the establishment of national and international regulations and legal frameworks that stress the importance of trustworthy and responsible AI. Given this progression, multiple survey papers have aimed to provide an overview of XAI technologies through taxonomies. Most of these taxonomies, however, categorize XAI solutions based on model characteristics and explanation types; they pay little attention to the end user and typically do not show the relation between XAI solutions and use case aspects (e.g., explanation goals or task contexts). Consequently, these taxonomies mainly serve data scientists. To better connect XAI research with use cases in industry, we argue that it is beneficial to also explicitly model the implications of an XAI solution for different use case aspects, according to user evaluations, alongside categorizations of the AI model and the desired explanation.

Solution

Because there are many different configurations of use cases, AI models, and explanation algorithms, we propose to model XAI solutions as instances of an ontology. In this project, we designed the ASCENT (Ai System use Case Explanation oNTology) framework, which consists of an ontology and a corresponding metadata standard for annotating XAI solutions. We use three ontology modules, namely AI System, Use Case, and Explanation Algorithm. The AI System module aligns with the technology-centered approach within XAI, as it describes the properties that signal what explanation can be generated (e.g., the type of data and model used). The Use Case module aligns with the growing human-centered approach to XAI and describes the important use case properties that should be taken into account when searching for suitable XAI methods (e.g., the goal of an explanation). Finally, the Explanation Algorithm module describes the aspects of an explanation that need to be considered given these use case and AI system properties (e.g., the explanation-generating method and the explanation's modality).
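
To make the module structure concrete, the sketch below expresses the three modules as OWL classes with a few example properties, using rdflib in Python. The namespace, class names, and property names are illustrative placeholders, not the published ASCENT vocabulary.

```python
# Minimal sketch of the three ASCENT modules as OWL classes and properties.
# The namespace and term names below are illustrative placeholders.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

ASC = Namespace("http://example.org/ascent#")  # placeholder namespace

g = Graph()
g.bind("asc", ASC)

# The three ontology modules, plus a class for an annotated XAI solution.
for cls in (ASC.AISystem, ASC.UseCase, ASC.ExplanationAlgorithm, ASC.XAISolution):
    g.add((cls, RDF.type, OWL.Class))

# One example datatype property per module (illustrative names).
for prop, domain in ((ASC.dataType, ASC.AISystem),                        # AI System
                     (ASC.explanationGoal, ASC.UseCase),                  # Use Case
                     (ASC.explanationModality, ASC.ExplanationAlgorithm)):  # Explanation Algorithm
    g.add((prop, RDF.type, OWL.DatatypeProperty))
    g.add((prop, RDFS.domain, domain))

# Object properties linking an XAI solution to instances of the three modules.
for prop, rng in ((ASC.hasAISystem, ASC.AISystem),
                  (ASC.hasUseCase, ASC.UseCase),
                  (ASC.hasExplanationAlgorithm, ASC.ExplanationAlgorithm)):
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, ASC.XAISolution))
    g.add((prop, RDFS.range, rng))

print(g.serialize(format="turtle"))
```

Keeping the modules as separate, linked classes makes it possible to extend any one of them without touching the others.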

Results

To illustrate how the ASCENT framework can be used in practice, we show how experts could use it to tag the well-known LIME algorithm. By integrating use cases into our framework (in addition to AI Systems and Explanation Types), we hope the ASCENT framework can pave the way to a communal repository of XAI solutions that are described with metadata relevant to industry applications. The overall aim of the ASCENT framework is to support the FAIR usage of XAI solutions by providing rich metadata for these solutions, offering online access to the underlying ontology, modelling the ontology in a commonly used standard, and keeping the framework extendable.
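
As a rough impression of the tagging mentioned above, the sketch below annotates LIME against the placeholder vocabulary from the previous sketch. The property names are again illustrative and the use case instance is a hypothetical example; the recorded characteristics (model-agnostic, post-hoc, local explanations presented as feature importances) follow the original LIME paper.

```python
# Sketch of tagging LIME with placeholder ASCENT terms; the property names
# are illustrative, the LIME characteristics follow the original LIME paper.
from rdflib import Graph, Literal, Namespace, RDF

ASC = Namespace("http://example.org/ascent#")    # placeholder namespace
EX = Namespace("http://example.org/solutions#")  # placeholder namespace for instances

g = Graph()
g.bind("asc", ASC)
g.bind("ex", EX)

# Explanation Algorithm module: how LIME generates and presents explanations.
g.add((EX.lime, RDF.type, ASC.ExplanationAlgorithm))
g.add((EX.lime, ASC.modelAccess, Literal("model-agnostic")))
g.add((EX.lime, ASC.explanationScope, Literal("local")))
g.add((EX.lime, ASC.explanationStage, Literal("post-hoc")))
g.add((EX.lime, ASC.explanationModality, Literal("feature importance")))

# A full XAI solution would also link an AI System and a Use Case instance;
# the use case below is a hypothetical example.
g.add((EX.lime_solution, RDF.type, ASC.XAISolution))
g.add((EX.lime_solution, ASC.hasExplanationAlgorithm, EX.lime))
g.add((EX.lime_solution, ASC.hasUseCase, EX.loan_approval_case))
g.add((EX.loan_approval_case, RDF.type, ASC.UseCase))
g.add((EX.loan_approval_case, ASC.explanationGoal, Literal("justify a decision to an end user")))

print(g.serialize(format="turtle"))
```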

As future work, we plan to develop a shared tool with an interface for adding, managing, and filtering ASCENT descriptions, including evaluation results of XAI solutions. With such a knowledge base, the tool could provide validated recommendations of suitable XAI solutions for specific use case needs.
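
As a rough impression of such filtering, the sketch below runs a SPARQL query over an rdflib graph to retrieve solutions annotated with a particular explanation goal; the asc: and ex: terms are the same illustrative placeholders used above.

```python
# Sketch of filtering ASCENT descriptions by use case need via SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

ASC = Namespace("http://example.org/ascent#")    # placeholder namespace
EX = Namespace("http://example.org/solutions#")  # placeholder namespace for instances

g = Graph()
# A tiny example knowledge base with one annotated solution.
g.add((EX.lime_solution, RDF.type, ASC.XAISolution))
g.add((EX.lime_solution, ASC.hasUseCase, EX.loan_approval_case))
g.add((EX.loan_approval_case, ASC.explanationGoal,
       Literal("justify a decision to an end user")))

query = """
PREFIX asc: <http://example.org/ascent#>
SELECT ?solution WHERE {
    ?solution a asc:XAISolution ;
              asc:hasUseCase ?useCase .
    ?useCase asc:explanationGoal "justify a decision to an end user" .
}
"""
for row in g.query(query):
    print(row.solution)  # solutions matching the requested explanation goal
```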

Affiliations

This work was part of the FATE 2021 project, which is funded by the Appl.AI program within TNO.

Contact

  • Ajaya Adhikari, Data Scientist, TNO, e-mail: ajaya.adhikari@tno.nl
  • Ioannis Tolios, Data Scientist, TNO, e-mail: ioannis.tolios@tno.nl