Description

The FATE flagship develops AI capabilities for a digital assistant that acquires and extends its expertise through continuous learning, both from multiple, potentially confidential and biased (subject) data sources and from human experts who add to and reflect on the AI outcomes. The system provides decision support for multiple user roles, such as researcher, consultant, and subject.

Problem Context

The idea behind the FATE flagship is to implement responsible human-machine teaming across a variety of use cases. Such responsible human-machine teaming is characterised by four core values: Fair AI, Explainable AI, Co-learning, and Secure learning. Over a period of four years, we worked on each of these topics and made various developments.

For Fair AI, we created a fairness module capable of measuring and communicating biases; a minimal illustration of such a measurement is sketched below. The chosen measurements were based on domain expertise, though we also learned, and acknowledge, that fairness encompasses more than what is measurable.

In Explainable AI, we worked on three capabilities and two methods. The capabilities allow an AI system to explain why it made one decision instead of another, to express how confident it is in its decision and why, and to let users contest the AI system and its decisions. The methods comprise an experimental setup for measuring the effectiveness of explanations and an approach to how explanations can and should be integrated into human-machine teaming.

In Secure learning, we worked on ensuring that our contrastive explanations preserve privacy and on privacy-preserving topic modelling.

Finally, in Co-learning, we worked on the creation of adaptive knowledge models for decision-support systems. These models can integrate user feedback and evaluate new knowledge so that the decision-support system stays up to date.

In 2023, the first iteration of FATE (FATE 1.0) came to an end. We made progress on all our topics of interest, but each topic was researched independently of the others. In FATE 2.0, we will focus on integrating the topics and the developed technologies for fairness, explainability, confidentiality, and adaptivity.
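
To make "measuring biases" concrete, the sketch below computes one common group-fairness metric, the demographic parity difference, over a set of model decisions. It is a minimal illustration only: the metric choice, function name, and data are our own assumptions and do not describe FATE's actual fairness module.

    # Minimal sketch of the kind of bias measurement a fairness module might
    # report. Demographic parity difference is an illustrative metric choice;
    # the function name and data are hypothetical, not FATE's actual module.
    from collections import defaultdict

    def demographic_parity_difference(decisions, groups):
        """Return the largest gap in favourable-decision rate between groups.

        decisions: iterable of 0/1 outcomes (1 = favourable decision)
        groups:    iterable of group labels, e.g. values of a protected attribute
        """
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += decision
        rates = {group: positives[group] / totals[group] for group in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Toy example: the model favours group "a" (75%) over group "b" (25%).
    gap, rates = demographic_parity_difference(
        decisions=[1, 1, 1, 0, 0, 0, 1, 0],
        groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    )
    print(f"positive rates per group: {rates}; parity gap: {gap:.2f}")

Communicating the bias then amounts to reporting the per-group rates alongside the gap, so that domain experts can judge whether the difference is acceptable, in line with the observation above that fairness encompasses more than what is measurable.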

Solution

In FATE 1.0, a new use case was adopted each year. In FATE 2.0, we aim to use a single use case for the whole four-year period, within which we can research the integration of the various subtopics. Specifically, we will focus on integrating fairness with explainability, explainability with confidentiality, and adaptivity with interaction. The goal is to create a holistic system that helps the human-machine team make fair decisions, that provides explanations while preserving privacy, and that adapts to its environment and users, creating a space of continuous co-learning. As with FATE 1.0, we will create an integrated demo that visualises our results.
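
To make the contrastive-explanation capability carried over from FATE 1.0 more tangible, the sketch below answers "why was this application rejected rather than approved?" by searching for the smallest feature change that would flip a toy classifier's decision. The decision rule, feature names, and step sizes are hypothetical illustrations, not FATE's actual method; the project's real work additionally ensures that such explanations preserve privacy.

    # Minimal sketch of a contrastive explanation: answering "why 'reject'
    # rather than 'approve'?" by finding the smallest tried feature change
    # that flips the decision. The rule, features, and step sizes are
    # hypothetical; this is not FATE's actual explanation method.

    def classify(applicant):
        """Toy loan-decision rule used only for illustration."""
        if applicant["income"] >= 30000 and applicant["credit_score"] >= 650:
            return "approve"
        return "reject"

    def contrastive_explanation(applicant, desired="approve"):
        """For each feature, find the smallest tried increase yielding `desired`."""
        steps = {"income": 1000, "credit_score": 10}  # search granularity per feature
        suggestions = {}
        for feature, delta in steps.items():
            candidate = dict(applicant)
            for _ in range(100):  # bounded search
                if classify(candidate) == desired:
                    suggestions[feature] = candidate[feature] - applicant[feature]
                    break
                candidate[feature] += delta
        return suggestions

    applicant = {"income": 28000, "credit_score": 700}
    print(classify(applicant))                 # -> reject
    print(contrastive_explanation(applicant))  # -> {'income': 2000}

In a decision-support setting, such a contrast ("an income 2000 higher would have led to approval") is what gives users grounds to understand, and where warranted contest, a decision.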

In FATE 2.0, we also adopt a human-centred approach: we will involve users and stakeholders in multiple phases of the project. We will conduct user studies, specify key performance indicators and expected impact pathways, and develop storyboards to illustrate the FATE AI capabilities. These user studies will also evaluate the intended and unintended effects of the developed FATE AI capabilities on users.

Results

Contact

  • Milena Kooij-Janic, Sr Project Leader, TNO, e-mail: milena.kooij@tno.nl
  • Joachim de Greeff, Sr Consultant, TNO, e-mail: joachim.degreeff@tno.nl