Hybrid AI – Explainability
Description
Integrating Human Knowledge in the Form of Attention into a Deep Learning Model
Problem Context
Deep learning models have proven their worth in many classification tasks, such as detecting cars in images. They can crunch millions of training examples, learn the patterns in them, and deliver strong classification performance. They are, however, data hungry, hard to explain, and poor at generalizing to new contexts (e.g. the new Tesla Cybertruck might not be detected as a car because no such vehicles appeared in the training data). Humans, on the other hand, are good at generalizing from far fewer examples when becoming experts in a field.
Solution
Our approach to these downsides is to combine the strengths of data-driven AI models and human-knowledge-based models in a new type of hybrid AI system. Such a system should be able both to crunch and learn patterns from big data and to generalize to new contexts with less data. It is also more understandable and transparent, because the encoding of human knowledge is explicit and inspectable. Moreover, involving expert users in the learning process can increase trust in the system.
In this work, we investigated the integration of human knowledge, in the form of attention, into a convolutional neural network (CNN) model. First, we examined different attention extraction methods, noting the strengths and weaknesses of each. We then designed a new learning process in which attention feedback from the end user is integrated into the CNN model; both steps are sketched in the code examples below.
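The text does not name the specific attention extraction methods that were compared; Grad-CAM is one widely used gradient-based example of the kind of method involved. Below is a minimal PyTorch sketch of it, where the choice of model (resnet18) and hooked layer (layer4) are illustrative assumptions, not details from this work:

```python
# Minimal Grad-CAM sketch in PyTorch. Model and layer choices are
# illustrative; this work does not specify which methods were used.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]

# Hook the last convolutional block, where spatial attention is extracted.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a [0, 1] attention map over the image for class_idx."""
    logits = model(image)                      # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature map by its average gradient, then combine.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]                           # (H, W) attention map
```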
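The exact formulation of the feedback-integration step is likewise not given here. One plausible form is to add an attention-consistency term to the classification loss, penalizing attention that falls outside an expert-provided mask. The sketch below assumes an activation-based attention map (cheap to differentiate through, in the style of attention transfer) and a hypothetical weighting factor lambda_att; both are assumptions for illustration:

```python
# Sketch of a combined loss: classification plus attention feedback.
# The loss form, the activation-based attention proxy, and lambda_att
# are assumptions; this work does not specify the exact formulation.
import torch
import torch.nn.functional as F

def activation_attention(feat):
    """Differentiable attention map from conv features (B, C, h, w):
    normalized mean of absolute activations."""
    att = feat.abs().mean(dim=1, keepdim=True)            # (B, 1, h, w)
    return att / (att.amax(dim=(2, 3), keepdim=True) + 1e-8)

def hybrid_loss(logits, labels, feat, expert_mask, lambda_att=0.5):
    """Cross-entropy plus a penalty for attention outside the expert mask.

    expert_mask: (B, 1, H, W) binary map, 1 where the expert marked the
    relevant foreground (e.g. drawn as feedback in the demo).
    """
    cls_loss = F.cross_entropy(logits, labels)
    att = activation_attention(feat)
    mask = F.interpolate(expert_mask.float(), size=att.shape[2:],
                         mode="bilinear", align_corners=False)
    # Penalize attention mass on regions the expert marked as irrelevant.
    att_loss = (att * (1.0 - mask)).mean()
    return cls_loss + lambda_att * att_loss
```

During training, feat would be the output of the last convolutional block (e.g. captured with a forward hook as above). A gradient-based map such as Grad-CAM could be substituted for the activation-based proxy, at the extra cost of differentiating through the gradient computation.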
Results
The results show that such integration is possible: the feedback guides the model to focus its attention on the correct foreground regions of an image. They also show, however, that the model can learn to produce both correct attention maps and correct predictions without fully incorporating the attention feedback into its decision making. Finally, we built a demo around a simple use case that demonstrates the interaction pipeline: an expert end user can give feedback on attention explanations that do not generalize.
Contact
- Ajaya Adhikari, Data Scientist, TNO, e-mail: ajaya.adhikari@tno.nl
- Ioannis Tolios, Data Scientist, TNO, e-mail: ioannis.tolios@tno.nl
- Stephan Raaijmakers, Professor of Communicative AI, Leiden University (LUCL), and Senior Scientist at TNO, e-mail: stephan.raaijmakers@tno.nl