Responsible decision-making between humans and machines
Research in this programme line focuses on responsible and explainable AI, with specific attention to the correct use of data. To arrive at responsible decisions, a safe environment in which humans and AI can learn from each other (co-learning and secure learning) is needed.
Bias in facial recognition and recruitment systems. Accidents involving self-driving cars. Failures like these show that much remains to be done in the development of AI. The fastest way to move that development forward is for AI and people to work closely together.
To be sure, artificial intelligence has produced some major success stories. For example, there are AI systems that can lip-read or recognise tumours with the help of deep learning. But AI systems also regularly slip up, and that can have very serious consequences, especially in ethically sensitive applications or in situations where safety is at stake.
It is therefore time for the next step – a closer partnership between AI and people. This will enable us to develop AI systems that can assist us when taking complex decisions, and with which we can work enjoyably and safely.
More information about this research theme can be found on the Responsible decision-making website.
- Cor Veenman, Senior Scientist Specialist, e-mail: email@example.com