Which AI-based defence systems are acceptable from an ethical, legal, and societal point of view, and which are not? The ELSA Lab Defence focuses on the responsible design, implementation, and maintenance of military AI.

Problem Context

AI technology is needed to deal with new challenges in both peacekeeping and warfare, and to improve the efficiency, effectiveness, and security of the Dutch armed forces. We must be able to counter misleading or false information, cope with adversaries that use artificial intelligence (AI), and process large amounts of data. AI therefore has a crucial role to play. The introduction of new technology in defence offers opportunities, but it also creates risks: introducing AI raises ethical, legal, and societal issues. How can AI-driven systems remain under human control? How can control and dignity be maintained when machines gain autonomy? How do we stay within all applicable legal frameworks?


The ELSA Lab Defence offers a three-fold solution. Firstly, the lab monitors global technological, military, and societal developments that could influence attitudes towards the use of military AI-based applications. Secondly, it studies how society and defence personnel perceive the use of military AI, how this perception evolves over time, and how it changes across different contexts. Thirdly, it develops a methodology for the context-dependent analysis, design, and evaluation of the ethical, legal, and societal aspects of military AI-based applications. This methodology builds upon existing methods for value-sensitive design, explainable algorithms, and human-machine teaming.


Results from the ELSA Lab Defence can be found in the publications overview on the ELSA Lab Defence website.


The ELSA Lab Defence is financed by NWO as part of the Call for Proposals Synergy theme Artificial Intelligence: Human-centred AI for an inclusive society – towards an ecosystem of trust.


  • Jurriaan van Diggelen, Senior Research AI, TNO, e-mail: