Description

In this project we focus on the key factors behind the success and failure of risk assessment tools in the justice domain, and identify technical, socio-technical, and social solutions to common problems.

The use of AI in the justice domain is on the rise. The National Police in the Netherlands use a nationwide predictive policing tool, and in the United States systems are used to predict recidivism. Together with the Openbaar Ministerie, we have developed, and are now testing, a risk assessment tool that estimates the flight risk of convicted criminals. However, it is much easier to develop such risk assessment tools than to develop them correctly.

In this project we have therefore focused on the question: what is needed to build a legitimate, fair, and correctly working risk assessment tool, also in the long run? And how can users be kept both trusting and critical?

To assess the long-run correctness of such systems, we have performed experiments to evaluate the risk of self-fulfilling prophecies and tunneling. Both are generic problems caused by one-sided feedback learning: the system only receives outcome feedback for the cases it helps decide. For example, if someone is detained for life, their recidivism can never be observed, so the prediction that led to the detention can never be corrected.
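The mechanism is easy to reproduce in a toy simulation. The sketch below is a minimal illustration with assumed numbers and groups, not the project's actual model: a risk estimator receives feedback only on released individuals, so an initial overestimate for one group is never corrected, because everyone in that group is detained and their outcomes are never observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup for illustration: two groups with the SAME true
# reoffending rate, but the model starts with an inflated estimate
# for group B.
true_rate = {"A": 0.30, "B": 0.30}
estimate  = {"A": 0.30, "B": 0.60}      # biased starting estimate for B
observed  = {"A": [0, 0], "B": [0, 0]}  # [reoffences seen, releases seen]

DETAIN_THRESHOLD = 0.50   # detain whenever the estimated risk is too high
N_PER_ROUND = 1_000

for _ in range(50):
    for g in ("A", "B"):
        if estimate[g] >= DETAIN_THRESHOLD:
            # Everyone in this group is detained: outcomes are never
            # observed, so the (wrong) estimate receives no feedback.
            continue
        outcomes = rng.random(N_PER_ROUND) < true_rate[g]
        observed[g][0] += int(outcomes.sum())
        observed[g][1] += N_PER_ROUND
        estimate[g] = observed[g][0] / observed[g][1]

print(estimate)
# Group A converges to its true rate (~0.30); group B stays frozen at
# 0.60 forever: the initial error becomes a self-fulfilling prophecy.
```

The same one-sided dynamic drives tunneling: the more the system's output determines which cases produce data, the narrower and more self-confirming that data becomes.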

Furthermore, we have performed experiments together with the Openbaar Ministerie to assess whether users are able to identify flaws in the reasoning when presented with the result of a risk assessment. For example: do users start to over-trust a system if it constantly provides assessments that match their own?

We have identified several key factors of success and failure that are generic, but often especially important in the justice domain, and present technical, socio-technical, and social solutions to these problems.

Contact

  • Selmar Smit, Senior Scientist Integrator, TNO, e-mail: selmar.smit@tno.nl