AI ‘judge’ doesn’t explain why it reaches certain decisions

The Guardian reports on a recent paper by University College London researchers who are using artificial intelligence to predict the outcomes of trials at the European Court of Human Rights.

Their approach uses natural language processing (NLP) to build a machine learning model from the text records of previous trials. It demonstrates the power of modern NLP techniques: given enough relevant text in a particular area, NLP can discover complex underlying patterns, which can then be used to predict outcomes for new cases.
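To make the idea concrete, here is a minimal sketch of text-based outcome prediction: a multinomial Naive Bayes classifier over bag-of-words features, trained on a toy dataset. This is only an illustration of the general technique, not the researchers' actual model or data; the example case summaries and labels below are invented for demonstration.

```python
from collections import Counter
import math

def tokenize(text):
    # Lowercase bag-of-words tokenization (simple whitespace split).
    return text.lower().split()

class NaiveBayesClassifier:
    """Tiny multinomial Naive Bayes over bag-of-words features."""

    def fit(self, texts, labels):
        self.labels = set(labels)
        self.word_counts = {label: Counter() for label in self.labels}
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        # Score each label by log-probability with add-one (Laplace) smoothing.
        total_docs = sum(self.label_counts.values())
        scores = {}
        for label in self.labels:
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for tok in tokenize(text):
                count = self.word_counts[label][tok] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical toy training data -- not real court case text.
train_texts = [
    "applicant detained without trial for months",
    "prolonged detention no access to lawyer",
    "complaint examined promptly fair hearing held",
    "proceedings concluded fairly within reasonable time",
]
train_labels = ["violation", "violation", "no-violation", "no-violation"]

model = NaiveBayesClassifier().fit(train_texts, train_labels)
print(model.predict("detained for months without a lawyer"))  # → violation
```

The sketch also hints at the explainability problem discussed below: the prediction emerges from thousands of small per-word probability adjustments, none of which constitutes a legal reason for the decision.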

However, the biggest obstacle to its use in courts is that the model cannot explain why it made a given prediction. This problem plagues many machine learning systems: the underlying mathematics of the models is understood, but not well enough to say for certain why a given input produces a given output. A human judge, by contrast, must give reasons for a decision.

So for the moment, an AI won’t be determining if you really are innocent or guilty in a court of law.

Source: Artificial intelligence ‘judge’ developed by UCL computer scientists | Technology | The Guardian

Paper: https://peerj.com/articles/cs-93/