Artificial Intelligence in the Service of Justice
The objective of this new technology is to streamline the work of legal practitioners, whether lawyers or judges, as part of a broader move towards an “uberization” of justice aimed at maximizing profitability at minimal cost.
This approach, however, is driven by technological advances that necessarily rely on open data and big data. Artificial intelligence, born in the United States in the late 1950s and now more than sixty years old, soon found its way into the science of justice. The innovators’ strategy is to change practices from the outside, backed by a new body of knowledge developed beyond the law. The predictive function itself is nothing new: the principle of predictability is embedded in the very nature of law.
Take the Californian start-up Lex Machina as an example. It produces intelligence on cases that no one until now, neither judges, lawyers, the law itself, nor statistics, could achieve. A specialized judge may handle hundreds of patent cases over the course of a career, yet will never manage to memorize every detail infallibly. Endowed with superhuman capacity, big data can. It claims to give a mathematical, rather than juridical, substance to a reality that practitioners could only grasp intuitively and that statistics had touched on only in a generic way.
What fuels the momentum of predictive justice, moreover, is the well-known fact that the public today feels more reassured by a mathematically and scientifically established truth than by a human decision, even one surrounded by procedural safeguards.
Parties to a divorce, for example, are more inclined to accept a statistic indicating the amount of compensation due than a judge’s ruling. In doing so, they place their trust in sentencing by numbers.
The United States, a pioneer in the field of predictive justice, uses machines to predict recidivism in the criminal behaviour of convicted offenders. The Supreme Court justices of Indiana and Wisconsin insist on what is probably the most important point: algorithmic risk assessments should not be treated as a central element in sentencing.
In this respect, the spirit of these American states’ rulings is close to the letter of Article 10 of the French “Informatique et Libertés” law, which provides: “No judicial decision involving an assessment of a person’s behaviour may be based on automated processing of personal data intended to evaluate certain aspects of their personality. No other decision producing legal effects on a person may be taken solely on the basis of automated processing of data intended to profile the person concerned or to evaluate certain aspects of their personality.”
Users of the French courts therefore need not fear that the machine will replace human judges.