Commentary
It’s time for our justice system to embrace artificial intelligence
July 20, 2017
Liberal democracies have long struggled to minimize or eliminate elements of bias in their systems of criminal justice. Maybe justice is just too difficult for humans to administer alone. Thankfully, new advances in artificial intelligence could help to balance the scales.
In some jurisdictions, basic AI-driven systems already help courts to assess various risks, from the likelihood that a defendant will skip bail to the likelihood that a potential parolee will reoffend. As these tools become more sophisticated, they have the potential to alleviate the massive congestion facing our state and federal justice systems, while improving fairness and safety.
Of course, such developments are not without controversy. When computer algorithms serve to inform judges’ decisions, there are concerns about privacy, agency and accountability. While these issues deserve serious consideration, and in some cases may warrant new laws and regulations, they shouldn’t deter the development of systems with such enormous potential to produce better outcomes for society.
Human judgment, human failings
Professionals in the criminal justice system have a seemingly impossible task. They must weigh the probability that a criminal defendant will show up to trial, whether they are guilty, what the sentence should be, whether parole is deserved and what type of probation ought to be imposed. These decisions require immense wisdom, analytical prowess, and evenhandedness to get right. The rulings handed down will change the course of individuals’ lives permanently.
But human judgment brings human failings. Not only are there racial disparities in the sentencing process, but research suggests that extraneous factors, like how recently a parole board member ate lunch or how the local college football team is doing, can have significant effects on the outcome of a decision. It may be that the tasks we ask judges and parole boards to carry out are simply too difficult for internal human calculus.
While humans rely on inherently biased personal experience to guide their judgments, empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime by up to 24.8 percent with no change in jailing rates, or to reduce jail populations by up to 42 percent with no increase in crime rates. Importantly, these gains can be made across the board, including for Hispanics and African-Americans.
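To make the idea concrete, here is a minimal sketch of what an ML-based pretrial risk model involves. The synthetic data, feature count and failure-to-appear label are illustrative assumptions, not drawn from any deployed system:

```python
# A minimal sketch of ML-based pretrial risk scoring. Synthetic data
# stands in for historical case records; nothing here reflects any
# deployed system.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for features like age, prior arrests and
# charge severity; the label marks failure to appear at trial.
X, y = make_classification(n_samples=5000, n_features=10,
                           n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# The model produces a risk score, not a decision: a judge or an
# explicit policy rule still sets the release/detain threshold.
risk_scores = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", round(roc_auc_score(y_test, risk_scores), 3))
```

Note that the model only ranks risk on held-out cases; translating a score into an actual release or detention decision remains a human and policy choice.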
Supervision still needed
Humans would still need to guide the process. Policymakers would need to prioritize competing goals, such as whether it is more important to reduce the crime rate or the jail population, and whether to emphasize restitution or rehabilitation. In any given case, a judge may want to consider factors that fall outside the scope of the AI’s analysis and the variables it weighs. Understanding the data underlying these algorithms is essential to understanding how they work, and testing their validity, particularly where machine learning is involved, also requires access to those datasets. That underscores the importance of transparency into the inner workings of risk-assessment software. Understanding why you were denied bail or how your sentence was calculated is a fundamental aspect of justice.
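As one illustration of what that transparency could look like, the sketch below decomposes a single defendant’s score from an interpretable model into per-variable contributions. The model, data and feature names are hypothetical stand-ins, not any vendor’s actual system:

```python
# Sketch: decomposing one defendant's score from an interpretable
# model into per-variable contributions. Feature names and data are
# hypothetical stand-ins, not any vendor's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "prior_arrests", "charge_severity", "prior_ftas"]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(features)))          # stand-in records
y = (X[:, 1] + X[:, 3] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each variable's contribution to this defendant's log-odds score:
case = X[0]
for name, weight, value in zip(features, model.coef_[0], case):
    print(f"{name:16s} weight={weight:+.2f}  value={value:+.2f}  "
          f"contribution={weight * value:+.2f}")
```

With an opaque proprietary model, no such decomposition is possible, which is precisely the due process problem.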
In a recent Wisconsin case, which the Supreme Court declined to review, defendant Eric Loomis was sentenced to six years in prison, partly on the basis of his “high-risk” status as assigned by proprietary risk-assessment software called Compas. Loomis appealed, arguing that he and his lawyers should be able to review Compas’ algorithm and challenge the validity of its methodology. A subsequent ProPublica report on Compas found that black defendants in Broward County, Florida “were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism.” While the company objects to the analysis, the fact remains that no one can be sure, so long as the algorithms remain opaque trade secrets.
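Notably, the data behind ProPublica’s analysis is public, so the core check can be reproduced in a few lines. The sketch below assumes the column layout of the dataset ProPublica published alongside its report; “false positive” here means a defendant labeled high-risk who did not reoffend within two years:

```python
# Sketch of the kind of audit ProPublica ran: compare false positive
# rates (labeled high-risk but no reoffense within two years) across
# groups. Assumes the column layout of the dataset ProPublica
# published at github.com/propublica/compas-analysis.
import pandas as pd

url = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(url)

# ProPublica treated decile scores above 4 (Medium/High) as a
# high-risk prediction.
df["predicted_high_risk"] = df["decile_score"] > 4

for race in ["African-American", "Caucasian"]:
    no_recid = df[(df["race"] == race) & (df["two_year_recid"] == 0)]
    fpr = no_recid["predicted_high_risk"].mean()
    print(f"{race}: false positive rate = {fpr:.1%}")
```

That such an audit was possible only because ProPublica obtained scores through public records requests, rather than from the vendor, underscores the point about opacity.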
There’s currently no federal law setting standards for implementing risk-assessment software or for subsequent validity testing, and states do little more. There are good reasons not to mandate auditing and disclosure of an AI’s underlying code or training data by default: the proprietary nature of these algorithms gives companies an incentive to invest in and develop innovations. But in the criminal justice context, due process concerns must take precedence when an individual’s freedom is on the line. Policymakers need to adopt more open, transparent and consistent rules for the use of AI in this context if they want to ensure just outcomes.
Open source software and training data
One step policymakers could take would be to have state and federal governments require, as part of their procurement contracts, that AI applications deployed in the criminal justice system be built on free and open-source software. Robust open-source AI projects already exist, such as Google’s TensorFlow or Microsoft’s DMTK. And the federal government already utilizes open-source software in a variety of contexts. Projects like Linux and the free software movement helped shape computing. Similarly, open-source AI for criminal justice could be organized through a nonprofit foundation with broad input from the public and civil society.
However, making the source code available isn’t enough, given the proliferation of machine learning. Because these algorithms effectively train themselves, understanding how they work and testing their validity also requires access to the data on which they were trained. While much of the relevant data is already in the public record, there should be better ways to make it available in a bulk, machine-readable format. We may also have to consider ways to anonymize or limit access to sensitive information. Everything from the training data to the final variables would then be open to public scrutiny by civil society groups, which could work independently with the public and private sectors to improve how AI is used in the criminal justice context. Monetary prizes could also be used as incentives for private parties to make meaningful contributions to the algorithms and the datasets that underlie them, optimizing for different outcomes.
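As a rough illustration of what such a release pipeline might do, the sketch below pseudonymizes direct identifiers and coarsens dates before export. All field names are hypothetical, and one-way hashing alone is not a complete anonymization strategy; a real release would need a formal privacy review:

```python
# Rough sketch of preparing a bulk, machine-readable data release
# with sensitive fields limited. Field names are hypothetical, and
# hashing alone is not full anonymization; a real release would need
# a formal privacy review (e.g., k-anonymity or differential privacy).
import hashlib
import pandas as pd

def pseudonymize(value: str, salt: str = "per-release-secret") -> str:
    """Replace a direct identifier with a stable one-way token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

records = pd.DataFrame({
    "name": ["A. Defendant", "B. Defendant"],
    "birth_date": ["1980-03-14", "1992-07-02"],
    "prior_arrests": [2, 0],
    "charge_severity": ["felony", "misdemeanor"],
    "failed_to_appear": [0, 1],
})

# Publish pseudonymous IDs and birth years only; drop raw identifiers.
public = records.assign(
    person_id=records["name"].map(pseudonymize),
    birth_year=pd.to_datetime(records["birth_date"]).dt.year,
).drop(columns=["name", "birth_date"])

public.to_csv("training-data-release.csv", index=False)
print(public)
```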
Finding the right balance of AI and human interaction in the criminal justice system will be a difficult task. Judges may be resistant to change, and we will need systems and institutions that ensure proper transparency and due process. But we can’t just abandon the project because it is hard. After all, the humans we already rely on to dispense justice are flawed, as well.