Our criminal justice system has been under heavy scrutiny this year, particularly after events in Ferguson, Baltimore, Chicago, and other cities made police brutality front-page news. Citizens, experts, and policymakers are demanding that its many inequities and inefficiencies be fixed.
But how do we construct a better system? Is there an app for that? Maybe.
A growing variety of high-tech tools are available to law enforcement, including body-worn cameras, gunshot sensors, DNA profiling, and predictive policing. Unfortunately, these tools are typically untested, and they come with financial and privacy costs. With so much uncertainty about what their effects will be, how do we move forward? As we look toward 2016 and beyond, we should be clear about technology’s goals, encourage experimentation and rigorous evaluation at the local level, and consider opportunity costs before rolling anything out nationwide.
As a first step, we should set clear objectives for any new technology before we try it out—it is always a means to an end, and we need to keep our eyes on that end goal. Are we trying to reduce violent crime? Decrease mass incarceration? Protect police officers? Increase trust between law enforcement and the communities they serve? It’s important to have an ideal outcome measure (or two or three) in mind so that we don’t get distracted by easily available metrics that aren’t important. Counting gigabytes of video footage uploaded as a signal of camera use is easy; evaluating body-worn cameras’ effects on police misconduct or community trust is hard. Similarly, the ultimate goal of “predictive policing” technologies—which direct police resources based on past crime patterns—is not simply to predict where crime will occur next; it’s to catch offenders and reduce criminal activity. Such distinctions often get lost in the excitement over all the “cool” stuff new technology makes possible.
Second, we should insist on rigorous evaluations of policy by implementing new tools in a way that allows us to test whether they achieve their goal(s). A randomized controlled trial (RCT) is often ideal, but we need to be careful to keep the treatment and control groups as separate as possible, which can be difficult because these groups may affect each other in subtle ways. For instance, one of the goals of body-worn cameras is to change the way community residents and police officers interact, which will probably happen at the community level. Existing research on body-worn cameras tends to randomly assign cameras to officers or shifts within a single department. If cameras improve community-level trust, that improvement will affect interactions with all officers, across the full day. Such a study could underestimate effects on use of force, citizen complaints, and so on, because the control group was also affected by the treatment. Alternatively, randomizing by officer or shift could overestimate effects if officers have discretion to hand off difficult situations to colleagues who aren’t wearing cameras, or if they’re careful to behave professionally during treatment shifts (when they’re wearing cameras) but that extra effort increases mental fatigue and improper conduct during control shifts (without cameras). While within-department RCTs are informative, they risk under- or overestimating the effect of implementing body-worn cameras full-time, department-wide. A better design would be to randomize across departments: give cameras to every officer in some departments, and none in other, similar departments.
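To make the spillover concern concrete, here is a minimal simulation—all numbers are invented for illustration, not drawn from any real study—of what happens when part of the camera effect operates through community-level trust. It compares an estimate from randomizing officers within departments against one from randomizing whole departments:

```python
import random
from statistics import mean

random.seed(0)

TRUE_EFFECT = -0.30  # assumed: cameras cut complaints per officer by 0.30
SPILLOVER = 0.60     # assumed: fraction of the effect reaching uncamera'd colleagues
BASE = 1.00          # assumed baseline complaint rate per officer
N_DEPTS, N_OFFICERS = 200, 20

def complaints(treated, dept_has_cameras):
    # If any officers in a department wear cameras, community trust shifts
    # for everyone there, so untreated colleagues also improve (spillover).
    if treated:
        effect = TRUE_EFFECT
    elif dept_has_cameras:
        effect = SPILLOVER * TRUE_EFFECT
    else:
        effect = 0.0
    return BASE + effect + random.gauss(0, 0.05)

# Design A: randomize officers within each department (every department treated)
within_t, within_c = [], []
for _ in range(N_DEPTS):
    for i in range(N_OFFICERS):
        treated = i < N_OFFICERS // 2
        (within_t if treated else within_c).append(
            complaints(treated, dept_has_cameras=True))

# Design B: randomize whole departments
across_t, across_c = [], []
for d in range(N_DEPTS):
    cams = d < N_DEPTS // 2
    for _ in range(N_OFFICERS):
        (across_t if cams else across_c).append(
            complaints(cams, dept_has_cameras=cams))

est_within = mean(within_t) - mean(within_c)  # biased toward zero: ~ -0.12
est_across = mean(across_t) - mean(across_c)  # recovers full effect: ~ -0.30
```

Because the within-department control officers also benefit from the community-level change, Design A attributes only the gap between full and spilled-over effects to the cameras, understating the true department-wide impact that Design B captures.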
Often, RCTs are not possible to implement. (Randomly assign convicts to prison or parole? Unlikely.) In these cases, quasi-experimental designs are a good substitute. As long as there’s a way to divide a treatment group from a (very similar) control group, there’s a good opportunity for clean evaluation of a new policy. Sharp date- or score-cutoffs could define eligibility for a program: for instance, it could be that everyone arrested after August 1 is added to the DNA database, while those arrested before August 1 are not. We can then compare the long-run outcomes of those arrested before August 1 with those arrested after. If the two groups are otherwise fairly similar, then we can attribute any differences in outcomes (such as recidivism rates) to the policy change.
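At its simplest, such a cutoff comparison is just a difference in group means on either side of the threshold. A toy sketch of the DNA-database example, using the August 1 cutoff from above with entirely made-up arrest records and outcomes:

```python
from datetime import date

# Hypothetical records: (arrest_date, reoffended_within_3_years); invented data.
records = [
    (date(2010, 7, 20), 1), (date(2010, 7, 25), 1), (date(2010, 7, 28), 0),
    (date(2010, 7, 30), 1), (date(2010, 8, 2), 0),  (date(2010, 8, 5), 1),
    (date(2010, 8, 9), 0),  (date(2010, 8, 12), 0),
]

CUTOFF = date(2010, 8, 1)

def recidivism_rate(group):
    return sum(outcome for _, outcome in group) / len(group)

before = [r for r in records if r[0] < CUTOFF]   # not added to the DNA database
after = [r for r in records if r[0] >= CUTOFF]   # added to the DNA database

# With these invented numbers: 0.75 before vs. 0.25 after, a -0.50 difference.
effect = recidivism_rate(after) - recidivism_rate(before)
```

The comparison is only credible for arrests close to the cutoff, where the two groups are plausibly similar in everything except database status; in practice one would also check that observable characteristics are balanced across the threshold.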
Phased implementation across police districts, cities, or states can also provide useful quasi-experimental variation. In this case, the places that adopt a new program later can serve as a control group for places that adopt it earlier (as long as those groups are similar). Rolling out a new technology quickly at large scale (state- or nation-wide) makes it very difficult to evaluate its effects. (The Obama administration has been working to improve data availability in this area and supports evidence-based policy. But it’s also investing heavily in body-worn cameras before we know they’re the right call, which is a problem.)
There’s so little evidence about the effects of any of these high-tech tools, and effects are so likely to vary from place to place, that no department can know up front whether a particular option will work well for it. Being open to innovative solutions requires a heavy dose of humility. We should be suspicious of any department or tech firm that isn’t supportive of an arm’s-length, rigorous evaluation.
Third, what’s the opportunity cost? What is the next-best use of the funding we might spend on, for example, body-worn cameras (and the expensive data storage services they require)? It could be that even though cameras are the sexier solution, training officers on implicit bias achieves the same reduction in police misconduct for far less money. It’s important to keep our big-picture goals in mind: implicit bias training might reduce police misconduct, but will never generate video uploads. Focusing on the latter might prevent us from thinking creatively about alternative, more cost-effective solutions.
Political pressure is a powerful instrument for reform, but it is a blunt one that will keep hammering at policymakers until there are changes in outcomes—safer streets, fairer policing, lower rates of incarceration, etc. Police departments will be tempted to appease the public by adopting policies that sound promising, that local advocates demand, or that nearby departments have already chosen. Even if these strategies are reasonable for individual police departments, they are probably not best for the country as a whole. We should diversify our portfolio of policy experiments, incentivize innovation, and focus on rigorous evaluation. Once we have a better idea of what works, we can expand the most successful policies—whether high- or low-tech—which is the fastest way to satisfy public demands for a fairer, more efficient criminal justice system.