Artificial Intelligence is not coming; it’s already here. AI tools are becoming an integral part of organizational decisionmaking in both the public and private sectors by evaluating large amounts of data to quickly arrive at conclusions.
What’s already here
Public agencies are leveraging AI tools to become smarter, more efficient, and more responsive. For instance, the Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends an appropriate response to the dispatcher for each medical emergency call – whether a patient can be treated on-site or needs to be taken to the hospital – by taking into account factors such as the type of call, location, weather, and similar past calls. With this new system in place, the department – which responds to an average of 80,000 medical emergencies a year – can position its emergency response teams strategically to reduce the number of runs and shorten response times.
Increasingly, organizations are sharing their AI tools as open-source software. Microsoft, Google, Amazon, and Facebook are open-sourcing their AI tools for people to explore, engage with, use, and modify. DARPA, the U.S. Department of Defense’s research agency, has also created the XDATA program to build a public library of machine learning tools and technologies. Citizens can download, customize, and modify AI tools to suit their preferences, free of cost. Moreover, OpenAI, a non-profit organization, shares AI tools to generate conversations and build value for diverse stakeholders. The open-sourcing of AI tools is likely to spur rapid innovation in this area, as people continually update and learn from each other’s work.
Given the potential of AI tools to solve social problems, it is not surprising that Dave Weinstein, Chief Technology Officer for the state of New Jersey, commented that his state might become the first in the nation to hire a Chief Artificial Intelligence Officer. AI tools can be utilized not only for automation or recommendation, but also as a strategic asset that can monitor information systems within and across public agencies.
What’s coming next
While AI has the potential to transform decisionmaking processes, these tools are often promoted as a one-size-fits-all policy solution. The ability of AI tools to automate these processes depends upon five key considerations.
First, it is important to understand the nature of the problem we are seeking to solve using AI tools. For example, not all policy decisions hinge on prediction; some policy problems require causal inference, i.e., an understanding of the underlying mechanisms. Recognizing whether a policy problem calls for causation or prediction is a critical first step in developing AI tools.
Second, once the problem has been identified, it is important to consider the types of data available for addressing the problem. This has historically been a problem in government, where data is fragmented, not normalized, and widely dispersed. Building powerful AI tools that aid decisionmaking (or make decisions outright) depends on the availability of large volumes of data. Feeding in quality data drawn from multiple sources is an essential ingredient for developing these AI tools.
Third, these AI tools need large volumes of training data. Consider the case of AI tools predicting crimes. To build these tools for monitoring and predicting occurrences of crime, the developer has to teach them to classify criminal versus non-criminal activities. Algorithms need sufficient training data to develop their predictive capacities before they can be deployed with confidence. In addition, the training dataset should be a representative sample that captures the nuances of the underlying population. Otherwise, the tool will have limited practical utility and, worse, might cause more harm than good if deployed.
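To illustrate the representativeness point, here is a deliberately simplified sketch in plain Python (the data and the "model" are entirely hypothetical, not drawn from any agency's actual system): a naive classifier trained on a skewed sample can look accurate on its own training data while missing every case that matters once deployed.

```python
from collections import Counter

def train_majority_classifier(labels):
    """A deliberately naive 'model': always predict the most common training label."""
    majority = Counter(labels).most_common(1)[0][0]
    return lambda _features: majority

def accuracy(model, examples):
    return sum(model(features) == label for features, label in examples) / len(examples)

# Hypothetical labeled incidents: (features, label); label 1 = criminal activity.
# The skewed training sample contains almost no positive cases (5 out of 100)...
skewed_train = [({"hour": h % 24}, 0) for h in range(95)] + [({"hour": 23}, 1)] * 5
# ...while the population the tool faces after deployment has far more (30 out of 100).
deployment = [({"hour": h % 24}, 0) for h in range(70)] + [({"hour": 23}, 1)] * 30

model = train_majority_classifier([label for _, label in skewed_train])
print(accuracy(model, skewed_train))  # 0.95 — looks impressive on the skewed sample
print(accuracy(model, deployment))    # 0.7 — yet it misses every actual positive case
```

The apparent 95 percent accuracy is an artifact of the unrepresentative sample; on the real population the tool fails on exactly the cases it was built to catch.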
Fourth, it is critical to evaluate the quality of data that is integrated across databases. The decision power of AI tools depends upon the quality of data fed into these systems. In 2013, a fake tweet posted from the hacked Associated Press (AP) Twitter account reported that President Barack Obama had been injured at the White House. Although the AP quickly suspended the account, the tweet spread like wildfire and was retweeted about 4,000 times. Within seconds, the U.S. stock market responded with a sharp sell-off. Trading algorithms are designed to respond to news trends, and the unchecked spread of false information can deceive these tools. AI tools that support decisions are only as good as the quality of their data.
Finally, as we increasingly leverage AI tools for sorting, recommending, and making decisions, we have to pay attention to protecting these tools from hackers. To combat the challenge of cyberattacks, in March 2017 the National Governors Association launched a new initiative called Meet the Threat: States Confront the Cyber Challenge. Coordinated initiatives across public agencies are a good first step toward protecting data from hackers, particularly as AI tools become a critical component of public service delivery. Public agencies also need to design systems and data flow processes with privacy protections built in.
Opening the black box
AI tools are new, and we do not yet fully understand their potential. Organizations – public, private, non-governmental, and academic – are increasingly leveraging these tools to improve a wide range of decisionmaking processes. As AI tools mature, we will continue to learn about their capabilities and shortcomings in solving complex social problems. Simply put, it is time to open the black box of AI and better understand where these tools are headed.
Microsoft, Google, and Facebook are donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and not influenced by any donation.
Learning from public sector experimentation with artificial intelligence
June 23, 2017