
Commentary

AI for social protection: Mind the people

Iris Ortner, an analyst at the Bavarian State Criminal Police Office (BLKA), works at her computer during an interview about the police software “VeRA,” with a relationship diagram visible on one of her monitors. The diagram shows the number of messages a target person has exchanged with other contacts (contact names are anonymized). The cross-case search and analysis platform “VeRA” is intended to support police investigators in efficiently analyzing large amounts of data.

The technology that allowed passengers to ride elevators without an operator was tested and ready for deployment in the 1890s. But it was only after the elevator operators’ strike of 1946—which cost New York City $100 million—that automated elevators started to get installed. It took more than 50 years to persuade people that they were as safe and as convenient as those operated by humans. The promise of radical changes from new technologies has often overshadowed the human factor that, in the end, determines if and when these technologies will be used.

Interest in artificial intelligence (AI) as an instrument for improving efficiency in the public sector is at an all-time high. This interest is motivated by the ambition to develop neutral, scientific, and objective techniques of government decisionmaking (Harcourt 2018). As of April 2021, governments of 19 European countries had launched national AI strategies. The role of AI in achieving the Sustainable Development Goals recently drew the attention of the international development community (Medaglia et al. 2021).

Advocates argue that AI could radically improve the efficiency and quality of public service delivery in education, health care, social protection, and other sectors (Bullock 2019; Samoili and others 2020; de Sousa 2019; World Bank 2020). In social protection, AI could be used to assess eligibility and needs, make enrollment decisions, provide benefits, and monitor and manage benefit delivery (ADB 2020). Given these benefits and the fact that AI technology is readily available and relatively inexpensive, why has AI not been widely used in social protection?

Limited deployment

At-scale applications of AI in social protection have been limited. A study by Engstrom and others (2020) of 157 public sector uses of AI by 64 U.S. government agencies found seven cases related to social protection, in which AI was mainly used for predictive risk screening of referrals at child protection agencies (Chouldechova and others 2018; Clayton and others 2019).

Only a handful of evaluations of AI in social protection have been conducted, including assessments of homeless assistance (Toros and Flaming 2018), unemployment benefits (Niklas and others 2015), and child protection services (Hurley 2018; Brown and others 2019; Vogl 2020). Most were proofs of concept or pilots (ADB 2020). Examples of successful pilots include automation of Sweden’s social services (Ranerup and Henriksen 2020) and experimentation by the government of Togo with machine learning using mobile phone metadata and satellite images to identify households most in need of social assistance (Aiken and others 2021).
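
To make the Togo example concrete, the sketch below shows, in broad strokes, how a machine-learning model can be trained on features derived from mobile phone metadata to rank households by predicted need. It is a minimal illustration under stated assumptions, not the actual system used in Togo: the feature names, the synthetic data, and the choice of a gradient-boosting model are all hypothetical.

    # Illustrative sketch: predict a welfare proxy from phone-metadata features,
    # then target the households with the lowest predicted welfare.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical survey sample: phone-usage features and an observed welfare proxy.
    n = 2_000
    features = np.column_stack([
        rng.poisson(30, n),          # calls per week
        rng.exponential(50, n),      # mobile money spend
        rng.integers(0, 2, n),       # any international call (0/1)
        rng.uniform(0, 40, n),       # share of nighttime activity (%)
    ])
    consumption = (                   # welfare proxy measured in the survey
        0.02 * features[:, 0] + 0.01 * features[:, 1]
        + 0.5 * features[:, 2] + rng.normal(0, 0.3, n)
    )

    X_train, X_test, y_train, y_test = train_test_split(
        features, consumption, random_state=0
    )

    # Train on the surveyed sample, then rank the unsurveyed subscribers and
    # flag the poorest predicted decile for assistance.
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    predicted = model.predict(X_test)
    cutoff = np.quantile(predicted, 0.10)
    eligible = predicted <= cutoff
    print(f"Flagged {eligible.sum()} of {len(predicted)} households for assistance")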

Some debacles have reduced public confidence. In 2016, Services Australia—an agency of the Australian government that provides social, health, and child support services and payments—launched Robodebt, an AI-based system designed to calculate overpayments and issue debt notices to welfare recipients by matching data from the social security payment systems and income data from the Australian Taxation Office. The new system erroneously sent more than 500,000 people debt notices to the tune of $900 million (Carney 2021). The failure of the Robodebt program has had ripple effects on public perceptions about the use of AI in social security administration.
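
To illustrate the mechanism at issue, here is a deliberately simplified sketch of the kind of automated data matching described above; the record layouts, field names, and matching rule are hypothetical, not Robodebt’s actual logic. The point is that a rule which converts every discrepancy between two administrative datasets into a debt, with no human review, will generate erroneous notices whenever the figures differ for legitimate reasons.

    # Hypothetical sketch of naive cross-agency data matching.
    from dataclasses import dataclass

    @dataclass
    class WelfareRecord:
        person_id: str
        reported_income: float   # income the recipient reported to the welfare agency

    @dataclass
    class TaxRecord:
        person_id: str
        assessed_income: float   # income on file with the tax office

    def flag_debts(welfare: list[WelfareRecord], tax: list[TaxRecord]) -> dict[str, float]:
        """Cross-match the two datasets and flag every positive gap as a 'debt'."""
        tax_by_id = {t.person_id: t.assessed_income for t in tax}
        debts = {}
        for w in welfare:
            assessed = tax_by_id.get(w.person_id)
            if assessed is not None and assessed > w.reported_income:
                # No human review and no check on why the figures differ:
                # the gap is automatically converted into a debt notice.
                debts[w.person_id] = assessed - w.reported_income
        return debts

    if __name__ == "__main__":
        welfare = [WelfareRecord("A1", 12_000.0), WelfareRecord("B2", 18_000.0)]
        tax = [TaxRecord("A1", 15_500.0), TaxRecord("B2", 18_000.0)]
        print(flag_debts(welfare, tax))   # {'A1': 3500.0}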

In the United States, the Illinois Department of Children and Family Services stopped using predictive analytics in 2017 after staff warned that poor data quality and problems with the procurement process made the system unreliable. The Los Angeles Office of Child Protection terminated its AI-based project, citing the “black-box” nature of the algorithm and the high incidence of errors. Similar data quality problems marred the application of a data-driven approach to identifying vulnerable children in Denmark (Jørgensen 2021), where the project was halted in less than a year, before it was even fully implemented.

The human factor in the adoption of AI for social protection

Research on the use of AI in social protection points to at least five cautionary tales about the risks involved and the consequences that algorithmic biases and errors can have for people’s lives.

The accountability and “explainability” problem: Public officials are often required to explain their decisions—such as why someone was denied benefits—to citizens (Gilman 2020). However, many AI-based outcomes are opaque and not fully explainable because they incorporate many factors in multistage algorithmic processes (Selbst et al. 2018). A key consideration for promoting AI in social protection is how AI discretion fits within the welfare system’s regulatory, transparency, grievance redressal, and accountability frameworks (Engstrom 2020). The wider risk is that, without adequate grievance redressal systems, automation may disempower citizens, especially minorities and the disadvantaged, by treating them as analytical data points.
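
As a concrete contrast, the sketch below shows the kind of per-decision explanation that is straightforward to produce when eligibility is scored by a simple linear rule; the feature names, weights, and threshold are hypothetical. For the multistage algorithmic processes cited above, no comparable decomposition of a decision into named factors is generally available, which is precisely what makes explanation hard.

    # Hypothetical linear eligibility score whose decisions can be explained
    # factor by factor; weights and threshold are illustrative assumptions.
    FEATURE_WEIGHTS = {
        "household_income": -0.8,
        "dependents":        0.5,
        "months_unemployed": 0.4,
    }
    THRESHOLD = 1.0  # score at or above which benefits are granted

    def explain_decision(applicant: dict[str, float]) -> str:
        """Return a plain-language breakdown of how each factor moved the score."""
        contributions = {
            name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
        }
        score = sum(contributions.values())
        decision = "granted" if score >= THRESHOLD else "denied"
        lines = [f"Benefit {decision} (score {score:.2f}, threshold {THRESHOLD}):"]
        for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
            lines.append(f"  {name}: contributed {value:+.2f}")
        return "\n".join(lines)

    print(explain_decision({"household_income": 1.2, "dependents": 2, "months_unemployed": 3}))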

Data quality: The quality of administrative data profoundly affects the efficacy of AI. In Canada, poor data quality produced errors that led to subpar foster placements and failures to remove children from unsafe environments (Vogl 2020). The tendency to favor legacy systems can undermine efforts to improve the data architecture (Mehr and others 2017).

Misuse of integrated data: The applications of AI in social protection require a high degree of data integration, which relies on data sharing across agencies and databases. In some instances, data utilization could morph into data exploitation. For example, the Florida Department of Children and Families collected multidimensional data on students’ education, health, and home environment. These data were later linked with sheriff’s office records to identify and maintain a database of juveniles considered at risk of becoming prolific offenders. In such cases, data integration creates new opportunities for controversial overreach, deviating from the purposes for which the data were originally collected (Levy 2021).

Response of public officials: The adoption of AI should not presume that welfare officials can easily transform themselves from claims processors and decisionmakers into managers of AI systems (Ranerup and Henriksen 2020; Brown and others 2019). The way public officials respond to the introduction of AI-based systems may influence system performance and lead to unforeseen consequences. In the U.S., police officers have been found to disregard the recommendations of predictive algorithms or to use this information in ways that can impair system performance and violate assumptions about its accuracy (Garvie 2019).

Public response and public trust: Using AI to make decisions and judgments about the provision of social benefits could exacerbate inclusion and exclusion errors because of data-driven biases, and it raises ethical concerns about accountability for life-altering decisions (Ohlenburg 2020). Building trust in AI is therefore vital to scaling up its use in social protection. However, a survey of Americans shows that almost 80 percent of respondents have no confidence in the ability of governmental organizations to manage the development and use of AI technologies (Zhang and Dafoe 2019). These concerns fuel growing efforts to counteract the potential threats AI-based systems pose to people and communities. For example, AI-based risk assessments have been challenged on due-process grounds, as in cases involving the denial of housing and public benefits in New York (Richardson 2019). Mikhaylov, Esteve, and Campion (2018) argue that for governments to use AI in their public services, they need to promote its public acceptance.

Future of AI in social protection

Too few studies have been conducted to suggest a clear path for scaling the use of AI in social protection. But it is clear that the system design must consider the human factor. Successful use of AI in social protection requires an explicit institutional redesign, not mere tool-like adoption of AI in a pure information technology sense. Using AI effectively requires coordination and evolution of the system’s legal, governance, ethical, and accountability components. Fully autonomous AI discretion may not be appropriate; a hybrid system in which AI is used in conjunction with traditional systems may be better suited to reducing risks and spurring adoption (Chouldechova and others 2018; Ranerup and Henriksen 2020; Wenger and Wilkins 2009; Sansone 2021).
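
One way to picture such a hybrid arrangement is a simple routing rule in which the AI score is acted on only at the extremes and uncertain cases default to a caseworker. The sketch below is illustrative only; the thresholds and categories are assumptions, not a recommended configuration.

    # Hypothetical human-in-the-loop routing: the model never auto-denies,
    # and anything it is unsure about goes to a caseworker.
    from typing import Literal

    def route_application(
        risk_score: float,
        auto_approve_below: float = 0.2,
        review_above: float = 0.8,
    ) -> Literal["auto-approve", "caseworker", "priority-review"]:
        """Use the AI score only at the extremes; humans handle the uncertain middle."""
        if risk_score < auto_approve_below:
            return "auto-approve"      # low predicted risk: fast-track the benefit
        if risk_score > review_above:
            return "priority-review"   # high predicted risk: human review, never automatic denial
        return "caseworker"            # uncertain cases default to human judgment

    for score in (0.05, 0.5, 0.93):
        print(score, "->", route_application(score))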

International development institutions could help countries address these people-centric challenges in the public sector as part of new technology adoption; that is their comparative advantage over the tech sector. Investments in research on the bottlenecks in using AI for social protection could yield high development returns.
