On June 26, 2019, Brookings Fellow Nicol Turner Lee of the Center for Technology Innovation testified before the House Financial Services Committee's Task Force on Artificial Intelligence on the topic "Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services." Turner Lee argued that without thoughtful guardrails and protections, biased algorithms can replicate and amplify stereotypes that have historically harmed people of color and other vulnerable populations.
Turner Lee began her testimony with cautionary examples of algorithmic decisionmaking that resulted in longer prison sentences and higher bails for people of color, sexist hiring practices, and microtargeting of higher-interest credit cards to African American users. With insights from her recent Brookings report, Turner Lee underscored that even algorithms intended to eliminate human prejudices can manifest bias implicitly and exacerbate historical inequalities.
Turner Lee then recounted the history of discriminatory credit lending practices and cited current statistics showing that vulnerable populations are still underbanked at disproportionately high rates. Referencing Brookings Fellow Andre Perry's work on the devaluation of assets in black neighborhoods and Brookings Fellow Makada Henry-Nickie's work on the over-reliance on AI-driven financial services, Turner Lee argued that AI can facilitate a new form of digital redlining. Turner Lee also cited Brookings Fellow Aaron Klein's research showing that widespread data gathering enables AI models to draw inferences about applicants, making it increasingly difficult to discern the reasons an application was denied.
Turner Lee concluded her oral statement by presenting four recommendations for congressional action and industry self-regulation:
- Congress must modernize civil rights laws, such as the 1968 Fair Housing Act and the 1974 Equal Credit Opportunity Act, to safeguard protected classes from discrimination. Consumer protection laws also need to extend to online activity to ward off unfair and deceptive practices.
- Companies that design and deploy algorithms must exercise some level of algorithmic hygiene, which involves the creation of a bias impact statement, regular auditing, and greater human involvement in higher-risk decisions like credit and lending.
- Congress should support the use of regulatory sandboxes and safe harbors to curb online biases.
- The tech sector must be more deliberate and systematic in the recruitment, hiring, and retention of diverse talent to avert and address the mishaps generated by online discrimination, especially algorithmic bias.
Turner Lee's full written testimony and a video recording of her oral testimony are available on the event page.