Can AI developers be incentivized to debias their algorithms? | The TechTank Podcast

The prevalence and technical relevance of machine learning algorithms have increased over the years, making predictive decisionmaking tools part of the everyday lives of online users. Today, it is increasingly difficult to discern which decisions are made by humans and which rely on the cognition of machines. Most users are unaware of how widespread and normalized automated decisionmaking has become, leaving them oblivious to where machines start and humans take over, or vice versa. Equally concerning are the online decisions that determine one’s eligibility for credit, housing, employment, health care, and educational opportunities.

On this new episode of the TechTank podcast, Darrell West is joined by Nicol Turner Lee, senior fellow and director of the Center for Technology Innovation at Brookings, who authored a new chapter in the forthcoming book “AI Governance Handbook” (Oxford University Press, 2022). The compiled edition of the handbook offers various perspectives on the current state and future of the governance of AI and related technologies.

Responding to current debates around the trustworthiness and fairness of AI systems, Dr. Turner Lee’s chapter, “Mitigating Algorithmic Biases through Incentive-Based Rating Systems,” explores how to improve informed consumer choice in the use of machine learning algorithms. Given that AI systems can mimic and often amplify existing systems of inequality, consumers need greater agency over their trust in and engagement with these models. The chapter makes the case for stronger governance and accountability, proposing an Energy Star-style, incentive-based rating system that is more risk-averse and relies on increased consumer feedback to improve the performance and optimization of these online tools. Dr. Turner Lee also shares a checklist of questions that developers, and the companies that license and distribute these models, should use to ensure more responsible and inclusive tech.

You can listen to the episode and subscribe to the TechTank podcast on Apple, Spotify, or Acast.
