Artificial intelligence (AI) systems are transforming everyday tasks and functions, from commerce to education and employment. As these systems advance, they are also amplifying biases that exist within society and, in some cases, reducing access to economic opportunities for certain users. The U.S. and other countries are seeking clarity on how to develop responsible AI that reduces consumer harms. Germany, for example, has attempted to improve the classification of AI by incorporating high-risk categories based on the degree of environmental, economic, and societal impact, as well as the probability of consumer harm. As efforts to standardize AI principles and processes proceed, how do we ensure more responsible outcomes and better model performance? Should policymakers and other stakeholders look to AI certifications, rating systems, labeling, or other tools to surface potential blind spots in a technology's performance? What roles should computer scientists, policymakers, and civil society leaders play in building more trustworthy AI systems? And how can civil society be more engaged in providing feedback on an AI system's design and outputs?
On October 1, the Center for Technology Innovation at Brookings hosted a panel of experts to discuss potential markers and best practices for increasing consumer trust in AI performance. The conversation also explored Brookings Senior Fellow Nicol Turner Lee's preliminary development of an Energy Star-style consumer rating system that could lead to more transparent, inclusive, and responsible AI systems.