The notions of ethical and accountable artificial intelligence (AI)—also referred to as “responsible AI”—have been adopted by many stakeholders across government, industry, civil society, and academic institutions. Transparency, fairness, security, and inclusiveness are core elements of widely asserted responsible AI frameworks, but how each group interprets and operationalizes them can vary. Further, there is debate over whether responsible AI frameworks can address the explicit and implicit biases embedded within systems and ensure equity in predictive decisions, especially in employment, health care, financial services, and criminal justice.
On May 10, the Center for Technology Innovation at Brookings hosted a webinar to unpack what is meant by “responsible AI” and how different sectors are building corollary frameworks to increase the technology’s accountability. Panelists also discussed the respective roles of self-regulation, public policy, and consumer feedback.