What exactly is ‘responsible AI’ in principle and in practice?
The notions of ethical and accountable artificial intelligence (AI)—also referred to as “responsible AI”—have been adopted by many stakeholders in government, industry, civil society, and academia. Making AI systems transparent, fair, secure, and inclusive is a core element of widely asserted responsible AI frameworks, but how each group interprets and operationalizes these principles can vary. Further, there is some debate over whether responsible AI frameworks can address the explicit and implicit biases embedded within systems to ensure equity in predictive decisions, especially when applied to employment, health care, financial services, and criminal justice.
On May 10, the Center for Technology Innovation at Brookings hosted a webinar to unpack what is meant by “responsible AI” and how different sectors are building corollary frameworks to increase the technology’s accountability. Panelists also discussed the roles of self-regulation, public policies, and consumer feedback.
Viewers submitted questions for speakers by emailing firstname.lastname@example.org or via Twitter at @BrookingsGov by using #AIBias.
Nicol Turner Lee
Senior Fellow - Governance Studies
Director - Center for Technology Innovation

Chief Responsible AI Officer - Microsoft

The Hon. Will Hurd
Former Representative (R-Texas) - United States Congress

Assistant Professor - NYU Tandon School of Engineering