Growing concern over how much bias is embedded in algorithmic systems has led some to call for more civil and human rights guardrails to protect certain groups. However, academics, policymakers, and civil society organizations still lack agreement on the scope of the problems, whether they are related to technical features or historical realities, and methods for identifying and mitigating online biases—from flawed facial recognition systems to discriminatory health care algorithms.
On November 12, the Center for Technology Innovation at Brookings hosted a panel of experts for a conversation that unpacked how biases within machine learning algorithms play out differently for online users and what policies are being introduced to address these concerns.
After the session, panelists answered questions from the audience.