Senator Klobuchar “nudges” social media companies to improve content moderation

Senator Amy Klobuchar’s new bipartisan social media bill, which she introduced on February 9 in conjunction with Republican Senator Cynthia Lummis, is actually two bills in one. It is a thoughtful and promising attempt to craft content-neutral ways of reducing “social media addiction and the spread of harmful content.” It is also by far the most ambitious attempt in the United States to require detailed transparency reports from the larger social media companies. As such, it deserves the careful consideration of lawmakers on both sides of the aisle, including a review of important First Amendment issues, followed by prompt Congressional action.

Nudges and interventions

S. 3608, the ‘‘Nudging Users to Drive Good Experiences on Social Media Act’’ or the ‘‘Social Media NUDGE Act,” requires the National Science Foundation and the National Academies of Sciences, Engineering, and Medicine to conduct an initial study, and biennial ongoing studies, to identify “content-agnostic interventions” that the larger social media companies could implement “to reduce the harms of algorithmic amplification and social media addiction.” After receiving their report on the initial study, due a year after the enactment of the law, the Federal Trade Commission would be required to begin a rulemaking proceeding to determine which of the recommended social media interventions should be made mandatory.

What interventions are the bill’s authors thinking of? The bill lists examples of possible content-neutral interventions that “do not rely on the substance” of the material posted, including “screen time alerts and grayscale phone settings,” requirements for users to “read or review” social media content before sharing it, and prompts (that are not further defined in the bill) to “help users identify manipulative and microtargeted advertisements.” The bill also refers approvingly to “reasonable limits on account creation and content sharing” that seem to concern circuit breaker techniques to limit content amplification.
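
To make the idea of a content-agnostic intervention concrete, here is a minimal sketch in Python of what a "read before you share" prompt might look like if it keyed only on user behavior rather than on what a post says. The names and signals (ShareRequest, user_opened_link) are hypothetical illustrations, not anything specified in the bill or used by any platform.

```python
# Minimal sketch of a content-agnostic "read before you share" nudge.
# All names here are hypothetical illustrations, not the bill's language
# or any platform's actual API.
from dataclasses import dataclass


@dataclass
class ShareRequest:
    user_id: str
    item_id: str
    item_has_link: bool      # the post contains an external article link
    user_opened_link: bool   # behavioral signal: did this user open it?


def needs_read_prompt(req: ShareRequest) -> bool:
    """Decide whether to show a friction prompt before resharing.

    The check looks only at user behavior (opened vs. not opened),
    never at what the linked article says -- i.e., it is content-agnostic.
    """
    return req.item_has_link and not req.user_opened_link


# Example: a user tries to reshare an article they never opened.
request = ShareRequest("u123", "post789", item_has_link=True, user_opened_link=False)
if needs_read_prompt(request):
    print("Prompt: 'You haven't opened this article. Read it before sharing?'")
```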

In addition, the bill goes into great detail in mandating that social media companies publish public transparency reports every six months, with a distinct focus on correcting some of the weaknesses critics have noted in current transparency reports. For instance, it requires the larger social media companies to calculate “the total number of views for each piece of publicly visible content posted during the month and sample randomly from the content.” It would also require information about content posted and viewed that was reported by users, flagged by an automated system, removed, restored, labeled, edited, or otherwise moderated. This focus on the details of reports is a welcome addition to other approaches that remain at a higher level of generality.
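
To illustrate the kind of aggregation such a report implies, the sketch below tallies per-post view totals for a month and draws a uniform random sample of posts. The data format and sample size are assumptions made for illustration; the bill does not prescribe any particular implementation.

```python
# Sketch of the aggregation a transparency report of this kind might require:
# total views per publicly visible post in a month, plus a random sample of
# that month's content. The data structures are illustrative assumptions.
import random
from collections import Counter

# Hypothetical view log: each event is (post_id, viewer_id) for posts
# made publicly visible during the reporting month.
view_events = [("p1", "u1"), ("p1", "u2"), ("p2", "u1"), ("p3", "u4"), ("p1", "u3")]

# Total number of views for each piece of publicly visible content.
views_per_post = Counter(post_id for post_id, _ in view_events)

# Sample randomly from the month's content (sample size is illustrative).
all_posts = list(views_per_post)
sampled_posts = random.sample(all_posts, k=min(2, len(all_posts)))

print(views_per_post)   # e.g. Counter({'p1': 3, 'p2': 1, 'p3': 1})
print(sampled_posts)    # e.g. ['p3', 'p1']
```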

Critics blame algorithms for many of the ills on social media, and policymakers around the world are seeking to hold social media companies responsible for the online harms they algorithmically amplify. But no one at this point really knows how social media algorithms affect mental health or political beliefs and actions. More importantly, no one really knows what changes to algorithms would improve things.

Skeptics of an algorithmic fix to the ills of social media focus on the difficulty of disentangling cause and effect in the social media world. “Is social media creating new types of people?” asked BuzzFeed’s senior technology reporter Joseph Bernstein in a 2021 Harper’s article, “or simply revealing long-obscured types of people to a segment of the public unaccustomed to seeing them?”

Other skeptics point to a genuine weakness in an algorithmic fix to the problems of disinformation, misinformation, and hate speech online. “It’s a technocratic solution to a problem that’s as much about politics as technology,” says New York Times columnist Ben Smith. He adds, “the new social media-fueled right-wing populists lie a lot, and stretch the truth more. But as American reporters quizzing Donald Trump’s fans on camera discovered, his audience was often in on the joke.”

Even though cause and effect are hard to discern, it is undeniable that algorithms contribute to hate speech and other information disorder on social media. The problem is not that algorithms have no effect and we are imagining a problem that doesn’t exist. Nor is the problem that nothing works to counteract the effect of misinformation and hate speech online, or that we know nothing about effective interventions. The problem is that we do not know enough to mandate algorithmic solutions or require specific technical or operational interventions, especially those that overly surveil certain populations.

Until a lot more is known about the extent and causes of the online problems and the effectiveness of remedies, legislators should not be seeking to mandate specific techniques in legislation. The matter is one for experimentation and evidence, not one for intuitions about what is most likely to work.

The NUDGE bill takes this evidence-based approach. It requires the government’s science agencies, which rely on the academic community for expertise, to take the lead in generating recommendations for algorithmic interventions. To keep the FTC from improvising on its own, the bill explicitly bars the agency from mandating any intervention that has not been addressed in the reports from the national academies.

Some needed improvements

Several improvements to the bill seem important to me. The first is to give the researchers working with the national science agencies full access to all the information they need to conduct their studies. The bill improves on existing public transparency reports, but it does not provide the needed access to internal social media data for vetted researchers. What the bill’s mandated transparency reports make available to the public might not be enough for researchers to determine which interventions are effective. They should be given broad, mandated access to internal social media data, including internal studies and confidential data about the operation of content moderation and recommendation algorithms. Only with this information will they be able to determine empirically which interventions are likely to be effective.

The bill is prudent to require the science agencies to conduct ongoing studies of interventions. A second improvement would be to require the FTC to update its mandated interventions in light of these ongoing studies. The first set of mandated interventions will almost certainly be only moderately effective at best. Much will be learned from follow-on assessments after the first round of interventions has been put into practice. The FTC should have an obligation to update its rules in light of the new evidence it receives from the science agencies.

The cloud on the horizon

As promising as it is, there is a cloud on the horizon that threatens the entire enterprise. The bill’s objective of reducing harmful content is in tension with its mechanism of content-neutral interventions. How can the science agencies and the regulatory agency determine which interventions are effective in reducing harmful content without making content judgments? As Daphne Keller has noted, it is actually not all that hard to slow down the operation of social media systems through the insertion of circuit-breakers such as limits on the “number of times an item is displayed to users, or an hourly rate of increase in viewership.” Such rules would restrict all speech exceeding these limits: both important breaking news such as the videos documenting the death of George Floyd, as well as the latest piece of viral COVID misinformation.
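
To see why such circuit breakers are neutral in mechanism but not in effect, the sketch below caps total displays and hour-over-hour growth in viewership without ever inspecting the item itself. The thresholds and function names are assumptions for illustration, not anything Keller or the bill specifies.

```python
# Sketch of a content-neutral amplification circuit breaker along the lines
# Keller describes: cap total displays and the hour-over-hour growth in
# viewership, without looking at what the item says. Thresholds and names
# are illustrative assumptions.
MAX_DISPLAYS = 100_000        # cap on times an item is shown to users
MAX_HOURLY_GROWTH = 3.0       # cap on hour-over-hour viewership increase


def amplification_allowed(total_displays: int,
                          views_last_hour: int,
                          views_this_hour: int) -> bool:
    """Return False when the item trips either content-agnostic limit."""
    if total_displays >= MAX_DISPLAYS:
        return False
    if views_last_hour > 0 and views_this_hour / views_last_hour > MAX_HOURLY_GROWTH:
        return False
    return True


# A fast-spreading item trips the growth limit whether it is breaking news
# or misinformation -- the rule cannot tell the difference.
print(amplification_allowed(total_displays=50_000,
                            views_last_hour=2_000,
                            views_this_hour=9_000))   # False: 4.5x hourly growth
```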

But the more fundamental concern is that policymakers do not want rules that are neutral in their effect. They want interventions that allow the rapid distribution of real breaking news and new insightful commentaries on issues of public importance while impeding hate speech, terrorist material, and content that is harmful to children’s health. They want, in other words, technical proxies for harmful speech, not interventions that slow everything down.

Keller rightly worries about whether neutral circuit breaker rules “would have neutral impact on user speech,” because she believes the First Amendment might frown on rules that have a disproportionate effect on certain content, even if the rules do not assess the content itself. For this reason, it is important for the policy community to engage in a thoughtful assessment of the First Amendment implications of the NUDGE bill. My own instinct is that, just as the courts have permitted race- and gender-neutral proxies to achieve disproportionate gains for minorities and women in affirmative action cases, they will allow a similar reliance on content-neutral proxies to filter out harmful online content. But proponents of the bill need to consider how to position it for an inevitable First Amendment challenge even as they begin moving it through the legislative process.