Commentary

How Elon Musk might shift Twitter content moderation

April 29, 2022

Elon Musk’s takeover of Twitter raises the issue of social media content moderation in an especially urgent form. Despite the regulations looming in the United Kingdom and the European Union, to which Musk’s Twitter must conform, no legal requirement will prevent Musk from running Twitter according to whatever editorial policy he chooses to adopt. It’s his candy store.
How is this possible? Can it really be true that the content moderation policies of such a powerful forum for public discourse should depend on the whims of its new billionaire owner? Evan Greer, a political activist with Fight for the Future, speaks for a lot of us when she says, “If we want to protect free speech online, then we can’t live in a world where the richest person on Earth can just purchase a platform that millions of people depend on and then change the rules to his liking.”
But this is the way television, newspapers, and radio function in liberal democracies. The owners of media outlets determine the political line of the news stories and commentary they distribute. When NBC, CNN, ABC, or the New York Post change owners, as they frequently have in the past, the new owners dictate operational rules and editorial policy. Social media is media, and the same ownership prerogatives apply: content moderation is a platform’s editorial policy, and it is set by the platform’s owner. No liberal democracy dictates to media owners what they can do or what their editorial policy should be.
Of course, certain speech is unlawful, and social media companies will increasingly be expected to keep their systems free of illegal material. The pending UK and EU legislation creates new liability regimes for illegal speech, and Musk has promised to comply with these legal requirements.
But most of the hate speech, misinformation, and racist invective on social media is legal both here in the U.S. and in Europe. Musk will have to abide by the new EU and UK laws that address harmful but legal speech; that will mean more risk assessments, transparency reports, audits, access to data for researchers, publication of content moderation standards, and due process requirements.
These new laws will provide the vital public protection of transparency, and it would be desirable to adopt significant elements of them here in the U.S. But they will not dictate Elon Musk’s approach to content moderation on Twitter: they still allow him to let his platform fill up with noxious material if he wants.
So, what is Musk likely to do with Twitter? He presents himself as a philanthropic custodian of a public resource. In an onstage interview at the TED2022 conference, Musk said, “This isn’t a way to make money. My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization. I don’t care about the economics at all.”
He appears to want to allow all legal speech on the platform, and this has prompted concerns that he will weaken content moderation in the name of free speech. But Wall Street Journal opinion columnist Holman W. Jenkins Jr. sums up the current situation: “Twitter has crossed the river of no return in ‘moderating’ the content that appears on its service—it can’t allow untrammeled free expression.”
The fact that someone has to moderate content on Twitter, however, does not mean that Twitter has to do it. Musk could turn the job over to Twitter users or third parties.
Influential neo-reactionary blogger Curtis Yarvin has urged Musk to adopt a user-curation approach to content moderation. The new Twitter under Musk, he says, must censor “all content prohibited by law in all jurisdictions that prohibit it.” For content moderation and algorithmic recommendation of legal speech, Yarvin urges Musk to identify hate speech and other material users might not want to see, and then give users the tools to block it if they choose. The goal should be to arrange content moderation and algorithmic recommendation to give users what they want, to make their experiences “as rich and pleasant as possible.”
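In code terms, the division of labor Yarvin describes might look something like the minimal sketch below, in which the platform attaches labels to posts and each user decides which labels to hide. Everything here, from the field names to the labels, is invented for illustration; none of it reflects Twitter’s actual systems.

```python
# Hypothetical sketch of user-curated moderation: the platform labels
# legal-but-unwanted content; each user chooses what to hide.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set[str] = field(default_factory=set)  # e.g. {"hate_speech"}

@dataclass
class UserPrefs:
    hidden_labels: set[str] = field(default_factory=set)

def visible_feed(feed: list[Post], prefs: UserPrefs) -> list[Post]:
    # Keep every legal post; hide only what this user opted to block.
    return [p for p in feed if not (p.labels & prefs.hidden_labels)]

feed = [
    Post("a", "a friendly reply"),
    Post("b", "an abusive rant", labels={"hate_speech"}),
]
prefs = UserPrefs(hidden_labels={"hate_speech"})
print([p.text for p in visible_feed(feed, prefs)])  # ['a friendly reply']
```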
This idea still leaves Twitter in charge of identifying the harmful material that users might not want to see. But there might be a way to outsource that too.
Musk says he wants to make the Twitter algorithms “open source to increase trust.” The recommendation algorithm, he remarked, “should be on GitHub.” This might mean more than allowing users to examine the algorithm to see how it works: users could modify Twitter’s open-source algorithm in any way they choose.
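If the recommendation algorithm were genuinely open source, the ranking step could become a replaceable component: anyone forking the code could swap in their own scoring function. The toy sketch below illustrates that idea; the field names and scoring rules are invented for the example, not drawn from Twitter’s codebase.

```python
# A hypothetical ranking hook: the platform ships a default scorer,
# and an open-source client lets a user substitute their own.

from typing import Callable

Post = dict  # e.g. {"text": ..., "likes": 3, "timestamp": 1650000000}
Scorer = Callable[[Post], float]

def default_score(post: Post) -> float:
    # Invented stand-in for the platform's default: favor engagement.
    return post.get("likes", 0)

def chronological_score(post: Post) -> float:
    # A user-written replacement: ignore engagement, rank by recency.
    return post.get("timestamp", 0)

def rank(feed: list[Post], score: Scorer = default_score) -> list[Post]:
    return sorted(feed, key=score, reverse=True)

# Swapping the scorer reorders the feed without touching the platform:
# rank(feed, score=chronological_score)
```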
The open-source idea raises an interesting possibility for the future of Twitter. Musk might be thinking of adopting the approach to content moderation recommended by political scientist Francis Fukuyama. This “middleware” approach would install an “editorial layer” between a social media company and its users, outsourcing “content curation” to outside organizations that would receive the platform’s entire feed, filter it according to their own criteria, and then make the curated feed available to their own users.
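Schematically, the middleware proposal amounts to inserting a third-party function between the platform’s full feed and the user’s timeline. Again, every name in this sketch is hypothetical, chosen only to illustrate the architecture.

```python
# Hypothetical middleware pipeline: the platform exposes its full feed
# of legal posts; an outside curator filters it; the user subscribes
# to the curator rather than to the raw firehose.

from typing import Callable

Post = dict  # a post in the platform's full feed of legal content
Curator = Callable[[list[Post]], list[Post]]

def family_friendly_curator(feed: list[Post]) -> list[Post]:
    # One organization's editorial criteria, applied to the whole feed.
    return [p for p in feed if "slur" not in p.get("text", "")]

def timeline(full_feed: list[Post], curator: Curator) -> list[Post]:
    # The platform delivers every legal post; curation happens outside.
    return curator(full_feed)

# my_feed = timeline(full_feed, family_friendly_curator)
```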
Musk’s talk of providing all legal speech would then apply to the basic Twitter feed. Content moderation beyond that would be outsourced to users and third-party providers of content curation services.
There is no way to know at this point whether Musk intends to move toward this user-focused approach to content curation. Content moderation is so fraught that outsourcing it might be worth the experiment, but my own sense is that the idea is far-fetched. It is not at all clear that it is technically feasible, and there is no discernible way to generate the revenue to pay for the moderation costs involved. Each middleware provider of content curation services would have to duplicate an enormous infrastructure of software and human moderators, which seems economically implausible.
Moreover, as Stanford University legal scholar Daphne Keller has noted, privacy issues need to be addressed. Does the middleware provider have access to all the material posted by a user’s friends and followers? If yes, that intrudes on the privacy of those other users, who might want nothing to do with that middleware provider. If no, how can the middleware provider effectively filter the newsfeed?
More importantly, this idea is not a way to foster genuine interchange among citizens about issues of public importance. It is more a recipe for us to retreat to our corners, creating filter bubbles of like-minded people and excluding the rest of society.
Segregating ourselves so we do not have to listen to people who differ from us is not a remedy for the information externalities that make hate speech and misinformation so dangerous even to people who are never exposed to them. People cannot remain indifferent to what others in society believe, because what other people believe affects them. If enough people reject vaccines and other public health measures, we are all at risk from the next pandemic. If enough people become racist or intolerant of the LGBTQ community, significant parts of our community are not safe in their own society. And how are we going to agree on what to teach our children if there is no common public platform where we can exchange ideas?
The great advantage of Musk’s Twitter takeover is that it will focus attention on new ways to improve content moderation. The disappointment, to many, is that other than offering advice and requiring transparency, there is not much the public or policymakers can do to influence Musk’s decisions about what to do with his new candy store. He owns the platform, and as is the case in the business world generally, he is free to make whatever decisions he wishes.