The following is a slightly modified version of remarks delivered by Tom Wheeler at “Formats in Politics 2017,” Berlin, Nov. 10, 2017.
We carry in our pockets and purses the greatest democratizing tool ever developed. Never before has civilization possessed such an instrument of free expression.
Yet, that unparalleled technology has also become a tool to undermine truth and trust. The glue that holds institutions and governments together has been thinned and weakened by the unrestrained capabilities of technology exploited for commercial gain. The result has been to de-democratize the internet.
We have seen this new reality gnaw at our political processes. The agents that formerly curated fact-based debates have been cast off in favor of algorithms whose first loyalty is not veracity.
We exist in a time when technological capabilities and economic incentives have combined to attack truth and weaken trust. It is not an act of pre-planned perdition. Unchecked, however, it will have the same effect.
Thus far, our response has been to address this 21st-century challenge in 20th-century terms and propose 19th-century solutions. We need to do better. We must determine how to harness the new technology to protect against the very problems it has created.
I believe such technology-based solutions are possible, and I will address one idea that has particular promise. But first, we should begin with some perspective.
Author Tom Standage has a gift for relating today’s technology to its historical analogs. In “Writing on the Wall,” he traces social media back to Roman and even earlier times, “in which information passes horizontally from one person to another along social networks, rather than being delivered vertically from an impersonal central source.”1 Such horizontal distribution of information was a casualty of the fall of Rome.
For the next millennium, the Roman Catholic Church and the landed nobility controlled the flow of information. Telling everyone what to think was a key part of keeping the priests and princes in power. The era became the Dark Ages.
This vertical control structure held until Gutenberg. Printing’s relatively inexpensive distribution was the original information revolution. Its effect was, once again, to horizontalize the flow of information. Because the business of printing required something to publish, it helped stimulate thought and debate. Hypotheses and counterarguments flourished. The result fired the Reformation, spread the Renaissance, and set Europe ablaze in war. We celebrate this departure from the Dark Ages, but it must have been a hellish time to live through as people went from being told what to think, to not knowing what to believe.
In 1545, almost 100 years after Gutenberg’s breakthrough, Swiss scholar Conrad Gesner attempted to put his arms around the cacophony of conflicting information by cataloging all the books that had ever been published. The preface to his collection sounds eerily like what we hear today. In a world awash in confusing information enabled by a new technology, Gesner warned about the “confusing and harmful abundance of books.”2
The arrival of the telegraph about 400 years after Gutenberg set off another round of concern about the abundance of information. In 1848, in order to share the costs of telegraphic reporting from afar, the major newspapers of New York City founded the Associated Press (AP). At the same time that it expanded the flow of information, the AP also created a new concern over centralized control. During the Civil War, in fact, the AP became the Lincoln administration’s de facto censor, dispatching only news the government approved.
There was also concern the telegraph and AP would spread fake news. A 1925 Harper’s Magazine article entitled “Fake News and the Public” sounds like it could have been written today. Warning about the power of the AP, the article observed: “Once the news faker obtains access to the press wires all the honest editors alive will not be able to repair the mischief he can do. An editor receiving a news item over the wire has no opportunity to test its authenticity as he would in the case of a local report.”3 Sound familiar?
For the last 40 years, I have been fortunate to participate in the development and expansion of the new electronic networks that have bridged us from the telegraph to the internet. In my timeline, everything changed on June 1, 1980, with the launch of CNN.
It all seems so logical to us now. However, at the time—after decades of 15-minute, then 30-minute nightly news programs on TV—we didn’t even know the words to describe what was happening. As president of the National Cable Television Association, I was the speaker at the CNN launch, and I struggled.
Searching for new words to discuss a new reality, I spoke of “a telepublishing event marking a watershed in information provision.”4 It was a bit over-the-top rhetorically, perhaps, but I was straining to describe how the multi-channel capacity of cable television could bring the diversity of the print newsstand to video. Mercifully, “telepublishing” did not stick.
Immediately after the switch was thrown, I was in the CNN control room. What I saw was the opening salvo of the information world in which we find ourselves today. There, on the monitor, was a live feed from a beach in Florida where nothing was happening. The correspondent was reporting that the anticipated hurricane had not yet hit. At that moment, the nature of news changed. Traditional TV would have waited for the hurricane to make landfall. But with 24 hours of air time to fill, CNN had just redefined news to include what wasn’t happening.
While CNN opened the time aperture that had previously constrained video reporting, the basic paradigm of editorially curating what was news had not changed—someone still decided what news was sent out. In Roman times, curation was collectivized as information moved through social pathways. Post-Gutenberg, the book printer made the decision about what went out from his presses. The editor of the early newspapers made the same decision, often with a political bias. We’ve already seen the concern about the editorial functions at the AP, a concern that subsequently extended to newspapers, broadcast outlets, and CNN.
The abundance of video capacity on cable TV enabled the installation of multiple curators who differentiated their product through editorial decisions. CNN founder Ted Turner had always worried about a competitor from the political right since his audience skewed older and thus more conservative. His fear, of course, came true with Fox News. Telepublishing had indeed arrived as opinion-based news created new titles on the video newsstand.
The proliferation of cable news channels demonstrated how it was profitable to target opinionated programming to specific population segments. Today’s social media news and information follows the same model. This time, however, the targeting is done with software-driven precision and a new kind of curation.
The curatorial function of cable networks may have skewed left or right, but it still subjected its content to validation. In the social media world, there is also curation, but its standards are different. The software algorithms that decide our news feed are programmed to prioritize user attention over truth to optimize for engagement, which means optimizing for outrage, anger and awe.
It didn’t start this way. The first iteration of Sir Tim Berners-Lee’s World Wide Web was a uniform system for locating and retrieving information from diverse, curated databases. Web 2.0 came along early in the new millennium and turned the internet into a publishing tool for user-generated content. Suddenly, everyone had the reach, look, and feel of The New York Times or a major television network, and social media platforms became a key aggregator of this content.
But the economic model driving social media was wildly different from that of traditional content aggregators. For a century and a half, the economic model for media companies was to assemble information in order to attract eyeballs for advertising. To maximize that reach, traditional outlets curated that information for veracity and balance.
In stark contrast, the curation of social media platforms is not for veracity, but for advertising velocity.
Economic success on a social media platform is determined by how long it can hold the user’s attention so that it can deliver ads. To accomplish this, the platforms accumulate information about each user and feed it to software algorithms that target the user with content he or she likes.
The famous statement on the front page of The New York Times, “All the News That’s Fit to Print,” highlights the difference with social media. The Times describes its purpose as separating the fit from the unfit. Mark Zuckerberg, founder and CEO of Facebook, describes his company’s purpose as building communities.
Building communities is laudable—until those communities become anti-community. Democratic institutions require social cohesion in order to function. I give Mark Zuckerberg and the other social media pioneers the benefit of the doubt. I suspect they were as caught off guard as the rest of us to discover how easily their platforms could be exploited for anti-democratic purposes.
But now they know. It is algorithm-directed content that creates the un-community that plays into the hands of those who profit from division. When delivering something that pleases a particular group trumps the delivery of the truth, the stage is set for undemocratic exploitation.
On average, Facebook users spend 50 minutes a day on the site.5 Commanding the consumer’s attention for 50 minutes a day through the delivery of compelling content sounds awfully like a media company to me. Yet, Facebook COO Sheryl Sandberg says, “We’re very different from a media company. … At our heart we’re a tech company. We hire engineers. We don’t hire reporters. No one is a journalist. We don’t cover the news.”6
Those engineers, however, write algorithms that select among Facebook’s posts to determine who sees what information. Like the mainstream media, Facebook aggregates content to sell advertising—only it does it with machines on a community-targeting basis.
To its credit, Facebook has recognized its responsibility as an aggregator. After her “we’re a tech company” statement, Sandberg explained, “But that doesn’t mean we don’t have responsibility for what people put on our site.” Such a recognition of accountability raises the question: Just what are concerned social media companies, public policymakers, and citizens to do?
The current focus on the impact of social media on the electoral process should serve as the stepping-off point for a broader inquiry into the accountability of internet platforms and the oversight of that accountability. The U.S. Congress, as well it should, is investigating Russian advertising purchases on social media and the use of the platforms as propaganda tools. Thus far, the result has been to propose expanding to the internet the political advertising rules that apply to broadcasters and require the identification of an advertisement’s sponsor.
Looking to solutions established for the one-to-many era of 20th-century broadcasting is insufficient for the targeted-to-the-individual world of internet platforms. I was the U.S. regulator responsible for enforcement of the political broadcast rules. They were inadequately transparent for broadcasting. Replicating them will be insufficient for the internet.
The broader question is how to deal with the exploitation of the Web as a vehicle for de-democratizing communities fueled by fact-free untruth. I would argue that it was software algorithms that put us in this situation, and it is software algorithms that can get us out of it. It is time for algorithms to start playing both sides of the street.
An algorithm is like a recipe describing how to combine various inputs to produce the desired output. For this cake, the eggs, flour, and milk are the digital information collected about individuals, the nature of the information being disseminated, and a description of the target audience.
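To make the recipe concrete, here is a minimal sketch, in Python, of what an engagement-first ranking function might look like. Every type, field, and weight below is my own illustrative assumption; the platforms’ actual algorithms are proprietary and far more sophisticated. The structural point is what matters: predicted attention drives the ranking, and nothing in the calculation asks whether the content is true.

```python
# Hypothetical sketch of an engagement-optimized feed ranker.
# All names and weights are illustrative assumptions, not any platform's code.
from dataclasses import dataclass

@dataclass
class UserProfile:
    interests: set           # topics inferred from the user's history
    outrage_affinity: float  # 0.0-1.0: how reliably provocation holds this user

@dataclass
class Post:
    topics: set
    provocation: float       # 0.0-1.0: how inflammatory the content is
    verified: bool           # whether anyone has checked it for accuracy

def engagement_score(user: UserProfile, post: Post) -> float:
    """Predict attention, not truth: the 'recipe' combines data about the
    user and traits of the content into a single ranking number."""
    topical_match = len(user.interests & post.topics) / max(len(post.topics), 1)
    return 0.6 * topical_match + 0.4 * (user.outrage_affinity * post.provocation)

def build_feed(user: UserProfile, posts: list) -> list:
    # Highest predicted engagement first. Note that post.verified exists
    # but never enters the score; veracity is simply not an input.
    return sorted(posts, key=lambda p: engagement_score(user, p), reverse=True)
```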
Today, the inputs to the recipe as well as its outputs are secret. My friend and Harvard colleague Wael Ghonim has called for these inputs and outputs to be opened to the public through the common software practice of an open API. An API—an Application Programming Interface—is what allows two software programs to interact with each other.
An example of this is Google Maps. This algorithm-driven mapping software has open APIs so that other applications and their algorithms can work with it. The open API of Google Maps allows third-party access to its input and output. Uber, for instance, uses Google Maps’ open API to provide directions to the pickup point and destination. The mapping algorithm is not Uber’s; Google’s open API allows Uber to build its own location-based algorithms that harness Google’s location information for Uber’s proprietary activities.
Adoption of an open API by social media platforms would not open up the “black box” secrets of the algorithm itself or expose personally identifiable information about users. But by opening up what goes into and comes out of the algorithm, third-party programmers could create “public interest algorithms” to understand the effects of social media distribution. For instance, knowing who purchased ads or created a post—and combining that with information about reach, engagement, and demographics—would allow a public interest algorithm to assemble a picture of what is being spread and to which communities.
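As a sketch of how such a public interest algorithm might work, the following Python fragment consumes records in a hypothetical format that an open API could expose (sponsor, reach, and per-community impressions for each ad or post) and assembles exactly the picture described above: who is spreading what, and to which communities. The field names and sample data are assumptions for illustration, not any platform’s real interface.

```python
import collections

def map_spread(api_records):
    """Aggregate, per sponsor, total reach and the communities being targeted."""
    spread = collections.defaultdict(
        lambda: {"reach": 0, "communities": collections.Counter()})
    for record in api_records:  # one record per ad or post from the assumed API
        entry = spread[record["sponsor"]]
        entry["reach"] += record["reach"]
        for community, impressions in record["demographics"].items():
            entry["communities"][community] += impressions
    return spread

# Fabricated records in the assumed format, for demonstration only:
sample = [
    {"sponsor": "sponsor-A", "reach": 40_000,
     "demographics": {"rural_midwest": 25_000, "suburban_south": 15_000}},
    {"sponsor": "sponsor-A", "reach": 10_000,
     "demographics": {"rural_midwest": 9_000, "urban_northeast": 1_000}},
]
for sponsor, data in map_spread(sample).items():
    print(sponsor, data["reach"], data["communities"].most_common(2))
# -> sponsor-A 50000 [('rural_midwest', 34000), ('suburban_south', 15000)]
```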
As important as understanding what is happening is gaining that knowledge at computer speed. It takes only seconds for an ad or posting to spread throughout the world. Yet, discovering that the distribution has occurred can take hours or days. Being able to track what’s going in and coming out via a public interest algorithm would permit the kind of curation for veracity that the platforms do not perform themselves.
Truth has always required transparency; involving a new medium does not change that. Transparency about data on content, reach, engagement, demographic, and geographic groupings would remove the mask behind which social media hides. Disclosing top trending stories and identifying viral reposting could flag what another Harvard colleague, Claire Wardle, has defined as the troika of untruths: mis-information, dis-information, and mal-information campaigns as soon as they get off the ground.7 Knowing what content is being deleted, how long it was up, and how widely it was disseminated before being deleted would help assess its impact, as well as encourage the platforms to act in a timely manner to enforce their posting policies.
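Velocity itself could be watched the same way. The sketch below, again built on assumed field names rather than any real platform API, flags a post whose spread rate jumps far above a baseline, so that reviewers can examine it while it is still spreading rather than hours or days later. The baseline and spike threshold are arbitrary illustrative values.

```python
def flag_viral(snapshots, baseline_rate=1_000.0, spike_factor=50.0):
    """snapshots: list of (minutes_since_post, cumulative_impressions) pairs,
    as a hypothetical open API might report them. Returns the moments at
    which the spread rate exceeded spike_factor times the baseline."""
    flagged = []
    for (t0, n0), (t1, n1) in zip(snapshots, snapshots[1:]):
        rate = (n1 - n0) / max(t1 - t0, 1e-9)  # impressions per minute
        if rate > baseline_rate * spike_factor:
            flagged.append((t1, rate))
    return flagged

# A post that jumps from 2,000 to 600,000 impressions in five minutes:
print(flag_viral([(0, 0), (10, 2_000), (15, 600_000)]))  # -> [(15, 119600.0)]
```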
Let me be clear on two key points. The open API would not reveal either the secrets of the platforms’ algorithms or the personally identifiable information of users.
We saw in our quick historical review how new information distribution technology has always created challenges for society. It is now our turn to step up to the challenges that technology has created for us.
Our solutions must embrace the new technology and eschew the solutions of the past. The digital-age reality of a distributed network of powerful and inexpensive computers must be put to work to attack the challenges those forces have created.
The internet platforms have delivered wonderful new capabilities to consumers. They have quickly grown to be the most valuable and powerful companies in the world. We should celebrate their creativity and their contributions. Yet their responsibility for the maintenance of democratic norms remains unresolved.
For the last 40 years, I have lived my professional life at the intersection of new technology and the public interest. I know how hard it is for policymakers to keep abreast of new technology and for new companies to accept that anyone other than they themselves might have a say over their activities. These are difficulties that are compounded by how dramatically different 21st-century technology is from anything we’ve seen previously.
In the 19th and 20th centuries, big government arose as an offset to the practices of big business. In the 21st century, businesses utilizing smart networked software require similarly smart networked oversight. The technology that will define our future—software algorithms operating across ubiquitous network connectivity—must be harnessed to oversee the responsible operation of that technology.
With a public interest API, the protection of democratic norms can be achieved without intrusive government micro-management or bureaucracy. Open networks enabled the platform companies to flourish. Having now become pseudo-networks themselves, it is time for the platforms to open up.
Today, public interest groups of all political stripes monitor the mainstream media. With a public interest API, these same groups could also build public interest algorithms to accomplish the same result for social media. Social media platforms would be wise to provide such transparency voluntarily. If they don’t uniformly do so, then it will fall to government to require this openness.
So, will technology kill truth? The answer is: “Not if technology is allowed to respond in kind.” It was technology that enabled a business model to prioritize advertising velocity over factual veracity. The same technology, however, can also be programmed to fight for truth and trust. All that is necessary is for the social media platforms to move from secrecy to transparency.
Footnotes
1. Tom Standage, Writing on the Wall, Bloomsbury, 2013, p. 3
2. Ann Blair, “Reading Strategies for Coping with Information Overload, ca. 1550–1700,” Journal of the History of Ideas 64, no. 1, p. 11
3. Cited in “Combatting Fake News: An Agenda for Research and Action,” Harvard Kennedy School Shorenstein Center on Media, Politics and Public Policy, https://shorensteincenter.org/information-disorder-framework-for-research-and-policymaking/
4. Porter Bibb, It Ain’t As Easy As It Looks, Crown Publishers, 1993, p. 180
5. James B. Stewart, “Facebook Has 50 Minutes of Your Time Each Day. It Wants More,” The New York Times, May 5, 2016, https://www.nytimes.com/2016/05/06/business/facebook-bends-the-rules-of-audience-engagement-to-its-advantage.html?_r=0
6. Shona Ghosh, “Sheryl Sandberg Just Dodged a Question About Whether Facebook Is a Media Company,” Business Insider, October 12, 2017, http://www.businessinsider.com/sheryl-sandberg-dodged-question-on-whether-facebook-is-a-media-company-2017-10
7. Claire Wardle and Hossein Derakhshan, “Information Disorder,” Harvard Kennedy School Shorenstein Center, https://shorensteincenter.org/information-disorder-framework-for-research-and-policymaking/