Introduction
Not long ago, information technology was heralded as a tool of democratic progress. Some referred to the Arab Spring uprisings that swept the Middle East as the “Facebook Revolution” because activists used social media to organize and rally fellow citizens.1 Online platform technologies, it was believed, helped promote equality, freedom, and democracy by empowering citizens to publish their ideas and broadcast their everyday realities unconstrained by gatekeepers, communicate freely with one another, and advocate for political reform.
In recent years, however, doubts have surfaced about the effects of information technology on democracy. A growing tech-skeptic chorus is drawing attention to the ways in which information technology disrupts democracy. No country is immune. From New Zealand to Myanmar to the United States, terrorists, authoritarian governments, and foreign adversaries have weaponized the internet.2 Russia’s online influence campaign during the 2016 United States presidential election demonstrated how easily and effectively bad actors could leverage platform technologies to pursue their own interests. Revelations about Cambridge Analytica, the political consulting firm hired by Donald Trump’s presidential campaign that acquired personal data from 87 million Facebook users, exposed Facebook’s failure to monitor the information third parties collect through its platform and prevent its misuse.3
The concern extends beyond isolated incidents to the heart of the business model undergirding many of today’s large technology companies. The advertising revenue that fuels the attention economy leads companies to create new ways to keep users scrolling, viewing, clicking, posting, and commenting for as long as possible. Algorithms designed to accomplish this often end up displaying content curated to entertain, shock, and anger each individual user.4 The ways in which online platforms are currently engineered have thus come under fire for exacerbating polarization, radicalizing users, and rewarding engagement with disinformation and extremist content. Many large technology companies have underinvested in protecting their own platforms from abuse, even as the services they designed amplify existing political tensions and spawn new political vulnerabilities.
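The mechanics of this critique are easy to sketch. The toy ranker below is a hypothetical illustration, not any platform’s actual code: the field names and weights are assumptions chosen only to show how a feed optimized purely for predicted engagement will systematically elevate whatever content provokes the strongest reactions.

```python
# Illustrative only: a toy feed ranker that orders posts by predicted
# engagement. The fields, weights, and logic are assumptions for this sketch,
# not any platform's actual system.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    predicted_clicks: float         # model's estimate that the user will click
    predicted_comments: float       # estimate that the user will comment or share
    predicted_watch_seconds: float  # estimate of time spent viewing

def engagement_score(post: Post) -> float:
    # Each term rewards a behavior that keeps the user on the platform longer.
    return (2.0 * post.predicted_clicks
            + 3.0 * post.predicted_comments
            + 0.1 * post.predicted_watch_seconds)

def rank_feed(posts: List[Post]) -> List[Post]:
    # Nothing in this objective checks accuracy or civility: whatever content
    # provokes the most interaction, including outrage, rises to the top.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because an objective like this contains no notion of truth or harm, a feed that maximizes it treats outrage and disinformation as assets rather than liabilities.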
Countries around the world have responded to this growing threat by launching investigations, passing new laws, and commissioning reports. The U.S., meanwhile, has lagged behind other governments even in the face of well-documented abuses during the 2016 election. It has been slower to rein in “big tech” in part because of a fear of state overreach, a constitutional and cultural commitment to free speech, and a reluctance to constrain the capacity of dynamic companies to innovate.
The steps taken by governments around the world, on the other hand, can be explained by some broad principles shared across borders. A growing international consensus holds that the ways in which today’s dominant online platforms are currently designed pose an inherent threat to democracy. Across a number of countries, lawmakers share the view that the structural design of the attention economy has given rise to disinformation and its rapid spread online.5 Today’s powerful technologies, they argue, have coarsened public discourse by satiating the appetite for political tribalism, serving up information—true or false—that accords with each user’s ideological preferences.6 They believe the ways in which dominant platforms filter and spread information online present a serious political threat not only to newer, more fragile democracies but also to long-standing Western liberal democracies.7
While lawmakers in the U.S. are beginning to critique the ways in which online platforms have failed to police their own technologies, there remains a reluctance to respond to the digital economy’s negative side effects by establishing terms to regulate the flow of information and classifying certain content as unacceptable. This, many believe, would violate First Amendment free speech rights.8 Meanwhile, other countries have identified a clearer regulatory role to mitigate the threat online platforms pose to democratic societies.
A similar divide between the actions taken in Europe and the U.S. on online privacy issues has taken shape. Europe has responded forcefully to protect users’ online privacy, bolstering its already robust set of privacy laws when it passed the General Data Protection Regulation in the spring of 2016. The law is widely recognized as the toughest and most comprehensive digital privacy law on the books and is grounded in a cultural attachment to protecting the right of individuals to control access to their personal information.9
Across the Atlantic, the U.S. embraces a different concept and culture of privacy. The American privacy regime largely focuses on protecting individuals from state intrusion and companies from red tape.10 At a time when individual companies hold an unprecedented amount of personal information on their users, the U.S. currently lacks a comprehensive federal privacy law governing the collection and use of personal data by technology companies.
A shared view of the market dynamics that lead to concentration in the digital economy has also begun to develop abroad. Competition enforcement agencies across a range of countries view data as an important source of market power that has given rise to a few dominant “data-opolies” that have amassed troves of users’ personal information. Lawmakers concerned about declining competition in the technology sector have argued that the digital economy does not require a whole new set of principles to guide competition enforcement but that enforcement should home in on the ways in which large technology companies are using data to weaken competition and leverage their dominant position to strengthen their hold on the market.
The consumer welfare framework, with its focus on achieving low prices, has long guided American antitrust enforcement and stands in stark contrast to the nascent framework being developed abroad. U.S. antitrust authorities, for their part, are now beginning to consider modernizing enforcement to adapt to the market realities of the digital age. For many years, the predominant view held that intervening in the tech sector would make it less dynamic. But evidence is mounting that concentration in the tech sector can slow or even stifle innovation, fostering an openness to promoting greater competition in the digital economy through updated antitrust doctrines and metrics.
With a better understanding of the principles undergirding both foreign and domestic responses to the threats posed by big tech, each subsequent section in this paper will lay out the specific dimensions of the political and economic problems that have arisen in the digital age, the policy responses and proposals pursued abroad, and the ideas guiding debate in the U.S. The goal of this paper is to serve as a resource so that as U.S. lawmakers consider how to improve transparency in online advertising, protect user privacy, mitigate the threat posed by harmful content, empower content creators dependent on online platforms, and ensure competition in the digital economy, they can draw on the experience of other democratic governments around the world.
Political advertising
Just a few years after its launch, Facebook announced it would begin running advertisements. The move empowered companies to target consumers with remarkable precision based on the massive amount of personal information the social media site holds on its users. Nearly twelve years later, spending on digital advertising outpaces traditional advertising, including television, radio, and newspapers.11
Digital advertising, however, has not merely enabled companies selling everything from designer clothing to groceries to reach potential customers. Online platforms have also provided a new mechanism for political campaigns, political action committees, and private citizens with their own agendas to target voters. Unlike political advertisements broadcast on television or radio, which are subject to stringent disclosure requirements, online political advertisements face few constraints. This omission has allowed bad actors to leverage the power of online platforms to tailor messages to each voter’s ideological preferences and biases. Little oversight of the ads run on online platforms has compounded the problem. Facebook’s algorithm, for example, once allowed advertisers to target users interested in “How to burn jews.”12
Between June 2015 and May 2017, the pro-Kremlin Internet Research Agency was able to purchase roughly 3,000 Facebook advertisements intended to sow division and discord in the U.S. during a highly contentious presidential campaign and political transition.13 In testimony to the Senate Committee on the Judiciary’s Subcommittee on Crime and Terrorism, Colin Stretch, Facebook’s General Counsel at the time, estimated that content linked to the IRA’s fake accounts reached approximately 126 million Facebook users, none of whom knew its source.14
Since Moscow’s successful online influence campaign in 2016, some social media sites have introduced new requirements for those seeking to purchase online political advertisements. This attempt at self-regulation, however, has produced unsatisfactory results. During the 2018 midterm election, those who paid for online political advertisements on Facebook were able to remain anonymous despite the social media company’s requirement that purchasers verify and disclose their identity.15
Lawmakers around the world are beginning to push for public disclosure rules that would subject online platforms to new requirements to maintain a record of all online political advertisements and inform users of who paid for the political advertisements they are shown. Lawmakers in the United Kingdom have proposed creating a publicly searchable database of online advertisements that would detail who paid for the ad, the issue covered, the period online, and the demographic groups targeted. In December 2018, Canada modified its federal election laws to include new online advertisement transparency rules requiring online platforms to create a publicly accessible registry of any political advertisements they publish, detailing who paid for each ad. While Facebook plans to run political advertisements ahead of Canada’s upcoming election, Google has announced it will not sell political advertisements in Canada after the law’s implementation.16
While some U.S. states have passed regulations to govern online ad transparency, federal proposals to do the same have stalled. Major online platforms are not currently required by law to publicly disclose who purchased online advertisements, how much an individual or organization paid, the targeted audience, and the number of views received.
In October 2017, Sens. Amy Klobuchar, Mark Warner, and John McCain introduced the Honest Ads Act, which proposes new disclosure requirements for online political advertisements. The bill, if passed, would require online platforms with more than 50 million unique monthly visitors to keep a publicly accessible record of advertisements purchased by any individual or group spending more than $500. The record would include a digital copy of the advertisement, who paid for it, a description of the targeted audience, the number of views received, the dates displayed online, and the rate charged. The Honest Ads Act was incorporated into HR 1, House Democrats’ sweeping anti-corruption and voting rights bill, which passed the House in March, but it is unlikely the bill will be brought to the Senate floor for a vote.17 Since HR 1’s passage, Sens. Klobuchar, Warner, and Lindsey Graham have reintroduced the Honest Ads Act. Senate Majority Leader Mitch McConnell, however, worries that advertising disclosure requirements might raise First Amendment concerns.18 While there is bipartisan appetite to push for online advertising transparency, lawmakers might fail to improve upon the status quo before the 2020 presidential election.
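To make the bill’s disclosure requirements concrete, the sketch below shows the kind of record such a public archive might hold. The field names and structure are hypothetical, derived only from the requirements summarized above, not from the bill’s text or any existing platform archive.

```python
# Hypothetical sketch of a public ad-archive record of the kind the Honest Ads
# Act would require; field names and types are assumptions for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class PoliticalAdRecord:
    ad_copy_url: str           # link to a digital copy of the advertisement
    purchaser: str             # individual or group that paid for the ad
    audience_description: str  # description of the audience targeted
    views: int                 # number of views received
    first_shown: date          # dates the ad was displayed online
    last_shown: date
    rate_charged_usd: float    # rate charged for the ad

def record_required(monthly_unique_visitors: int, purchaser_spend_usd: float) -> bool:
    # Thresholds described above: platforms with more than 50 million unique
    # monthly visitors, purchasers spending more than $500.
    return monthly_unique_visitors > 50_000_000 and purchaser_spend_usd > 500
```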
User privacy
Online platforms that rely on targeted advertising to generate revenue are in the business of amassing as much personal information on their users as possible. For years, tech companies have been able to collect, use, and share users’ data largely unconstrained. A New York Times investigation found that Facebook gave a number of large technology companies access to users’ personal data, including users’ private messages.19 In another investigation, the Wall Street Journal found that smartphone apps holding highly sensitive personal data, including information on users’ menstrual cycles, regularly share data with Facebook. While Facebook users can prohibit the social media site from using their data to serve them targeted advertisements, they are unable to prevent Facebook from collecting their personal data in the first place.20
Meanwhile, high-profile data breaches have highlighted the inability of some of the largest tech companies to protect users’ information from misuse. Cambridge Analytica, a political-data firm linked to Donald Trump’s presidential campaign, targeted voters in the run-up to the 2016 presidential election by collecting private information from as many as 87 million Facebook users, most of whom had not agreed to let Facebook release their information to third parties.21 The firm used this data to target personalized messages to voters and “individually whisper something in each of their ears,” as whistleblower Christopher Wylie described.22 Just months after the Cambridge Analytica story broke, hackers penetrated Facebook’s computer network and exposed nearly 50 million users’ personal information.23
While users enjoy free access to many tech platforms, they are handing over their personal information with little understanding of the amount, nature, or application of the data tech companies hold on them and little ability to stop its collection. The Cambridge Analytica scandal revealed that entire political systems and processes, not just individual users, are vulnerable when large tech companies fail to properly handle users’ data and leave the door open to those interested in exploiting social and political rifts.
The European Union has made online user privacy a top priority, establishing itself as a global leader on the issue after it passed its General Data Protection Regulation. The law sets out new requirements for obtaining user consent to process data, mandates data portability, requires organizations to notify users of data breaches in a timely fashion, and allows steep fines to be imposed on organizations that violate the regulation. Less than a year after GDPR took effect, French officials levied a hefty $57 million fine against Google for failing to inform users about its data-collection practices and obtain consent for targeted advertising.24 After confronting pressure from the European Commission, Facebook agreed to make clear to users that it offers its services for free by utilizing personal data to run targeted advertisements.25 In Ireland, Facebook is facing several investigations into its compliance with European data protection laws.26 These moves signal Europe’s commitment to tough enforcement under its new privacy regime.
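Translated into engineering terms, obligations like these shape how a service must handle personal data. The snippet below is a simplified, hypothetical illustration of GDPR-style duties, not legal guidance; the class and function names are invented for the example, and the 72-hour figure reflects the regulation’s window for notifying the supervisory authority of a breach.

```python
# Simplified, hypothetical illustration of GDPR-style obligations; names and
# structure are invented for this sketch and are not legal guidance.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, Set
import json

@dataclass
class UserRecord:
    user_id: str
    consented_purposes: Set[str] = field(default_factory=set)  # e.g. {"targeted_ads"}
    personal_data: Dict[str, str] = field(default_factory=dict)

def may_process(user: UserRecord, purpose: str) -> bool:
    # Consent: personal data may only be processed for purposes the user
    # has agreed to (absent another legal basis).
    return purpose in user.consented_purposes

def export_portable_data(user: UserRecord) -> str:
    # Data portability: hand the user their data in a machine-readable format.
    return json.dumps(user.personal_data)

def authority_notification_deadline(breach_discovered: datetime) -> datetime:
    # Breach notification: GDPR expects the supervisory authority to be told
    # without undue delay and, where feasible, within 72 hours.
    return breach_discovered + timedelta(hours=72)
```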
Lawmakers in Australia and Canada are considering adopting a privacy framework similar to the EU’s GDPR. The Australian Competition & Consumer Commission recently called for amending the country’s Privacy Act to strengthen the notification requirements for those collecting consumers’ personal information, require the explicit consent of consumers for platforms to collect their data, allow consumers to withdraw consent and erase personal information, and increase the penalties for those that violate consumer privacy.27 Meanwhile, Canada’s House of Commons’ Standing Committee on Access to Information, Privacy and Ethics has called on the Canadian government to implement a new data privacy law which would give the Privacy Commissioner the authority to impose fines for noncompliance.28 Canada’s privacy commissioner has already announced its plans to take Facebook to court for violating Canadian Facebook users’ privacy during the Cambridge Analytica data breach.29 The move follows the decision by the United Kingdom Information Commissioner’s Office to issue a $645,000 fine against Facebook for failing to prevent Cambridge Analytica from accessing users’ information without their consent.30
In the U.S., the Federal Trade Commission is currently investigating Facebook’s failure to prevent the Cambridge Analytica data breach. Facebook expects a $5 billion fine for violating its consent decree with the FTC, in which Facebook agreed not to share users’ personal data without their consent.31 If the FTC issues a $5 billion fine against Facebook, it would be the largest penalty a U.S. regulator has ever levied against a technology company.32 The investigation may also result in structural remedies, including a requirement that Facebook create new positions within the company devoted to strengthening user privacy and company compliance, overseen by an independent committee established by the FTC. Commissioners, however, disagree over whether Facebook CEO Mark Zuckerberg should be held personally liable for any future violations.33 Still, some argue that individual enforcement cases cannot be a substitute for comprehensive regulation.34
As such, federal lawmakers have begun drafting online privacy legislation. In the past two years, four Senate bills regarding user privacy have been introduced, and senators on the Committee on Commerce, Science and Transportation are currently drafting their own proposal which would preempt California’s GDPR-style consumer privacy law. Both Democratic and Republican lawmakers have expressed concern over platforms’ treatment of users’ data and agree that a law addressing data privacy is needed. At a time of partisan gridlock, there may be a unique opportunity for a bipartisan push to pass a national privacy law.
Harmful content
While innovative tech companies have enabled new ways for individuals to develop meaningful connections with one another, online platforms have also created the largest forum for bad actors to post and disseminate violent and extremist content, fake news, and disinformation. In March, Facebook’s automatic detection system failed to stop a gunman from livestreaming his massacre at two mosques in Christchurch.35 The livestream ran for nearly 30 minutes before the social media company removed it.36 In 2017, neo-Nazis used Facebook to organize the Unite the Right rally in Charlottesville, Virginia, which drew several hundred white nationalists to the college town, where a counter-protester was brutally murdered.37
For a long time, online platforms have argued that they are not publishers and therefore not responsible for any harmful content their users post and circulate. Rather than policing their own platforms, tech companies have largely relied on users to flag inappropriate content. Recently, however, online platforms have taken steps to reduce the spread of harmful content, acknowledging they have a responsibility to more closely monitor the content they host. Facebook, for example, has announced new features and product updates it plans to roll out to monitor harmful content, including restrictions on its live video service.38 In May, the company banned a number of high-profile anti-Semites and provocateurs and extended the ban to Instagram, which it owns.
Meanwhile, an international push to stipulate tech companies’ responsibilities for harmful content has gained steam.
In 2018, Germany implemented a new law prohibiting hate speech on online platforms. Cognizant of the important role online platforms play in informing voters, French lawmakers recently passed a law that empowers judges to order tech platforms to remove disinformation during election campaigns, and they will soon consider whether France’s own online hate speech laws need to be updated.39
In the wake of the Christchurch mosque massacres, Australian lawmakers passed a law that subjects online platforms to huge fines and tech executives to jail time if violent material is not removed from platforms in a timely manner.40 By contrast, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron have worked together to coordinate an international response, drafting the “Christchurch Call,” a non-binding pledge between governments and tech companies that sets out expectations regarding the removal of violent and extremist content.41 While tech companies will face no penalties if they fail to comply, the move signals growing international pressure for platforms to do a better job policing themselves, especially when it comes to content connected to real-world violence. Eighteen countries and five American tech companies—Amazon, Facebook, Google, Microsoft, and Twitter—signed onto the accord.42
The most aggressive regulatory effort to date to rein in harmful content comes from the United Kingdom, where leading consumer protection regulators have called for establishing new government powers to regulate harmful content online, including extremist and violent content, disinformation, and material that exploits children.43 The U.K. government proposes a new regulatory body funded by tech companies and empowered to issue fines, block access to websites, and hold top executives of tech companies liable for the content on their platforms. The House of Commons’ Digital, Culture, Media and Sport Committee will soon hold hearings on the government’s proposal. In an interview with Business Insider, the U.K.’s digital minister urged “other governments [to] follow our lead.”44
Driven by a concern that regulating harmful content online might violate Americans’ constitutional right to free speech, U.S. lawmakers are reluctant to consider introducing any measures to rein in harmful content online. In fact, the Trump administration declined to sign onto the “Christchurch Call,” citing free speech concerns. In a statement explaining its decision, the administration notes, “the best tool to defeat terrorist speech is productive speech.”45 The administration’s move offers yet another illustration that the U.S. understanding of how best to manage the threat posed by harmful content online is increasingly out of step with the path pursued by other countries.
Nonetheless, U.S. lawmakers may soon force platforms to accept greater liability for the content they host. In April 2018, lawmakers amended Section 230 of the Communications Decency Act to allow prosecutors and sex trafficking victims to take websites to court if advertisements or posts facilitate trafficking.46 The move indicated that the legal regime that has long insulated online platforms from liability for the content they host may confront future challenges.
While tackling online privacy has attracted bipartisan support among U.S. lawmakers, the debate over reining in harmful content online is rife with partisan division. Prominent Republicans, including the president, argue that big tech is suppressing conservative speech. Just recently, the Senate Judiciary subcommittee held a hearing on “Technological Censorship and the Public Discourse,” in which Republican senators claimed that Facebook, Google, and Twitter stifle conservative speech. (Facebook’s recent move to ban extremists, which fell heavily on white nationalists as well as anti-Semites, may add fuel to this charge.) Democrats believe such claims distract from a bigger problem: Platforms have failed to aggressively police hate speech and disinformation. The growing bipartisan chorus to rein in big tech may mask significant differences in how each party views the threat posed by big tech.
Balance of power between content creators and platforms
While dominant online platforms have created a new home on the internet for fake news and conspiracy theories, they have also become an indispensable tool for the circulation of legitimate content. One in five U.S. adults regularly consumes news on social media.47 Online platforms have undoubtedly come to occupy a significant place in the information ecosystem, but in doing so, they have also threatened the economic viability of the media organizations whose content they circulate.
Big tech dominance in digital advertising has hurt news outlets’ advertising revenue. Facebook and Google alone accounted for 60% of total digital advertising spending in 2018.48
As a result, print and online media have struggled to sustain themselves financially.49 The economic realities confronting the news industry are dismal. Newsroom employment declined 23% between 2008 and 2017,50 and the U.S. lost 1,800 newspapers between 2004 and 2018.51 While newsrooms across the country have laid off reporters or ceased production, journalistic content has been used by online platforms to attract and engage users.
A number of countries have grown increasingly concerned about the future of journalism in the digital economy. The Australian Competition and Consumer Commission argues that digital platforms pose a serious challenge to the provision of journalism and has called for establishing a regulatory authority to oversee and monitor how digital platforms display news and identify how algorithms affect the production of news.52 Meanwhile, a report commissioned by the British government proposes creating a code of conduct to govern the relationship between news publishers and online platforms, investigating the online advertising industry to ensure fair competition, and providing tax relief to publishers to support public-interest news.53
The European Union has already taken steps to level the playing field between content creators and online platforms. A recently passed copyright directive requires tech companies to enter into licensing agreements with content creators (including media companies) in order to share their content on the platform.54 The directive also holds platforms liable for any copyrighted content displayed without the proper rights and licensing.
Some U.S. lawmakers have expressed concern about the future of journalism in the digital age. In 2009, the Senate Committee on Commerce, Science, and Transportation held a subcommittee hearing on “The Future of Journalism.”55 That same year, Sen. Ben Cardin introduced the Newspaper Revitalization Act, which would have enabled newspapers to operate as nonprofits, making advertising and subscription revenue tax exempt and enabling tax-deductible contributions to support coverage.56 On the campaign trail, entrepreneur turned Democratic presidential candidate Andrew Yang has proposed creating a government program that would place experienced journalists in local newsrooms around the country and establishing a Local Journalism Fund that would provide grants to local news outlets.57 While a number of organizations and coordinated efforts have sprung up in the intervening years focused on sustaining local and investigative reporting, the journalism industry’s health has not yet become a primary focus for U.S. lawmakers.
Competition
This imbalance of power between newsrooms and social media sites is mirrored throughout the digital economy, from third-party vendors selling on Amazon Marketplace to app developers making their products available for download on the App Store. Content creators and businesses hold little power to extract fairer terms from the large online platforms that have become indispensable business partners.
This reflects, in part, the inherent nature of the digital economy, which is made up of highly concentrated markets that tend toward dominance. A platform becomes more valuable to each individual user as the total number of users increases. The greater the number of connections a user builds on a platform, the greater the switching costs a user incurs. This network effect makes it incredibly difficult for potential competitors to enter the market.
The massive amount of personal data platforms hold on their users also tips tech markets toward concentration. The more data a platform holds on its users, the more effectively it can customize the articles, photos, and posts an individual user is likely to enjoy, creating a feedback effect that has allowed a few platforms to dominate.
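A toy model makes these two dynamics concrete. The sketch below is purely illustrative: the Metcalfe-style assumption that a network’s value grows with the number of possible connections, and the saturating data-quality curve, are modeling choices for this example rather than empirical estimates.

```python
# Toy model of why platform markets tip: network effects plus a data feedback
# loop. All functional forms and constants are illustrative assumptions.

def network_value(users: int) -> float:
    # Metcalfe-style assumption: value scales with the number of possible
    # connections between users, roughly users squared.
    return users * (users - 1) / 2

def recommendation_quality(data_points: int) -> float:
    # Assume curation quality improves with accumulated user data, with
    # diminishing returns (a simple saturating curve on a 0-1 scale).
    return data_points / (data_points + 1_000_000)

incumbent_users, challenger_users = 1_000_000, 100_000
# With 10x the users, the incumbent has roughly 100x the connection value...
print(network_value(incumbent_users) / network_value(challenger_users))
# ...and better-personalized feeds (assuming ~50 data points per user),
# which attract still more users and still more data.
print(recommendation_quality(incumbent_users * 50),
      recommendation_quality(challenger_users * 50))
```

Under these assumptions the gap widens on its own, which is the feedback effect described above.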
While these market dynamics inherently constrain competition, some tech companies have deliberately undermined competition to entrench their dominance. Big tech companies have bought up hundreds of start-ups, depriving the market of potential competitors. Google alone has acquired more than 200 start-ups since its founding. As a result, many venture capitalists and entrepreneurs have internalized a strategy of trying to be bought out by dominant tech companies instead of trying to compete against them. Major online platforms have also unfairly prioritized their own products and services and priced products below cost to undercut competitors.
Margrethe Vestager, the European Union Commissioner for Competition, has set the standard for tough enforcement against tech companies that weaken competition. In 2017, the European Commission levied a record $2.7 billion fine against Google for prioritizing its own online shopping service in search results.58 In 2018, the Commission broke this record when it brought a $5 billion antitrust fine against Google for using its mobile operating system to entrench the dominance of its other services (like Search and Chrome).59 This March, the Commission hit Google with a third fine of $1.7 billion for blocking advertising rivals.60 Vestager recently launched a preliminary probe into whether Amazon uses data on third-party merchants to compete against them in its own marketplace and will decide in the next few months whether to open a formal investigation.61 EU lawmakers have already agreed to new regulation that requires platforms to be more transparent with the businesses and content creators that rely on them to reach consumers.62
Meanwhile, Germany’s competition authority recently ruled that Facebook’s practice of combining user data across its platforms without users’ consent violates competition law. This is the first time a major competition enforcer in the EU has found a company in violation of competition law for failing to comply with data protection principles.63
At the heart of these enforcement actions is an emerging international consensus that data is a new, under-examined source of market power in the tech sector. This understanding led the Canadian House of Commons’ Standing Committee on Access to Information, Privacy and Ethics to suggest moving competition enforcement in the tech sector “away from price-centric tools” and toward evaluating the value of data at stake between merging companies.64 Acknowledging the ability of data collection to weaken competition, the committee has also recommended establishing principles of data portability and system interoperability.
An emerging view also holds that antitrust and regulatory action may be needed to rein in big tech. A government-appointed Digital Competition Expert Panel in the United Kingdom recently concluded that neither antitrust action to ensure markets operate freely and competitively nor government intervention through regulation will be sufficient on its own.65 The panel calls for modernizing competition enforcement in the digital age by establishing a pro-competition digital markets unit, resetting merger assessment in digital markets, prioritizing scrutiny of mergers in digital markets (including assessing harm to innovation and potential impacts on competition), and performing retrospective evaluation on previously approved mergers. The panel also recommends developing principles for data mobility, identifying certain companies as having “strategic market status” and prohibiting them from prioritizing their own products and services on their platform, and creating open standards for user data to ensure consumers can easily transition to using another platform.66 In a similar vein, a recently released European Commission report on competition policy states “there is no general answer to the question of whether competition law or regulation is better placed to deal with the challenges arising from digitisation of the economy.” The report goes on to note that “competition law enforcement and regulation are not necessarily substitutes, but most often complements and can reinforce each other.”67
The U.K. expert panel and the European Commission report both take the view that ensuring competition in the digital economy does not require wholesale reform of competition law; its fundamental aims can stay the same while enforcement is modernized. For instance, the European Commission report warns that under-enforcement could threaten consumer welfare and argues that “even where consumer harm cannot be precisely measured, strategies employed by dominant platforms aimed at reducing the competitive pressure they face should be forbidden in the absence of clearly documented consumer welfare gains.”68 The emerging view, in short, is that the digital age does not call for revising the goals of competition law but instead challenges enforcers to rethink how they apply existing principles.
In the U.S., a consumer welfare standard exclusively focused on low prices has failed to capture concentration in the tech sector as dominant technology companies have evaded scrutiny by offering their services for free or at a low cost. American antitrust enforcers, however, are beginning to re-evaluate this approach.
The Federal Trade Commission recently set up its own task force to examine competition in the technology sector. The task force will assess previously approved acquisitions and study antitrust enforcement in technology markets. Bruce Hoffman, the director of the FTC’s Bureau of Competition, has said the FTC could use its lookback authority to reverse mergers if necessary.69 The FTC also plans to use its authority to collect non-public information from tech firms to study the inner workings of tech companies and their privacy and competition practices.70
Meanwhile, some U.S. lawmakers have already proposed ways to modernize antitrust enforcement for the digital age. Sen. Klobuchar, for example, introduced the Consolidation Prevention and Competition Promotion Act in September 2017, which would amend the Clayton Antitrust Act by banning acquisitions by any company with a market cap higher than $100 billion. The bill also calls for placing the burden of proof on companies to demonstrate that consolidation won’t limit competition.71
Others have called for regulating tech companies like utilities. Trump’s former chief strategist Steve Bannon, for example, has argued that online platforms, like cable, have become a kind of modern necessity and should therefore be regulated in a similar manner.72 Opponents of this approach contend that regulating tech companies like utilities represents a concession to their dominance and to the market realities that make it difficult for innovative competitors to dethrone today’s dominant players.
On the campaign trail, Sen. Elizabeth Warren released her own plan to rein in big tech which calls for both tougher antitrust enforcement and utility-style regulation.73 The first part of her plan focuses on strengthening antitrust enforcement by appointing regulators who would reverse anti-competitive tech mergers. The second part of her plan calls for enacting legislation that would designate large tech companies such as Amazon, Facebook, and Google as “platform utilities” and ban those companies from selling their own products and services on the platform they operate. Under this proposal, Amazon Marketplace and Amazon Basics, which currently sells products on Amazon Marketplace, would be split into two separate companies banned from sharing data with one another.
Just recently, Facebook co-founder Chris Hughes penned a 6,000-word op-ed calling on lawmakers to break up Facebook.74 The idea that major online platforms have become too big is now under serious consideration in mainstream policy circles. Whether a breakup could be supported by an antitrust framework focused on consumer welfare remains an open question.
Conclusion
Around the world, governments are experimenting with how best to confront the problems posed by the digital economy, from online platforms that empower bad actors to dominant technology companies that shape our personal and economic lives in profound ways. But not every challenge the digital economy has introduced can be effectively managed by passing new laws, levying steep fines, or imposing structural mandates. There is only so much regulatory action can do to address the fact that catering to our appetite for political tribalism has become so profitable.
While Zuckerberg calls Facebook a “global community,” it has become increasingly clear to many that the algorithms that fuel the attention economy stoke polarization rather than quell it, creating negative externalities that threaten a healthy democracy.75 As a Washington Post headline on recent policy changes at Facebook reads, “Facebook is trying to stop its own algorithms from doing their job.”76
A massive redesign could discourage the ideological echo chambers that currently proliferate online or deprioritize incendiary posts, but it is unlikely platforms will be able to entirely prevent harmful content from making its way online. While Facebook has received criticism for failing to invest in more employees who can monitor harmful content on the platform and verify the factual legitimacy of articles shared, no number of humans can reasonably evaluate the volume of content posted every second on the largest platforms, and current technology alone is not advanced enough to sufficiently monitor disinformation and hate speech.77
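A rough calculation shows why. Every figure below is an assumption chosen only to illustrate the order of magnitude; none comes from platform disclosures.

```python
# Back-of-envelope arithmetic on human content review; every figure here is
# an assumption, used only to show the order of magnitude involved.
posts_per_day = 500_000_000             # assumed daily volume for a very large platform
seconds_per_careful_review = 30         # assumed time for one human review
reviewer_seconds_per_day = 8 * 60 * 60  # one reviewer's full working day

reviewers_needed = posts_per_day * seconds_per_careful_review / reviewer_seconds_per_day
print(f"{reviewers_needed:,.0f} full-time reviewers")  # ~520,000 under these assumptions
```

Even if these assumed figures are off by severalfold, the conclusion above stands: human review alone cannot keep pace with the volume involved.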
Lawmakers in several countries, including Canada, France, and the United Kingdom, have proposed implementing digital literacy initiatives that would help citizens identify disinformation online and avoid spreading it themselves. These initiatives would also teach users the value of stopping to think before publishing a post or comment online. Platform technologies have long aimed to remove “friction” from the user experience, but this design principle has been criticized by the Center for Humane Technology for creating addictive technologies that discourage thoughtful engagement online.78
While many countries have identified a critical role for government in developing new rules and promoting competitive markets in the digital economy, some also see a role for government in helping citizens and democracy maintain power at a time of rapid technological change. France’s Policy Planning Staff and Institute for Strategic Research argue that “citizens concerned with the quality of public debate” are in charge of its protection.79 “[I]t is the duty of civil society to develop its own resilience,” they note in a report on information manipulation.80 They argue, however, that governments “cannot afford to ignore a threat that undermines the foundations of democracy” and “should come to the aid of civil society.”81 Similarly, the authors of the United Kingdom House of Commons’ report on disinformation and fake news contend that citizens who engage thoughtfully online and “regulation to restore democratic accountability” can “make sure the people stay in charge of the machines.”82
The global push to craft rules for the digital economy is well under way, and in the U.S., lawmakers who were once cheerleaders of Silicon Valley are now declaring the end of an era of self-regulation.83 With a federal response to online political advertising transparency, user privacy, and competition in technology markets in its nascent stage, the U.S. can look to the rest of the world for ideas.
It may well be that the challenge of the digital information sector is beyond the ability of single nations working by themselves to meet. If so, it is time to start thinking about the kinds of international agreements and institutions that could do collectively what individual countries cannot do for themselves.
In the first instance, anyway, these arrangements would have to be restricted to countries that share a broad understanding of the importance of individual liberty and a free civil society. An international regime that encompassed authoritarian governments as well as liberal democracies would be a cure worse than the disease. It is not clear, for example, that the People’s Republic of China would ever be willing to embrace the kinds of individual and civic rights for information technologies that liberal democracies consider nonnegotiable. Still, it would make sense to begin exploring what the governments of free societies might be able to agree on across their differences of law and civic cultures.
The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.
Amazon, Facebook, Google, and Microsoft provide general, unrestricted support to The Brookings Institution. The findings, interpretations, and conclusions posted in this piece are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
Appendix
Australia
- Australian Competition & Consumer Commission: “Digital Platforms Inquiry”
Canada
- Standing Committee on Access to Information, Privacy and Ethics: “Democracy Under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly”
European Union
- European Commission: “Competition policy for the digital era”
France
- Policy Planning Staff in the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research in the Ministry for the Armed Forces: “Information Manipulation: A Challenge for Our Democracies”
United Kingdom
- The Cairncross Review: “A Sustainable Future for Journalism”
- Digital Competition Expert Panel: “Unlocking digital competition”
- House of Commons’ Digital, Culture, Media and Sport Committee: “Disinformation and ‘fake news’: Final Report”
- Secretary of State for Digital, Culture, Media & Sport and the Secretary of State for the Home Department: “Online Harms White Paper”
Footnotes
- David Talbot, “Inside Egypt’s ‘Facebook Revolution,’” MIT Technology Review, April 29, 2011, https://www.technologyreview.com/s/423884/inside-egypts-facebook-revolution/.
- Kevin Roose, “A Mass Murder of, and for, the Internet,” New York Times, March 15, 2019, https://www.nytimes.com/2019/03/15/technology/facebook-youtube-christchurch-shooting.html. Paul Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” New York Times, October 15, 2018, https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html. Scott Shane and Sheera Frenkel, “Russia 2016 Influence Operation Targeted African-Americans on Social Media,” New York Times, December 17, 2018, https://www.nytimes.com/2018/12/17/us/politics/russia-2016-influence-campaign.html.
- Cecilia Kang and Sheera Frenkel, “Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users,” New York Times, April 4, 2018, https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html.
- Zeynep Tufekci, “Russian Meddling Is a Symptom, Not the Disease,” New York Times, October 3, 2018, https://www.nytimes.com/2018/10/03/opinion/midterms-facebook-foreign-meddling.html. Zeynep Tufekci, “YouTube, the Great Radicalizer,” New York Times, March 10, 2018, https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
- For instance, a report on the threat posed by disinformation from the Standing Committee on Access to Information, Privacy and Ethics in Canada’s House of Commons, states that “the structural problems inherent in social media platforms serve to fuel the attention economy and help in the promotion of disinformation.” (“Democracy Under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly,” Canada House of Commons’ Standing Committee on Access to Information, Privacy and Ethics, December 2018, https://www.ourcommons.ca/Content/Committee/421/ETHI/Reports/RP10242267/ethirp17/ethirp17-e.pdf).
- The Digital, Culture, Media and Sport Committee in the United Kingdom’s House of Commons recently released a report on “Disinformation and ‘fake news’” which observes “people are able to accept and give credence to information that reinforces their views, no matter how distorted or inaccurate, while dismissing content with which they do not agree as ‘fake news.’” (“Disinformation and ‘fake news’: Final Report,” United Kingdom House of Commons’ Digital, Culture, Media and Sport Committee, February 14, 2019, https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf).
- A French report on “Information Manipulation,” for example, notes that the “largest Western democracies are not immune.” (“Information Manipulation: A Challenge for Our Democracies,” Policy Planning Staff in the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research in the Ministry for the Armed Forces, August 2018, https://www.diplomatie.gouv.fr/IMG/pdf/information_manipulation_rvb_cle838736.pdf).
- Tony Romm, “Facebook and Google to be quizzed on white nationalism and political bias as Congress pushes dueling reasons for regulation,” Washington Post, April 8, 2019, https://www.washingtonpost.com/technology/2019/04/08/facebook-google-be-quizzed-white-nationalism-political-bias-congress-pushes-dueling-reasons-regulation/?utm_term=.d7bc3a78c5e6.
- James Q. Whitman, “The Two Western Cultures of Privacy: Dignity versus Liberty,” The Yale Law Journal 113, no. 6 (April 2004): 1151-1222, http://heinonline.org/HOL/Page?handle=hein.journals/ylr113&div=52.
- Whitman, “The Two Western Cultures of Privacy.”
- Kurt Wagner, “Digital advertising in the US is finally bigger than print and television,” Vox, February 20, 2019, https://www.vox.com/2019/2/20/18232433/digital-advertising-facebook-google-growth-tv-print-emarketer-2019.
- Julia Angwin, Madeleine Varner, and Ariana Tobin, “Facebook Enabled Advertisers to Reach ‘Jew Haters,’” ProPublica, September 14, 2017, https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters.
- Scott Shane and Vindu Goel, “Fake Russian Facebook Accounts Bought $100,000 in Political Ads,” New York Times, September 6, 2017, https://www.nytimes.com/2017/09/06/technology/facebook-russian-political-ads.html.
- Dylan Byers, “Facebook estimates 126 million people were served content from Russia-linked pages,” CNN, October 31, 2017, https://money.cnn.com/2017/10/30/media/russia-facebook-126-million-users/index.html.
- Kevin Roose, “In Virginia House Race, Anonymous Attack Ads Pop Up on Facebook,” New York Times, October 17, 2018, https://www.nytimes.com/2018/10/17/us/politics/virginia-race-comstock-wexton-facebook-attack-ads.html.
- Bill Curry, “Elections Canada puts Facebook, Google, other tech giants on notice over political ads,” The Globe and Mail, April 24, 2019, https://www.theglobeandmail.com/politics/article-elections-canada-puts-tech-giants-on-notice-over-political-ads/.
- “Republican Leaders on Campaign Finance, Voting Rights, and Ethics Legislation,” C-SPAN, March 6, 2019, https://www.c-span.org/video/?458565-1/republican-leaders-hold-news-conference-campaign-finance-voting-rights-ethics-legislation.
- Niels Lesniewski, “McConnell Skeptical of Mandatory Disclosures for Facebook, Twitter Ads,” Roll Call, November 4, 2017, https://www.rollcall.com/news/politics/mcconnell-skeptical-of-mandatory-disclosures-for-facebook-and-twitter-ads.
- Gabriel J.X. Dance, Michael LaForgia, and Nicholas Confessore, “As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants,” New York Times, December 18, 2018, https://www.nytimes.com/2018/12/18/technology/facebook-privacy.html.
- Sam Schechner and Mark Secada, “You Give Apps Sensitive Personal Information. Then They Tell Facebook.” Wall Street Journal, February 22, 2019, https://www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-facebook-11550851636.
- Matthew Rosenberg, Nicholas Confessore, and Carole Cadwalladr, “How Trump Consultants Exploited the Facebook Data of Millions,” New York Times, March 17, 2018, https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html.
- CNN transcript Amanpour March 26, 2018 episode, http://transcripts.cnn.com/TRANSCRIPTS/1803/26/ampr.01.html.
- Mike Isaac and Sheera Frenkel, “Facebook Security Breach Exposes Accounts of 50 Million Users,” New York Times, September 28, 2018, https://www.nytimes.com/2018/09/28/technology/facebook-hack-data-breach.html.
- Tony Romm, “France fines Google nearly $57 million for first major violation of new European privacy regime,” Washington Post, January 21, 2019, https://www.washingtonpost.com/world/europe/france-fines-google-nearly-57-million-for-first-major-violation-of-new-european-privacy-regime/2019/01/21/89e7ee08-1d8f-11e9-a759-2b8541bbbe20_story.html?utm_term=.f85c34719ab6.
- Sam Schechner, “Facebook Bends to EU Pressure on ‘Misleading’ Fine Print,” Wall Street Journal, April 9, 2019, https://www.wsj.com/articles/facebook-bends-to-eu-pressure-on-misleading-fine-print-11554804346.
- “Irish regulator opens inquiry into Facebook over password storage,” Reuters, April 25, 2019, https://www.reuters.com/article/us-facebook-cyber/irish-regulator-opens-inquiry-into-facebook-over-password-storage-idUSKCN1S11YV.
- “Digital Platforms Inquiry: Preliminary report,” Australian Competition & Consumer Commission, December 2018, https://www.accc.gov.au/system/files/ACCC%20Digital%20Platforms%20Inquiry%20-%20Preliminary%20Report.pdf.
- “Democracy Under Threat,” Canada House of Commons’ Standing Committee on Access to Information, Privacy and Ethics.
- Tony Romm, “Canada accuses Facebook of breaking privacy laws, promises to take the company to court,” Washington Post, April 25, 2019, https://www.washingtonpost.com/technology/2019/04/25/canada-accuses-facebook-breaking-local-privacy-law-threatening-take-company-court/?utm_term=.eaa8f1f70238.
- Jim Waterson, “UK fines Facebook £500,000 for failing to protect user data,” The Guardian, October 25, 2018, https://www.theguardian.com/technology/2018/oct/25/facebook-fined-uk-privacy-access-user-data-cambridge-analytica.
- Mike Isaac and Cecilia Kang, “Facebook Expects to Be Fined Up to $5 Billion by F.T.C. Over Privacy Issues,” New York Times, April 24, 2019, https://www.nytimes.com/2019/04/24/technology/facebook-ftc-fine-privacy.html.
- Jeff Horwitz, “Facebook Sets Aside $3 Billion to Cover Expected FTC Fine,” Wall Street Journal, April 24, 2019, https://www.wsj.com/articles/facebook-sets-aside-3-billion-to-cover-expected-ftc-fine-11556137113.
- Tony Romm, “Facebook CEO Mark Zuckerberg said to be under close scrutiny in federal privacy probe,” Washington Post, April 19, 2019, https://www.washingtonpost.com/technology/2019/04/19/federal-investigation-facebook-could-hold-mark-zuckerberg-accountable-privacy-sources-say/?utm_term=.fdd15b0a8d35.
- Tony Romm, “Can Washington keep watch over Silicon Valley? The FTC’s Facebook probe is a high-stakes test.” Washington Post, February 20, 2019, https://www.washingtonpost.com/technology/2019/02/20/can-washington-keep-watch-over-silicon-valley-ftcs-facebook-probe-is-high-stakes-test/?utm_term=.1905d8f22a2d.
- Hamza Shaban, “Facebook to reexamine how livestream videos are flagged after Christchurch shooting,” Washington Post, March 21, 2019, https://www.washingtonpost.com/technology/2019/03/21/facebook-reexamine-how-recently-live-videos-are-flagged-after-christchurch-shooting/?utm_term=.937ec58bec89.
- Meagan Flynn, “No one who watched New Zealand shooter’s video live reported it to Facebook, company says,” Washington Post, March 19, 2019, https://www.washingtonpost.com/nation/2019/03/19/new-zealand-mosque-shooters-facebook-live-stream-was-viewed-thousands-times-before-being-removed/?utm_term=.97391dac2c45. Shaban, “Facebook to reexamine how livestream videos are flagged after Christchurch shooting.”
- Joe Heim, “Recounting a day of rage, hate, violence and death,” Washington Post, August 14, 2017, https://www.washingtonpost.com/graphics/2017/local/charlottesville-timeline/?utm_term=.f085d315848b. Tony Romm and Elizabeth Dwoskin, “Facebook says it will now block white-nationalist, white-separatist posts,” Washington Post, March 27, 2019, https://www.washingtonpost.com/technology/2019/03/27/facebook-says-it-will-now-block-white-nationalist-white-separatist-posts/?utm_term=.0d69befd6f63.
- Elizabeth Dwoskin, “How Facebook is trying to stop its own algorithms from doing their job,” Washington Post, April 10, 2019, https://www.washingtonpost.com/technology/2019/04/10/how-facebook-is-trying-stop-its-own-algorithms-doing-their-job/?utm_term=.c698665d6f0a. Cade Metz and Adam Satariano, “Facebook Restricts Live Streaming After New Zealand Shooting,” New York Times, May 14, 2019, https://www.nytimes.com/2019/05/14/technology/facebook-live-violent-content.html.
- “Justice : lutte contre la haine sur internet,” Assemblée nationale, May 2, 2019, http://www.assemblee-nationale.fr/dyn/15/dossiers/lutte_contre_haine_internet.
- Hillary Leung, “Australia Has Passed a Sweeping Law to Punish Social Media Companies for Not Policing Violent Content. Here’s What to Know,” TIME, April 5, 2019, http://time.com/5564851/australia-social-media-law-violence/.
- Charlotte Graham-McLay, “New Zealand and France to Seek Pact Blocking Extreme Online Content,” New York Times, April 24, 2019, https://www.nytimes.com/2019/04/24/world/asia/ardern-social-media-content.html.
- Tony Romm and Drew Harwell, “White House declines to back Christchurch call to stamp out online extremism amid free speech concerns,” Washington Post, May 15, 2019, https://www.washingtonpost.com/technology/2019/05/15/white-house-will-not-sign-christchurch-pact-stamp-out-online-extremism-amid-free-speech-concerns/?utm_term=.d0b7673c7578.
- Adam Satariano, “Britain Proposes Broad New Powers to Regulate Internet Content,” New York Times, April 7, 2019, https://www.nytimes.com/2019/04/07/business/britain-internet-regulations.html. Tony Romm, “British unveil tough social media proposal,” Washington Post, April 7, 2019, https://www.washingtonpost.com/business/economy/2019/04/07/97a1c4c6-5979-11e9-a00e-050dc7b82693_story.html?utm_term=.a06c397dc065.
- Jake Kanter, “Facebook and Google will be punished with giant fines in the UK if they fail to rid their platforms of toxic content,” Business Insider, February 28, 2019, https://www.businessinsider.com/facebook-and-google-fined-for-harmful-content-margot-james-2019-2.
- Adam Satariano, “Trump Administration Balks at Global Pact to Crack Down on Online Extremism,” New York Times, May 15, 2019, https://www.nytimes.com/2019/05/15/technology/christchurch-call-trump.html.
- Steven Overly and Ashley Gold, “How tech lost on the sex trafficking bill,” POLITICO, March 22, 2018, https://www.politico.com/story/2018/03/22/how-tech-lost-on-the-sex-trafficking-bill-423364.
- Elisa Shearer, “Social media outpaces print newspapers in the U.S. as a news source,” Pew Research Center Fact Tank, December 10, 2018, https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/.
- “US Digital Ad Spending Will Surpass Traditional in 2019,” eMarketer, February 19, 2019, https://www.emarketer.com/content/us-digital-ad-spending-will-surpass-traditional-in-2019.
- Keach Hagey, Lukas I. Alpert, and Yaryna Serkez, “In News Industry, a Stark Divide Between Haves and Have-Nots,” Wall Street Journal, May 4, 2019, https://www.wsj.com/graphics/local-newspapers-stark-divide/.
- Elizabeth Grieco, “Newsroom employment dropped nearly a quarter in less than 10 years, with greatest decline at newspapers,” Pew Research Center Fact Tank, July 30, 2018, https://www.pewresearch.org/fact-tank/2018/07/30/newsroom-employment-dropped-nearly-a-quarter-in-less-than-10-years-with-greatest-decline-at-newspapers/.
- Penelope Muse Abernathy, “The Expanding News Desert,” University of North Carolina School of Media and Journalism’s Center for Innovation and Sustainability in Local Media, 2018, https://www.cislm.org/wp-content/uploads/2018/10/The-Expanding-News-Desert-10_14-Web.pdf.
- “Digital Platforms Inquiry: Preliminary Report,” Australian Competition & Consumer Commission.
- “The Cairncross Review: A Sustainable Future for Journalism,” February 12, 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/779882/021919_DCMS_Cairncross_Review_.pdf.
- Laura Kayali and Laurens Cerulus, “Europe revamps copyright rules to help creative industries face tech giants,” POLITICO, February 14, 2019, https://www.politico.eu/article/copyright-google-facebook-europe-revamps-rules-to-help-creative-industries-face-tech-giants/. Matthew Karnitschnig, “Europe puts American tech on leash,” POLITICO, April 19, 2019, https://www.politico.eu/article/silicon-valley-copyright-antitrust-google-europe-puts-american-tech-on-leash/.
- “The Future of Journalism,” Hearing before the U.S. Senate Committee on Commerce, Science, and Transportation Subcommittee on Communications, Technology, and the Internet, May 6, 2009, https://www.govinfo.gov/content/pkg/CHRG-111shrg52162/pdf/CHRG-111shrg52162.pdf.
- “Senator Cardin Introduces Bill That Would Allow American Newspapers to Operate As Non-Profits,” press release, Office of U.S. Senator Ben Cardin of Maryland, March 24, 2009, https://www.cardin.senate.gov/newsroom/press/release/senator-cardin-introduces-bill-that-would-allow-american-newspapers-to-operate-as-non-profits.
- Josh Wood, “Andrew Yang, the most meme-able 2020 candidate, also wants to save journalism,” Nieman Journalism Lab, April 24, 2019, https://www.niemanlab.org/2019/04/andrew-yang-the-most-meme-able-2020-candidate-also-wants-to-save-journalism/.
- Mark Scott, “Google Fined Record $2.7 Billion in E.U. Antitrust Ruling,” New York Times, June 27, 2017, https://www.nytimes.com/2017/06/27/technology/eu-google-fine.html.
- Sam Schechner and Natalia Drozdiak, “The Woman Who Is Reining In America’s Technology Giants,” Wall Street Journal, April 4, 2018, https://www.wsj.com/articles/the-woman-who-is-reining-in-americas-technology-giants-1522856428?mod=article_inline.
- “European Union fines Google $1.7B for antitrust violation,” CNBC, March 20, 2019, https://www.cnbc.com/video/2019/03/20/european-union-google-1point7b-for-antitrust-violation.html.
- Silvia Amaro, “A full EU probe into Amazon could come in the next few months, top official says,” CNBC, April 3, 2019, https://www.cnbc.com/2019/04/03/eus-vestager-says-a-full-probe-into-amazon-could-come-before-october.html.
- “Online platforms required by law to be more transparent with EU businesses,” European Parliament, February 14, 2019, http://www.europarl.europa.eu/news/en/press-room/20190214IPR26425/online-platforms-required-by-law-to-be-more-transparent-with-eu-businesses. “Increased transparency in doing business through online platforms,” European Council, February 20, 2019, https://www.consilium.europa.eu/en/press/press-releases/2019/02/20/increased-transparency-in-doing-business-through-online-platforms/.
- Isobel Asher Hamilton, “Facebook was clobbered by a landmark EU ruling that could mean major changes to the way it does business,” Business Insider, February 7, 2019, https://www.businessinsider.com/facebook-landmark-bundeskartellamt-data-ruling-in-germany-2019-2. Simon Van Dorpe, “Germany hits Facebook at heart of its business model,” POLITICO, April 19, 2019, https://www.politico.eu/article/germany-hits-facebook-at-heart-of-its-business-model/.
- “Democracy Under Threat,” Canada House of Commons’ Standing Committee on Access to Information, Privacy and Ethics.
- Greg Ip, “In Britain, a Middle Way for Reining In Big Tech,” Wall Street Journal, March 12, 2019, https://www.wsj.com/articles/in-britain-a-middle-way-for-reining-in-big-tech-11552435261. “Unlocking digital competition,” Report of the Digital Competition Expert Panel, March 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.
- Ip, “In Britain, a Middle Way for Reining In Big Tech.”
- Jacques Crémer, Yves-Alexandre de Montjoye, and Heike Schweitzer, “Competition policy for the digital era,” European Commission, 2019, http://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf.
- Crémer, de Montjoye, and Schweitzer, “Competition policy for the digital era.”
- Kadhim Shubber and Kiran Stacey, “New FTC task force to tackle competition in tech sector,” Financial Times, February 26, 2019, https://www.ft.com/content/2801ced2-39f7-11e9-b856-5404d3811663.
- Nancy Scola and John Hendel, “Agency probing Facebook plans broad review of tech data practices,” POLITICO, March 20, 2019, https://www.politico.com/story/2019/03/20/ftc-tech-industry-1284243.
- “S.307 – Consolidation Prevention and Competition Promotion Act of 2019,” Congress.gov, accessed May 17, 2019, https://www.congress.gov/bill/116th-congress/senate-bill/307.
- Ryan Grim, “Steve Bannon Wants Facebook and Google Regulated Like Utilities,” The Intercept, July 27, 2017, https://theintercept.com/2017/07/27/steve-bannon-wants-facebook-and-google-regulated-like-utilities/.
- Elizabeth Warren, “Here’s how we can break up Big Tech,” Medium, March 8, 2019, https://medium.com/@teamwarren/heres-how-we-can-break-up-big-tech-9ad9e0da324c.
- Chris Hughes, “It’s Time to Break Up Facebook,” New York Times, May 9, 2019, https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html.
- “Democracy Under Threat,” Canada House of Commons’ Standing Committee on Access to Information, Privacy and Ethics.
- Dwoskin, “How Facebook is trying to stop its own algorithms from doing their job.”
- Tony Romm, “A flood online of hate speech greets lawmakers probing Facebook and Google about white nationalism,” Washington Post, April 9, 2019, https://www.washingtonpost.com/technology/2019/04/09/flood-online-hate-speech-greets-lawmakers-probing-facebook-google-about-white-nationalism/?utm_term=.1643e34f9648. Gary Marcus and Ernest Davis, “No, A.I. Won’t Solve the Fake News Problem,” New York Times, October 20, 2018, https://www.nytimes.com/2018/10/20/opinion/sunday/ai-fake-news-disinformation-campaigns.html.
- Betsy Morris, “A Silicon Valley Apostate Launches ‘An Inconvenient Truth’ for Tech,” Wall Street Journal, April 23, 2019, https://www.wsj.com/articles/a-silicon-valley-apostate-launches-an-inconvenient-truth-for-tech-11556046000.
- “Information Manipulation: A Challenge for Our Democracies,” Policy Planning Staff in the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research in the Ministry for the Armed Forces.
- “Information Manipulation: A Challenge for Our Democracies,” Policy Planning Staff in the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research in the Ministry for the Armed Forces.
- “Information Manipulation: A Challenge for Our Democracies,” Policy Planning Staff in the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research in the Ministry for the Armed Forces.
- “Disinformation and ‘fake news,’” United Kingdom House of Commons’ Digital, Culture, Media and Sport Committee.
- Nicholas Confessore and Matthew Rosenberg, “Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley,” New York Times, November 17, 2018, https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html.