Commentary

How will bans on social media affect children?

Nicol Turner Lee; Josie Stewart, Research and Communications Assistant; and Carolina Oxenstierna, Former Research Intern - Center for Technology Innovation

December 9, 2025


  • Lawmakers across the U.S. and around the world are legislating on kids’ use of social media, with several countries passing or considering bans for children under a certain age.
  • These measures are somewhat popular, though attempts in the U.S. have faced legal challenges and opposition from social media companies and civil liberties groups.
  • Bans may not solve the problems deriving from children’s social media use and raise concerns about freedom of expression. Targeting how social media platforms are designed could be more effective and provide better results for all users, including adults.
Image: A group of vintage figures ride a circular machine, symbolizing the endless, repetitive cycle of algorithmic scrolling and shallow interactions. Credit: Nadia Piet & Archival Images of AI + AIxDESIGN / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Australia’s first-of-its-kind social media ban for children under the age of 16 will take effect on Dec. 10, following heightened concerns about the impact of social media on teen mental health and well-being. Since the ban was passed in 2024, other countries have followed suit, introducing or considering similar legislation. Denmark plans to ban social media for users under 15, and regulators in the U.K. have signaled their willingness to advance an age-restricted ban or alternative measures such as time restrictions. 

These efforts build on growing concern among policymakers, civil society, and researchers about the correlation between children’s well-being and social media use. Several studies link reliance on these platforms to increased rates of depression and anxiety, distorted body image, and low self-esteem. Beyond correlational studies, some lawmakers cite rising rates of hopelessness and loneliness among children. Even so, it’s unclear whether bans will accomplish the goal of curbing children’s excessive use of social media. 

Undergirding this debate is a host of other concerns around children’s right to free expression, access to information, online privacy, and parents’ role in policing social media use. 

Although widely debated, Australia’s ban was overwhelmingly popular, with one YouGov poll finding that 77% of Australians supported the law at the time of its passage. It remains unclear, however, how enforcement will affect this support once the law takes effect; even then, government officials have said the rules won’t come into force all at once. 

No other country has yet passed an outright prohibition, though Denmark is the latest to propose a ban. In October, the prime minister announced a proposal that would apply to children under 15, with an exception for children aged 13 and older who have parental consent. 

Globally, one survey found that 65% of people support banning children under the age of 14 from using social media. This included majorities in 29 out of 30 countries surveyed, with Germany as the sole outlier. 

Other countries have taken more targeted, less restrictive approaches. Germany and France have raised the age needed for parental consent to open an account, while the Netherlands and South Korea have restricted the use of cellphones in classrooms. There are also strengthened protections for children’s data, including from the EU, which requires parental consent for the processing of personal data under the age of 13 or 16, depending on the member state. 

Social media and the U.S.

In the United States, absent much action at the federal level, states have sought to address children’s use of social media through a mix of age-assurance mechanisms, additional privacy protections, and outright bans. Florida Gov. Ron DeSantis signed a ban on children under 14 using certain social media platforms last year; the law was quickly challenged, though a federal appeals court recently ruled that enforcement can begin while the lawsuit continues. Other states, such as Utah, Texas, Arkansas, and Louisiana, have attempted to require parental consent for minors to have social media accounts, raising the bar from the federal threshold set at 13 years old, though all faced legal challenges. California and New York have taken a different approach, both seeking to restrict “addictive feeds” for minors absent parental consent and to strengthen kids’ privacy settings. 

Despite these varied strategies, legislators cite similar concerns about children’s worsening mental health, rising amounts of time spent online, and harassment. These concerns are reflected at the federal level, where Sen. Brian Schatz (D-HI) introduced the Kids Off Social Media Act (KOSMA), which would prohibit children under the age of 13 from creating or maintaining social media accounts and ban content recommendations based on personal data for children under 17. The bill was reintroduced this year by a bipartisan group of senators and has the support of various organizations, including the American Federation of Teachers and the American Academy of Child and Adolescent Psychiatry. 

More recently, members of Congress have expressed renewed interest in legislation on children’s online safety, examining various proposals to protect children and their data online. KOSMA was not discussed at length during a recent hearing in the House of Representatives, though the Reducing Exploitative Social Media Exposure for Teens (RESET) Act—which would prohibit social media platforms from allowing children under the age of 16 to make or hold accounts—was included in the list of proposals. 

As these efforts have grown, social media companies have announced new provisions related to kids’ and teens’ accounts. Meta has gradually rolled out “Teen Accounts” with built-in protections and parental controls, and in October, announced that teens on Instagram will only be shown PG-13 content, similar to the ratings systems already used in the motion picture industry. But soon after, the Motion Picture Association sent the company a cease-and-desist letter, suggesting that Meta had been “highly misleading” and applied the PG-13 rating without regard for the multistakeholder process that goes into creating such labels.

Will these bans solve the problem at hand?

Across the United States and abroad, lawmakers have concluded that social media bans are both necessary and urgent to protect children’s mental health and online safety. However, there is little evidence that such bans are an effective solution, and there is growing reason to believe they may raise as many challenges as they aim to solve. As governments consider their next steps, there are several considerations beyond children’s well-being, especially the legal, technical, and social implications of pursuing age-based approaches. 

First, tech companies themselves have maintained that outright bans are difficult to enforce, as children often find ways to circumvent restrictions. In Australia, companies have also expressed concern about the lack of a grace period and the heavy penalties imposed on platforms rather than users who subvert the restrictions. 

Even if bans could be successfully implemented, barring kids from social media might not meaningfully reduce screen time. One study concludes that restricting access to certain platforms does not necessarily encourage kids to spend more time offline, but rather pushes them toward other activities like watching television, playing video games, or even exploring “darker corners of the internet.” 

Beyond technical feasibility, legal and constitutional concerns also pose significant hurdles. As discussed, several state-level bans have already faced legal challenges on the grounds that they restrict free expression, among additional concerns. In the U.S., civil liberties groups such as the American Civil Liberties Union and Public Knowledge argue that KOSMA would unconstitutionally infringe on minors’ rights to access and share information online. Similar critiques have been leveled against other proposals, such as the Kids Online Safety Act (KOSA), which some warn could restrict access to critical resources, especially those that can benefit children navigating questions of identity. 

At the center of this debate is the protection of children’s online data privacy. This is a notable concern, given that proposed bans often depend on new forms of age verification, where companies require an ID to confirm a user’s stated age, and age assurance, where companies estimate a user’s age to identify underage users. Many states have not specified how these checks should be performed, raising questions about how companies will collect, store, and protect sensitive data. The potential use of government-issued IDs or biometric information introduces significant risks, including data breaches, misuse, and the exclusion of those who lack formal identification. 

Importantly, young people themselves are questioning the effectiveness of these blanket bans. Elena Mitrevska, an 18-year-old member of Australia’s eSafety Youth Council, has said these efforts seem “really disingenuous” to “remove entire online spaces for young people, versus just talking and trying to fix those particular issues.”  

Bans may also deprive teens of opportunities to develop digital literacy skills by navigating online environments gradually and with guidance. Teenage users will still be introduced to these platforms at some point, and shielding them entirely from social media can delay essential conversations between parents and their children about online risks, while hampering their ability to build these competencies early. Furthermore, social media often serves as a vital resource for LGBTQ+ youth, providing peer support, access to health information, and a sense of community that may be absent offline. 

Addressing core issues

The underlying problem with this regulatory trend is that social media bans target access instead of platform behavior. Policymakers’ central concern is children’s exposure to harmful content and the mental health impacts of social media, but these harms are not limited to underage users. Addictive design features, dangerous content, and negative psychological effects affect users across all age groups, indicating the need for solutions that address these issues systematically, rather than through age-based restrictions alone. This is not to say that special attention to minors is unimportant, but it is critical to ask whether the remedy will solve the broader problem. 

Australian eSafety commissioner Julie Inman Grant stated that regulators are “aware that delaying children’s access to social media accounts won’t solve everything, but it will introduce some friction in a system that has previously had none.” However, effectively addressing the issues driving the rising global popularity of social media bans will require working with tech companies to reform and redesign the platforms themselves. Further, a self-regulatory framework can address children’s online safety while maintaining industry-wide goals. Meta’s attempt to assert ratings and its new “Teen Accounts” suggest an industry proclivity toward self-regulation, though it may be coming too late in the broader debate about children’s safety. 

If governments aim to build a safer digital environment, the more effective path is to hold technology companies accountable for the design of their platforms and the content moderation systems that allow harmful material and behaviors to persist. Rather than banning access to online spaces that educate, connect, and empower young people, lawmakers should focus on regulating the underlying systems—data collection, algorithms, and content moderation—that power harmful dynamics online.

A regulatory approach that tackles platform design and holistic governance offers a promising path toward a digital environment that is safe, inclusive, and sustainable for users of all ages. Beyond this, we must examine why young people are addicted to social media in the first place. The answer might lie in deeper societal issues that have driven the shift toward greater technological dependence.

Acknowledgements and disclosures

Meta is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).