A previous version of this article included, in reference to the Kids Online Safety Act (KOSA), the following sentence: “The bill also bans youth under 13 from using social media and seeks parental consent for use among children under 17 years old.” The draft of KOSA approved by the Senate Commerce Committee on July 27, 2023 gives minors (defined as individuals under the age of 17) the ability to limit other individuals’ communication with them and to limit features that increase, sustain, or extend their use of the covered platform, but it does not prohibit individuals under the age of 13 from accessing social media. KOSA seeks parental consent for use among children, defined as individuals under the age of 13, not 17.
With 95% of teens reporting use of a social media platform, legislators have taken steps to curb excessive use. Amid increasing digitalization and reliance on existing and emerging technologies, youth are quick to adopt new tools and trends that can lead to excessive use, especially with the proliferation of mobile phones. The increasing use of social media and related platforms—most recently, generative AI—among minors has even caught the attention of the U.S. surgeon general, who released a health advisory about its effects on youth mental health. Alongside increasing calls to action, big tech companies have introduced a multitude of parental supervision tools. While these tools are framed as proactive, many critics of social media use have raised doubts about their efficacy, and a range of federal and state policies has emerged that may or may not align with the goal of protecting minors who use social media.
Currently, regulation of social media companies is being pursued by both individual states and the federal government, leading to an uneven patchwork of directives. For example, several states have already enacted new legislation to curb social media use among minors, including Arkansas, Utah, Texas, California, and Louisiana. However, these state efforts have not gone unchallenged. Big tech companies, such as Amazon, Google, Meta, Yahoo, and TikTok, have fought recent state legislation, and NetChoice, a lobbying organization that represents large tech firms, recently filed a lawsuit contesting Arkansas’s new law after challenging California’s legislation last year.
At the federal level, in late July 2023 the Senate Commerce Committee voted unanimously to advance two bipartisan bills to protect children’s internet use: the Kids Online Safety Act (KOSA) and an updated Children’s Online Privacy Protection Act (COPPA 2.0). KOSA is intended to create new guidance for the Federal Trade Commission (FTC) and state attorneys general (AGs) to penalize companies that expose children to harmful content on their platforms, including content that glamorizes eating disorders, suicide, and substance abuse, among other behaviors. The other bill, COPPA 2.0, proposes to raise the age of covered children under the existing law from 13 to 16 years old and to ban companies from advertising to kids. NetChoice quickly responded to the movement of these bills, suggesting that companies, rather than bad actors, were being scrutinized as the primary violators. Still others, including civil liberties groups, have opposed the Senate’s legislative proposals, pointing to the growing use of parental tools to surveil children, the risk of content censorship, and the potential for age-verification methods to collect more, not less, information.
This succession of domestic activity comes around the same time that the European Union (EU) and China are proposing regulations and standards governing minors’ use of social media. This month, for example, China’s Cyberspace Administration published draft guidelines that would bar minors from using social media between 10 p.m. and 6 a.m. and limit daily use to two hours for youth ages 16 to 18, one hour for those ages 8 to 15, and 40 minutes for those under 8. The U.K. has become even more stringent in its policing of social media platforms, with recent proposals to pressure companies into behavioral change through fines and jail time for breaking the law. There are some differences in the age definitions of “minor,” with most proposals referring to individuals under the age of 18, but KOSA codifies minors as those under 17 years of age.
How states specify guidelines, and which types of companies are subject to such scrutiny, varies across the state laws. In this blog, we examine five state laws with recent online privacy protections for minors and analyze their similarities and differences around platform accountability (including which applications are covered), the role of parents, and age-verification methods, as well as the opinions of minors about the proposals to curb their social media use.
The states of Arkansas, Texas, Utah, California, and Louisiana define what qualifies as a platform differently in their legislation, which creates some inconsistencies when it comes to enforcement. Below are some comparisons of these states’ definitions of what constitutes a digital platform.
Arkansas’s law offers the most explicit and exclusionary criteria for defining platforms. Companies focused on subscriptions without social interaction, non-educational short video clip generation, gaming, cloud storage, or career development are excluded from the definition, as are platforms providing email, direct messaging, streaming, news, sports, entertainment, online shopping, document collaboration, or commenting functions on news websites. These narrow criteria may result in most top social media platforms not meeting the definition. In particular, the exclusion for short-form video clip generation could carve most major social media platforms (including WhatsApp and Signal, which both have a Stories function) out of the definition.
Texas’s recently passed social media law defines such companies as “digital service providers” that facilitate social interaction through profiles and content distribution via messages, pages, videos, and feeds. Like Arkansas, it excludes services primarily focused on email, direct messaging, access to news, sports, or commerce content, search engines, and cloud storage. Under these criteria, Instagram would count as a “digital service provider,” but Facebook Marketplace would not.
Utah’s definition of a “social media platform” similarly excludes platforms where email, private messaging, streaming (licensed and gaming), non-user-generated news and sports content, chat or commenting related to such content, e-commerce, gaming, photo editing, artistic content showcasing (portfolios), career development, teleconferencing or video conferencing, cloud storage, or educational software are the predominant functions. Overall, these exclusions primarily focus on platforms enabling social interaction and content distribution.
In contrast to the other states, California employs the term “online service, product, or feature,” which covers services featuring content that appeals to minors, such as “games, cartoons, music, and celebrities,” and excludes broadband services, telecommunications services, and the use of physical products—criteria that align with those outlined in KOSA.
Louisiana has the broadest definition of platforms, using terms such as “account,” which encompasses a wide range of services for sharing information, messages, images/videos, and gaming, and “interactive computer service,” which refers to any provider that enables access to the internet, including educational services. This approach may indicate an intent to cover a wide variety of platforms and services enabling interaction and content sharing without explicitly specifying exclusions.
Each of the five states also offers varied guidance on which apps are subject to the law. Some apps like Discord, Slack, Microsoft Teams, GroupMe, Loop, Telegram, and Signal allow text, photo, and video sharing through direct messaging. However, even those used primarily for direct messaging, like Signal, may raise questions due to newer features like Stories, where a user can post short, time-limited updates. Meta’s WhatsApp, though primarily a messaging app, may also be affected by its ownership and integration with a larger social media platform. And the curation systems of streaming platforms like Netflix and Hulu, which lack direct social interaction, may fall inside or outside a regulatory scope focused on social interaction and content distribution as those services expand their features.
Table 1 provides a preliminary classification of what online content may or may not be permitted under each state law, in accordance with their definitions of what constitutes an eligible platform.
Table 1. List of included and excluded platforms by state
| State | Definition | Included | Potential Examples | Exclusions | Potential Examples |
| --- | --- | --- | --- | --- | --- |
| Arkansas | “Social media company” | Availability of creating public profile or account to interact socially and view and distribute content | Facebook Newsfeed, Twitter, dating apps | Subscriptions without social interaction, non-educational short video clip generation, gaming, cloud storage, and career development | Instagram Stories, Facebook Stories, TikTok, YouTube Shorts, Snapchat, Signal, LinkedIn |
| California | “Online service, product, or feature” | Games, cartoons, music, and celebrities that may appeal to minors | Roblox, YouTube Kids, Minecraft, Netflix, Spotify | Broadband services, telecommunications services, and the use of physical products | Comcast (ISPs), Google Home, Amazon Alexa |
| Louisiana | “Interactive computer service” | Sharing information, messages, images/videos, and gaming | Most major social media platforms and education platforms | Does not specify | N/A |
| Texas | “Digital service” | A platform facilitating social interaction through profiles and content distribution via messages, pages, videos, and feeds | Roblox, Minecraft, Facebook Newsfeed | Services primarily focused on email, direct messaging, education, news/sports/commerce access, search engines, and cloud storage | Facebook Marketplace, Google Drive |
| Utah | “Social media platform” | Profile creation, post uploading, viewing others’ posts, and interaction with other users | Instagram, Facebook | Platforms where predominant functions are email, private messaging, streaming (licensed and gaming), non-user-generated news/sports/entertainment, chat/commenting related to such content, e-commerce, gaming, photo editing, artistic content showcasing (portfolios), career development, teleconferencing/video conferencing, cloud storage, news website comments, and educational software | Reddit, Discord, Pinterest, TikTok, YouTube, Snapchat, Patreon, LinkedIn |
Source: The authors compiled these results based on an analysis of major social media apps’ main functions in July 2023.
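To make concrete how these divergent definitions could sort the same app differently, the sketch below encodes highly simplified versions of the Arkansas and Texas coverage rules as Python checks. The attribute names, rule logic, and example app are our own illustrative assumptions rather than the statutory language, and any real coverage analysis would turn on the full legal text.

```python
# Illustrative only: a toy model of how two state definitions might classify
# the same app differently. The attributes and rules are simplified assumptions,
# not the statutory language.
from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    functions: set = field(default_factory=set)  # e.g., {"profiles", "short_video"}

def covered_by_arkansas(app: App) -> bool:
    # Arkansas (simplified): public profiles and content distribution are covered,
    # but short-form video generation, gaming, cloud storage, etc. are carved out.
    excluded = {"short_video", "gaming", "cloud_storage", "career"}
    return "profiles" in app.functions and not (app.functions & excluded)

def covered_by_texas(app: App) -> bool:
    # Texas (simplified): social interaction via profiles and feeds is covered,
    # unless the service is primarily email, direct messaging, commerce, search,
    # or cloud storage.
    primarily_excluded = {"email", "direct_messaging", "commerce", "search", "cloud_storage"}
    return "profiles" in app.functions and not (app.functions & primarily_excluded)

if __name__ == "__main__":
    short_video_app = App("ShortVideoApp", {"profiles", "feeds", "short_video"})
    print(covered_by_arkansas(short_video_app))  # False: short-form video carve-out
    print(covered_by_texas(short_video_app))     # True: profiles and feeds, no primary exclusion
```

Run against a hypothetical short-form video app, the two checks diverge, mirroring the carve-outs summarized in Table 1.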
The five states differ in their approaches to age verification, particularly regarding access to platforms and content. In terms of access to platforms’ services, four of the five states (all except Louisiana) require some form of age verification, but the laws are inconsistent about how that may be done. Arkansas mandates the use of a third-party vendor for age verification, offering three specified approaches considered “reasonable age verification.” In Utah, the process is still undefined: the law requires age verification but does not establish the exact methods. The remaining states do not address in detail how a minor may be identified beyond the expectation that they will be. These ambiguities around age verification, especially as a key component for defining social media use, present both challenges and opportunities: they create uncertainty for social media platforms regarding compliance, while leaving room for practical and effective solutions.
Future efforts to address age-verification concerns will require a more holistic strategy toward approving age-assurance methods. Some states have called for applying existing identification requirements that validate one’s age, such as requiring government-issued IDs when minors create accounts or allowing minors to be “verified” through parental account confirmation to protect their privacy. Other ideas include deploying machine learning algorithms that estimate age by analyzing user behavior and content interaction.
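As a purely illustrative sketch of what behavior-based age estimation might involve, the toy model below fits a classifier on invented engagement features to flag accounts that may belong to minors for additional age assurance. The features, data, and threshold are hypothetical assumptions, not a validated or endorsed method, and any real deployment would raise the same data-collection and privacy concerns discussed later in this analysis.

```python
# Hypothetical sketch: inferring a "likely minor" flag from behavioral signals.
# The features and data below are invented for illustration; this is not a
# validated or recommended age-assurance method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per account: [avg_session_minutes, share_of_activity_after_10pm,
# share_of_school_aged_connections, short_video_share_of_watch_time]
X_train = np.array([
    [45, 0.10, 0.05, 0.20],   # adult-like usage pattern
    [60, 0.15, 0.10, 0.30],   # adult-like usage pattern
    [120, 0.40, 0.70, 0.85],  # minor-like usage pattern
    [150, 0.55, 0.80, 0.90],  # minor-like usage pattern
])
y_train = np.array([0, 0, 1, 1])  # 1 = labeled minor in the (hypothetical) training data

model = LogisticRegression().fit(X_train, y_train)

new_account = np.array([[130, 0.50, 0.75, 0.80]])
prob_minor = model.predict_proba(new_account)[0, 1]
if prob_minor > 0.8:  # arbitrary threshold for triggering additional age checks
    print(f"Flag for further age assurance (p={prob_minor:.2f})")
```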
All five states have some language on parental or guardian consent for minors accessing social media platforms, though they largely leave open how to verify that legal relationship. As with age-verification methods, validation and enforcement may prove challenging. The methods currently approved by the FTC include, but are not limited to, video conference, credit card validation, government-issued ID such as a driver’s license or Social Security number, or a knowledge-based questionnaire. Recently, some groups have sought to expand parental consent methods to include biometric age estimation, which presents a range of other challenges, including direct privacy violations and civil rights concerns. Generally, the process of confirming a parental relationship between a minor and an adult can be intrusive, difficult to scale on large social media platforms, or easily bypassed. Furthermore, even when parental consent is confirmed, it is important to consider whether the parent or guardian has the child’s best interest at heart or a good grasp of the minor’s choice and use of the applications defined as social media by the respective states.
Concerns around age verification alone hint at the limitations of state-specific laws in fully addressing social media use among minors. With fragmented definitions carving out which platforms are subject to the laws, there are workarounds that could allow some platforms to evade public oversight. For example, once the account holder’s age is verified and consent is given for platform access (except in Utah), there are no restrictions on the content a minor may access on the platform. Only Arkansas and Louisiana do not require platforms to provide parental control or oversight tools for minor accounts. Consequently, these laws fall short because they emphasize access restrictions while providing limited measures for monitoring and managing the content minors actually encounter.
How legal changes affect minors themselves is perhaps one of the largest concerns regarding the recent bills. Employing any of the various methods of age-verification requires some level of collection and storage of sensitive information by companies that could potentially infringe upon the user’s privacy and personal security.
While the implementation of stronger safeguards could reduce the harmful content shown to and used by minors, restricting access to certain sites could not only over-censor youth but also push them toward backdoor routes to such content. Further, overly stringent, homogenous restrictions would inadvertently deny access to some diverse populations for whom being unsafe online mirrors their experiences offline, including LGBTQ+ youth who could be outed to parents if forced to seek consent for supportive services or other identifying online content.
Because social media can serve as a lifeline to information, resources, and communities that may not otherwise be accessible, some of the requirements of the state laws for parental control and greater supervision by age may endanger other vulnerable youth, including those in abusive households. Clearly, in the presence of a patchwork of disparate state laws, a more nuanced and informed approach is necessary in situations where the legislative outcomes may end up with far-reaching consequences.
This is where the definition of harm in online child safety laws becomes significant; it varies across each of the five states, with some language unclear or even left undefined. States that do define harm focus on different types and scopes of harm. Texas law defines “harmful material” by reference to Section 43.24 of the Texas Penal Code, the state’s public indecency law, which covers content appealing to minors’ prurient interest in sex, nudity, or excretion, content that is offensive to prevailing adult standards, and content lacking any redeeming social value for minors. HB 18 also outlines other harmful areas, such as suicide and self-harm, substance abuse, harassment, and child sexual exploitation, under criteria that are consistent with major social media companies’ community guidelines, such as Meta’s Community Standards.
Other states do not specify the meaning of harm as thoroughly as Texas. For example, Utah’s bills (SB 152 and HB 311) do not provide a specific definition of harm but mention it in the context of algorithmically suggested content (“content that is preselected by the provider and not user generated”) (SB 152). The ban on targeted content poses a challenge to social media companies that rely on algorithmic curation. In the Utah bills, harm is also discussed in terms of features causing addiction to the platform. Utah’s HB 311 defines addiction as a user’s “substantial preoccupation or obsession with” the social media platform, leading to “physical, mental, emotional, developmental, or material harms.” Measures to prevent addiction include a default curfew on minors’ access to social media platforms (from 10:30 p.m. to 6:30 a.m.). Arkansas and Louisiana likewise lack specific definitions of harm and focus more on restricting minors’ access to the platform. Arkansas’s law similarly suggests that addiction can be mitigated by limiting minors’ platform access hours, and it prohibits targeted and suggested content, including ads and accounts. Additionally, Arkansas’s law holds social media companies accountable for damages resulting from unauthorized access by minors.
In contrast, California’s law centers on safeguarding minors’ personal data and treats the infringement of privacy rights as a form of harm, setting it apart from the other four states.
The patchwork of state legislation means minors have access to certain content in some states but not others. For example, states other than Utah do not restrict addictive features, so habit-forming design patterns such as infinite scrolling remain unregulated. Louisiana does not stipulate any limits on data processing and retention. None of the state laws restrict the non-textual and non-visual content that makes up much of social interaction on extended reality (XR) platforms, such as virtual reality (VR) apps. Moreover, the focus on access does not address how content that is already age-restricted may still reach minors.
As states enact their own laws, Congress’s Kids Online Safety Act takes different emphases and approaches to the protection of children online. Table 2 shows the differences between state and proposed federal legislation, starting with the federal bill’s aim to propose a baseline framework in a wide range of areas of interest to states, including age verification, parental consent, limits on advertising, and enforcement authority. However, the state bills provide stricter and more granular protections in certain areas, such as age verification (Arkansas, Texas, and Utah) and privacy protections for minors (California), than what is proposed in KOSA. The lack of unified standards across states may therefore produce varying levels of protection when crossing state lines, creating legal gray areas and opportunities for circumvention.
Although KOSA’s provisions on harm and duty of care also differ from those in the state bills, it is worth noting that KOSA includes provisions on reporting mechanisms and independent research. Further, KOSA mandates that reporting mechanisms be readily accessible to minors, parents, and schools, with specified timelines for platform responses. Such provisions demonstrate how the bill considers harm mitigation efforts as well as preventive measures like those in the state legislation.
In Section 7, KOSA also defines “eligible researchers” as non-commercial affiliates of higher-education institutions or nonprofit organizations, suggesting that the federal bill may acknowledge the evolving nature of online harm to minors and the importance of providing open data sets and safe harbors for researchers to explore issues such as age verification and the processing of minor users’ data. Research and creative experimentation are not mentioned in any of the state bills. In Table 2, we show how KOSA compares to each state bill across a common set of variables.
Table 2. Comparison of covered areas between state laws and KOSA
| | KOSA S 1409 | Arkansas SB 396 | California AB 2273 | Louisiana HB 61 | Texas HB 18 | Utah SB 152 |
| --- | --- | --- | --- | --- | --- | --- |
| Age Verification | No¹ | Yes | Yes | No² | Yes | Yes |
| Parental Consent | Yes, <13 | Yes, <18 | No | Yes, <18 | Yes, <18 | Yes, <18 |
| Data Collection Limits Specified | Yes | Yes | Yes | No | Yes | Yes |
| Ad Restrictions | Yes | No | Yes | No | Yes | Yes |
| Parental Account Access | Yes | No | Yes | No | Yes | Yes |
| Enforcement Authority | FTC | Prosecutors, Attorney General | Attorney General | None Specified | Attorney General | Division of Consumer Protection |
Sources: Federal: S 1409 – Kids Online Safety Act; Arkansas: SB 396 - The Social Media Safety Act; California: AB 2273 – The California Age-Appropriate Design Code Act; Louisiana: HB 61 – Provides for consent of a legal representative of a minor who contracts with certain parties; Texas: HB 18 – Securing Children Online through Parental Empowerment (SCOPE) Act; Utah: SB 152 – Social Media Regulation Amendments
The recent wave of online child safety legislation at the state and federal levels demonstrates the urgency of protecting minors online, much of it prompted by rising concerns over youth mental health, digital addiction, and exposure to inappropriate or harmful content. However, as highlighted in this analysis, the varying approaches adopted across states have resulted in an inconsistent patchwork of regulations, some of which may conflict with the bipartisan KOSA and COPPA 2.0 proposals.
In the end, this legislative fragmentation may create compliance challenges for platforms and leave children and their parents or guardians even more confused about social media use. State-specific legislation can strengthen or weaken children’s online safety as families cross state lines, and it may penalize parents who are unfamiliar with social media and how to use it appropriately. Overall, our analysis revealed several problematic areas worth summarizing, including:
- Inconsistent definitions of platforms and services undermine the laws’ legitimacy: states like Arkansas, Texas, and Utah use narrower criteria, with Arkansas’s exclusions potentially carving out most major social media platforms, while Louisiana sweeps in “any information service, system or access software.”
- Varying standards for age verification and parental consent will weaken the effectiveness of state (and federal) laws, as some states mandate third-party verification while others leave the methods largely undefined.
- Limited provisions for monitoring and managing children’s exposure to content once access restrictions are in place will further galvanize the need for additional state laws.
- Differing state-level definitions of harm, ranging from privacy infringements (California) and addiction (Utah) to various categories of inappropriate content (Texas and Utah), will also degrade the efficacy of the laws’ intent.
Congress has departed Washington, D.C. for summer recess, and the committee bills are likely to advance to the full Senate with some changes. Overall, more comprehensive federal legislation that establishes a duty of care, baseline standards, and safeguards should be welcomed by industry and civil society organizations over a patchwork of state laws. Passage of a federal law could also help construct consistent and more certain definitions around age and verification methods, parental consent rules, privacy protections through limits on data collection, and enforcement powers across the FTC and state AG offices. From there, states could build on this foundation with additional protections narrowly tailored to local contexts, including California, where some harmonization with its existing privacy law should occur.
But any attempt to pass highly restrictive children’s online safety laws will invite First Amendment challenges, especially restrictions on platform and content access that potentially limit minors’ freedom of expression and right to information. To be legally sound, variations or exceptions among age groups may have to be made, which would in turn require another round of privacy protections. Moving forward, Congress might consider convening a commission of experts and other stakeholders, including technology companies, child advocates, law enforcement, and state, federal, and international regulators, to propose harmonized regulatory standards and nationally accepted definitions.
Additionally, the FTC could be empowered with rulemaking authority over online child safety and establish safe harbors to incentivize industry toward greater self-regulation. These and other activities could end up being complementary to the legislative goals and provide sensible liability protections for companies that proactively engage in more socially responsible conduct.
Keeping children’s rights in mind
Finally, beyond the realm of any policy intervention lies the central interest of minors themselves, who may consider policymakers’ efforts paternalistic and patronizing. Moreover, actions like those being undertaken in China, and even in states like Texas, risk violating children’s digital rights as outlined in UN treaties, including access to information, freedom of expression and association, and privacy. Studies have already warned of the dangers that such measures could impose, such as increased censorship and compounded disadvantages among already marginalized youth.
While regulation and legislation can be well-meaning, parental oversight tools can be equally intrusive and risk promoting an authoritarian model at odds with children’s evolving autonomy and media literacy, since full account access enables surveillance. More collaborative governance models that involve minors may be a necessary next step to address such concerns while respecting children’s participation rights. In June 2023, the Brookings TechTank podcast published an episode with teenage guests who spoke to the increased calls for greater surveillance and enforcement of social media use. One teen suggested that teens should receive media literacy instruction, including more appropriate ways to engage with social media, in the same manner and with the same time allotment as driver’s or reproductive education. Scaling digital literacy programs to enhance minors’ awareness of online risks and safe practices could be highly effective and could include strategies for identifying and mitigating harmful practices and behaviors.
KOSA also mentions the inclusion of youth voices in Section 12 and proposes a Kids Online Safety Council. But the language does not specify how youth will directly recommend best practices. The idea of involving youth in community moderation best practices is forward thinking. A recent study that experimented with community moderation models based on youth-centered norms within the context of Minecraft showed that youth-centered community approaches to content moderation are more effective than deterrence-based moderation. By empowering problem-solving and self-governance, these approaches allowed minors to collectively agree on community rules through dialogue. As a result, younger users were able to internalize positive norms and reflect on their behavior on the platform. In the end, finding ways to involve instead of overly penalizing youth for their technology adoption and use may be another way forward in this current debate.
Beyond social media, privacy is at the center of how and why online services should be optimized for younger users. The lack of regulation of pervasive data collection practices in schools potentially undermines child privacy. Emerging learning analytics that rely on algorithmic sorting mechanisms have also been cited as having the potential to circumscribe students’ futures based on biased data. Minor-focused laws must encompass such non-consensual practices that affect children’s welfare.
The U.K.’s Age-Appropriate Design Code, based on UNCRC principles, supports a child’s freedom of expression, association, and play—all while ensuring GDPR compliance for processing children’s personal information. The code emphasizes that a child’s best interests should be “a primary consideration” in cases where a conflict between a child’s interests and commercial interests arises. However, it also recognizes that prioritizing a child’s best interests does not necessarily mean it conflicts with commercial interests. This attempt shows how the legal code aims to harmonize the interests of different actors affected by online child safety legislation. Moreover, the U.K.’s code classifies different age ranges according to developmental stages of a minor, allowing for services to develop more granular standards.
Current minor safety regulations embody assumptions about children requiring protection but lack nuance around the spectrum of children’s rights. Moving forward requires reconsidering children as more than objects of protection. Minors should instead be treated as rights-bearing individuals, and responses to online harms should center their voices and champion their digital rights. As these discussions evolve, the goal of supporting children’s well-being must also respect their personal capacities and the diverse nature of their lived experiences in the digital age.
Acknowledgements and disclosures
Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
The authors thank Mark MacCarthy, Nonresident Senior Fellow at Brookings, for his constructive feedback.
Footnotes
1. There is no provision explicitly requiring age verification, but provisions that require covered platform companies to take specific actions when they reasonably believe a user is a certain age may create an implicit requirement of age verification.
2. There is no explicit mention of age verification, but platform companies are prohibited from entering a contract or other agreement with a minor without obtaining the consent of the legal representative of the minor, which may create an implicit requirement of age verification.