Appendix B
Helping Kids Thrive Online: Health, Safety, & Privacy
Summary Of Request For Comment Responses
The National Telecommunications & Information Administration (NTIA) issued a Request for Comment on Kids Online Health & Safety (RFC) in September 2023 to gather information from experts, industry, and the general public about social media and online platforms’ positive and negative impacts on minors; current industry practices; and ways in which the private sector, caregivers, and the U.S. government may improve young people’s health and well-being online. The responses to the RFC were sought to help inform the Kids Online Health and Safety Task Force’s work in developing voluntary guidance; policy recommendations; best practices on safety-, health-, and privacy-by-design for industry to apply in developing digital products and services; and questions asked in Task Force listening sessions. Below is a summary of the responses received and the entities that provided comments.
In total, NTIA received more than 500 written comments in response to its Request for Comment from a mix of industry, academic, civil society, and individual contributors. Comments have been publicly posted on Regulations.gov, under the docket NTIA-2023-0008.
In addition to comments from individuals, NTIA received comments from entities such as those listed below:
Government
- California Privacy Protection Agency; eSafety Commissioner, Australia.
Industry and Industry Associations
- ACT | The App Association; Association of National Advertisers, Inc. (ANA); BBB National Programs; Chamber of Progress; Computer & Communications Industry Association (CCIA); Discord Inc.; Engine (start-up association); Entertainment Software Association (ESA); Gaggle (student surveillance software); Google; Information Technology Industry Council (ITI); The LEGO Group; Match Group; Meta; Microsoft; NetChoice; Network Advertising Initiative (NAI); PBS; Pinterest; Roblox Corp.; Software & Information Industry Association (SIIA); U.S. Chamber of Commerce.
Nonprofits/Civil Society
- 5Rights Foundation; American Consumer Institute; Bipartisan Policy Center; Center for Countering Digital Hate; Center for Democracy & Technology (CDT); Common Sense Media, Center for Digital Democracy, and Fairplay (CSM et al.); Electronic Privacy Information Center (EPIC); END Online Sexual Exploitation and Abuse of Children Coalition; Family Online Safety Institute (FOSI); Future of Privacy Forum (FPF); National Hispanic Media Coalition; The Phoenix Center; Public Knowledge; R Street Institute; TechFreedom; The Trevor Project (LGBTQI+ focus).
Medical and Educational Associations
- American Academy of Child and Adolescent Psychiatry; American Academy of Pediatrics; American Federation of Teachers; National Education Association (NEA).
Academics
- Digital Mental Health Research Group at the University of Cambridge; Yale University – Digital Economy Project; Strategic Training Initiative for the Prevention of Eating Disorders (STRIPED) at the Harvard T. H. Chan School of Public Health and the Michigan State University College of Law; The Center for Growth and Opportunity at Utah State University; and numerous other academics in their individual capacities (from New South Wales experts on body-image issues to Cato Institute fellows).
Summary of Key Points from Comments
- Most commenters expressed concerns about harms to kids online and cited existing studies, often related to specific types of harms.
- In addition to harms noted in the Surgeon General’s Advisory on Youth Mental Health and Social Media, commenters included items such as:
- The loss of time kids need for other important skill development;
- Contribution to obesity crisis/unhealthy eating;
- Self-harm, disruption, and danger (including a “slap a teacher” challenge);
- Digital stress for kids, including fear of missing out and the pressure to remain online/available;
- Distress for parents;
- Child identity fraud;
- Peer pressure for students—no matter the income level—to purchase items for multiplayer games.
- Some parents and youth also detailed their personal experiences, including parents of youth who died by suicide after sextortion or from drug overdoses after exposure to online drug dealers, and parents of youth who developed eating disorders after sustained exposure to certain content.
- Some medical and scientific experts highlighted the complexity of identifying harms and determining causality with scientific certainty.
- This included the difficulty of disentangling online and offline factors, with one analogy comparing online consumption to eating very different types and quantities of food.
- There were specific comments, in particular from outside the medical community, challenging causal links between social media use and youth mental health concerns.
- Commenters described the need for more research and the barriers to getting data.
- This included a lack of transparency preventing understanding of scale and impact on mental health, the need for data about algorithmic practices, and the high cost to obtain data. Privacy for individuals and company proprietary data were raised as areas of concern.
- Some parties suggested different models for improving data access, such as the EU’s Digital Services Act provisions or a U.S. task force on the opioid crisis.
- Commenters provided an overview of current industry practices and technology.
- Commenters discussed existing tools and measures to varying degrees, ranging from general references to companies' detailed descriptions of their own efforts. Some commenters noted a general lack of efficacy of tools, but there was little discussion of measuring or evaluating tools' efficacy.
- Commenters referred to well-known tools, such as: review and age-rating of content for age-appropriateness; reporting tools; parental controls; privacy by default; CSAM and other image detection efforts; policies against targeting kids with advertising; review of chat; quiet mode and tools to limit unwanted direct messages; separate product offerings for kids; and reminders to take a break.
- Platforms can also choose not to support certain features on kids' accounts.
- Other items included interventions to help users self-report and receive behavior coaching; using AI “to make everyone feel reflected and represented”; policies, teams, and technology aimed at harms such as radicalization, CSAM, and mental health harms; and a “viral circuit breaker” to minimize amplification of content.
- The adequacy of specific tools and company efforts was questioned by experts on kids and technology as well as others. Examples include:
- Ineffective blocking/reporting for abuse/exploitation and the failure to remove hate-related content and harassment/threats.
- At the same time, some commenters raised concern about LGBTQI+ content, in particular, being improperly removed as sexual, leading kids to abandon content filters altogether and thereby increasing the risks those filters would otherwise have helped them avoid.
- Some commenters expressed concern about the burden on parents/caregivers/kids to find and use tools and connected this to evidence of lower overall efficacy of those tools.
- Another concern raised was balancing parental controls with privacy for teens, including risks of parental surveillance.
- Some commenters provided information about existing laws.
- Commenters discussed laws in the United States and abroad aimed at kids online generally, and privacy specifically. Some said laws protecting kids’ privacy might, alone, address many of these issues.
- California’s privacy laws, which include specific measures related to kids, were noted, as were broader child-safety measures, such as in Australia, the UK (Age-Appropriate Design Code), and relevant provisions in the EU (Digital Services Act restrictions on targeted marketing, French and Italian laws).
- Some noted that the global nature of online services warrants international coordination and initiatives to promote consistent approaches and requirements.
- Some suggested ways to adjust existing U.S. law to broaden coverage or spread best practices, including considering a centralized approach to child safety issues. These included:
- Only allowing platforms to take advantage of protections from liability under Section 230 of the Communications Act if they meet a new safety-by-design requirement.
- Considering centralized solutions, such as controls and access mediated by operating systems and devices or app stores (subject to privacy laws) or adding age-verification requirements to app stores.
- Adopting a national safety standard (that preempts state law).
- Requiring industry standards for ad targeting and delivery (such as limiting ad targeting to age and location).
- There were many calls for national privacy legislation.
- Commenters also proposed other privacy-protective approaches, including guidance and privacy-by-design measures.
- Interventions should address specific harms yet be flexible and provide some general standards.
- Many commenters highlighted the need for specificity about the harms to be addressed in any legislation or other measures (including adherence to voluntary frameworks). For example, incremental solutions, and solutions suited to small and medium-sized platforms, were championed or offered.
- Some commenters highlighted that different risks exist on different platforms (e.g., direct messaging poses more of a risk of grooming and exploitation, while cyberbullying is more of a risk on platforms with features like publicly visible comments).
- Many, including those skeptical of legislative or regulatory action, urged that remedies be precise, not one-size-fits-all, and include some general standards. For example, some commenters emphasized tackling different risks with proportional responses and suggested avoiding overly prescriptive measures while promoting interoperability.
- The need for age-appropriate and differentiated approaches was stressed by many commenters.
- Commenters noted both the challenges and the potential benefits associated with age verification/age assurance.
- In addition to noting the difference that age makes to the risks of online platforms, many also highlighted the challenges of determining the age of people online. Some described the various techniques and technical challenges, while others focused only on the harms that could come from limiting kids' access to online services (or access by adults who cannot or will not pass age-assurance checks) and on data collection risks. Others raised concerns about state laws.
- Some proposed or reviewed risk-based approaches to age assurance to address tradeoffs and avoid unnecessary data collection.
- Some suggested the adoption of a flexible approach, such as that used in the UK Age-Appropriate Design Code implementation, and others pointed to examples of where differences in approaches are clearer, such as assuring access for adults only for pornography and gambling sites but not necessarily from general-use platforms.
- Many described existing efforts and approaches to identify users who are kids, including existing frameworks.
- Some called for the adoption of centralized age verification efforts, in part to address privacy and accuracy concerns. That included using existing infrastructure for device-level age verification, such as mobile or credit card providers (for those kids who have their own devices or accounts) or app store measures; or an international certification regime with technical standards for third-party age assurance providers.
- About a dozen commenters raised general concerns about privacy risks associated with ID verification methods, and some challenged the constitutionality of age verification requirements that apply to all. A few commenters raised First Amendment concerns, including recognizing rights of older minors. Others highlighted security risks from increased collection and use of user data.
- At least one commenter raised concerns about the burdens on start-ups of implementing age-verification mechanisms, including via pre-packaged technology.
- Commenters discussed existing and potential frameworks to guide interventions in favor of kids’ online health and safety.
- Commenters said that companies can leverage existing privacy and data management practices to develop best practices in this area.
- Commenters discussed different general standards, such as the “best interests of the child” standard found in data privacy laws and regulations around the world, and a “duty of care standard.”
- Some companies noted industry efforts to develop best practices, while others suggested their own frameworks or use of technology, arguing that self-regulatory efforts, certification programs, and safe harbors can move companies along faster than legislation mired in legal challenges.
- Commenters also highlighted the value of creating a positive environment online.
- Commenters recommended guidance for parents, guardians, and kids.
- Some commenters recommended investment in digital literacy training and messaging, for example, highlighting Florida’s new digital literacy curriculum in schools.
- Some noted the importance of different platforms streamlining words and phrases, menus, and design/layout, including mirroring the UK Age-Appropriate Design Code (AADC) Transparency Standard.
- Commenters proposed a variety of other solutions:
- Some noted that advertising-supported social media is not—and should not be—the only business model.
- One commenter suggested stakeholders develop databases of (a) social media harms to students and schools; (b) industry practices designed for age-appropriate content; (c) regular research and reviews; (d) accountability; and (e) funding/opportunity for long-term impact studies. Another suggested the U.S. government establish “a dedicated government office to distribute funding, conduct research, and/or oversee regulations” specific to technology, digital media, and children.
- Regarding the role of AI and emerging technologies, a few commenters noted such technologies’ role in exacerbating bullying, harassment, and other endemic challenges facing youth online.
- One commenter noted that it was important to look at specific products or services, not at AI as a distinct item, while recognizing that AI can exacerbate harms by automating and finessing problematic items.
- Commenters noted specific examples concerning uses of AI and other emerging technologies, including:
- AI images of naked female students being circulated by their male classmates.
- Challenges in virtual reality chats.
- Targeted AI marketing promoting unhealthy eating.
- AI-generated labels and applications in multiplayer games.
- Reports of nearly 3,000 AI-generated CSAM images in the UK.
- AI-driven content moderation, which can lead to more heavy-handed moderation than traditional content moderation.
- Some cautioned generally that safeguards should be weighed against benefits.
- Some warned against regulation of online platforms, often related to speech concerns. Commenters noted that:
- Regulation could trigger legal challenges or harm innovation and free expression.
- While harms exist, not all the problems require a regulatory or legislative solution.
- Vagueness in requirements about safe content, in particular, could disproportionately harm the LGBTQI+ community.
- Regulation could damage the online experience for kids by making it too sterile, interfere with teens’ abilities to discuss sensitive topics, or lead to other repressive measures.
- Other items of note:
- Commenters raised concerns related to the lack of data about different demographic groups' use of online platforms.
- One commenter challenged the traditional economic tenet that more consumption is better for consumer welfare as not accurate in the social media context and analyzed competition concepts applied to addictive aspects of technology. Another commenter suggested that competition and reputation are the surest forms of accountability for companies.
- The more than 400 comments from individuals expressed a wide array of views.
- Commenters expressed concern about the harms—including depression and anxiety—that adolescents who consistently use social media experience.
- Several commenters echoed concerns about the negative consequences that proposed safety measures in some legislation could have on free expression and, in particular, on the benefits that marginalized communities, including LGBTQI+ youth and youth of color, have derived from online platform access.
- Other comments voiced concerns about, and sought accountability for, algorithmically promoted content and targeted advertising, as well as surveillance advertising and the loss of privacy. Some comments asked for a government database tracking algorithms and regular research on social media's effects.
- Some commenters championed the role of parental choice and control, but some noted that even with such controls, kids are exposed to what their friends and peers see.