In 2025, lawmakers and enforcement agencies around the globe have kept one issue firmly in the spotlight: the privacy and safety of minors online. This heightened focus shows no sign of abating, with early indications that companies should expect to see more legislative and regulatory initiatives in the year ahead.
In this alert, we identify some of the key developments over the last 12 months and outline our predictions about what the next year may bring.
Trend #1. Age Assurance
Age assurance technologies can be an important tool in helping to identify minors on a service and tailor their experience. Laws requiring some form of age assurance or age verification—beyond mere self-declaration—have been in focus over the last 12 months.
Our Prediction: In 2026, we can expect to see further movement in this space globally. In the U.S., states will likely draft copycat laws if existing statutes survive constitutional scrutiny, whether in whole or in part. The U.S. Congress will also consider minors’ privacy and online safety legislation again, and the Federal Trade Commission (FTC) will host a workshop on the use and implementation of age assurance technologies. The EU has shown early indications that it may introduce new rules requiring 13- to 16-year-olds to obtain parental consent before accessing social media platforms, video-sharing platforms, and certain artificial intelligence (AI) companions. It is also actively working toward a harmonized age-verification standard. Broadly, we can expect increased discussion of the potential benefits and risks of implementing these technologies.
Trend #2. “Addictive” Features
Increased attention has been paid globally to how products and services directed at minors are designed, especially with respect to so-called “addictive” features.
Our Prediction: In 2026, we can expect continued discussions around product design and its effects on minors. In the U.S., courts will continue to examine whether certain design features can cause actionable harms and whether companies can be held liable for those designs. Discussions in Europe around the introduction of a Digital Fairness Act (DFA) will be watched closely, particularly given the EU’s broader push to simplify certain elements of its regulatory framework for digital services. Broadly, we can expect more jurisdictions to begin examining how design choices affect minors in particular.
Trend #3. Uptick in Investigations, Enforcement, and Claims
Enforcement actions provide key insights into government priorities. Absent enforcement, even enacted laws may not appear to be a true government priority. Announced actions, by contrast, signal the importance of compliance and reveal how regulators are interpreting the law.
Our Prediction: In 2026, we expect to see a continuing regulatory focus on this area, with some of the longer-running investigations coming to a conclusion across jurisdictions. In the U.S., the FTC will likely announce new actions under COPPA. The agency will also scrutinize companies under its new statutory authority, the TAKE IT DOWN Act, as discussed further below. At the state level, state attorneys general will continue to be active in bringing actions to protect minors online, whether under standalone legislation or their states’ unfair and deceptive practices frameworks. In particular, we can expect increased collaboration among state attorneys general, resulting in more multistate settlement announcements. In the UK, we expect to see Ofcom broaden its focus beyond services that host adult content. The regulator has indicated that it will focus on topics including risk assessments, prevention of child sexual abuse material (CSAM), the online experiences of women and girls, and the removal of terror and hate content. In Europe, regulators will likely release their findings from child safety investigations, and we expect them to continue issuing information requests, potentially to smaller online platforms. Broadly, we expect an uptick in enforcement from global regulators related to minors’ privacy and safety.
Trend #4. Addressing Potentially Harmful Content for Minors
One of the key factors animating increased regulation and enforcement to protect minors online is their access to certain content perceived as harmful to them. Such content can include sexually explicit material or content promoting goods or services that are illegal for minors.
Our Prediction: In 2026, we can expect this trend to solidify its place in the minors’ online safety playbook. In the U.S., states may begin enforcing these laws, especially given the Free Speech Coalition v. Paxton ruling. Congress may also consider whether to implement such a restriction at the federal level. Additionally, we may see more scrutiny of how companies enforce content moderation for minors in light of recent app store accountability acts. In Europe, despite the potentially broad application of laws such as the EU’s Digital Services Act (DSA) and the UK’s Online Safety Act (OSA), we expect to see a continued focus on the availability of content judged to be most harmful to minors, including adult content and disinformation. Broadly, we can expect more countries to address concerns about minors’ access to age-appropriate experiences online, possibly requiring some form of age assurance where certain content is illegal for minors or perceived as harmful.
Trend #5. AI-Generated CSAM and Deepfakes
Increasingly, bad actors are seeking to use generative AI technologies to create harmful content. We expect regulators to continue to wrestle with this issue.
Our Prediction: In 2026, this trend will continue to be a focus. In the U.S., the FTC will likely focus on enforcing the TAKE IT DOWN Act, perhaps with an emphasis on protecting minors. In Europe, we expect ongoing debate about how best to tackle these harms, with CSAM and deepfake-related issues remaining high on regulatory enforcement agendas. Broadly, we can expect other jurisdictions to consider implementing new frameworks, or expanding existing frameworks aimed at addressing non-consensual intimate imagery (NCII) or CSAM, to encompass AI-generated or AI-modified images.
Trend #6. AI Companions
When considering minors’ online safety, regulators have begun expressing concerns about how minors interact with AI services.
Our Prediction: In 2026, we expect to see continued conversations about how minors engage with these technologies. In the U.S., the Senate will discuss the Guidelines for User Age-verification and Responsible Dialogue Act (known as the GUARD Act), which currently contemplates requiring reasonable age verification of account holders, banning AI chatbots for minors, and criminalizing making available to minors AI chatbots that solicit or produce sexually explicit content. Even if passed, the bill would not preempt state laws, and states may continue to regulate in this space, possibly following either the New York or California model. In Europe, regulators may begin issuing information requests to AI chatbot providers that have not addressed or mitigated risks identified under laws including the GDPR, the DSA, and the OSA. Broadly, we can expect increased focus on how these increasingly popular technologies implement safeguards to ensure that minors using their services receive age-appropriate and safe material.
The upshot of all of these initiatives is clear. Safety and privacy issues for minors are top of mind for regulators around the world. Companies providing services to minors will want to pay close attention to developments in these areas in the coming year.
Wilson Sonsini Goodrich & Rosati routinely advises clients on minors’ privacy and online safety laws and regulations and counsels companies facing enforcement actions. For more information, please contact Chris Olsen, Cédric Burton, Tom Evans, Claudia Chan, Rebecca Weitzel Garcia, or another member of the firm’s Data, Privacy, and Cybersecurity practice.