Just after midnight on December 10, something quietly changed for teenagers across Australia. TikTok, Instagram, YouTube, and several other platforms were suddenly unavailable. Australia had become the first nation in history to prohibit anyone under sixteen from using social media. Within weeks, the ripple effects began to spread.

Germany, Denmark, France, Spain, and a dozen other countries are drafting similar legislation. In the United States, some 25 states are wrestling with restrictions of their own. And whether any of this is effective is a point of contention among parents, scientists, legislators, and even the teenagers themselves. Do social media bans safeguard youth, or are we merely sidestepping harder questions?

A Rapid Shift From Debate to Law

Malaysia rolled out its ban in January. In late January, France enacted a law barring children under 15 from social media. Proposals are in the works in Germany, Slovenia, Spain, Italy, Greece, and the UK. In the United States, Senator Brian Schatz introduced the Kids Off Social Media Act in January. It would prohibit accounts for under-13s and ban algorithmic recommendations for anyone under 17. Somehow, it’s drawn support from both parties.

Virginia took a different route. Governor Glenn Youngkin signed a law in early 2025 that caps teens under 16 at one hour of social media daily, unless parents approve more. As of January 1, 2026, it’s in effect. Not quite a full ban, but close.

The urgency here is real. Even people sceptical of these laws acknowledge it: teen mental health in developed countries has gotten worse. Anxiety and depression rates have climbed for over a decade. Eating disorders are rising. Self-harm is up. Healthcare workers on the front lines talk about parents and teenagers in genuine crisis over the psychological fallout from online life.

Meta’s own internal documents threw gasoline on this fire. An internal study found that 32 per cent of teen girls said Instagram made them feel worse about their bodies. The company basically admitted the platform was driving anxiety and depression in its teenage users. When Big Tech acknowledges it’s hurting kids, the political momentum becomes impossible to stop.

What Do Parents Think?

According to a poll, more than 90% of parents favour social media age restrictions. The concerns are always the same: the constant culture of comparison, algorithms that prioritise engagement over well-being, and the sheer time drain. Many parents say parenting is genuinely harder now than it was twenty years ago, and many blame social media.

But here’s where it gets interesting. The science backing all this up is shakier than the policy language suggests.

A University of Manchester study in 2025 found that more time on social media doesn’t automatically cause mental health problems in teens. Previous generations panicked about TV and video games rotting young minds; turns out that didn’t happen. Some researchers think we’re watching the same pattern play out again: moral panic running ahead of evidence. Most studies can’t prove that online time causes mental illness. They show correlation. Not causation. And there’s basically no evidence that blanket bans, rather than specific interventions or screen time limits, actually improve teen mental health or school performance.

The Problem No One Likes to Talk About: Enforcement

Australia learned this fast. Within days of the December 10 rollout, teenagers posted videos showing how they bypassed age verification. VPNs, cheap and everywhere, instantly killed the geographic restrictions. Older siblings’ accounts. Parental approval won through basic social engineering. Faulty facial recognition that misclassifies ages. The Australian eSafety Commissioner acknowledged the system wouldn’t be perfect, comparing it to liquor laws. Which is honestly a bad sign. If liquor laws for minors basically tolerate non-compliance, what makes us think social media bans will do better?

Researchers like Curtin University’s Tama Leaver hope other countries will learn from Australia’s mess. Early signs suggest they will, but it’ll take time.

Platforms Play Along

Meta, TikTok, Snap, and YouTube have all officially complied with Australia’s ban while simultaneously saying compliance doesn’t guarantee teen safety. Evan Spiegel at Snap wrote in the Financial Times that the law “does not guarantee that Australian teens will be safer or better off.” These companies clearly decided that cooperating with governments is smarter than fighting them. Better to comply domestically than get crushed by global tech regulation.

But they’re also covering themselves. Meta launched “Teen Accounts” with built-in protections and announced Instagram would restrict teens to PG-13 content. That one backfired, though. The Motion Picture Association sent a cease-and-desist letter, accusing Meta of misusing their rating system.

The Teen Perspective

One voice getting buried in this whole debate is the teens themselves. Australian surveys showed teenagers opposed the ban by roughly 70 per cent to 25. And their reasons made sense. Social media isn’t only a “mental health minefield.” For LGBTQ+ adolescents, it provides community and information they can’t get offline. Support networks built on social media are helping children with mental illness. Marginalised and minority groups use social media to organise, learn, and feel less alone.

Digital literacy advocates argue that sealing teenagers off from social media entirely means parents and teens miss important conversations about internet safety, and that teens won’t build the skills an increasingly digital world demands.

UNICEF even cautioned that prohibitions may backfire. What kids actually need, the organisation said, is platforms redesigned to prioritise child safety and parents given meaningful support with technologies they don’t fully understand. Age restrictions alone won’t cut it.

The Middle Ground

This has turned into an either-or fight: ban it completely or let the market and parents handle it. But the actual evidence doesn’t support either extreme.

Virginia’s one-hour-per-day cap avoids the nightmare of age verification. But it creates its own mess. How do parents give consent for more screen time? Do teens ask permission every time? The law’s mechanisms are still fuzzy even to the people who wrote it.

What research actually points toward is more targeted. Focus on specific harms rather than blanket restrictions. Remove the algorithmic amplification pushing extreme content. Require real content moderation. Give parents parental-control tools that actually work. Restrict the design features engineered to be addictive: infinite scroll, notification manipulation. Teach digital literacy in schools instead of hoping bans make up for parental involvement.

The Kids Off Social Media Act kind of tries this. It bans personalised recommendations for under-17s while keeping chronological feeds and direct searches available. Harm reduction without total prohibition. Whether that survives the tech lobby’s assault in Congress is anyone’s guess.

What Happens Next

Governments everywhere are watching Australia figure this out. A few things feel pretty clear. Yes, teen mental health is getting worse. Yes, social media probably plays a role. Yes, parents need real solutions. But complete bans aren’t magic wands. They’re politically convenient, they look tough, they calm down anxious parents, and they let everyone dodge the harder work.

That harder work (redesigning platforms, regulating specific features, investing in family-based solutions, actually teaching kids how to navigate digital life) gets less attention but would probably help more. Whether it wins out while the bans spread depends largely on whether Australia’s experience changes anyone’s mind. So far, it’s only accelerating the momentum toward restrictions.

For teenagers everywhere, the message is landing: your access to these platforms is negotiable now. Whether that ends up protecting them or just isolating them further won’t be clear for years.