
Australia is preparing to implement a world-first restriction barring children under 16 from holding social media accounts.
Beginning December 10, platforms will be required to take what the government calls “reasonable steps” to prevent under-16s from creating accounts and to remove or deactivate accounts already belonging to minors.
Officials argue the ban is necessary to limit the “pressures and risks” children face online, including extended screen time and exposure to harmful content. The policy follows a government-commissioned study showing that 96% of children aged 10–15 use social media and that seven in 10 have encountered disturbing or dangerous material ranging from misogynistic content to videos promoting self-harm. One in seven reported experiences resembling grooming by adults or older children, while more than half said they had been cyberbullied.
The policy, while popular with concerned parents, marks a major shift in how governments regulate children’s access to online spaces. It will also act as a test case for other countries debating stricter protections for minors.
Which platforms?
According to the BBC, 10 platforms have been named so far: Facebook, Instagram, Threads, Snapchat, TikTok, X, YouTube, Reddit, and the streaming services Kick and Twitch.
The government is also under pressure to extend the ban to gaming platforms, as many games allow messaging, voice chat, or user-generated content. Platforms such as Roblox and Discord have recently added age checks to certain features, possibly in an attempt to avoid being included.
In determining whether more platforms will be added, the government will consider whether a service primarily facilitates social interaction, whether users can interact broadly with others, and whether they can post content. These criteria have excluded YouTube Kids, Google Classroom and WhatsApp. Children will still be able to view most content on YouTube because it does not require an account for passive watching.
How the ban will be enforced
The burden of compliance falls entirely on social media companies, not on parents or children. Firms that fail to take adequate measures face fines of up to $49.5 million for serious or repeated breaches.
Platforms are required to use “age assurance technologies,” though the government has not endorsed any particular method. Options under discussion include verification using government IDs, face or voice recognition, or age inference systems that estimate a person’s age from online behaviour or interactions.
The government encourages platforms to combine multiple methods and explicitly bans relying only on self-declared ages or parental approval.
Several companies have already announced compliance plans. Meta will begin removing teen accounts from December 4; users whose accounts are wrongly removed can verify their age with a government ID or a video selfie. Snapchat says it may rely on bank accounts, photo IDs, or selfies for age estimation. Other companies have not yet detailed their approaches.
However, age-verification technology remains imperfect. The government’s own study found that facial assessment tools were least accurate for the age group the ban is targeting, creating the possibility that some teens may be wrongly removed while younger users successfully evade detection.
Will the ban achieve its goals?
Experts are divided on whether the policy will meaningfully reduce harm. Critics note that the rules exclude dating sites, online gaming environments, and AI chatbots—all places where harmful interactions or inappropriate content can also occur. Recent controversies around AI systems that engaged in “sensual” conversations with minors and encouraged self-harm have intensified these concerns.
Others warn that young people who rely on social media for community, identity exploration, or mental health support could be left more isolated. There are also concerns about whether fines are meaningful deterrents; as former Facebook executive Stephen Scheeler told AAP, “It takes Meta about an hour and 52 minutes to make $50 million in revenue.”
Some child-safety advocates believe education and digital literacy would better prepare young people to handle online environments. Communications Minister Anika Wells has acknowledged the policy’s limitations, saying, “It’s going to look a bit untidy on the way through. Big reforms always do.”
Data protection and privacy concerns
Implementing widespread age verification will require the collection and storage of sensitive personal data. Critics argue that this introduces heightened privacy risks, particularly given Australia’s recent history of large-scale breaches in sectors like telecommunications, health insurance and retail.
The government says the legislation includes “strong protections,” ensuring that personal information collected for age verification cannot be used for other purposes and must be destroyed once verification is complete. Penalties apply for misuse. It also requires that platforms provide alternatives to government IDs, given concerns about excessive intrusion.
Still, privacy experts caution that any centralised system for verifying ages creates new attack surfaces and commercial incentives for data retention, regardless of legal safeguards.
Industry reactions
The announcement was met with significant backlash from major platforms. Companies argued the ban would be cumbersome to implement, intrusive for users, and easily circumvented. Some warned it could push children into more dangerous online spaces or limit positive social interactions. Snap and YouTube also argued that they should not be classified as social media companies.
Google, YouTube’s parent company, is reportedly weighing a legal challenge to its inclusion.
Even companies that have reluctantly agreed to comply continue to criticise the policy. Meta said the ban would create “inconsistent protections across the many apps they use.” TikTok and Snap stated at parliamentary hearings that while they still oppose the ban, they will abide by it. Kick—the only Australian company affected—says it will introduce “a range of measures” and remain in “constructive” dialogue with authorities.
Global momentum toward stricter controls
Although Australia is the first country to completely ban social media for under-16s, similar debates are occurring worldwide. The UK introduced strict rules in 2024 requiring companies to protect minors from harmful content or face major fines and potential jail time for executives. France is considering barring under-15s from social media, while Denmark and Norway have proposed bans for under-15s. Spain recently introduced legislation requiring parental approval for under-16s.
In the US, Utah attempted to restrict under-18s from using social media without parental consent, but the measure was blocked by a federal judge on constitutional grounds.
Many governments are watching Australia’s rollout closely, viewing it as a test of whether age verification at scale is feasible, effective, and socially acceptable.
Will children try to evade the rules?
Teenagers interviewed by the BBC say they are already creating accounts with fake ages. Online forums and TikTok videos share tips for bypassing controls. Some teenagers, including influencers, have begun sharing accounts with parents as a workaround. Analysts expect a surge in VPN use, which would hide a user’s location and potentially complicate platform enforcement.
The government says it expects platforms to detect and eliminate such attempts, though doing so may prove difficult without collecting even more user data.