Breaking down the Social Media Age-Appropriate Users Bill
Change My Mind
Please note, as usual, that these are my personal opinions and not those of any organisation I am affiliated with.
What is the Social Media Age-Appropriate Users Bill?
Its sponsor’s justifications, as printed in the New Zealand Herald, are that “we aren’t managing the risks” associated with young people and social media use. Christopher Luxon has said it is a matter of protecting young people from bullying, inappropriate content, and social media addiction.
The Bill, they say, will “put the onus on social media companies to verify that someone is over the age of 16 before they access social media platforms.”
Wedde points to similar laws in other jurisdictions to support her Bill, including Australia’s Online Safety Amendment (Social Media Minimum Age) Bill and Texas’ ban on under-18s on social media. She says similar work is under way in the UK, Europe, and Canada.
While there are lots of arguments both for and against this kind of legislation that are unrelated to free speech, for the purposes of this piece I will focus on the impact on, and implications for, speech rights. First, however, here is a brief summary of the arguments for the ban:
Children and young people are still developing cognitively, which affects their ability to regulate their emotions and assess risks, and their identity is still fragile, so social media presents a threat to their wellbeing.
Social media platforms are engineered for engagement, often exploiting psychological vulnerabilities (eg. variable reward loops, social comparison).
There is increasing evidence of correlation between early, intensive social media use and poor mental health outcomes in adolescents (eg. anxiety, depression, body image issues, suicidal ideation).
Children under 16 are especially susceptible to peer pressure, addictive behaviours, and identity instability.
Social media reduces face-to-face interactions and may hinder social skill development or promote performative relationships over authentic ones.
An attack on free speech?
It is fairly obvious that this proposed legislation would constitute an impingement on the speech rights of people under the age of 16. Social media is one of the main public forums for speech, self-expression, and association today. For teens, it is often where most of their social interaction takes place and a place to share their lives. A ban would exclude them from these public forums, potentially violating their right to freedom of expression and access to information.
What is in question is not whether this breach of rights exists; rather, it is whether the breach is justified.
In order to establish a contextual foundation for a justification, let’s look at some axioms about the role of governments in relation to children and young people:
Children are a vulnerable class deserving special protection. They lack physical, emotional, and cognitive maturity and cannot make fully informed decisions. It is accepted that their vulnerability justifies proactive state intervention in many areas (eg. education, health, safety).
Governments have a moral and legal responsibility to safeguard children against real harm. This duty includes protecting children from violence, exploitation, neglect, and conditions detrimental to their development. This relates to the parens patriae doctrine that says the state has a paternalistic role to play when individuals (children) cannot protect themselves. The UN Convention on the Rights of the Child recognises that while children have rights, governments may limit those rights if it's in the child’s best interest.
The best interests of children must be a primary consideration in policymaking. We have recently grappled with this issue with regard to the removal of Section 7AA from the Oranga Tamariki Act. Public sentiment seemed to strongly back the removal, but the media and opposition parties were vehemently opposed.
The right to protection can override absolute personal freedoms. Children’s rights to safety, health, and development often take precedence over unrestricted freedom (eg. limitations on labour, contracts, or alcohol use). States regularly intervene where parents or markets fail to provide adequate protection.
The last point is the one that I have to grapple most with as a free speech advocate. We accept that there are times when it is acceptable to limit speech freedoms. For adults, these are very rare and restricted to matters like real incitement to violence. However, for children we must take into account more context as we do in other law and policy that affects them.
What I cannot stress enough is that there must be a high threshold for all justifications of breaches of free speech. There must be a compelling interest. The restrictions must be narrowly prescribed. And, they should be the least restrictive means of achieving the goal. Let’s break this down:
Compelling interest: there is a great deal of research and evidence, most prominently espoused by Professor Jonathan Haidt, that social media use in young people is causing a myriad of social and psychological harms. I can address this more substantially in another Substack but I point you to Haidt’s book The Anxious Generation and the plethora of commentary online he has engaged with.
Narrowly prescribed: this is, of course, subjective. However, if the legislation defines what a social media platform is in the tightest possible terms, the ban can be said to be restricted in its application. That is because there would likely be no restriction on, for example, using email services, shopping online, or general research. It is also narrowly applied in that the ban is temporary and lifted at the age of 16.
The least restrictive means of achieving the goal: this is the hardest hurdle to overcome. A blanket ban on social media can be argued to be the most restrictive option, but the lack of less restrictive alternatives could be said to necessitate it. There is no way either to partially remove social media access or to excise the harmful elements from the experience of using the platforms.
Restrictions on the freedoms and rights of children and young people are not unprecedented. These lean heavily on John Stuart Mill’s Harm Principle: that freedom can be limited to prevent harm to others (or oneself). Here are some examples of how we already restrict fundamental rights in order to protect children:
Children can't legally access pornography or other adult content, even though these are often forms of protected speech for adults.
Children can't legally work full-time, sign contracts, or consent to medical procedures on their own.
Governments compel children to attend school and prohibit parents from denying education.
Children under a certain age cannot legally buy or view certain content (eg. R-rated movies and games) without parental approval.
Children are prohibited from getting tattoos or piercings without parental consent; the age differs from jurisdiction to jurisdiction.
Children are restricted from driving until aged 16 or older and we have a graduated system that limits driving with passengers, night driving, etc.
Children cannot purchase alcohol or tobacco, gamble, or vote until the age of 18 in New Zealand.
The principle behind all of these is that children and young people are not considered fully autonomous individuals, so their rights can be lawfully limited in areas where harm, exploitation, or long-term negative consequences are likely.
Therefore, since children are not developmentally equipped to manage the psychological manipulations of social media and this predictably results in harm, and society already restricts access to harmful substances or activities based on age, it is possible to argue that it is consistent and justified to restrict harmful social media access to under 16s.
Addressing the Digital ID in the room
The next step is to consider if there is an appropriate method or mechanism with which to justifiably restrict under 16s from accessing social media. It is important to consider this as a separate issue because even if the justification for breaching rights exists the means by which to do it could come with unforeseen consequences or prove to be harmful in other ways.
On X, there has been a knee-jerk and immediate reaction to the announcement of the Member’s Bill. It ignited an existing anxiety and suspicion of governments bringing in digital identification systems.
There is good reason to be suspicious of systems like the Social Credit System in the People’s Republic of China that allow the Chinese government to track and control people through all of their interactions and commerce. People are right to be cautious about surveillance creep. Mandating identification for basic internet use would set a troubling precedent. Privacy and freedom online are foundational democratic values in a modern society and any policy that undermines them should be scrutinised.
However, this concern is not a reason to do nothing, but a reason to design the policy correctly. There is a real danger that worry about Digital ID systems is going to prevent a much needed conversation about protecting children under 16 from the utter misery being wreaked by social media.
First, all that has been announced is a Member’s Bill. This is a Bill that an individual member of Parliament puts in the Biscuit Tin, which is a quaint tradition of ours. On Member’s Days, a set number of Bills are pulled from the Tin randomly and these are then added to the Order Paper. It is luck of the draw, and many Bills (good and bad) languish for a long time with the crumbs at the bottom of the tin. So we can take a breath. Additionally, ACT has emphatically ruled out supporting the Bill, so if it is drawn it will not be passed by the Coalition Government. Labour has, however, indicated tentative support for the Bill - probably because it polls well with middle New Zealand. It could be a rare instance where the two legacy parties team up, but as I say, we are a long way from that.
Secondly, while there may be some overseas precedents or signals that Digital ID is something we should be looking out for, there has been no mention of it in relation to this Bill. The General Policy Statement says:
“The Bill mandates that social media platforms implement strict age verification measures to prevent under-16s from creating accounts.”
This means both the responsibility for restricting access and the punishment for failing to do so sit with the social media platforms, not the users. There is nothing in the Bill that sets any possible foundation through which the Government could set up its own identification system nor collect any data on New Zealanders using social media. In fact, section 8 of the Bill requires that platforms take into account “the privacy of the age-restricted user”.
I have seen some discussion that Bills like this one, and the equivalent Act in Australia, are “gateways” to the development of government Digital ID systems. I will address that concern, but first point out that as it is written right now, this Bill would not enable that gateway. Another Bill or Bills would need to be written. These are complex systems that conflict and interact with many fundamental rights as well as existing systems. For our government to build such a system or contract it out (as was suggested to me) they would have to address a web of intersecting legislation from privacy laws to human rights laws to how we allocate resources. They simply cannot do this by stealth. It is too big.
It's also important to recognise that age verification does not necessarily require a centralised or permanent Digital ID system. These are two very different concepts, and conflating them can lead to confusion and fear that this could be a “slippery slope”.
Age verification is typically a one-time check to ensure a user meets a minimum age requirement. It does not need to be repeated with every log-in or interaction. It can be done in ways that protect user privacy: by using anonymous credentials or “zero-knowledge proofs”, where a user proves they are over a certain age without revealing their exact birth date or identity; or by employing third-party verifiers that confirm age and then delete the data rather than storing it or linking it to an online profile. There are also AI-based “estimation” tools (facial analysis without storing images), although these raise accuracy and bias concerns and bring other problems with them.
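To make the distinction concrete, here is a minimal, purely illustrative sketch of the third-party-verifier approach described above. Nothing here comes from the Bill; every name and the signing scheme are invented for illustration. The verifier checks a date of birth, discards it, and issues a signed token asserting only “over 16”; the platform verifies the signature and learns nothing else about the user.

```python
import hmac, hashlib, json
from datetime import date

# Hypothetical shared secret between the verifier and trusting platforms.
# A real scheme would use public-key signatures or zero-knowledge proofs.
VERIFIER_KEY = b"demo-secret"

def issue_age_token(dob: date, today: date):
    """Verifier side: check the date of birth, then discard it.
    Returns a signed token containing only the claim 'over_16', or None."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    if age < 16:
        return None
    claim = json.dumps({"over_16": True})  # no birth date, no identity
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}|{sig}"

def platform_accepts(token: str) -> bool:
    """Platform side: verify the signature; learn nothing but 'over 16'."""
    claim, _, sig = token.rpartition("|")
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(claim).get("over_16") is True
```

The point of the sketch is the data flow, not the cryptography: the birth date never leaves the verifier, and the token carries no identifier that could be linked back to a person across platforms - which is precisely what separates age verification from a Digital ID.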
A Digital ID System is a permanent, centralised identity tied to a person across multiple platforms and services, often for authentication, payments, voting, or public services. These systems are essentially mass surveillance weapons.
Everyone fears a “slippery slope”. I get it. I can see there is shared concern that once age verification is normalised, governments or tech companies might push for a universal digital ID as the only solution. But the two are not linked in this Bill and don’t have to be linked in other policy either. Laws can and should explicitly prohibit turning age checks into broader identity tracking systems. If this Bill ever sees the light of day this would be a key demand to take to select committee consultations.
The panic about Digital ID systems and the censorious effect they would create are misplaced in regard to this particular Bill as it is written. We must be alert to threats to our rights, but equally it is important that we encourage discourse around these matters that is constructive and based in reality.
It is also worth pointing out just how much surveillance our kids are being exposed to by virtue of simply being on these social media platforms as they are now. Tech companies are effectively able to track their usage and behaviour online, profile them for advertising, and expose them to content determined by unfiltered, unregulated algorithms. Ironically, resisting age regulation out of privacy concerns leaves them exposed to unchecked private surveillance.
It is entirely possible to protect children online without building a surveillance state. The right response to privacy concerns is not to abandon protection, but to explore different age verification systems that are privacy-preserving by design.
Parental rights versus government intervention
The breach of speech rights as immediate and downstream consequences of this Bill is what I wanted to focus this Substack on. However, there is another key set of rights that it would interact with that I think is just as important so I want to touch on parental rights. I am a strong proponent for governments staying out of the business of raising kids as much as possible. Children do not belong to the state. They belong to families and their families belong to them. Earlier, I mentioned the axioms that children are vulnerable and the state has a duty to them. This is true, but the threshold for the state intervening in a child’s life and between them and their parents must be very high.
A social media ban could, however, be seen similarly to how we age-restrict alcohol or gambling. Parents can easily assist their child in partaking in either of these things before they are 18, but the law exists in a way that makes it easier for them to set that boundary and enforce it if they see fit. As things are currently with social media usage, it is very difficult for individual parents to impose bans because the child is then isolated from the online social spaces occupied by their friends. A benefit of a law is that it creates a necessity for mass engagement in the issue by all parents and reduces that peer pressure “everybody else is doing it” dynamic.
Parents can also struggle to effectively monitor and manage the digital environments their children access. Many platforms are designed in ways that bypass parental controls through persuasive design, recommendation algorithms, and opaque privacy settings. It is worth noting that many harms (addictive use, exposure to harmful content, algorithmic reinforcement of body image issues, etc.) are invisible or develop gradually. Put simply, it can be argued that this is a societal harm too big for individualistic approaches, one that requires legislative action to inform parents and give them the means to protect their kids.
Regulation should not be about replacing parents, but about supporting them by ensuring the digital environments their children access meet a basic safety standard, just as food, toys, or medications have to. In this sense, banning social media for under 16s doesn’t remove parental choice; it redefines the baseline of safety, so families aren't left to navigate a highly asymmetrical system alone.
Some parents might argue that their child is not being harmed by social media, but that is like the parents who allow their 16-year-old to drink a few beers because “he can handle it”. While individual resilience exists, policy is made at the population level, especially where systemic harm outweighs isolated cases of resilience or resistance. Public health and safety laws routinely limit individual liberty to prevent widespread harm: smoking and alcohol age limits, for example, and requirements to wear seatbelts or helmets. Individual success stories don’t negate the need to regulate a system causing widespread, predictable harm.
Similarly, some parents might say it is impossible for this law to be enforced and managed. This could prove to be true, but no law is perfectly enforceable, and that doesn’t mean those laws shouldn’t exist. We don’t scrap underage drinking laws because teens still drink; we enforce them where possible, educate, and create cultural norms. Likewise, legislative action on social media access raises the barrier to entry; platforms must implement age verification while parents and schools can reinforce limits, and social expectations can shift. Regulation can also push tech companies to design age-appropriate alternatives, rather than encouraging early exposure to platforms built for adults. The goal isn’t 100% compliance; it’s risk reduction through deterrence, accountability, and system-wide safety improvements.
Concluding thoughts
I had half written this when I attempted to discuss the matter on X. The attempt was an utter failure. People had retreated to entrenched positions too quickly, and my naive attempts to engage them in discussion of the Bill itself and the issue of mitigating social media harms resulted in all the least productive kinds of conversation. There were a lot of ad hominem attacks and a lot of strawman arguments. I persevered much longer than I should have, then got upset and gave up. It is so frustrating when we can’t disagree respectfully and productively. Well, it frustrates me, because that is the kind of discourse I am interested in.
So I retreated to Substack and decided to finish this piece. I am nervous people will get angry at this too, but I hope that readers who make it this far will see that what I am trying to do here is open discussion not win an argument. I really do want to read your thoughts, but without being called names. Yeah, I know I need to toughen up.

I enjoyed this article, not because I agree with every point, but because it’s well-structured, clearly written, and honest. I could nitpick a few details, but that’s just my nature. Social media is potentially harmful yet also beneficial, addictive, and challenging for parents to monitor. The ban proposed in the bill feels like a poor solution. The real question is: what are the alternatives? It’s a tough issue, but your article is relevant because it fosters an open, fact-based, and honest discussion. Can politicians create a suitable environment for this kind of dialogue? I doubt it.
Your critique of X, however, seems a bit optimistic. From my perspective, achieving a reason-based discussion on X requires structuring posts to clearly and concisely present your position while explicitly inviting comments on specific points. Without this, X interactions often devolve into simplistic responses like “with you,” “against you,” “…but you’re wrong,” or “…but my issue is.”
You don’t need to toughen up, other people need to improve their debating skills and show some courtesy.
Your arguments seem sound to me and I think a lot of parents would welcome the support of having a legal ban to back them up. I think 16 is quite old though. It would make sense to allow social media use at 14, which is the age when children are considered old enough to be left on their own. Having some evidence of the effects of the smartphone bans in schools might enhance your argument.