by Harris Zainul and Samantha Khoo

The recent stabbing of a 16-year-old girl at a school in Bandar Utama has sent shockwaves across the nation. In its wake, the country is struggling to come to terms with how such a barbaric attack could happen. Amidst this uncertainty, the public has demanded firm and decisive action from the government.

At the time of writing, the root cause or causes of the attack have yet to be established. Yet simplistic explanations and easy solutions have already been offered, mostly well-meaning, and perhaps to give the illusion that steps are being taken in response. While this may all be without malice, policies must be evaluated through the trade-offs they entail, and whether their benefits truly outweigh their risks and costs.

One such example is the idea of introducing age verification to prevent those under the age of 16 from accessing social media platforms. The idea has been met with applause in some quarters; after all, protecting children from online harms is an objective few would disagree with.

At first glance, the proposed ban appears straightforward: require users to prove their age using official documentation such as a MyKad, passport, or MyDigital ID, and bar those under the threshold from accessing the platform. But implementation belies this simplicity, raising concerns about privacy, scope, and enforcement.

Implementation troubles

The first concern is privacy. Although the policy ostensibly targets children, operationalising it would in effect require all social media users, adults and children alike, to verify their identities; only through this exhaustive process can child users be distinguished from adults. Such a system, whether run by the government or by the platforms themselves, would pose privacy concerns of its own.

A government-run system linking official documents to social media accounts would erode online anonymity and heighten fears of surveillance, creating a chilling effect on free speech. This will be particularly damaging for those at the margins of public discourse or those seeking to speak inconvenient truths to power.

Meanwhile, delegating verification to platforms would create another set of problems. It would incentivise them to collect, store, or monetise sensitive personal data, effectively empowering the very companies policymakers ought to be regulating.

The second concern is scope and efficacy. Even if a ban were politically feasible, it remains unclear which platforms would fall within its coverage. A logical starting point might be the social media platforms that hold the Applications Service Provider Class licence required to operate legally in Malaysia. If so, it is worth pointing out that this only covers platforms with at least eight million local users.

Notably, even X has claimed that its Malaysian user base does not meet the threshold required for licensing; smaller platforms, online games, and encrypted group chats, where toxic discourse may be more prevalent, would escape the net entirely. As a result, the policy could fail to cover precisely the platforms and communities that policymakers should be most worried about, producing outcomes that are both inconsistent and counterproductive.

The third concern is enforcement. Even if such an age verification system is built, users could easily circumvent it by using virtual private networks (VPNs) or creating accounts through foreign servers beyond Malaysian jurisdiction. Without international cooperation, enforcement would be patchy at best and meaningless at worst.

Moreover, the practicalities are unclear. Would users need to verify their identity once when registering, or repeatedly each time they log in or change devices? Frequent verification would make platforms cumbersome and intrusive, while one-off checks would be ineffective against account sharing or impersonation.

Rethinking the approach

At this juncture, it is important to contextualise the discussion. We are of the unequivocal opinion that limiting children’s access to and use of social media is not inherently wrong. The harms linked to early and unmoderated exposure, ranging from exploitation to cyberbullying to negative mental health outcomes, are well established. But the real question is at what age and in what form.

For one, the minimum age for social media use may not need to be set at 16. Doing so risks sending mixed signals to teenagers, who on one hand are encouraged to embrace new technologies such as artificial intelligence, yet on the other are not trusted to use social media safely. To clarify objectives, the national agenda for children, including the school syllabus and guidelines for parents, must be aligned on when digital technologies should be introduced to children. This would be a more informed way of determining a minimum age threshold that does not risk undermining our ambitions to build a future-ready generation.

Second, policymakers should rethink how identity is verified. To address legitimate concerns that such a system could, in the wrong hands, erode speech and privacy rights, less invasive measures should be considered. For example, verification could simply confirm whether a user is above or below the age threshold, without collecting or linking identifiable information to their social media accounts. This can be done through zero-knowledge proof technology or other privacy-first alternatives, rather than linking official documentation with social media accounts.
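To make the idea concrete, the sketch below (in Python) shows how an age check can disclose a single yes-or-no answer rather than a person's identity. It is a deliberately simplified illustration, not a true zero-knowledge proof: a hypothetical trusted issuer inspects the official document once and signs only an "over the threshold" claim, which the platform then verifies without ever seeing a name, ID number, or birth date. All names in the sketch are assumptions for illustration.

```python
# Illustrative sketch only. This is NOT a real zero-knowledge proof; it is a
# simplified signed-attestation model showing the data-minimisation principle:
# the platform learns one bit (over the threshold or not), never the identity.
import hmac
import hashlib
import json
import secrets
from datetime import date

ISSUER_KEY = secrets.token_bytes(32)  # secret held by the trusted issuer

def issue_attestation(birth_date: date, threshold: int = 16) -> dict:
    """The issuer inspects the official document once, computes the age,
    and signs a claim containing no name, ID number, or birth date."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    claim = {"over_threshold": age >= threshold, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(token: dict) -> bool:
    """The platform checks the issuer's signature and the single boolean.
    (A real deployment would use public-key signatures or a genuine
    zero-knowledge proof, so platforms never hold the issuer's secret.)"""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_threshold"]

# A 13-year-old's token verifies as authentic but fails the age gate:
print(platform_accepts(issue_attestation(date(2012, 5, 1))))  # False
```

The design point worth noting is what never crosses the wire: the platform receives only a yes-or-no claim and a signature, nothing that identifies the user.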

Admittedly, this remains a developing policy area with no universally accepted best practice. All options have drawbacks, both political and technical, but the government should prioritise approaches that are least restrictive and most respectful of individual rights. It is also worth noting that while combating online scams remains a priority, those policy objectives and considerations should not be conflated with this debate over age-gating social media for children.

More effective ways forward

Age-gating is only the starting point. Protecting younger users online must go far beyond blanket prohibitions. Here, Malaysia should focus on ensuring children have age-appropriate experiences online. This includes not just what younger users see, but also what they can do on a platform. Exposure to violent or sexual content is an obvious red line, but equally important is ensuring that platforms are designed with children’s safety in mind.

This could include ensuring accounts belonging to younger users are set to private by default, restricting interactions between adult and child users, limiting public comments, removing manipulative and addictive features, introducing time limits, and optimising the algorithms for credible, safe, and high-quality information rather than engagement. In essence, an age-appropriate experience means tailoring platform architecture to account for the users’ age and their varying risk profiles to promote better outcomes.

Taken together, effective regulation on this front must balance three imperatives: protecting children, preserving privacy, and sustaining public trust. Policies that swing to either extreme risk undermining all three, and Malaysia should avoid reactionary approaches that create new risks while trying to solve existing ones.

The impulse to act swiftly in the wake of tragedy is understandable, but policymaking must remain deliberate. Public pressure for the government to “do something”, and its accompanying political capital, should not be expended on ineffective measures but channelled instead towards policies that meaningfully protect children online.
