COMMENT | Recently, the issue of content moderation on social media platforms came under the spotlight, particularly surrounding government removal requests.

This followed allegations that the government had requested the removal of content unfavourable to it. In response, Deputy Communications Minister Teo Nie Ching said in Parliament that the government lacks the authority to do so.

For context, a government removal request is a submission by the government or a responsible authority asking social media platforms to remove content, on the basis that the content allegedly infringes local laws.

In response, the platform will assess whether the reported content is against its community guidelines; if it is, the content will be removed entirely.

However, if the content does not breach its community guidelines, the platform will then assess if it infringes local laws as alleged.

Should this be found true, then the availability of that reported content may be restricted in the country in question.

However, there may be room for differences in interpretation of whether the reported content does run afoul of local laws.

When deciding, platforms may also go a step further and consider if the content has a public interest value, or if it remains within the remit of international free speech rights.

What this means is that platforms do not acquiesce to every removal request.

For example, between January and June 2023, TikTok removed 92.4 percent of the 847 pieces of content reported by the Malaysian authorities.

Of these, 373 were removed from the platform entirely because they infringed TikTok’s community guidelines, while 410 were restricted only in Malaysia due to violations of local laws.

Meanwhile, over the same period, Meta restricted about 3,100 pieces of content in Malaysia across Facebook and Instagram.

There are concerns with this status quo.

Determining legality

First, should platforms be the final determiner of the legality of content for any given jurisdiction? Answering this question requires a careful balancing between the platforms’ lack of democratic legitimacy to determine free-speech redlines and the risk of censorship should governments be the final arbiter instead.

Relatedly, as these platforms are driven by commercial interests, such interests may inevitably factor into their decision-making, particularly if the reported content has a large audience on their platform.

Second, there is a lack of transparency on what exactly is requested to be removed and what is actually removed. This reduces the room available for third-party scrutiny of the legitimacy of these actions by the authorities and the platforms.

For example, unlike TikTok, Meta’s Transparency Centre’s “content restrictions based on local law” page does not provide information on the total number of requests received or the removal rate. Beyond being just another data point, removal rates allow a glimpse into a platform’s compliance levels.

Third, the Malaysian Communications and Multimedia Commission (MCMC) does not periodically publish its requests for removals.

Some insight, however, can be drawn from its December press statement, which highlighted that 25,642 pieces of content were requested to be removed in 2023 because of their harmful nature.

Aside from the indisputable requests to remove pornography, scams, illegal sales and gambling-related content, questions remain over what constitutes “fake news”, which made up 15.6 percent of the requests it submitted.

Relatedly, there is the added consideration that not all false information is harmful or warrants removal.

Further, with the legal basis of these removal requests sometimes being the broadly applicable Section 233 of the Communications and Multimedia Act 1998, concerns over a lack of transparency are compounded.

The consequence of the status quo is plain to see: one side alleges that the other is unfairly and illegitimately censoring dissent, while the other may believe that its actions are justified.

Moving past this impasse requires meaningful transparency on both fronts.

At the very least, platforms should report the requests they receive to the Lumen Database. Concurrently, platforms can and should publish reports containing more granular information, and be legally required to do so in the event of non-compliance.

These measures would provide meaningful transparency over their content moderation decisions vis-à-vis removal requests and allow for effective scrutiny.

Meanwhile, the government needs to do away with its continued reliance on the outdated and problematic term “fake news”. Instead, it should adopt more specific categorisations to describe the harmful content it requests platforms to remove.

With domestic politics becoming more competitive in recent years and with social media a crucial arena for public discourse, it is imperative to protect free speech.

Striking a balance between government concerns, platform responsibility and individual liberties will be crucial for a healthy democracy moving forward.

This article first appeared on Malaysiakini, 14 March 2024.
