Responsible political discourse rests on balancing free speech and objectivity

As Malaysia heads into six state elections, it is imperative for social media platforms to step up their game in moderating harmful content. 

An increasingly worrying political trend has emerged following Malaysia’s 15th general election (GE15): racialised political messaging and exclusionary political narratives have become normalised.

With six states headed for elections in 2023, social media platforms, which sit at the heart of political discourse, must step up their game and assume greater responsibility for the content posted on them.

At the centre of this discussion is content moderation, which plays an outsized part in ensuring responsible political discourse and preventing the spread of harmful content. 

Content moderation is the process of assessing posted material to determine its appropriateness for a given platform. This is done by evaluating the content against a platform’s internal guidelines and the domestic laws of the country in which the platform operates.  

The bulk of content evaluation is done through artificial intelligence (AI) and, to a lesser extent, through user reporting and government removal requests. Human moderators continue to play a role in assessing content appropriateness.

If any content infringes either company guidelines or the relevant law, the platform will remove it or block access to it in a specific country.

The concern here lies in whether such efforts are sufficient. During GE15, political messaging was weaponised in the form of hate speech and veiled threats towards minority groups.

The purported justification was that these groups had been brazen enough to “steal” political power from the majority population. While some of this content was eventually removed, this happened only after widespread circulation and public outcry.

Relatedly, social media platforms do not disclose the resources they allocate to individual markets, especially smaller ones perceived as unproblematic, like Malaysia.

Resources here include how well the AI models are trained to detect issues specific to Malaysia, the number of human moderators dedicated to the country, and whether both the models and the moderators have the language proficiency to handle hyperlocal colloquialisms and slang.

For example, none of the existing resources could flag and remove the videos calling for a repeat of the May 13 racial riots of 1969, which involved sectarian violence between Malays and Chinese in Malaysia. This is because the date alone, detached from its historical significance or context, does not register as hate speech or incitement to violence in the present day.

Understandably, content removal requests by the government have also raised concerns. The fear is that such requests could lead to censorship of political speech, especially speech by critics of the current administration.

Of greater concern is that removal requests can be made on vague grounds, such as alleged infringements of broadly applicable legislation like Section 505(b) of the Penal Code and Section 233(1) of the Communications and Multimedia Act 1998. The former draws the line for free speech at statements bringing about public mischief, while the latter prohibits the improper use of network facilities.

In March, a member of the opposition alleged that the government had asked a social media platform to block the live stream of his parliamentary debate. The government denied this, but the incident may have created an impression of political bias in content moderation.

With electoral competition expected to intensify as the state elections draw closer, and with racially charged exclusionary politics becoming the name of the game for some politicians, there is an urgent need for platforms to improve their content moderation.

The first step would be to increase the transparency of these processes. Currently, the only insight into how platforms operationalise content moderation comes from their periodically published transparency reports.

Such reports, however, often lack the granularity needed to objectively assess the platforms’ efforts. Publishing more data, such as the scale of information disorder on a platform and why action was taken on certain content but not on others, would allow for fairer assessments of the platforms’ efforts. This would naturally increase trust in these platforms and insulate them from allegations of inaction.

The platforms should also provide greater transparency on government content removal requests. This is pertinent given the vague and broadly applicable legislation in Malaysia mentioned above.

Transparency here will contribute towards scrutinising the government’s actions and holding it accountable for its removal requests. 

Transparency would also address concerns over political bias and interference in the content-moderation process. With the electorate continuing to vote along ethnic lines, it is reasonable to expect that some parties and politicians bent on capturing these voter bases by appealing to racial and religious identities will become more frequent targets of content moderation. Here, transparency can help communicate the balance between free speech and legitimate content moderation.

A second step would be to optimise resources for content moderation. In the lead-up to elections, platforms should engage with researchers and civil society organisations (CSOs) to better understand the information environment. This would help inform resource allocation and align content-moderation practices with the context of Malaysian elections.

During the campaigning period, platforms would be wise to partner with local stakeholders who monitor online spaces for potential campaign infringements, including instances of hate speech.  

Through such partnerships, priority can be given to any content flagged by trusted stakeholders, making for a more efficient and targeted content-moderation effort. 

Neither greater transparency nor more engagement and partnerships can, on their own, address the deepening trend of racialised and exclusionary politics in Malaysia. However, given how ubiquitous these platforms are, it is imperative for social media companies to assume more responsibility for the content they host and to be more proactive in safeguarding the integrity of political discourse.

This article first appeared on Fulcrum, 22 Jun 2023
