DESCRIBED as “devastatingly handsome” and “every woman’s dream come true”, North Korea’s Kim Jong-un was named the sexiest man alive.

Most would have realised that this report by The Onion was fictional, but a state-run Chinese newspaper, apparently unaware of the satire, republished the news, along with a 55-page slideshow of the stout dictator.

This episode ended in laughter (and humiliation for the newspaper), but most instances of "fake news" do not end so kindly. It causes financial loss to businesses and interferes in elections, and in cases where it amplifies people's fear or stokes ethnic and religious hatred, "fake news" has claimed innocent lives. In India, many people died in WhatsApp-fuelled lynchings this year alone: beggars and strangers, wrongly accused of being robbers or kidnappers, were beaten to death by fearful villagers.

Dissemination of "fake news" is nothing new. However, with the rise of social media and instant messaging platforms, which have become primary sources of information for many, it now spreads much faster. Additionally, the term "fake news", though poorly defined and highly politicised by the Trump administration, has recently gained traction globally. This has sparked renewed interest in the much broader issue of disinformation and misinformation.

How can we be so vulnerable to this exploitation? The "illusory truth effect", a cognitive bias rooted in our memory of past experience, means that every time we encounter false information it grows more familiar and takes on the illusion of truth. Thus, in this day and age, what we read online, and what it carries over into what we hear offline, progressively becomes our experience, and then our belief.

To make matters worse, technological advancement has also contributed to the problem. "Deepfakes", for instance, artificial intelligence-generated fake images and videos, are already a reality: anyone can be made to appear to do or say anything. Conversely, anyone can dismiss genuine footage of their actions as this kind of fakery. Big Data is being used to tailor false information to targeted readers. Bots, software that can post, like or retweet automatically, can interfere in online discourse. In the recent Malaysian general election, tens of thousands of propaganda messages flooded Twitter to influence voters, as reported by international and local newspapers.

Are we Malaysians that easily manipulated? Do we trust everything we read on social media? A survey by the Malaysian Communications and Multimedia Commission (MCMC) found that 82.7 per cent of respondents trusted health-related information they found online, regardless of the source. Another study revealed that Malaysians are increasingly uncertain about what is real and what is false.

The impact of misinformation in Malaysia might still be relatively minimal, but the country will face bigger threats if the problem is not managed properly.

Through MCMC, Malaysia's efforts to fight disinformation and misinformation include awareness programmes and a fact-checking website. Meanwhile, the controversial Anti-Fake News Act, which has been widely criticised as a tool to stifle free speech, has seen the Dewan Negara reject its repeal. Debate lingers over whether Malaysia needs the act (or a refined version of it), or whether the existing body of laws is sufficient to ensure a safe cyber environment for the people.

Should Malaysia follow in the footsteps of Germany and impute liability to social networks and media sites? The main challenge of Germany's NetzDG law, which requires "obviously illegal" posts to be removed within 24 hours under threat of hefty fines, is that too much content ends up blocked. This curtails free speech, the perennial issue in regulating content.

Or should Malaysia take it slow and steady, like Singapore? Our southern neighbour has gathered input from journalists, advocacy groups and others for a parliamentary report, making no call for an urgent law.

There is no easy answer to this: at least 29 countries have attempted to legislate against this global issue. Social media and instant messaging platforms, feeling the pressure of being in the hot seat, have also taken steps to fight the rampant spread of false information. As the Indian government puts it, these platforms can no longer remain "mute spectators".

Moving forward, what can we, the users, do to safeguard ourselves? Digital literacy is at the core of the solution. Users need to take responsibility for the information they share online, and understand that even well-intentioned sharing can have ugly outcomes. Peeling back these layers of "fake" is our battle too: we must hunt for the truth, or fall prey to the false. It is no fun living April Fools' Day every day.