(Aug 13): Imagine a company selling products to the masses that are later found to be defective because of a general absence of oversight, insufficient testing prior to market entry and perhaps even a wanton lack of care. In most cases, the injured parties would have a claim against the company, reflecting one of the founding principles of the law of torts. In essence, this body of law provides for compensation should one party's wrongful conduct harm another.

While some liberties are taken with this analogy, it is baffling that some of the world's largest technology companies can escape similar levels of responsibility, despite market valuations that rival the gross domestic product of entire countries. This is not because their products are harmless. In fact, some have been linked to genocide, while others have been blamed for increased polarisation in society, heightened anxiety and depression among teenage users, and shortened attention spans across the board.

Two decades on from the emergence of social media platforms, governments are finally coming to terms with the need to regulate the technology, although the "how" remains far from certain. While the rumination continues, a new challenger has appeared. Much as social media brought us closer together and pushed us further apart at the same time, generative artificial intelligence (AI) holds the potential both to transform society and to undermine it.

On the one hand, it promises to unlock greater productivity and efficiency, although by how much, and whether these benefits will be fairly distributed, remains debatable. On the other, the use of unregulated AI chatbots has already been alleged to have contributed to the death of a teenager in the US. This is on top of the environmental impact of generative AI, its tendency to encourage cognitive laziness, especially among students, and the looming threats of job displacement and predictions of higher inequality.

This shows that the value of the technology should not be viewed as a good-or-bad binary but as a spectrum, with benefits and harms at opposing ends. Following this logic, the question becomes: how do we maximise the benefits while mitigating the harms? Or, in policymaking terms, the question is not whether the technology should be regulated, but how.

While this may seem obvious, whenever this conversation occurs, two responses, both arguably disingenuous, are typically offered.

The first is the allegation that governments are too archaic, sluggish and unsophisticated to regulate these technologies. This perception is not helped by the countless videos of ignorant politicians (mainly those in the US) asking the most basic of questions to the leadership of these technology companies.

Yes, these politicians should have been better prepared, but it remains equally true that not all arms of government are similarly inept. In fact, governments have demonstrably been able to regulate highly technical industries, from biomedicine to aerospace and from nuclear technology to defence, without stifling technological advancement.

Relatedly, the second allegation is that regulation is somehow incompatible with innovation. Here, again, it is baffling: in every other industry, companies are obligated by law to prove product safety before bringing their products to market, no matter how innovative those products may be. In other words, safety has never played second fiddle to innovation, so why should consumer protection be any different just because the products are digital?

Proponents of these companies may argue that to move the needle for our collective futures, these risks are the price of admission and some harms are inevitable collateral damage. This is doubly unconvincing: if exceptions to public safety were made based on an innovation's potential utility to the public good, it would follow that medical products, cures for cancer, heart disease and dementia, for example, should similarly go unregulated.

What goes unsaid is that these companies prefer to operate in a regulatory vacuum, where products can be designed and deployed unfettered by law and unconstrained by consequence. In this Wild West, free-for-all environment, the business model is a combination of "move fast and break things" and "it is easier to ask for forgiveness than permission".

Taken together with the gaslighting of governments and the public, and the vast sums these companies spend to buy influence and rent-seek from governments across the world, the central premise of this strategy is twofold: develop and deploy products, no matter the cost, before regulations catch up, and establish a dominant position to increase leverage once regulatory talks eventually begin.

Nowhere is this more evident than in how generative AI companies trained their models on copyrighted data. Asking permission from the millions of copyright owners beforehand, and urging governments to clarify copyright law vis-à-vis the training of AI models, would have taken years.

Meanwhile, paying a fair share for the copyrighted data used in training would have undermined their early business models. So they went ahead and used the data anyway, or in plain words, stole it, to develop these models and entrench their market dominance.

With this, they are now on a stronger footing in conversations on regulation and compensation. Large intellectual property holders, the Wall Street Journals of the world, which can afford teams of lawyers and inflict reputational damage, can still claw back deals with these generative AI companies. For smaller and individual copyright holders in the global majority, such as independent media outlets or freelance graphic designers, the situation stands in stark contrast.

All in all, the need to act is clear, and there is a real opportunity to learn from the lessons of the last two decades of giving social media companies free rein. The design choices that were adopted and deployed, which drained attention spans, polarised society and contributed to global democracy's general state of malaise, do not need to be repeated.

The outcomes of generative AI (environmental damage, cognitive decline, job displacement and higher inequality) are not set in stone. But to avoid them, governments need to rediscover their ability to shape their relationship with large technology companies and, ultimately, the rules of the game.

This article first appeared in The Edge on 13 August 2025.
