As AI use explodes, concerns arise over states’ ability to mitigate risks and harms

At the ASEAN Digital Ministers’ Meeting (ADGMIN) 2025 in Bangkok, Digital Minister Gobind Singh announced Malaysia’s championing of AI deployment safety through its lead on establishing an ASEAN AI Safety Network (ASEAN AI Safe).

The network aims to promote AI safety research, responsible AI adoption and harmonised standards across ASEAN, a region whose digital economy is expected to surpass US$1 trillion by 2030.

An AI safety institute is a state-backed body that evaluates and seeks to ensure the safety of the most advanced artificial intelligence models. As AI use explodes, safety concerns have arisen alongside conversations about its potential risks.

But the nature and scope of these institutes vary significantly based on national priorities. For example, the European Union’s AI Office focuses on regulatory oversight with a strong emphasis on ethics, risk management and user safety. Singapore’s AI Safety Institute focuses more on innovation and practical application, with a business-friendly emphasis on ethics in application and commercial development.

In the context of ASEAN, AI safety institutes are unlikely to mirror the EU’s centralised, regulation-focused approach. Instead, each institute would likely align with the national ambitions, priorities and resource capabilities of individual member states. But given the region’s diversity and resource disparities, an ASEAN AI Safety Network could be a more practical solution than standalone institutes.

Leveraging collective strength

The concern is over developmental disparities, which might affect member states’ readiness to govern AI effectively. Without a structured and phased approach to governance, uneven adoption of AI safety practices – such as fairness, transparency and data privacy standards – could alienate key contributors and undermine the network’s credibility.

A regional AI safety network could provide a shared framework for managing these risks while leveraging collective strength. The network could also tackle shared governance challenges, ensuring transparency, accountability and ethical AI use across borders. Ultimately, its structure should promote transparency, develop ethical guidelines and build trust.

To define ASEAN AI Safe’s scope, the network could drive R&D initiatives, such as creating tools for bias detection, domain-specific AI applications and frameworks for algorithmic transparency. Simultaneously, the network must address intellectual property concerns and data-sharing protocols.

While the imperative of such a regional network is indisputable, its success hinges on addressing several key challenges prior to its establishment.

NAIO in the lead

Malaysia has made significant strides in its AI governance agenda through the recent establishment of the National Artificial Intelligence Office (NAIO). However, there is still a lack of clarity on how this body will steer regional collaboration effectively.

NAIO must first solidify its domestic frameworks and expertise to serve as a credible leader for ASEAN’s AI safety ambitions. For instance, Malaysia’s efforts to operationalise ethical AI through the National AI Roadmap could serve as a model for regional frameworks while skill-building programmes could address AI’s impact on labour and talent across ASEAN.

While ASEAN AI Safe’s success does not hinge solely on NAIO, the agency must demonstrate sufficient progress to lay the groundwork for collaboration and align priorities. This would ensure that Malaysia can engage with other member states in a cohesive and strategic manner while demonstrating its capability and knowledge and building trust to drive the initiative forward.

Many member states also lack the advanced R&D infrastructure, technical expertise and regulatory frameworks to address complex AI safety challenges, such as auditing proprietary algorithms or mitigating biases.

But a deeper concern is the connection between AI adoption and the ability to identify and mitigate associated risks, such as data breaches, algorithmic biases and identity theft. Without widespread familiarity with these technologies, governments might struggle to recognise and address these harms. Shared frameworks, such as a regional AI safety network, would enable member states to pool resources, share expertise and address complex safety challenges.

Inclusive AI plans

As ASEAN chair, Malaysia has the advantage of being able to drive the regional narrative on AI safety. While this position allows Malaysia to guide discussions and set the agenda, it needs to navigate regional differences while ensuring that its AI plans are inclusive.

An actionable road map for ASEAN AI Safe is needed, structured around a phased approach that ensures clarity of purpose and inclusive collaboration. Phase one should focus on defining the initiative’s scope – such as definitions, guidelines, risk assessment and AI ethics.

The second phase would emphasise capacity building and pilot projects to bridge technological gaps and prevent widening inequalities. The final phase should solidify a multistakeholder network to sustain long-term collaboration.

ASEAN has seen some success with multistakeholder advisory models, such as the ASEAN Intergovernmental Commission on Human Rights (AICHR), which integrates government and civil society perspectives to shape human rights policy.

While a multistakeholder approach ensures balanced decision-making, it is crucial to recognise that managing such a diverse group comes with challenges, such as conflicting interests and prolonged deliberations.

Ultimately, Malaysia’s leadership in this initiative reflects its aspiration to drive ASEAN’s digital transformation. While the initiative’s potential is clear, its success hinges on Malaysia’s ability to address internal limitations, navigate regional differences and fill critical gaps.
