Experts shared their views on preventing colour revolutions in the era of artificial intelligence (AI) at the Third Phnom Penh Forum, held today, November 27, at the Royal Academy of Cambodia. The forum was themed “Colour Revolutions and the Evolving Global Order: Challenges to Sovereignty and Democracy.”

Colour revolutions, also known as people’s power movements, are uprisings in which people demand a change of government or regime. Examples include the uprisings in Tunisia, Libya, Syria, Yemen and Egypt, and more recently in Bangladesh, where prime minister Sheikh Hasina fled to India after being ousted, according to Kin Phea, director of the International Relations Institute of Cambodia (IRIC) at the Royal Academy of Cambodia, who opened the forum.

Asanga Abeyagoonasekera, executive director of the South Asia Foresight Network (SAFN) in Washington, DC, noted that various AI tools were employed during the Sri Lankan uprising in 2022, which led to president Gotabaya Rajapaksa’s resignation.

He explained that social media platforms, such as Facebook and Instagram, were used as cost-effective and non-violent tools for mobilisation, though they pose challenges for government regulation.

Although Facebook’s policies prohibit the spread of fake news, it often takes time for the company to act on government requests, according to Abeyagoonasekera.

“AI will be complementary, triggering and empowering this type of non-violent revolution,” he said.

IRIC deputy director-general Chhort Bunthang warned that AI technologies, such as deepfakes, pose serious risks to society and play a significant role in expediting colour revolutions.

How can colour revolutions be prevented in the AI era?

Chheng Kimlong, president of the Asian Vision Institute (AVI), suggested that governments collaborate with AI developers to create filters for harmful content in order to prevent societal unrest.

He cautioned that the vast amounts of data stored daily by AI tools present a significant danger for future governance and stability.

“When AI processes vast amounts of big data, it creates visual statistics or representations, often highlighting only negative aspects of specific issues. For example, if predominantly negative information about the Cambodian government is input, AI tools may project a purely negative image of the government, causing people to develop a negative perception, potentially inciting anger against the administration,” Kimlong explained.

However, he emphasised that filtering negative information within AI tools alone cannot prevent bad news or colour revolutions. He suggested that creating a healthy space for people to express their opinions and exercise their freedoms is essential.

“You cannot restrict people’s freedom when they are ready to demand change. This applies not to any particular country, but globally. Each nation must ensure a healthy space for freedom – economic, political, social and cultural – that allows people to engage in and contribute to their socioeconomic development. Empowerment, innovation and creativity must be fostered,” Kimlong said.

He added that when people feel satisfied and secure in their daily lives, free from fear and undue restrictions, they are less likely to mobilise against their government.

For Cambodia specifically, Kimlong suggested leveraging AI to promote positive narratives by injecting favourable information about Cambodia into AI tools. This would ensure that when foreigners search for information about the country, they encounter more positive stories, which could help counteract the media-driven factors often associated with colour revolutions.

Sok Eysan, a senator and spokesperson for the ruling Cambodian People’s Party (CPP), argued that AI tools and social media might actually counter opposition groups that spread fake news. He noted that such disinformation campaigns have largely failed, as the majority of Cambodians are capable of discerning the truth.

“As we all know, opposition figures frequently slander the government. However, people use modern technology and media to verify these claims. If they discover that the claims are false, the opposition loses credibility. Therefore, we shouldn’t be overly concerned about AI being used to spread fake news,” he stated.