The World Economic Forum’s Global Risks Report also ranks AI-supported disinformation as a top threat, one often aimed at undermining democratic values and dividing societies, Sittig highlights.
In collaboration with the Konrad Adenauer Foundation, researchers Karen Allen from the Institute for Security Studies in South Africa and Christopher Nehring from the cyberintelligence institute in Germany documented AI disinformation in Africa and Europe around national elections.
They found that such disinformation campaigns primarily aim to undermine electoral authorities and processes, noting a lack of research on AI disinformation in Africa beyond elections.
A variety of actors exploit AI disinformation for propaganda
Allen and Nehring found that Europe and Africa face similar challenges in dealing with AI disinformation. They identified a range of perpetrators using the same AI tools, with far-right political parties especially prominent among them.
While Russia appears to have made AI-generated disinformation a tool of its international policy, other actors from China or the Gulf states also target Africa with propaganda.
Groups linked to other states, cyber criminals, and terrorist and Islamist organizations use AI in online disinformation campaigns mainly to produce content, consistent with earlier studies by other researchers.
In many African countries, internet and social media access is either too expensive or simply unavailable, which limits the spread of "deepfakes" and "cheap fakes."
Disinformation influences public opinion in conflicts
The ongoing conflict in the Democratic Republic of Congo between government forces and the Rwanda-backed M23 group has been a breeding ground for disinformation and hate speech, with content linked to Rwandan accounts shaping public opinion and inflaming tensions.
In Rwanda, AI is allegedly used to suppress dissenting voices on social media platforms, a tactic not only used in conflict zones but also during elections.
Increasing presence of AI-generated content and fact-checking
AI-generated content is increasingly influential in Africa, as is the number of accredited fact-checking organizations. In South Africa, for instance, the Real411 platform enables voters to report suspicions about online political content, including suspected AI use.
During South Africa’s 2024 parliamentary election, the newly founded Umkhonto we Sizwe party, led by former President Jacob Zuma, shared a deepfake video of US President Donald Trump purporting to show his support for the party. The episode illustrated the growing use of deepfakes around political transitions.
In Burkina Faso, videos circulated in support of the military junta, featuring avatars urging citizens to back the coup leaders. The use of Synthesia, an AI video tool, reflects a broader pattern of AI exploitation by a range of political actors, including some from China.
Citizens can protect themselves against online manipulation by sourcing news from a variety of platforms. But with social media giants like X and Facebook scrapping their fact-checking initiatives, verifying content online has become more challenging.
Although Africa lags behind Europe in data protection measures, its evolving regulatory landscape offers the chance to learn from others’ missteps and better combat AI-driven disinformation.
This article was originally written in German and adapted by Cai Nebe.