Africa: The Deepfake Is a Powerful Weapon in the War in Sudan

23 October 2024
analysis

While still rudimentary - voice-cloning models cannot yet render convincing Sudanese dialects - deepfakes are now routinely deployed on Sudan's violent, if bloodless, alternative battlefield: social media.

In April 2024, an image of a building engulfed in flames went viral on Facebook in Sudan. It was widely shared with captions claiming that the building was part of Al-Jazeera University in Wad Madani city and that the Sudanese army had bombed it. Many political leaders and public figures were among those who fell prey to the claim and shared it.

This, however, was not an isolated case. As the conflict between the country's army and the rebel paramilitary Rapid Support Forces (RSF) continues, social media platforms have become an alternative battlefield where each side uses AI-generated deepfakes liberally to spread fake news about the other and win sympathisers. The trend poses a serious threat in a Northeast African country in dire need of a healthy information ecosystem.

AI was used to generate fake videos from the very early days of the ongoing war. In August 2023, the Daily Mail identified a fake video of the U.S. ambassador to Sudan appearing to say that America had a plan to reduce the influence of Islam in the country.

In October 2023, an investigation by the BBC exposed a TikTok campaign that used AI to impersonate Sudan's former leader, Omar al-Bashir, and had received hundreds of thousands of views.

In March 2024, the X account of a TV and radio presenter shared a recording attributed to the head of the Sudanese Armed Forces ordering the killing of civilians, the deployment of snipers, and the occupation of buildings. This AI-created recording was viewed 230,000 times and shared by hundreds, including known Sudanese politicians.

Moreover, in September 2023, some tech-savvy Sudanese with no obvious political affiliations started using deepfake technologies to create satirical content. For instance, a song originally published in support of the Sudanese Armed Forces was reproduced as a deepfake in which the RSF leader, Mohamed Hamdan Dagalo (alias Hemedti), is seen singing it along with one of his high-ranking officers. It was viewed thousands of times. While viewers did not miss the humorous intent of the modified song, in other cases such content morphed into disinformation.

In March 2024, an AI-created recording purported to capture a secret meeting between RSF militia leaders and a couple of leaders of the Freedom and Change coalition discussing a plan for a military coup. Though the recording was not authentic, it was shared that same month by known journalists and even the national TV before being removed. Obai Alsadig, the creator of the recording, told me that it was "sarcastic with weak dialogue and fake" and that he "wanted to demonstrate to the public that creating such fake recordings is not difficult".

Supporters of the Sudanese Armed Forces, as part of the psychological warfare, launched a campaign to cast doubt on authenticated recordings of Hemedti, falsely claiming they were all AI-created and that he was dead, even though independent analysis concluded, with high confidence, that the recordings were genuine.

Several efforts have been made to combat deepfake disinformation. At the forefront is Khartoum-based Beam Reports, the only Sudanese fact-checking organisation verified by the International Fact-Checking Network, which has been fact-checking content in Sudan since 2023. The organisation has been tracking deepfakes in Sudan and has published an analysis of the trend.

"While there is use of deepfake technology on social media for misleading purposes, we cannot say that its use has increased significantly in the previous six months, specifically in the context of Sudan. However, it is noticeable that misleading audio content has also been generated using AI and attributed to people active in Sudanese affairs. Marsad Beam, a division inside Beam Report that's tasked with monitoring and fact-checking viral fake news, has worked on a report in which it verified the authenticity of this content, Beam Reports explained to me via email.

In an online seminar organised by UNESCO in May, Beam Reports stressed the challenges that the use of AI has thrown up in recent months.

"After a year of countering disinformation online, Beam Reports stressed that the absence of on-the-ground reporting is leading to the increase of mis/disinformation," UNESCO said in a statement after the event. "This is further amplified and complicated by the increasing use of generative artificial intelligence in the production and dissemination of disinformation and hate speech."

Individuals with advanced technical skills have also chimed in to help fact-check viral content. Mohanad Elbalal, a UK-based Sudanese activist who voluntarily fact-checks content on social media, explained his approach to me: "If I think a clip is a deepfake, I will reverse image search frames of it to try and find a match, as usually these AI deepfakes are reproduced from a template, so similar images are likely to show up. I will search for anything identifiable in the clip, such as the bogus news channel logo used in the deepfake."
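The template-matching intuition Elbalal describes can be partially automated. The sketch below is a minimal illustration, not his actual workflow: it samples frames from a suspect clip and compares their perceptual hashes against known template images, a rough machine analogue of a reverse image search. The file names, sampling rate, and distance threshold are assumptions for demonstration only.

```python
# Minimal sketch: flag video frames that visually match known deepfake
# "template" images, using perceptual hashing as a rough stand-in for a
# manual reverse image search.
# Requires: pip install opencv-python pillow imagehash
import cv2
import imagehash
from PIL import Image

def frame_hashes(video_path, every_n=30):
    """Sample every n-th frame of a video and compute its perceptual hash."""
    cap = cv2.VideoCapture(video_path)
    hashes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append((idx, imagehash.phash(Image.fromarray(rgb))))
        idx += 1
    cap.release()
    return hashes

def find_template_matches(video_path, template_paths, max_distance=8):
    """Report sampled frames whose hash is close to any known template image."""
    templates = {p: imagehash.phash(Image.open(p)) for p in template_paths}
    matches = []
    for idx, frame_hash in frame_hashes(video_path):
        for path, template_hash in templates.items():
            distance = frame_hash - template_hash  # Hamming distance
            if distance <= max_distance:
                matches.append((idx, path, distance))
    return matches

if __name__ == "__main__":
    # "suspect_clip.mp4" and "known_template.jpg" are hypothetical names.
    for idx, path, dist in find_template_matches("suspect_clip.mp4",
                                                 ["known_template.jpg"]):
        print(f"frame {idx} resembles {path} (distance {dist})")
```

A small hash distance suggests a frame was lifted from a known template, though, as with the detection tools discussed below, such heuristics produce false positives and are no substitute for human review.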

"The biggest limitation in deepfake detection is the lack of access to reliable tools," Shirin Anlen, a media technologist at Witness, told me via email. "While there's some inspiring research happening in the field, it's often out of reach for the general public and requires a high level of technical expertise. The publicly available detection tools can be difficult to understand due to a lack of transparency and clarity in the results, leaving users in the dark -- especially when these tools produce false positives, which they do quite often," she explained.

"From a technical standpoint, these tools are still heavily dependent on the quality and diversity of the training data. This reliance creates challenges, particularly around biases towards specific types of manipulation, personas, or file quality. In our work, we've noticed that file compression and resolution play a big role in detection accuracy" she added.

The problem of AI-generated fake news in Sudan could further intensify as the technology becomes more advanced.

"So far, much of the AI-generated deepfakes circulated in the country could easily be identified as fake because of their poor quality, which could be attributed to the lack of data trained on Sudanese dialects," Mohamed Sabry, a Sudanese AI researcher at Dublin City University, shared his thoughts with me. But this will change in the future if malicious actors decide to invest more time and money to utilise advanced AI technology to produce their content, he added.

"In low-resource languages, such as Sudanese dialects, voice cloning models are less effective and easily identifiable. The robotic tone is quite apparent even to inexperienced listeners," Mohamed said. However, "many efforts are being made to tackle the challenges of low-resource language datasets. Additionally, the impressive generalisability of deep neural networks when trained on near Arabic dialects and domains is noteworthy."

While fake content that could incite violence in Sudan's volatile political climate is a major threat, there is also the risk of the "liar's dividend" among local politicians: the ability to dismiss genuine recordings as AI fabrications.

In June 2023, Mubarak Ardol, a Sudanese politician, tweeted that a recording of a telephone conversation, in which he appeared to mock the army leader and signal his willingness to accept any offer from the RSF militia, was not authentic and had been made using AI software. He claimed that the recording's creator relied on samples of his voice available on the internet.

This phenomenon has a serious negative impact, as it creates an environment of scepticism that pushes citizens to question even authentic information, dismissing it as AI-generated.

Social media platforms have started taking strong measures against deepfake content, yet all of the content described above remains accessible, showing that these policies are not enforced in countries such as Sudan, possibly due to a shortage, or outright absence, of content moderators who understand the local context.

YouTube shared with me its deepfake policy, which prohibits this kind of disinformation. The policy stresses that "technically manipulated content that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube".

Moreover, the policy requires "creators to disclose when they've created or altered synthetic content that is realistic", adding that "we'll apply a transparency label to this content so viewers have important context".

Mohamed Suliman is a researcher and writer. He holds a degree in engineering from the University of Khartoum.
