The 2023 general elections in Nigeria concluded a year ago, paving the way for the 2027 general elections, which will unfold amid a surge in artificial intelligence (AI) tools that is unprecedented in the annals of technological progress.
Unlike the prelude to the 2023 elections, when AI tools were relatively obscure in cyberspace, the current landscape is witnessing a notable proliferation of these technologies.
In the lead-up to the 2023 general elections, the use of AI tools was limited, with only a few instances of maliciously deployed deepfake videos and images on a small scale. The present scenario is markedly different: an explosion of AI tools, fueled by advances in generative AI, poses a significant threat to Nigeria's 2027 general elections.
Videos, known for their high viewership and rapid dissemination, become potent tools for misinformation when coupled with AI-powered text-to-video converters. Unscrupulous political actors in Nigeria can exploit these tools in 2027 to fabricate and incite events that never transpired.
The process can involve using an AI tool to convert written narratives into video and then employing its editing features to manipulate visuals and audio, creating a semblance of reality. By staging the video environment to resemble specific locations, these deceptive videos aim to convince viewers that the events depicted genuinely occurred in familiar places. Such calculated deception can mislead the general public and potentially incite violence.
The fabricated videos, once created, can be disseminated strategically across various social media platforms, aiming to deceive the public and evoke negative reactions based on the false information embedded in the content. The ultimate consequence is the potential escalation of social and political turmoil, as unsuspecting audiences react to events that exist solely within the fabricated narratives circulated online.
Advanced AI technology can empower cyber attackers to create autonomous attack systems capable of targeting electoral systems independently, without human intervention. These autonomous tools can persistently assault a target system until penetration is achieved. Their most formidable feature is the capacity for massive, simultaneous attacks numbering in the hundreds of thousands, posing a significant challenge for cybersecurity systems to withstand.
The perilous aspect of these autonomous attack tools lies in the anonymity they afford. With no physical attacker present at a remote location orchestrating the assaults, apprehending the culprits becomes an elusive task. This lack of a tangible adversary complicates the attribution of responsibility and makes it exceedingly difficult for Nigerian law enforcement and cybersecurity experts to trace and hold accountable those behind the attacks.
The absence of a human perpetrator in the immediate vicinity further underscores the clandestine and surreptitious nature of these autonomous attacks, exacerbating the challenges in defending against and responding to such cyber threats.
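On the defensive side, one elementary countermeasure against the persistent, high-volume attack pattern described above is rate-based flagging of request sources. The sketch below is illustrative only, assuming a hypothetical log of (source, timestamp) request records; the window and threshold values are arbitrary examples, not recommendations:

```python
from collections import defaultdict

def flag_suspicious_sources(events, window=60, threshold=100):
    """Flag sources whose request count within any `window`-second
    span exceeds `threshold` - a crude signature of the automated,
    high-volume probing described above.

    `events` is an iterable of (source_id, unix_timestamp) pairs.
    """
    by_source = defaultdict(list)
    for source, ts in events:
        by_source[source].append(ts)

    flagged = set()
    for source, times in by_source.items():
        times.sort()
        left = 0
        for right in range(len(times)):
            # Shrink the window from the left until it spans <= `window` seconds.
            while times[right] - times[left] > window:
                left += 1
            if right - left + 1 > threshold:
                flagged.add(source)
                break
    return flagged

# Hypothetical example: one source issues 500 requests within seconds,
# another issues 5 requests spread over a minute.
events = [("bot-net-node", i * 0.01) for i in range(500)]
events += [("ordinary-user", i * 12.0) for i in range(5)]
print(flag_suspicious_sources(events))  # → {'bot-net-node'}
```

Real intrusion-detection systems are far more sophisticated, but even this simple sliding-window count captures why volume itself is a detectable signal, whereas attribution of the human operator, as noted above, remains the harder problem.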
Unpatriotic individuals in Nigeria can exploit AI video translation tools to manipulate content in a manner that accentuates or exaggerates divisions based on ethnicity, religion, or region. It is well-known that these factors are particularly sensitive in the Nigerian context, especially during the lead-up to general elections when emotions run high, and various forms of propaganda are prevalent.
Deliberately framing content in this manner has the potential to inflame existing prejudices, contributing to the escalation of conflicts and creating an environment conducive to civil unrest, all with the aim of disrupting the electoral process. These threats are often orchestrated by the "foot soldiers" of politicians.
Looking ahead to the 2027 democratic process, there is a foreseeable shift from traditional political thugs to what I call cyber thugs, reflecting the evolving landscape where technology plays a central role in influencing public sentiment and potentially destabilising the electoral process.
Deepfake technology has improved significantly as the underlying technology has progressed, and by 2027 it will be easily accessible at one's fingertips, a notable advance over the landscape of 2023. Deepfake videos and images have reached a level of sophistication that enables them to elude the detection tools designed to identify such manipulations. This highly sophisticated deepfake technology can introduce a new dimension to potential election interference in Nigeria.
These advanced deepfakes, capable of convincingly manipulating media content, pose a heightened risk of spreading false information, damaging reputations, and fabricating events. The looming threat of malicious actors exploiting these high-calibre deepfakes during the 2027 general elections raises serious concerns for the Nigerian democratic process.
Recently, a purported "leaked audio" featuring the voice of former Sudanese president Omar al-Bashir, shared on X and Facebook, was determined by BBC investigators to be a deepfake using a cloned voice.
A text-to-video AI tool with advanced capabilities for synchronising text, audio, and video, along with the ability to incorporate a fabricated image of any individual, poses a concerning threat. Cyber thugs in the build-up to the 2027 election can draft an inflammatory speech, upload it into such a tool, and seamlessly generate a video in which an impersonated figure, such as the head of the electoral body, delivers the prepared speech. This malicious tactic can be designed to instigate chaos, sow uncertainty, and create confusion among the electorate.
The potential for such manipulated videos to mislead and manipulate public perception adds a disruptive dimension to the electoral process, warranting heightened awareness and preventive measures to safeguard the integrity of democratic proceedings.
Another difficulty arises from the widespread use of large language models, posing a challenge for chatbot developers to effectively control them. A recent incident involved a user creating a GPT instance impersonating a presidential candidate, actively seeking votes for an upcoming election in the USA, a clear violation of ChatGPT's policies.
This misuse raises concerns about the potential for unpatriotic actors to deploy similar tactics in the lead-up to the 2027 general elections. Large language models can be exploited by political opponents to impersonate influential public figures or front-runners in political races, disseminating incendiary information, falsely claiming withdrawal from the race, or spreading other misleading claims that could threaten the smooth conduct of the election.
As AI weaves its way into the democratic process, the threats it brings demand careful consideration and proactive measures to safeguard Nigeria's democracy.
Haruna Chiroma wrote from University of Hafr Batin, Saudi Arabia - [email protected]