Nairobi — Artificial intelligence (AI) is rapidly reshaping the cybersecurity landscape, with both defenders and cybercriminals leveraging its capabilities to outmanoeuvre each other. As businesses across Africa embrace digitisation, the threat of AI-enhanced cyberattacks is growing, experts warn.
"All depends on who is behind the keyboard," says Allan Juma, a Cyber Security Engineer at ESET East Africa. "AI itself is neither inherently good nor bad - but it has the potential to be both. It can offer powerful protection against cyberattacks, but in the hands of criminals, it can exploit human vulnerabilities on a massive scale."
One of the most concerning AI-enhanced threats is advanced social engineering. According to Juma, generative AI tools like ChatGPT make it easier for cybercriminals to craft highly convincing phishing emails that closely mimic real executives or colleagues. "These models are also skilled in translation, allowing attackers to expand their reach into new regions and smaller countries with niche dialects," he explains.
Attackers are also using AI to automate vulnerability scanning, increasing the speed and scale at which they exploit weaknesses in corporate security systems. They are likewise using it to identify compromised internal accounts, which they then exploit to send phishing emails or launch deepfake impersonation scams. With AI-generated audio and video imitating managers, CEOs, and finance officers, distinguishing legitimate from fraudulent communications is becoming increasingly difficult.
Juma stresses that the human element remains a major vulnerability. "A significant percentage of data breaches are the result of human error. Cybersecurity awareness training is essential to combating AI-driven cybercrime. A lack of knowledge is the greatest threat to cybersecurity in any business - and cybercriminals know that."
A recent report by the Google Threat Intelligence Group (GTIG) highlights how cybercriminals are already misusing AI. The report, Adversarial Misuse of Generative AI, released in early 2025, found that criminals are using Google's AI model, Gemini, for research and content generation - including developing target personas, crafting deceptive messaging, and expanding their reach through translation and localisation.
While cybercriminals are exploiting AI, it is also playing a crucial role in defence. Security teams are using AI-driven tools to analyse patterns, predict threats before they occur, and automate responses to cyber incidents - reducing reaction times and mitigating potential damage.
"Actually, AI has been part of cybersecurity approaches and software for a long time - way before it became a big talking point for the general public," says Juma. "This gives us an advantage because we've had time to integrate it into our systems. The danger now is that AI's widespread adoption has led to complacency. Businesses must remain vigilant and aware of its risks."
As AI-driven cybercrime evolves, companies must invest in both cutting-edge security technologies and comprehensive employee training to safeguard their operations. Without a proactive approach, organisations risk falling victim to increasingly sophisticated attacks that exploit the weakest link - human error.