A group, Gatefield, has stated that generative artificial intelligence (AI) is rapidly amplifying gender-based abuse online in Nigeria, putting up to 70 million women and girls at risk of exposure, with 30 million likely to be directly targeted through deepfakes, impersonation and coordinated harassment campaigns by 2030.
Gatefield, a Sub-Saharan African public strategy and media group, stated this in an analysis released in Abuja on Tuesday.
It said that based on Nigeria's projected 200 million internet users by 2030, nearly half of women online could be exposed to AI-facilitated harm, with direct targeting affecting 30–40 million women and girls annually.
According to Gatefield's 2025 report, nearly half of Nigerian internet users experience online harm, with women comprising 58% of victims.
"AI facilitated abuse constitutes structurally violent digital harm, systematically targeting women while exploiting both human and technological vulnerabilities.
"Research on Nigerian women targeted with non-consensual image sharing found that nearly 90% experienced depression or suicidal thoughts, with 11 out of 27 considering suicide and one attempting it.
"Platforms like X and Grok facilitated abuse through functionless content generation, delayed moderation, and opaque policies," said Farida Adamu, Insights and Analytics Lead at Gatefield. "This is unsafe product design, not just bad actors."
The group said that while global jurisdictions are moving fast to regulate AI-facilitated abuse, Nigeria currently lacks AI-specific legislation, forensic capacity, and platform accountability frameworks.
It called for the introduction of AI-facilitated violence guidelines to define and address AI-generated cyber violence, including impersonation and political/social disinformation.
"Cover Al-generated images, video, audio, and text that mislead, manipulate, or impersonate individuals, with explicit coverage of gendered, political, and civic issues.
"Label Al-generated content visibly; remove harmful content within 24--48 hours; conduct algorithmic risk assessments; publish biannual transparency reports."