IN SHORT: Viral videos showing bank notes flying out of a South African ATM are fake. Contextual information and visual clues show that the videos were made using an AI video generator.
Videos shared on Facebook, TikTok, X and YouTube in South Africa appear to show a popular bank's automated teller machines glitching and spewing out large sums of cash, with people scrambling to pick up the notes.
Two prominent versions of the video were circulating in October and November 2025. One appears to show an ATM inside a shopping mall, named in some videos as Menlyn Park mall in Pretoria, South Africa, and the other an ATM on the side of a road. Both show similar scenes, including a security guard trying to stop the crowd from collecting the bank notes piling up on the ground.
The video post descriptions largely call this a "glitch" at the ATMs, which are branded with the logo of Capitec, a large South African bank. One video is captioned "Free money" and another reads "When the ATM decides to bless the community".
The scenes shown in these clips are chaotic and sensational, and might appear realistic at first glance. They have reached large audiences across multiple social media platforms. But they aren't real. Here's how to know.
'Inside' and 'outside' clues
Videos generated using artificial intelligence (AI) tools take a written idea, called a prompt, and convert it into video using models trained on massive datasets from across the web. Relatively new AI tools like OpenAI's Sora 2 and Google's Veo 3 have made it free or cheap to create realistic-looking videos of just about anything.
But experts have raised serious concerns about the lack of regulation of these tools, which are known to perpetuate harmful societal biases and often have ineffective guardrails to prevent misuse. Africa Check has previously written about the potential for creating and spreading false information at scale with AI video and image generators.
The clips of the supposed Capitec ATM glitch are clear examples of how AI video generators are being used to spread false information, this time by presenting an entirely made-up story as breaking news.
The clearest sign that these videos aren't real is that many versions contain a watermark: the Sora name and logo that pops up within the video. This indicates the video was generated with Sora 2. But some versions of the clip don't contain the logo, and in others it is covered with an emoji.
Watermarks can easily be removed with online tools, cropped out or covered up. But watermarks aren't useful if social media users don't know what they mean. Many of the posts we found of the Capitec ATM clips seemed to follow this logic, presenting a video as real even though it contains a Sora watermark.
Another clue we found in the shopping mall version was that some parts of the clip don't follow the laws of physics. This is relatively common in AI-generated videos. Here, near the end of the video clip, a person can be seen floating in the air.
Looking closely, the banknotes also move strangely in some parts of the video, appearing almost cartoonlike or animated compared to the rest of the clip. Often, the vague sense that something feels "off" in a video means you have noticed an inconsistency like this, even if you can't pinpoint exactly what it is.
These clues from within the video itself, what we can call "inside" clues, might be easy to spot here, but they aren't always this reliable. Companies developing AI tools want to fix such errors, and AI-generated videos are becoming increasingly difficult to distinguish from real footage.
Don't forget about context
Traditional verification techniques still have their place. Context is everything. Looking at who posted a suspicious video clip, or at what trusted news outlets are saying (or not saying) about an event, provides important clues that apply as much to AI-generated content as to more traditional misinformation, like images shared out of context.
A technical glitch causing large sums of money to fly out of a major bank's ATMs would certainly have made national news, but we found no coverage from reputable news outlets. Capitec would also have issued a statement about a glitch with such serious consequences. The bank's silence is another strong clue.
Finally, AI-generated videos, and fake stories generally, are often designed to shock or cause anger or fear. This makes people more likely to engage with the content and can mean that what you see online often skews towards what's most shocking, frightening or outrageous.
If you want to learn more about detecting AI-powered content, read this comprehensive guide from the Global Investigative Journalism Network, or see our article on Google's Veo 3 tool.