Shocking lion attack turns into heartwarming hug? Why you should care about badly made AI-generated videos
IN SHORT: A clip circulating on social media in January 2026 appears to show a lion pouncing on and then hugging a game ranger. But the clip shows clear signs of being AI-generated, and one version carries YouTube's label for altered or synthetic content.
A viral YouTube clip shows a lion emerging from a bush to ambush, tackle and then hug a game ranger, to the surprise of nearby onlookers. One version of the clip was viewed almost 8 million times as of January 2026, with the clickbait title "The Lion Charged at the Biologist but Not for the Reason You Think". It also appeared elsewhere on the video-sharing platform, and in Facebook posts.
Comments on the video appear divided on whether it is real, but the clip shows clear signs of being generated using artificial intelligence (AI) tools. Here's how you can tell.
Inside and outside clues
AI generators often take a written idea, or prompt, and convert it into an image or a video, using AI models trained on huge datasets from across the internet. Newer tools have made it free or inexpensive to make ever more realistic-looking videos.
But experts have warned that there are few regulations and guardrails preventing people from using the tools to cause harm. Generative AI tools are known to perpetuate biases and make it easier to create and spread false information and impersonate people without their consent.
Often videos are simply posted to get as much engagement and attention as possible. Generating something unbelievable, surprising or violent is a quick way to get that attention, whether or not people actually believe the video is real.
In this video, the lion appears poised to pounce on the unsuspecting ranger, and does, but the attack suddenly turns into a hug. This is shocking enough for the video to go viral, whether or not people think it is real.
But there are clear signs that it isn't. We can look at both "inside" and "outside" clues: inside clues are those found within the video itself.
The video shows patches of blurriness and visual distortions, while the rest of it is in focus. The lion's upper body morphs and grows unnaturally large at one point, and the lion and man's faces blend together in another.
And, as one commenter pointed out, the onlookers, while clearly surprised, don't appear to make any attempt to move away from the lion.
Outside clues are pieces of context we can consider, to help figure out whether a video or image is real. These are often the same clues fact-checkers look for in false content not generated by AI.
In this video, the most obvious outside clue is that in one version on YouTube, the video is labelled as "altered or synthetic content", meaning that "sound or visuals were significantly edited or digitally generated".
YouTube says it applies these labels when the person who posted the video discloses that it is not real, when it was created using YouTube's own AI tools, or when there is a digital signature showing the video was tampered with in some way.
Looking into who posted the video is also a good way to check whether it might be AI-generated. The main YouTube account behind the suspicious lion attack video has also posted a string of similar clips showing shocking close encounters with animals.
Since engagement on social media is often fuelled by strong emotions, it's useful to see what feelings the post might be trying to provoke. If something seems too shocking to be true, take a closer look.
Why are fact-checkers the fun police?
AI video generators have become commonplace, and social media platforms are increasingly filled with entirely fabricated videos of just about everything. Some of these are easy to debunk and often shared more out of humour than with any intention to deceive. Others are more realistic, explicitly designed to trick someone into believing they're real.
But both types worry us at Africa Check, because even clearly fake videos shared for fun, as a joke, or to get views can cause real harm.
In part this is because, if a video spreads far, some people are likely to end up thinking it's real, especially if, as it is widely circulated, the video loses any clear labels saying it was AI-generated.
But more generally, normalising AI-generated videos can make it harder to trust anything we see online, even when it's real and important. And it can make it easier for people to avoid responsibility for real events in a video, if viewers think it could have been AI-generated.
To learn more about detecting AI-generated content, read this guide from the Global Investigative Journalism Network or see our article about Google's Veo 3 AI generator.