In a world increasingly shaped by Artificial Intelligence (AI), the line between reality and fabrication is blurring.
A recent viral video featuring former U.S. President Donald Trump has sparked widespread conversation—yet, it never actually happened.
The clip appears to show Trump expressing interest in hiring Bradley Marongo, a Kenyan man dubbed the “Gen Z Goliath,” as his personal bodyguard.
“I want to visit Kenya and meet Bradley the Goliath. The guy is very tall he can be my bodyguard and I want him to be my personal bodyguard. I think he will work well in that sector. Bradley I am coming for you,” Trump appears to say in the trending deepfake video.
The video was generated with AI, representing a new frontier in the spread of misinformation. While seemingly harmless, it demonstrates the growing sophistication of “deepfake” technology.
These AI-generated videos can make anyone say or do anything on screen, regardless of whether it ever happened.
In the case of Trump and Marongo, the video plays on Trump’s well-known penchant for dramatic statements and larger-than-life personalities, making the deepfake seem plausible at first glance.
But in reality, Trump never mentioned Marongo, let alone offered him a job.
Deepfakes: A Growing Threat
The term “deepfake” refers to media—usually videos or audio recordings—that have been manipulated using AI to create realistic but fabricated portrayals of individuals.
These creations can make it appear as though someone said or did something they never actually did, with remarkable accuracy.
Deepfakes are a double-edged sword: they can be used for entertainment, but they also have a more sinister side, as seen in the increasing use of such technology to spread disinformation.
The Trump-Marongo video is a perfect example of how deepfakes can sow confusion.
At first glance, it looks convincing: Trump, speaking on the campaign trail, makes a quip about Marongo’s towering height and suggests that he would make an excellent bodyguard.
For anyone unfamiliar with AI manipulation, the video could easily be mistaken for a genuine statement.
Yet, the video is completely fabricated, and therein lies the danger.
As deepfakes become more sophisticated, they have the potential to disrupt political discourse, incite social unrest, and damage reputations.
In an age where content spreads like wildfire on social media, a deepfake can go viral before the truth has a chance to catch up.
The implications of deepfakes extend far beyond mere entertainment.
When used maliciously, these videos pose a significant threat to democracy, the integrity of elections, and public trust in institutions.
A deepfake of a political figure, like the Trump video, can be weaponized to spread false information, mislead the public, or even manipulate voter sentiment.
Consider, for example, the potential impact of a deepfake during an election campaign. A candidate could be falsely depicted making inflammatory statements or announcing controversial policies.
The consequences of such a video could be catastrophic—especially if it goes viral before fact-checkers can debunk it.
The Trump-Marongo video may seem humorous, but it underscores the risk that deepfakes pose to political stability.
The Trump-Marongo incident is a wake-up call. While the video may not have caused significant harm, it shows how easily AI can distort reality—and how quickly that distortion can spread.
As deepfakes become more common, the need for robust defenses against their misuse grows ever more urgent.