
How Google’s Veo 3 became a weapon for racist propaganda on TikTok
Watchdog warns that AI is powering a new wave of bigoted, viral disinformation.
A prisoner in a Nazi death camp stands in front of a smoking chimney and says, “It’s a little smoky in here, but something smells good. I’m going to take a shower now.” A soldier in Ku Klux Klan robes chases a Black man. A car plows through a group of protesters, filmed from the driver’s perspective. What do these scenes have in common? They’re all grotesquely racist AI-generated videos, created with Google’s Veo 3 model, that have racked up hundreds of thousands, and in some cases millions, of views on TikTok.
“The conversation around AI and disinformation often focuses on whether generated content appears credible. But hateful messages don’t have to look real to reinforce racist beliefs among viewers,” said Media Matters, the media watchdog that revealed the videos in a report this week. “The United States has a long history of using cartoons to spread anti-Black hate propaganda. As the race to build generative AI tools continues, the conversation must address the harm caused by both realistic and unrealistic imagery.”
TikTok is already notorious for hosting hate speech and racism; the flood of antisemitic content following the October 7 Hamas attacks was one factor that pushed U.S. lawmakers to pass legislation requiring TikTok’s U.S. operations to be divested from its Chinese parent company, ByteDance. But Google’s new video-generation tool, Veo 3, is taking hateful content to a horrifying new level.
Launched on May 20, Veo 3 lets anyone create eight-second videos with audio, and it immediately drew both praise and alarm for producing more realistic clips than any previous AI video model. Google reportedly plans to integrate Veo into YouTube Shorts, its answer to TikTok.
But for now, Veo 3–generated videos, including shockingly racist clips, are going viral on TikTok. Many carry telltale signs that they’re AI-made: a model logo in the corner, a tag or caption identifying the source, or visible glitches like continuity errors, scrambled text, or distortions.
Racist tropes go viral
Many of the clips identified by Media Matters recycle deeply racist stereotypes about Black people. One video titled “Average Waffle House in Atlanta” shows the inside of a restaurant where all the diners are monkeys. Suddenly, a car crashes through the front window and the monkeys spill out holding buckets of fried chicken. The video has drawn more than 600,000 views. In the U.S., depicting Black people as monkeys is a long-standing racist trope.
Another clip, viewed four million times, shows a cashier in a bright pink wig, long pink nails, and fake eyelashes standing outside a 7-Eleven, eating ice cream and saying, in a mocking caricature of African American speech, “My probation officer called. Good news, I don’t have to do any more community service. Bad news, it’s because there’s a new warrant for my arrest.”
A related genre, dubbed “the immediate suspect,” shows monkeys committing crimes, with titles or characters declaring, “This is the immediate suspect.” One of these clips, with 5.3 million views, shows a monkey fleeing a crashed car while a police cruiser pulls up.
Some videos are even less subtle. One shows a Black man in a ski mask walking through a looted storefront carrying a stolen TV: “I’m just doing my leisurely shopping for today.” In another, which has over 2.1 million views, a white police officer dangles a watermelon, another racist stereotype, from a stick and says, “My numbers are down this week, a man’s gotta do what a man’s gotta do,” while a Black woman crawls toward it on all fours.
Other clips target immigrants. One shows a protester waving a Mexican flag with flames behind him, saying: “I wave the flag of the place I don’t want to be sent back to, while I destroy the place I don’t want to leave.” In another, Bigfoot sits behind the wheel of a pickup truck and gleefully drives into a crowd of protesters.
Perhaps the most shocking video is overtly antisemitic and Holocaust-denying. In this compilation, a man wearing a concentration camp uniform takes a selfie with a smoking chimney behind him, a chilling reference to the crematoria. “It’s a little smoky in here, but something smells good,” he says. In other shots, he stands in front of a barbed wire fence and boasts, “Everyone here is having a great time,” sits in cramped quarters praising the food and “rustic charm,” jokes about finding Air Jordans in a pile of victims’ shoes, and talks about getting a tattoo despite his wife’s protests. The video has been viewed more than a million times.
A new breed of hate
These clips don’t try to pass as real. Many are openly labeled as AI, but that’s beside the point. As Media Matters notes, the real danger isn’t that these videos trick people; it’s that they encourage and normalize bigotry. While fears about deepfakes and fake news are valid, the bigger, more immediate threat may be AI’s use as an amplifier for racist, violent, and extremist content.
The people who watch, comment on, and share these videos know exactly what they’re seeing. They’re not falling for fake news; they’re celebrating racism and antisemitism under the guise of edgy AI content. These clips mock, humiliate, and dehumanize vulnerable groups, and help hateful communities grow, one viral meme at a time.