
How Racist AI Videos Revive Old Stereotypes

2025-11-07 · Andre Gee · 5 minute read
Artificial Intelligence
Racism
Propaganda

The Rise of AI-Generated Propaganda

Just before a recent pause on federal Supplemental Nutrition Assistance Program (SNAP) benefits took effect, a wave of disturbing videos began to surface on TikTok. One clip featured a Black woman surrounded by crying babies, yelling about her EBT benefits being cut. Another showed a Black woman trying to buy a corn dog with food stamps and berating a cashier. An account with the bio “Exposing the Food Stampers and section8ers” posted a video of a Black woman eating crab legs and taunting taxpayers. These clips, along with another depicting a protest chant for food stamps and Section 8, all had one thing in common: they were entirely fabricated by artificial intelligence.

The pause in SNAP benefits affects over 41 million Americans, leaving them without crucial access to food. While the government has allocated a contingency fund, it is significantly less than the standard monthly amount, pushing millions toward a period of hunger. In this volatile environment, some are using AI to generate clips that mock this hardship, pushing a narrative that women receiving federal aid, particularly Black women, are lazy and undeserving. Scholars explain that these videos play on decades-old racist stereotypes used to vilify welfare recipients.

While a close look at the videos reveals the telltale glitches of generative AI, many have been shared as real, and some news outlets have fallen for the deception. Both Fox News and Newsmax were criticized for reporting on the videos as authentic. Ironically, Newsmax reported on Fox News’s mistake after airing an AI clip of its own. That same weekend, Newsmax anchor Rob Schmitt falsely claimed that people use SNAP benefits for non-food items like getting their nails and hair done; in fact, SNAP can only be used for eligible food purchases.

Reviving the Racist Welfare Queen Trope

Simone Browne, a professor of Black studies at the University of Texas at Austin, describes this phenomenon as “AI slopaganda” and a form of “mimetic warfare.” She argues that conservatives use these AI creations to distract from the real-world consequences of the government shutdown. The goal is to “get us away from looking at the real political stakes” by employing the racist “welfare queen” trope. “[The clips] place our attention elsewhere as opposed to looking at things like food insecurity and the crisis around that,” Browne states.

The term “welfare queen” was popularized in the 1970s by then-presidential candidate Ronald Reagan to justify his plans to cut federal aid programs. He frequently told a story about a Chicago woman who was supposedly defrauding the welfare system on a massive scale. This woman is believed to be Linda Taylor, a scammer who did steal from government programs. A 2013 Slate feature shared audio from a Reagan radio ad in which he made unverified, exaggerated claims about her crimes.

Despite the ambiguity around Taylor’s race, Reagan successfully used this urban myth to stoke fears that Black women were exploiting the welfare system. This narrative helped him win the 1980 election, after which he cut government aid by $140 billion. He effectively used Black women as the face of lazy freeloading, a tactic that has been replicated by conservatives ever since.

Technology Reflecting Societal Bias

Janice Gassam Asare, an organizational psychologist, points out that the “welfare queen” stereotype endures even though more white people receive public assistance than any other racial group. Artificial intelligence, often presented as a neutral and futuristic tool, is instead perpetuating these old, harmful pathologies. Asare emphasizes that technology is not objective. “Technology is only mirroring the embedded biases that those who programmed and developed the technology hold,” she says. This is true for prejudiced search algorithms just as it is for TV shows that unconsciously replicate societal biases.

Previously, those wanting to spread the “welfare queen” ideology might have used false statistics or anecdotes. Now, they have powerful digital tools to visualize their prejudices and deceive the public. “What I think is really disturbing about this is that the images that I’m seeing... are indistinguishable from real Black people on a screen,” Browne says. “And sometimes the only thing that can let many viewers know this is not a real person, is some glitch in the background.”

Blurring the Lines Between Real and Fake

The confusion is already palpable. A real news clip from News 12, posted on Instagram, showed New Yorkers expressing their distress over the SNAP shutdown. One segment featured a Black woman crying about the dire consequences of the benefit pause. Because the post’s format resembled the AI-generated propaganda, many viewers assumed it was fake. The comments were filled with cruel, fatphobic insults and baseless accusations, with one person writing, “I thought this AI smdh.” This incident signals a future where distinguishing between genuine pleas for help and malicious fakes becomes nearly impossible.

An Unregulated Future of Digital Deception

Browne suggests that “mass noncompliance” could be a way to push back against generative AI, but the problem is poised to grow. OpenAI’s video platform Sora has taken minor steps by banning the creation of deepfakes of Martin Luther King Jr., but there is little to no broader regulation on racially insensitive AI content. While Sora is a paid service, numerous free and low-cost alternatives are available, making these tools widely accessible.

“Once it becomes more widely available, we will see an explosion of AI slop and more of these racist AI-generated videos,” Asare predicts. She clarifies that she is not against technology but is wary of its potential for harm in human hands. “I think technology can and will cause more harm than good if we don’t put a lid on things now... If we don’t address it now, before we know it, it will be too late and we will look up and ask ourselves, ‘How did we get here?’”
