
OpenAI Faces Pressure To Pull Sora Amid Safety Alarms

2025-11-13 · Unknown · 4 min read
Artificial Intelligence
Tech Ethics
OpenAI

The rapid advancement of AI technology is once again pushing boundaries, but this time, it's our shared sense of reality that's at stake. With powerful video generation tools like OpenAI's Sora, the line between what is real and what is fabricated is becoming dangerously blurred.

Sora videos have quickly populated social media feeds on platforms like TikTok, Instagram, and X. Many are designed for harmless amusement, featuring everything from historical figures in absurd situations to seemingly ordinary but slightly off-kilter scenarios, like fake doorbell camera footage of a grandma fighting off an alligator with a broom. While entertaining, these clips represent a technology with a much darker potential.

Public Citizen Demands Action

A growing number of experts and advocacy groups are sounding the alarm over the risks associated with text-to-video AI. The proliferation of realistic deepfakes and nonconsensual imagery poses a significant threat. In response, the nonprofit watchdog group Public Citizen has formally called on OpenAI to withdraw Sora from public access.

In a letter addressed to OpenAI and CEO Sam Altman, the group condemned the app's release as part of a "consistent and dangerous pattern of OpenAI rushing to market" without adequate safety measures. Public Citizen argues that Sora demonstrates a "reckless disregard" for user safety, personal likeness rights, and the stability of democratic processes. The letter has also been forwarded to the US Congress.

The Threat to Democracy and Privacy

JB Branch, a tech policy advocate at Public Citizen and the author of the letter, highlighted the potential damage to democracy as a primary concern. "I think we’re entering a world in which people can’t really trust what they see," Branch stated. He warned that in politics, the first fabricated image or video to be released is often what sticks in the public's memory, making it a powerful tool for disinformation.

Beyond politics, Branch pointed to serious privacy issues that disproportionately impact vulnerable individuals. While OpenAI has policies against nudity, harmful content still slips through. For instance, the news outlet 404 Media recently uncovered a disturbing trend of Sora-generated videos depicting women being strangled, illustrating how the technology can be used to create fetishized and violent content.

A Pattern of Releasing First and Apologizing Later

Critics argue that OpenAI's response to safety concerns has been reactive rather than proactive. The company only cracked down on AI-generated content depicting public figures like Martin Luther King Jr. and actor Bryan Cranston after significant outcry from King's estate, from Cranston, and from the SAG-AFTRA union.

"That’s all well and good if you’re famous," Branch remarked, adding that OpenAI seems willing to respond only to the outrage of a select few. "They’re willing to release something and apologise afterwards. But a lot of these issues are design choices that they can make before releasing."

This approach is not new for the company, which has also faced serious complaints about its flagship product, ChatGPT. Seven new lawsuits filed in the US claim the chatbot led users into suicide and delusions, and allege that OpenAI knowingly released a psychologically manipulative product despite internal warnings.

Global Pushback and Industry Concerns

The pressure on OpenAI is not just domestic. The company has also faced complaints from international creative industries. A Japanese trade association representing famed animation houses like Studio Ghibli and video game developers like Square Enix has voiced strong concerns over copyright and the unauthorized use of their iconic characters.

In response, OpenAI has stated it is engaging with rights holders and implementing guardrails to prevent well-known characters from being generated without consent. However, for groups like Public Citizen, these reactive measures are not enough. They insist that OpenAI is "putting the pedal to the floor without regard for harms," prioritizing product rollout and user addiction over the fundamental safety of its users and society.
