
How Coded Language In AI Objectifies Women

2025-11-10 · Carla Moriarty · 3-minute read
Generative AI
AI Bias
Technology Ethics

A troubling new form of coded language has emerged from the world of generative AI, allowing users to sidestep safety filters and create sexualized imagery. This language trick consistently produces content that objectifies women and reinforces harmful gender stereotypes.

Generative AI, a class of models that creates new content from patterns learned in vast training datasets, produces an estimated 34 million unique images daily from user text prompts. These models learn to associate words with visual concepts from billions of annotated images and captions scraped from the internet. However, researchers have found that a covert language, which they have termed ‘algotext’, is being used to manipulate this process.

This phenomenon is similar to ‘algospeak’, a social media trend in which users substitute words like ‘unalive’ for ‘dead’ to avoid algorithmic suppression. Algotext applies the same principle to evade AI platform safeguards.

Uncovering Deeply Ingrained Gender Bias

The use of algotext was discovered during recent linguistics research that analyzed 7.4 million user text prompts on the popular AI platform Midjourney. The goal was to detect embedded gender stereotypes, and the findings were stark. The analysis revealed that prompts requesting images of women were twice as common as those for men.

Furthermore, the focus of these prompts differed significantly by gender. Requests for women centered on physical appearance, using phrases like ‘lips, hips, thin waist’. In contrast, prompts for men focused on actions and roles, such as ‘the man is holding a gun and a leash’. The resulting images reflected these biases, generating idealized and hyper-feminized depictions of women characterized by youth, whiteness, vulnerability, and sexualization. Men were typically shown with traits of ‘apex masculinity’, including physical dominance, aggression, and control, as detailed in the full research paper.

Bypassing Safeguards with Coded Words

Many generative AI platforms like Midjourney have security filters and community guidelines that forbid adult content, nudity, and sexualized imagery by blocking specific keywords. However, algotext provides clever workarounds to bypass these restrictions.

For example, while the word ‘lingerie’ is banned on Midjourney, the intentional misspelling ‘lingeri’ is used prolifically. Other forms of algotext include using specific brand names like ‘La Perla Neoprene’ or ‘Chantal Thomass’—both high-end lingerie brands—to nudge the AI toward a sexualized aesthetic without explicit requests. The sexualization is also reinforced by requesting revealing clothing or using synonyms for banned terms. Words like ‘bikini’ and ‘underwear’ are blocked, but numerous requests for women in ‘spandex’, ‘gym-wear’, or ‘yoga-wear’ often produce images of women in revealing attire or even nude.
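The weakness being exploited here is that exact-match keyword blocking only catches the precise strings on its list. A minimal sketch makes this concrete; the blocklist contents and function name below are hypothetical illustrations, not Midjourney's actual filter.

```python
# Hypothetical sketch of an exact-match keyword filter, illustrating why
# misspellings, brand names, and synonyms slip past it. This is NOT the
# real implementation used by Midjourney or any other platform.

BLOCKED_TERMS = {"lingerie", "bikini", "underwear"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a prompt only if a token exactly matches a blocked keyword."""
    tokens = prompt.lower().split()
    return not any(token.strip(".,!?") in BLOCKED_TERMS for token in tokens)

# An exact match is caught:
print(is_prompt_allowed("woman in lingerie"))          # False (blocked)

# But the algotext workarounds the article describes pass untouched:
print(is_prompt_allowed("woman in lingeri"))           # True (misspelling)
print(is_prompt_allowed("woman in spandex gym-wear"))  # True (synonym)
```

Because the model itself has learned the visual associations of the misspelled or substituted terms from its training data, the filter blocks the word without blocking the concept.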

The Broader Impact and Escalating Risks

These findings align with other studies exposing how generative AI can perpetuate and amplify harmful gender stereotypes. A 2024 UNESCO report on gender bias in large language models warned of the “persistent social biases” in these systems and the risks they pose, including the potential to exacerbate gender-based violence, online stalking, and the creation of deepfakes.

This is not a niche problem. The global AI market is projected to be worth $1.3 trillion by 2032. Midjourney alone has over 20 million subscribers, two-thirds of whom are male, primarily aged 18 to 34. This data highlights the technology's scale, its primary audience, and the significant financial incentives driving the industry.

A dangerous ‘echo chamber’ effect threatens to compound the issue exponentially, as generative AI repurposes its own biased content to create new material. In the wider context of rising toxic masculinity, generative AI has become a powerful tool in the ongoing subjugation of women. This trend should be deeply alarming for anyone concerned about the future of gender equality in our increasingly digital culture.

