How AI Harassment Silences Women In India
The Chilling Effect on Personal Expression
For Gaatha Sarvaiya, a young law graduate in Mumbai, building a public profile for her career feels like a risk. The rise of AI-powered deepfakes means any image she posts online could be twisted into something grotesque and violating. “The thought immediately pops in that, ‘OK, maybe it’s not safe. Maybe people can take our pictures and just do stuff with them,’” she says.
This sentiment is what Rohini Lakshané, a researcher on gender rights and digital policy, calls a tangible “chilling effect.” Lakshané herself avoids posting personal photos online. “The fact that they can be so easily misused makes me extra cautious,” she explains.
A New Wave of AI-Powered Harassment
India has quickly become a major hub for AI development, now ranking as OpenAI’s second-largest market, with the technology adopted across many professions. However, this rapid adoption has a dark side. A new report from the Rati Foundation and Tattle, a misinformation research group, reveals that AI has become a powerful new weapon for harassing women.
The report, drawing on data from a national helpline for victims of online abuse, states: “It has become evident in the last three years that a vast majority of AI-generated content is used to target women and gender minorities.” It found a significant increase in the use of AI tools to create manipulated nudes of women, or culturally stigmatizing images such as those depicting public displays of affection.
The Indian singer Asha Bhosle, left, and journalist Rana Ayyub, who have been affected by deepfake manipulation on social media. Photograph: Getty
From Public Figures to Everyday Users
High-profile cases have drawn national attention to the issue. The likeness and voice of legendary Bollywood singer Asha Bhosle were cloned by AI, while journalist Rana Ayyub was targeted with deepfake sexualized images during a doxing campaign. These incidents have sparked a conversation about legal rights, but the wider impact on ordinary women is less discussed.
“The consequence of facing online harassment is actually silencing yourself or becoming less active online,” says Tarunima Prabhakar, co-founder of Tattle. Her organization's research identified a pervasive sense of “fatigue” among women, which leads them to “completely recede from these online spaces.”
This fear is palpable for Sarvaiya, who has since made her Instagram account private. Yet, she worries it’s not enough, noting that women can be photographed in public and have those images later appear online. “Friends of friends are getting blackmailed – literally, off the internet,” she says.
The Disturbing Rise of Nudify Apps
The Rati Foundation’s report highlights how tools like “nudify” apps, which digitally remove clothing from images, have made extreme forms of abuse alarmingly common. One case involved a woman who applied for a loan online. When she refused to meet the lender’s extortion demands, the photo she submitted was altered with a nudify app, placed on a pornographic image, and circulated on WhatsApp with her phone number.
She received a “barrage of sexually explicit calls and messages” and told the helpline she felt “shamed and socially marked.”
A fake video ostensibly showing Rahul Gandhi, the Indian National Congress leader, and India’s finance minister, Nirmala Sitharaman, promoting a financial scheme. Photograph: DAU Secretariat
A Battle Against Legal Loopholes and Platform Inaction
Globally, deepfakes exist in a legal grey area, and India is no exception, with no specific laws to address this unique form of harm. While existing harassment laws can be applied, Sarvaiya notes that the legal process is “very long” and filled with red tape.
Much of the responsibility falls on the platforms where this content spreads, including YouTube, X, and Meta’s Instagram and WhatsApp. However, a report by Equality Now describes the process of getting them to remove abusive material as “opaque, resource-intensive, inconsistent and often ineffective.”
While companies like Apple and Meta have taken some steps against nudify apps, the Rati Foundation found their responses are often “delayed and inadequate.” Victims are frequently ignored, and even when an account is removed, the abusive content often reappears elsewhere—a phenomenon called “content recidivism.”
“One of the abiding characteristics of AI-generated abuse is its tendency to multiply,” the foundation concludes, stating that a solution “will require far greater transparency and data access from platforms themselves.”