New Revenge Porn Law Sparks Free Speech Debate
Privacy and digital rights advocates are raising concerns about a new federal law designed to combat revenge porn and AI-generated deepfakes.
The Take It Down Act: A Double-Edged Sword
The recently enacted Take It Down Act criminalizes the publication of nonconsensual explicit images, whether real or AI-generated, and requires platforms to remove such content within 48 hours of a victim's request or face liability. Although hailed by many as a significant victory for victims' rights, some experts warn that its ambiguous wording, lenient verification standards, and tight compliance deadline could lead to overenforcement, censorship of legitimate content, and increased surveillance.
India McKinney, Director of Federal Affairs at the Electronic Frontier Foundation, a digital rights group, told TechCrunch, "Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored."
Verification Loopholes And Abuse Potential
Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). Under the law, takedown requests must come from victims or their designated representatives and require only a physical or electronic signature; no photo ID or other proof of identity is needed. That approach is meant to lower barriers for victims, but it also opens the door to abuse, as the sketch below illustrates.
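As a rough illustration, the intake record a platform must accept could be as minimal as the following sketch. The field names and structure are hypothetical, since the statute specifies requirements rather than a schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical intake record reflecting the statute's minimal requirements.
# Field names are illustrative; the law specifies requirements, not a schema.
@dataclass
class TakedownRequest:
    content_url: str          # where the allegedly nonconsensual image lives
    requester_name: str
    signature: str            # a physical or electronic signature suffices
    is_designated_rep: bool   # victims may act through a representative
    received_at: datetime     # starts the 48-hour removal clock
```

Nothing in a record like this proves the requester is actually the person depicted, which is the gap critics highlight.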
McKinney expressed concerns, stating, "I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it's gonna be consensual porn."
Senator Marsha Blackburn, a co-sponsor of the Take It Down Act, also backed the Kids Online Safety Act, which would hold platforms responsible for protecting children from harmful online content. Blackburn has said she believes content about transgender people is harmful to children. In a similar vein, the Heritage Foundation, the conservative think tank behind Project 2025, has stated that "keeping trans content away from children is protecting kids."
McKinney further noted that because platforms face liability for failing to remove an image within the 48-hour window, "the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it's another type of protected speech, or if it's even relevant to the person who's making the request."
Platforms Grapple With Compliance Demands
Both Snapchat and Meta have voiced support for the law, but neither company told TechCrunch how it verifies whether the person requesting a takedown is actually the victim.
Mastodon, a decentralized platform that hosts its own flagship server, told TechCrunch it would lean toward removal if verifying the victim proved too difficult.
Decentralized platforms such as Mastodon, Bluesky, and Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown requirement. These networks rely on independently operated servers, often run by nonprofits or individuals. The law allows the FTC to treat any platform that fails to reasonably comply with takedown demands as committing an "unfair or deceptive act or practice," even if the host is not a commercial entity.
The Cyber Civil Rights Initiative, a nonprofit dedicated to combating revenge porn, said in an official statement: "This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented actions to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological basis, as opposed to a principled one."
Proactive Monitoring And Its Unseen Risks
McKinney anticipates that platforms will begin moderating content prior to its distribution to reduce the volume of problematic posts requiring future takedowns.
Platforms are already using AI to monitor for harmful content.
Kevin Guo, CEO and co-founder of Hive, an AI content detection startup, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Hive's customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.
Guo informed TechCrunch, "We were actually one of the tech companies that endorsed that bill. It'll help solve some pretty important problems and compel these platforms to adopt solutions more proactively."
Hive operates on a software-as-a-service model, so the startup does not dictate how platforms use its product to flag or remove content. But Guo noted that many clients insert Hive's API at the point of upload to monitor content before it reaches the community, as sketched below.
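As a concrete sketch of that upload-stage pattern, a platform might gate publication on a classifier verdict roughly as follows. The endpoint, response schema, and threshold here are illustrative placeholders, not Hive's actual API.

```python
import requests

# Minimal sketch of an upload-stage moderation hook. The endpoint, response
# schema, and threshold are hypothetical placeholders, not Hive's real API.
MODERATION_ENDPOINT = "https://moderation.example.com/v1/classify"
API_KEY = "YOUR_API_KEY"
BLOCK_THRESHOLD = 0.9  # assumed policy threshold; platforms tune their own

def should_block_upload(image_bytes: bytes) -> bool:
    """Classify an image before publication; True means hold or reject it."""
    resp = requests.post(
        MODERATION_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"media": image_bytes},
        timeout=10,
    )
    resp.raise_for_status()
    scores = resp.json()  # e.g. {"ncii": 0.97, "csam": 0.01}
    return any(score >= BLOCK_THRESHOLD for score in scores.values())
```

The design choice is that flagged content never reaches the community in the first place, which reduces the volume of after-the-fact takedown requests.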
A spokesperson for Reddit told TechCrunch that the platform uses "sophisticated internal tools, processes, and teams to address and remove" NCII. Reddit also partners with the nonprofit SWGfL to deploy its StopNCII tool, which scans live traffic against a database of known NCII and removes identified matches. Reddit did not say how it ensures that the person requesting a takedown is the actual victim.
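In spirit, this kind of hash-list matching works roughly like the sketch below. Note that production systems such as StopNCII rely on perceptual hashes (e.g., PDQ), so resized or re-encoded copies still match; the cryptographic hash used here only catches byte-identical files and is for illustration only.

```python
import hashlib

# Illustrative hash-list matching. Production systems such as StopNCII use
# perceptual hashes (e.g., PDQ) so altered copies still match; the
# cryptographic hash below only catches byte-identical files.
known_ncii_hashes: set[str] = set()  # in practice, synced from a shared database

def register_known_ncii(image_bytes: bytes) -> None:
    """Add a reported image's fingerprint to the blocklist."""
    known_ncii_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def matches_known_ncii(image_bytes: bytes) -> bool:
    """Check an upload against the database of known NCII fingerprints."""
    return hashlib.sha256(image_bytes).hexdigest() in known_ncii_hashes
```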
McKinney warns that this kind of monitoring could eventually extend to encrypted messages. Although the law primarily targets public or semi-public dissemination, it also requires platforms to "remove and make reasonable efforts to prevent the reupload" of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, including content in encrypted environments. The law includes no carve-out for end-to-end encrypted messaging services such as WhatsApp, Signal, or iMessage.
Meta, Signal, and Apple did not respond to TechCrunch's inquiries regarding their plans for encrypted messaging under the new law.
Free Speech Alarms In A Polarized Climate
On March 4, President Trump, in a joint address to Congress, praised the Take It Down Act and said he looked forward to signing it into law.
He added, "And I'm going to use that bill for myself, too, if you don't mind. There's nobody who gets treated worse than I do online."
While the remark drew laughter from the audience, not everyone took it as a joke. Trump has a history of suppressing or retaliating against speech he deems unfavorable, such as labeling major media outlets as enemies of the people, barring The Associated Press from the Oval Office despite a court order, and pulling funding from NPR and PBS.
On Thursday, the Trump administration barred Harvard University from enrolling foreign students, escalating a conflict that began after Harvard refused to comply with Trump's demands to change its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university's tax-exempt status.
McKinney concluded, "At a time when we're already seeing school boards try to ban books and we're seeing certain politicians be very explicitly about the types of content they don't want people to ever see, whether it's critical race theory or abortion information or information about climate change...it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale."