
Australia Fights AI Crime With Digital Poison

2025-11-10 · Australian Federal Police · 4 minute read
Cybersecurity
Artificial Intelligence
Law Enforcement

The Australian Federal Police (AFP) and Monash University have joined forces to combat the rising tide of cybercrime, turning the tables on criminals by developing a novel form of digital poison.

A New Alliance Against Cybercrime

This collaboration, operating under the AI for Law Enforcement and Community Safety (AiLECS) Lab, is pioneering a new disruption tool designed to thwart criminals who use AI. The technology aims to stop the production of AI-generated child abuse material, extremist propaganda, and malicious deepfake images and videos.

Introducing Silverer: The Digital Poison

The tool, currently in its prototype stage and named ‘Silverer’, employs a technique known as ‘data poisoning’: making subtle alterations to data that make it significantly harder for AI programs to produce, manipulate, or misuse images and videos. Development has been underway for 12 months, led by AiLECS researcher and PhD candidate Elizabeth Perry.

Ms. Perry explained the name is a nod to the silver used in mirrors. "In this case, it’s like slipping silver behind the glass, so when someone tries to look through it, they just end up with a completely useless reflection," she said.

How Data Poisoning Disrupts AI Models

Artificial intelligence and machine learning tools rely on vast amounts of online data to generate content. By poisoning this source data, AI models are tricked into creating inaccurate, skewed, or corrupted results. This not only disrupts the creation of malicious content but also makes it easier to identify a doctored image or video.

"Before a person uploads images on social media or the internet, they can modify them using Silverer," Ms. Perry added. "This will alter the pixels to trick AI models and the resulting generations will be very low-quality, covered in blurry patterns, or completely unrecognisable. Silverer modifies the image by adding a subtle pattern... which tricks the AI into learning to reproduce the pattern, rather than generate images of the victim.”
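The AFP has not published Silverer's algorithm, but the idea Ms. Perry describes, overlaying a faint, regular pattern on an image before upload so that a model trained on it tends to reproduce the pattern rather than the content, can be sketched in a few lines. This is a purely illustrative toy (`poison_image`, `strength`, and `period` are invented names and knobs, not Silverer's real design):

```python
import math

def poison_image(pixels, strength=4.0, period=8):
    """Overlay a faint sinusoidal grid on an image, represented as a
    list of rows of [r, g, b] pixel values (0-255).

    A model trained on many images carrying the same low-amplitude
    pattern tends to learn the pattern itself, degrading generations.
    `strength` and `period` are illustrative knobs only.
    """
    poisoned = []
    for y, row in enumerate(pixels):
        new_row = []
        for x, px in enumerate(row):
            # Low-amplitude perturbation, near-invisible to the eye.
            delta = strength * math.sin(2 * math.pi * x / period) \
                             * math.sin(2 * math.pi * y / period)
            # Clamp each channel back into the valid 0-255 range.
            new_row.append([max(0, min(255, round(c + delta))) for c in px])
        poisoned.append(new_row)
    return poisoned

# Example: poison a flat grey 16x16 RGB image.
img = [[[128, 128, 128] for _ in range(16)] for _ in range(16)]
out = poison_image(img)
```

Each pixel shifts by at most a few intensity levels, so the image looks unchanged to a person, while the repeating grid gives a training pipeline a consistent signal to latch onto. Real tools in this space use far more sophisticated, model-aware perturbations.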

A New Tactic for Law Enforcement

AFP Commander Rob Nelson stated that while data-poisoning technologies are still emerging, they show significant promise for law enforcement. "Where we see strong applications is in the misuse of AI technology for malicious purposes,” he said. "By poisoning the data, we are actually protecting it from being generated into malicious content."

He compared the strategy to placing speed bumps on a road. "We don’t anticipate any single method will be capable of stopping the malicious use or re-creation of data, however, what we are doing is similar to placing speed bumps on an illegal drag racing strip. We are building hurdles to make it difficult for people to misuse these technologies.”

Addressing the Surge in AI-Generated Crime

The AFP has noted a disturbing increase in AI-generated child abuse material. This new technology comes in response to a wave of criminal activity, including several high-profile cases. As part of a global operation, two Australian men were among 25 people arrested for their alleged involvement in producing and distributing AI-generated child abuse material. Other recent charges include a Sydney man in October 2025, a NSW South Coast man in August 2025, a Tasmanian man jailed in March 2024, and a Melbourne man sentenced in July 2024, all for offences involving AI-generated explicit content.

Commander Nelson believes this tool can help investigators manage the overwhelming volume of fake material. “Data poisoning, if performed on a large scale, has the potential to slow down the rise in AI-generated malicious content... which would allow police to focus on identifying and removing real children from harm,” he explained.

Protecting Against Scams and Deepfakes

The problem extends beyond illicit content. Scammers increasingly use AI to create deepfakes of celebrities and public figures to promote fake investment opportunities. These scams, which lend false credibility to fraudulent schemes, led to Australians losing over $382 million in the 2023-2024 financial year.

Digital forensics expert and AiLECS Co-Director Associate Professor Campbell Wilson noted, “Currently, these AI-generated harmful images and videos are relatively easily created using open-source technology and there's a very low barrier to entry for people to use these algorithms.”

Empowering the Public

The ultimate goal of the ‘Silverer’ project is to develop user-friendly technology for ordinary Australians to protect their online data. “Many harmful deepfakes are generated using only a small handful of training data images. If a user can poison those images before uploading them, it makes it significantly harder for criminals to generate malicious images of that user,” Commander Nelson said.

The AFP urges the public to consider using such tools to protect images that could be manipulated. A simple dose of data poison could make it much more difficult for criminals to distort reality with artificial intelligence. The prototype is currently being considered for internal use at the AFP.
