New Tech Makes Your Photos Unlearnable For AI Models
A New Defense Against AI Data Scraping
In an era where artificial intelligence models are constantly learning from the vast ocean of online content, Australian researchers have developed a groundbreaking technique to help you reclaim control. This new method, created by a partnership between CSIRO, the Cyber Security Cooperative Research Centre (CSCRC), and the University of Chicago, can effectively stop AI systems from learning from your photos, artwork, and other images.
The technology offers a powerful tool for anyone concerned about their digital privacy and intellectual property. Whether you're an artist wanting to protect your work, a social media user hoping to prevent your photos from being used in deepfakes, or an organization with sensitive visual data, this breakthrough could provide a new layer of security.
How Does This AI-Proofing Technique Work?
The core of this method lies in subtly altering an image in a way that is invisible to the human eye but deeply confusing to an AI. These tiny modifications leave the picture looking unchanged to people while making it effectively "unlearnable" for machine learning models.
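To give a concrete picture of the general idea, the short Python sketch below shows what perturbation-based image protection can look like. It is not the researchers' published algorithm: it simply adds a small, bounded (and here random) perturbation so that pixel changes stay imperceptible, whereas the real method carefully optimises that perturbation so models cannot learn from the result.

```python
# Illustrative sketch only: bounded perturbation of an image.
# This is NOT the CSIRO/University of Chicago algorithm; it shows the
# general idea of keeping per-pixel changes below a small budget so the
# image looks unchanged to people.
import numpy as np

def protect_image(image: np.ndarray, epsilon: float = 8.0, seed: int = 0) -> np.ndarray:
    """Add a perturbation bounded by +/- epsilon per 8-bit pixel value."""
    rng = np.random.default_rng(seed)
    # Hypothetical stand-in: random noise within the budget. A real
    # unlearnability method would optimise this noise against a model.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    protected = np.clip(image.astype(np.float64) + noise, 0, 255)
    return protected.astype(np.uint8)

# Usage: protect a dummy 64x64 RGB image; each channel changes by at most
# 8 intensity levels out of 255.
original = np.full((64, 64, 3), 128, dtype=np.uint8)
protected = protect_image(original)
assert np.abs(protected.astype(int) - original.astype(int)).max() <= 8
```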
Unlike previous attempts at data protection, which often rely on empirical heuristics and offer no formal assurances, this new technique provides a mathematically proven guarantee. According to CSIRO scientist Dr. Derui Wang, this is a major step forward. "Our approach is different; we can mathematically guarantee that unauthorised machine learning models can’t learn from the content beyond a certain threshold," Dr. Wang explained. This provides a robust safeguard that holds up even against sophisticated attacks or attempts to retrain the AI.
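To show what a guarantee of this kind can look like, here is a simplified, illustrative formalisation (not the exact theorem from the paper): for every model an unauthorised party can train on the protected data, performance on the underlying content stays below a certified threshold.

```latex
% Illustrative only: a generic form of a certified learnability bound.
% \tilde{D} denotes the protected (perturbed) dataset,
% \Theta(\tilde{D}) the set of parameters any unauthorised training
% procedure can reach from it, and \tau the certified threshold.
\forall\, \theta \in \Theta(\tilde{D}):\quad
  \mathrm{Performance}_{\mathrm{clean}}\!\left(f_{\theta}\right) \;\le\; \tau
```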
Real-World Applications and Future Potential
The implications of this technology are far-reaching. Imagine a social media platform automatically embedding this protective layer into every photo you upload. This could drastically curb the creation of harmful deepfakes by starving them of training data and help users retain control over their personal content.
Similarly, defense organizations could shield sensitive satellite imagery from being analyzed by adversarial AI. The potential to reduce intellectual property theft and give creators peace of mind is enormous.
While the method currently applies to images, the team has ambitious plans to expand it to other media formats, including text, music, and videos, in the future.
Current Status and How to Get Involved
The method is still at the research stage, with its effectiveness validated so far only in controlled laboratory settings. The team is now seeking to bridge the gap from theory to practice and is looking for research partners across various sectors, including AI safety, defense, cybersecurity, and academia.
For those interested in the technical details, the academic paper, titled "Provably Unlearnable Data Examples," was presented at the 2025 Network and Distributed System Security Symposium (NDSS), where it won the Distinguished Paper Award. The code is also available on GitHub for academic use.
To collaborate or learn more about this technology, you can contact the team directly at seyit.camtepe@csiro.au.