The Global Push To Regulate AI Creations
The Butterfly Dream and The Modern AI Prompt
Over two thousand years ago, the sage Zhuang Zhou dreamt he was a butterfly, and upon waking, he was unsure if he was a man who had dreamt of being a butterfly or a butterfly now dreaming he was a man. This ancient philosophical puzzle resonates today as advances in generative artificial intelligence (AI) blur the lines between reality and artifice.
Modern AI can create incredibly realistic synthetic videos from simple text inputs, known as prompts. This technology has sparked a fascinating trend called “prompt theory,” which explores the idea of AI characters who are self-aware of their digital origins. This concept pushes a provocative thought into the mainstream: what if we, too, are the product of a prompt?
While this remains a philosophical debate for most, it presents a tangible challenge for regulators. Their role is to establish a legal framework that can clearly distinguish between what is real and what is artificial, and to govern accordingly in an age of synthetic media.
Watermarks A Historical Solution for a Modern Problem
Historically, societies have used markers of origin to separate the genuine from the counterfeit. A prime example is the watermark, a technique developed in 13th-century Italian paper mills. By embedding a faint design into paper pulp, papermakers created a built-in verification system. This early form of traceability was later adopted to authenticate banknotes and prevent forgery.
Today, digital watermarking is a leading method for tracking AI-generated content. It involves embedding an invisible signal directly into the media itself to signify its artificial origin. A robust watermark is designed to survive modifications or to break if the content is tampered with, thus revealing the alteration.
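The core idea of an embedded watermark can be illustrated with a deliberately simplified sketch: hiding a short origin tag in the least-significant bits of raw pixel bytes. This is a hypothetical toy scheme, not any production watermarking system (real schemes are far more robust to compression and editing), but it shows how a signal can live inside the media itself while remaining visually imperceptible.

```python
# Toy illustration of an invisible watermark: hide a short origin
# tag in the least-significant bits (LSBs) of pixel bytes.
# Hypothetical example only, not a production scheme.

TAG = b"AI"  # origin marker to embed

def embed(pixels: bytearray, tag: bytes = TAG) -> bytearray:
    """Overwrite the LSB of the first len(tag)*8 bytes with the tag's bits."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # change each byte by at most 1
    return out

def extract(pixels: bytearray, n_bytes: int = len(TAG)) -> bytes:
    """Read the LSBs back and reassemble the embedded tag."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

image = bytearray(range(64))   # stand-in for raw pixel data
marked = embed(image)
recovered = extract(marked)
max_change = max(abs(a - b) for a, b in zip(image, marked))
```

Here `recovered` equals the embedded tag while no pixel byte changes by more than 1, which is why such marks are invisible to the eye. It also hints at the fragility/robustness trade-off the text mentions: edits that rewrite those bytes destroy the tag, revealing tampering.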
Watermarking vs Labeling Two Approaches to Traceability
In contrast to watermarking, labeling relies on external markers like metadata or on-screen disclaimers. While labels are more obvious to the average person, they can be easily removed or stripped from the content, making them a less secure solution for ensuring traceability.
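The weakness of external labels can be made concrete with a minimal sketch. Assuming a hypothetical scheme where the label is metadata carried alongside the media, stripping it requires no knowledge of the content at all and leaves the media byte-identical:

```python
# Hypothetical sketch: a label is external metadata attached
# alongside the content, not embedded within it.

content = b"raw synthetic media bytes"
labeled = {
    "media": content,
    "metadata": {"ai_generated": True, "tool": "example-model"},  # the label
}

# Removing the label is trivial: just drop the metadata field.
stripped = {"media": labeled["media"]}
```

After stripping, `stripped["media"]` is identical to the original content and carries no trace of its artificial origin, whereas removing an embedded watermark would require altering the media itself.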
Recognizing the need for intervention, researchers have been developing the technology behind both watermarks and labels for years. This effort has gained significant momentum recently due to concerns over the misuse of AI, such as academic dishonesty facilitated by large language models. However, technical solutions are not infallible: determined actors can bypass them, and they require constant updates to stay ahead of new circumvention techniques.
A Fractured Global Approach to AI Regulation
The most significant obstacle to effective AI traceability is the lack of a coordinated international policy. The world's three major legal systems—the United States, the European Union, and China—are each charting their own course, creating a fragmented regulatory landscape.
In the U.S., a 2023 executive order encouraged the development of standards for watermarking but did not mandate them for private companies. This order was later rescinded, and the subsequent America’s AI Action Plan makes no explicit mention of traceability obligations, aside from a brief note on combating synthetic media in the legal system. Instead, the U.S. has leaned on voluntary industry partnerships, such as the Coalition for Content Provenance and Authenticity, to develop standards in coordination with the National Institute of Standards and Technology.
The EU, with its landmark AI Act, has taken a more direct approach. The act mandates clear disclosure for synthetic content and the labeling of deepfakes. However, its full impact will only be clear after a complex series of transitional periods for implementation.
China is moving aggressively to formalize the traceability of AI content under the concept of “deep synthesis.” Following initial 2023 measures, the Cyberspace Administration of China has released new rules and a binding national standard for labeling, set to take effect in September 2025.
The Call for a Unified Global Framework
Geopolitical tensions currently hinder the creation of a shared, global solution. A 2024 report from the UN's High-level Advisory Body on AI was seen as a missed opportunity, as it only made a passing reference to labeling and omitted watermarking entirely. Despite this, the report’s call for international dialogue offers a foundation for future cooperation.
Global problems, such as verifying the authenticity of the content we consume daily, demand global solutions. A reasonable path forward is to support the development of a shared reference framework built on international collaboration. By combining the strengths of unilateral regulations and voluntary standards, we can better define the boundaries of synthetic creations—and perhaps, in the process, reflect more deeply on our own.