Big Tech AI and the Battle for Your Digital Property

2025-07-31 · Joseph Gordon-Levitt · 5 minute read
Tags: AI, Copyright, Data Rights

The New Digital Feudalism

In a bygone era, kings held dominion over all land, while serfs toiled without ownership. The notion of a serf owning the plot they farmed was laughable. A king might have dismissed the idea, questioning how such a system of individual ownership could ever be managed. Today, we face a similar paradigm in the digital realm. Data has become the new land, and the titans of Silicon Valley are the new lords, just as reluctant to let the common person own their digital property.

This sentiment was recently echoed by former President Donald J. Trump at the "Winning the AI Race Summit" in Washington, D.C. When discussing whether tech giants should compensate the countless people whose work, skills, and talent form the backbone of their profitable AI products, Trump stated, "You just can’t do it, because it’s not doable."

A Problem of Practice, Not Technology

As a creative professional, I find that the joy of my work comes from collaborating with fellow artists. One might think I'd be opposed to using technology for creative tasks, but that's not true. I don't have a problem with AI as a tool; some of the new creative applications are genuinely inspiring. The urgent issue, however, lies with the unethical business practices of today's major AI corporations.

Generative AI cannot exist without its "training data"—the vast collection of writing, photos, videos, and other human-made content that gets crunched by algorithms. For years, AI companies have been scraping this data from the internet without seeking permission or offering compensation to the original creators whose work is essential to their technology.

Is It Inspiration or Is It Theft?

Silicon Valley justifies this mass data scraping, a position Mr. Trump appears to support, by comparing a Large Language Model (LLM) to a person reading a book for inspiration. This comparison is not only fundamentally inaccurate but also deeply anti-human. These AI models are not people, and our laws shouldn't protect their algorithmic processes in the same way we protect human ingenuity and labor.

Glimmers of Hope from Lawmakers and Courts

Thankfully, there is pushback. Republican Sen. Josh Hawley and Democratic Sen. Richard Blumenthal recently introduced the AI Accountability and Personal Data Protection Act. This bipartisan legislation would prohibit AI companies from training models on copyrighted works without consent and would empower individuals to sue over the use of their personal data. It's a powerful stand for working Americans against the tech giants.

There are also positive signs from the judiciary. A few weeks ago, a federal court ruled against a group of authors suing Meta, but the ruling came with a significant caveat. Judge Vince Chhabria noted that the case likely failed because the lawyers used the wrong legal argument. In his ruling, he wrote, "No matter how transformative LLM training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books." With more lawsuits pending, including one from major Hollywood studios, future plaintiffs will almost certainly focus on this argument of market harm.

The Threat Reaches Beyond the Creative Arts

If these unethical practices are allowed to continue, the consequences could be catastrophic for any commercial content business, including film, television, and professional journalism. The vibrant creator economy could be wiped out. While people will always create, they may no longer be able to earn a living from it. Why pay a human creator when an AI can generate similar content for virtually nothing, using that same creator's work as its foundation?

This isn't just about the future of art; it's about the future of work itself. We creatives are on the front lines, but anyone working on a computer—in marketing, finance, logistics, or design—is in the same boat. Blue-collar jobs will follow as robotics and autonomous systems advance. The AI powering a future robot plumber will be trained on data from countless human plumbers. Do those humans deserve compensation? According to Silicon Valley, the answer is no.

National Security or a Smokescreen for Profit?

People sense this looming threat. A recent poll showed that 77 percent of Americans prioritize getting AI right over getting it first. In response, Big Tech often raises the alarm about national security, claiming we must allow them to operate without restriction or risk losing the AI race to China. Mr. Trump repeated this line at his summit.

But let's be realistic. These companies are loyal to shareholders, not the American people. Furthermore, the Fifth Amendment's "Takings Clause" states that private property cannot be taken for public use without just compensation. If using our data is truly a matter of national security, then someone should pay for it. The urgency to "beat China" feels less like a patriotic duty and more like a competitive business strategy.

A Call for Genuine Support

Many of Mr. Trump's voters believed he would fight for working Americans against a powerful establishment. Today, there is no establishment more powerful than the tech giants behind AI. If he truly wants to stand for the American people, he should support the policies being built by Senators Hawley and Blumenthal to protect the public good over corporate profits. It's the right thing to do, and it is, in fact, doable.

Joseph Gordon-Levitt is an actor, filmmaker, and founder of the online community HITRECORD. He recently started publishing “Joe’s Journal” on Substack and is set to direct an upcoming thriller about AI for Rian Johnson and Ram Bergman’s T-Street.
