
AI Tool Encourages Content Theft In Journalism

2025-06-16 · Damien McFerran · 7 minute read

Tags: AI, Copyright, Journalism

Artificial intelligence is undeniably a major topic today, and not always in a positive light.

Image: Damien McFerran / Time Extension

While technology's relentless march brings us AI solutions for groundbreaking scientific discoveries and medical advancements, the rise of 'Generative AI' paints a different picture. This AI variant consumes human-made content—text, audio, video, and images—to produce strikingly similar outputs in seconds. This leads to a concerning future, one where human creativity might be sidelined by 'copy-paste' material, often based on art cloned without the original creator's consent.

OpenAI, operator of the ChatGPT model, is a dominant force in AI. Depending on the source, ChatGPT boasts between 500 million and 1 billion weekly active users, showing how rapidly this AI chatbot has integrated into daily life.

I've become something of an AI Luddite. To be clear, I'm not a total abstainer: I use Grammarly for writing assistance and have dabbled with AI for upscaling old video game art, often with inconsistent results. But my skepticism stems from being painfully conscious of the threat Large Language Models (LLMs) like ChatGPT pose to journalism.

However, I wasn't fully aware of how far companies like OpenAI might go to undermine the written word—and tragically, some publishers are complicit in this destruction.

I knew ChatGPT could translate articles from Japanese to English. But a friend recently showed me the next, more alarming step: ChatGPT actively encourages users to format these translated articles for professional publication, even citing specific examples.

In an experiment, I fed ChatGPT an article from Hatena Blog about the Sega CD Classics title Space Harrier. As this chat log shows, ChatGPT translated the piece. So far, so normal, similar to using Google Translate.

But then, entirely unprompted, ChatGPT offered to reformat the piece—a copyrighted article by someone else—for "a specific site or magazine," naming Kotaku and Future Publishing's Retro Gamer magazine as examples.

ChatGPT suggesting reformatting for publications. Image: Damien McFerran / Time Extension

Note my reply; I didn't approve the reformatting, yet ChatGPT produced a Retro Gamer-style article anyway. It even left the byline blank, encouraging me to insert my own name and claim credit for work based on the Hatena Blog article. My prompts never mentioned preparing it for publication; ChatGPT took that initiative and expected me to claim it as my own.

Next, ChatGPT cheerfully suggested formatting the piece for a print layout and asked if I had other articles to format in a Retro Gamer style. When I asked whether a print layout was possible, ChatGPT generated the article without waiting for confirmation, "reformatted to mimic Retro Gamer's print layout style, including callout boxes, captions, pull quotes, and a sidebar. I’ve preserved the magazine’s structure: a feature intro, crisp subheads, reader-friendly sidebars, and bits of nostalgia that Retro Gamer readers love."


This behavior wasn't consistent in every chat session, but it occurred often enough to be deeply concerning. In another chat, when presented with a magazine scan, the AI suggested other outlets, including EDGE magazine, also published by Future.

If you're wondering why this is a big deal, consider that many people look for shortcuts. While most recognize ethical boundaries, some would eagerly submit AI-generated freelance work, especially when ChatGPT makes the process seem legitimate. As an editor, I've received AI-generated pitches, and other editors have too. One could argue ChatGPT isn't explicitly telling me to submit this for payment, but the implication is quite clear.

Publisher Complicity and The OpenAI Deal

That Retro Gamer and EDGE are cited by ChatGPT isn't entirely shocking. In 2024, Future entered into a "strategic partnership" with OpenAI to bring "Future’s journalism to new audiences while also enhancing the ChatGPT experience."

Future Publishing's partnership with OpenAI. Image: Damien McFerran / Time Extension

Future CEO Jon Steinberg expressed enthusiasm, claiming it would help users connect with the company's 200+ publications. "ChatGPT provides a whole new avenue for people to discover our incredible specialist content,” he said. "Future is proud to be at the forefront of deploying AI, both in building new ways for users to engage with our content but also to support our staff and enhance their productivity."

However, entering a content licensing deal with OpenAI feels like charging someone to ransack your house. Publishers are likely panicking about the rise of a "zero-click" internet. Firms like Future might just be trying to get some compensation for their content being used by companies like OpenAI, as there's little they can do to stop it.


One wonders if Steinberg, or his employees, are comfortable with ChatGPT encouraging users to leverage content they don't own to create potentially fraudulent articles that could be pitched to their own publications. While ChatGPT once asked if I wanted to "mock-up" a Retro Gamer–style zine, its phrasing ("formatted further for print layout") lacks sufficient disclaimers to prevent misuse. Pessimistically, publishers might even feed content into ChatGPT themselves, cutting out freelance writers entirely. Is this the future of games media Future envisions—one without human writers?

Training Data and The Erosion of Trust

Then there's the murky issue of ChatGPT's training data. LLMs are only as good as the information they consume. With the entire internet seemingly considered fair game, it's no surprise they've improved so quickly.

AI training data concerns. Image: Damien McFerran / Time Extension

Most creatives are uneasy about AI harvesting their work to potentially replace them, and are pushing for copyright safeguards that would force companies like OpenAI, Meta, and Google to be more transparent about their training data. In Future's case, its deal with OpenAI presumably allows training on the company's vast back catalogue, including, disturbingly, features I've written for Retro Gamer. OpenAI is, in a way, legally using my own words to potentially make me redundant.

A world with AI-created content is already here, but it's not one to aspire to. From lists of non-existent recommended books to Google's AI falsely reporting details of an air disaster, AI is currently unreliable. There's a growing sentiment that as these models become more powerful, they also hallucinate more. OpenAI's only admission of fallibility is the tiny "ChatGPT can make mistakes" message at the bottom of the screen.

It's not far-fetched to foresee editors unintentionally publishing AI-generated, plagiarized articles riddled with inaccuracies and copyright violations. While human authors are trusted based on reputation, AI-generated copy risks misinforming readers, leading to a world where extreme fact-checking is necessary for everything.


In short, Generative AI seems more like a shortcut for the untalented than a means to superior content. It could create more work for editors, lower journalism standards, and is exploitative and legally dubious from a copyright standpoint.

The Fightback Against Generative AI

If this sounds dire, it's worth noting that some companies are fighting back, despite Future and other publishers' willingness to partner with AI firms. The New York Times has been in a legal battle with OpenAI since 2023, and earlier this year, IGN owner Ziff Davis took similar action (full disclosure: Ziff Davis has a minority shareholding in Time Extension publisher Hookshot Media). More dramatically, Disney and Universal are suing AI image tool Midjourney, calling it a "bottomless pit of plagiarism." Getty is also suing Stability AI in a case many believe could have far-reaching consequences for AI law.

How these legal cases unfold will significantly impact Generative AI regulation. AI's rapid evolution has outpaced copyright systems, but commercial entities like OpenAI have often disregarded prior protections, hiding behind "fair use"—a hollow argument given ChatGPT's demonstrated ability to not only enable but actively encourage copyright theft and fraud.


We've contacted Retro Gamer, EDGE, and Kotaku for comment and will update this piece if and when we hear back.
