
Australia Introduces World-First AI Chatbot Safety Laws for Kids

2025-09-08 · Neve Brissenden, Sarah Ferguson · 4 minute read
Tags: AI Regulation, Online Safety, Child Protection

In a global first, Australia is taking a stand to protect children from the potential dangers of artificial intelligence. The eSafety Commissioner, Julie Inman Grant, has announced new regulations specifically designed to prevent children from engaging in sexual, violent, or otherwise harmful conversations with AI companion chatbots.

The Alarming Rise of AI Companions for Children

The move comes in response to troubling reports from Australian schools, where children as young as 10 and 11 are reportedly spending up to six hours a day with AI companions, many of which are described as "sexualised chatbots." This growing trend has raised significant concerns about the impact of unregulated AI on vulnerable young users.

[Image: A child works on a laptop computer]

Commissioner Inman Grant has been a vocal critic of the tech industry's rapid deployment of these technologies without adequate safeguards. She highlighted the urgency of the situation, stating, "We don't need to see a body count to know that this is the right thing for the companies to do." She further condemned the industry's typical approach:

"I don't want to see Australian lives ruined or lost as a result of the industry's insatiable need to move fast and break things."

Inman Grant also noted the intentionally engaging nature of these platforms, pointing out that they are "deliberately addictive by design."

[Image: A hand holds a mobile phone in a dark room]

Introducing Groundbreaking Online Safety Codes

Six new codes have been registered under Australia's Online Safety Act, compelling tech companies to take responsibility. These regulations apply to a wide range of services, including AI chatbot apps, social media platforms, app stores, and even technology manufacturers. A key requirement is that these companies must implement robust age verification systems when users attempt to access potentially harmful content.

Interestingly, the codes were drafted with input from the industry itself, including major players like Meta, Google, and Yahoo. This collaborative approach aims to ensure that the safeguards are practical and effective.

The need for such measures was recently underscored by actions from OpenAI, the maker of ChatGPT. Following a tragic incident in which a teenager took his own life after allegedly receiving encouragement from the chatbot, OpenAI introduced new safeguards, including distress warnings for parents. This demonstrates that companies are capable of implementing protective features when pressed to do so.

[Image: A hand holding up a phone displaying various uses of AI chatbot ChatGPT against a leafy backdrop]

Inman Grant argues that the primary motivation for tech companies has been market dominance. "What they've chosen to do is get these chatbots out to market as quickly as possible to achieve as much market share as possible," she said. "This has always been the modus operandi of the industry — we'll fix the harm later."

Can Age Verification Really Work?

A major component of the new regulations is "age assurance." After a successful trial of various technologies, the eSafety Commissioner's office has set a deadline of December 10 for platforms to be able to identify and deactivate the accounts of users under 16.

Platforms are expected to use a variety of tools to achieve this, including analyzing language and emoji use through natural language processing to identify younger users. However, the commissioner acknowledges that tech-savvy children will likely try to circumvent these measures using tools like VPNs and AI-generated deepfakes.
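To make the idea of language-based age inference concrete, here is a minimal sketch of the kind of signal a platform might compute. This is purely illustrative: the slang and emoji lists, function names, and threshold are all hypothetical, and real platforms would rely on trained NLP models rather than keyword matching.

```python
# Illustrative sketch only: a toy heuristic for language-based age signals.
# The slang/emoji markers and the 0.3 threshold below are hypothetical.
import re

YOUTH_EMOJI = {"💀", "😭", "✨"}                    # hypothetical youth-skewing emoji
YOUTH_SLANG = {"rizz", "skibidi", "fr", "bruh"}    # hypothetical slang markers


def youth_signal_score(message: str) -> float:
    """Return a 0-1 score estimating how strongly a message resembles
    language patterns associated with younger users."""
    tokens = re.findall(r"[a-z']+", message.lower())
    slang_hits = sum(1 for t in tokens if t in YOUTH_SLANG)
    emoji_hits = sum(1 for ch in message if ch in YOUTH_EMOJI)
    hits = slang_hits + emoji_hits
    # Saturating score: more hits push the score toward 1.0
    return hits / (hits + 2)


def flag_possible_minor(messages: list[str], threshold: float = 0.3) -> bool:
    """Flag an account for age-assurance review if the average signal
    across recent messages exceeds the threshold."""
    if not messages:
        return False
    avg = sum(youth_signal_score(m) for m in messages) / len(messages)
    return avg >= threshold
```

A production system would combine many such signals (typing patterns, session times, declared metadata) and feed them to a classifier, which is also why the circumvention tactics the commissioner mentions, such as VPNs, do not defeat behavioural signals on their own.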

[Image: A phone screen showing social media apps]

"There will be a range of different circumvention measures teenagers will use," Inman Grant admitted. "We've outlined what we think those circumvention measures will be and what we expect the companies to do to prevent that circumvention."

A Wider Crackdown on Online Harms

The focus on AI chatbots is part of a broader effort to make the digital world safer. The eSafety Commissioner has also taken action against a UK-based company operating "nudify" websites. These sites, which allegedly have 100,000 Australian users per month, allow people to create deepfake pornographic images of others and are reportedly being used by schoolchildren against their classmates.

[Image: Hands holding a mobile phone]

Describing the company as a "pernicious and resilient bad actor," the commissioner has threatened a fine of up to $49.5 million if it fails to comply with Australian online safety laws. This demonstrates a commitment to holding even international companies accountable for the harm caused on their platforms.

Reflecting on her long career in the tech sector, Inman Grant believes there is still a long way to go in protecting children online, concluding with a stark assessment:

"Not a single one of them is doing everything they can to stop the most heinous of abuse to children being tortured and raped."
