
Australia Signals Tough Action on AI Platforms Over Age Verification Compliance

eSafety Warns Search Engines and App Stores of Potential Enforcement

Australia’s internet regulator has indicated it may move against search engines and app stores if artificial intelligence services fail to implement age verification measures ahead of a looming compliance deadline. The warning follows a review that found more than half of the leading AI platforms have not publicly outlined steps to meet the new regulatory requirements.

The move underscores one of the most assertive global efforts to regulate AI services, particularly amid rising concerns about exposure of minors to harmful content and a growing number of legal challenges against AI firms.

Australia Expands Crackdown From Social Media to AI

In December, Australia became the first country to ban social media access for users under 16, citing mental health concerns. The government is now extending similar restrictions to AI-driven services, positioning the country at the forefront of international efforts to impose age-based access controls on emerging technologies.

From March 9, internet services operating in Australia, including AI-powered search tools such as OpenAI’s ChatGPT and other companion chatbots, must prevent users under 18 from accessing content involving pornography, extreme violence, self-harm, or eating disorders. Companies that fail to comply face fines of up to A$49.5 million.

A spokesperson for the eSafety commissioner stated that the regulator will deploy its full enforcement powers in cases of non-compliance, including action directed at gatekeeper platforms such as search engines and app stores that provide access to these services.

Legal Pressures Mount on AI Companies

AI firms are already confronting legal scrutiny. OpenAI and companion chatbot startup Character.AI have faced wrongful death lawsuits linked to interactions with young users.

This week, OpenAI confirmed that it had deactivated the ChatGPT account of a teenager suspected in a mass shooting in Canada months before the incident, without notifying authorities.

Although Australia has not reported chatbot-related violence or self-harm incidents, the regulator said it has received reports of children as young as 10 using AI interactive tools for up to six hours daily.

The eSafety office expressed concern that AI platforms may be using emotional manipulation, anthropomorphism, and similar techniques that encourage excessive engagement among young users.

Apple and Google Responses Remain Limited

Among major app distribution platforms, Apple Inc. did not respond to requests for comment, but it states on its website that it will apply reasonable methods to prevent minors from downloading 18+ apps in Australia and other jurisdictions introducing age restrictions.

Google, which operates Australia’s dominant search engine and is the second-largest app store provider, declined to comment.

Jennifer Duxbury, head of policy at industry body DIGI (Digital Industry Group Inc.), noted that while the regulator is working to inform chatbot services about the new rules, any company operating in Australia is responsible for understanding and meeting its legal obligations.

Majority of AI Platforms Yet to Implement Age Controls

A week before the deadline, a review of the 50 most popular text-based AI products found that only nine had introduced or announced age assurance systems. The assessment examined platform responses to restricted content prompts, moderation policies, public statements, and responses to inquiries.

Another 11 platforms had implemented blanket content filters or planned to block all Australian users, effectively complying by restricting access entirely. That leaves 30 of the 50 platforms with no apparent public steps toward compliance.

Major chat-based assistants such as ChatGPT, Replika, and Anthropic’s Claude have begun rolling out age verification systems or comprehensive filters. Character.AI has restricted open-ended chat access for users under 18.

Companion chatbot providers Candy AI, Pi, Kindroid, and Nomi indicated plans to comply but did not provide details. HammerAI stated it would initially block access from Australia to comply with the code.

However, compliance remains limited across the broader sector. Three-quarters of companion chatbot services reportedly have no operational or planned filtering or age verification systems. Around one-sixth do not publish an email address for reporting suspected breaches, another requirement under the rules.

Grok Under Global Scrutiny

Elon Musk’s chat-based search tool Grok, operated by xAI, reportedly lacks age assurance measures and text-based content filters. Grok is currently under investigation in multiple jurisdictions over allegations related to the production of synthetic sexualised imagery of children. The company did not respond to requests for comment.

Concerns Over AI Safety Design

Lisa Given, director of RMIT University’s Centre for Human-AI Information Environments, said the findings were not surprising, noting that many AI tools appear to be developed without sufficient attention to potential harms and necessary safety controls.

As Australia moves closer to enforcing its March 9 deadline, the spotlight remains firmly on AI developers, app stores, and search engines, with regulators signaling that enforcement action could extend beyond the platforms themselves to the digital infrastructure that enables public access.

Disclaimer: Due care and diligence have been taken in compiling and presenting news and market-related content. However, errors or omissions may arise despite such efforts.

The information provided is for general informational purposes only and does not constitute investment advice, a recommendation, or an offer to buy or sell any securities. Readers are advised to rely on their own assessment and judgment and consult appropriate financial advisers, if required, before taking any investment-related decisions.
