Australia’s online safety regulator has warned that major technology companies, including Apple, Google, Meta, and Microsoft, continue to fall short in detecting and preventing child sexual exploitation and abuse (CSEA) on their platforms, despite some recent improvements.
The findings come from a new transparency report published by Australia’s eSafety Commissioner, which assessed how eight major tech companies are addressing online threats such as AI-generated child sexual abuse material, livestreamed abuse, grooming, and sexual extortion. The report follows legally enforceable transparency notices issued in July 2024 under Australia’s Online Safety Act, requiring companies to disclose their safety practices.
The report identified critical gaps in safety protections, particularly in video calling and livestreaming services. According to the regulator, services including Apple’s FaceTime, Microsoft Teams, Snapchat, WhatsApp, Discord, and Google Meet lack proactive tools to detect abuse as it happens live. The report also found that Apple does not use automated systems to detect newly created abuse material on any of its services, while other companies applied such detection tools inconsistently.
eSafety Commissioner Julie Inman Grant said the lack of progress was concerning, especially given the technological capabilities of major platforms. She emphasized that companies have both the resources and responsibility to implement more effective safeguards, particularly as live video and messaging features create new risks for exploitation.
The report also highlighted shortcomings in the use of language analysis tools designed to identify sexual extortion and grooming behaviors. Despite receiving intelligence and indicators from eSafety investigators, several companies had not deployed these tools widely across their services.
However, the regulator noted incremental improvements. Companies have expanded their detection of previously identified abuse material, improved response times to user reports, and introduced features designed to warn users about sensitive content. For example, Snap significantly reduced the time abusive material remains visible, while Apple and Google implemented safety features that blur explicit images for younger users.
The eight companies are required to submit additional transparency reports in March and August 2026. The regulator warned that failure to comply with the reporting requirements could result in fines of up to A$825,000 per day.
Need Help?
If you’re concerned or have questions about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.