Apple services and Google's YouTube are not tracking the number of user reports they receive about child sexual abuse occurring on their platforms and cannot say how long it takes them to respond to such reports, Australia's internet watchdog revealed Wednesday.
The findings emerged in the latest transparency report from eSafety Commissioner Julie Inman Grant, which showed tech giants including Apple, Google, Meta and Microsoft continue to leave significant gaps in their efforts to combat sexual crimes against children despite years of regulatory pressure.
"When left to their own devices, these companies aren't prioritizing the protection of children and are seemingly turning a blind eye to crimes occurring on their services," Inman Grant said in a statement. "No other consumer-facing industry would be given the license to operate by enabling such heinous crimes against children on their premises, or services."
The report represents the latest salvo in Australia's escalating battle with global technology companies over child safety, coming just days after the federal government decided to include YouTube in its world-first social media ban for teenagers under 16.
eSafety's transparency report shows minimal progress has been made by some of the most well-resourced companies in the world to tackle online child sexual abuse, three years after the regulator first used its powers to expose child safety concerns across major platforms.
In July 2024, eSafety issued legally enforceable periodic transparency notices under Australia's Online Safety Act to eight companies: Apple, Google, Meta, Microsoft, Discord, WhatsApp, Snap and Skype. The notices require each company to report every six months for two years about how they are tackling child sexual abuse material, livestreamed child abuse, online grooming, sexual extortion and AI-generated child sexual abuse material.
Major Safety Gaps Persist
The report identified widespread failures across multiple platforms to implement basic child protection measures that have been available for years.
None of the eight companies used tools to detect child sexual exploitation or abuse livestreaming on all of their services, according to the findings. eSafety previously reported similar gaps for Apple's FaceTime, Discord's livestreams and voice chats, Microsoft Teams, and Skype.
Apple, Discord, Google and Microsoft failed to use hash matching technology across all parts of their services to detect known child sexual exploitation and abuse material. "Hash matching is a form of digital fingerprinting that allows for copies of previously identified child sexual exploitation and abuse material to be detected at very high levels of accuracy," the report states, describing it as a long-standing, privacy-preserving technology.
Apple, Google and WhatsApp did not block URL links to known child abuse material on any part of their services, while Discord only scanned for such links when they appeared in user reports.
The platforms also showed significant gaps in proactive detection capabilities. Apple, Google, Microsoft, Snap and Skype did not use tools to proactively detect new child abuse material, while Apple, Discord, Google, Microsoft, Skype, WhatsApp and Snap failed to deploy grooming detection tools across all parts of their services.
Missing Basic Data
Perhaps most concerning to regulators, Apple and Google's YouTube could not provide basic metrics about their child safety operations.
"In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many Trust & Safety personnel Apple and Google have on-staff," Inman Grant said.
The failure to track such fundamental data points raised questions about the companies' commitment to child protection efforts.
"It shows that when left to their own devices, these companies aren't prioritizing the protection of children and are seemingly turning a blind eye to crimes occurring on their services," the commissioner said.
Some Progress Noted
Despite the widespread shortcomings, the report acknowledged some improvements since previous assessments in 2022 and 2023.
Discord, Microsoft and WhatsApp generally increased their use of hash-matching tools to detect known child abuse material. Apple, Discord, Snap and WhatsApp all expanded the number of sources from which they obtain hash lists for detection purposes.
Discord and Snap began using language analysis tools to detect grooming, while Discord also commenced using such tools to detect sexual extortion. Microsoft and Snap started using tools to detect new child abuse material on Xbox and Snapchat respectively.
Meta and Discord also expanded their use of detection tools compared to previous years.
"While we welcome these improvements, more can and should be done," Inman Grant said. "eSafety will continue to shine a light on this issue, highlighting shortcomings and also improvements, to raise safety standards across the industry and protect the world's children including through mandatory industry Codes and Standards."
Company Responses
Google has previously stated that abuse material has no place on its platforms and that it uses a range of industry-standard techniques to identify and remove such material. The company has maintained its anti-abuse measures include hash-matching technology and artificial intelligence.
Meta, which owns Facebook, Instagram and Threads with more than three billion users worldwide, has said it prohibits graphic videos and child abuse content.
Apple and the other companies named in the report did not immediately respond to the latest findings.
Regulatory Escalation
The transparency report comes amid Australia's broader crackdown on social media platforms and their impact on young users. Last week, the federal government reversed its planned exemption for YouTube from the country's social media ban for children under 16, following advice from the eSafety Commissioner.
The decision marked a significant policy shift, with YouTube joining platforms like TikTok, Instagram, Facebook and Snapchat in being banned for Australian teenagers under the proposed legislation.
Prime Minister Anthony Albanese vowed not to be "intimidated" by Google after the tech giant threatened legal action over the YouTube ban decision.
The eSafety Commissioner's office, established to protect internet users, has emerged as one of the world's most aggressive tech regulators, using Australia's Online Safety Act to compel global platforms to disclose their child protection practices.
The periodic transparency notices represent an escalation from previous voluntary reporting mechanisms, with companies now legally required to provide detailed information about their safety measures every six months.
Technical Failures Highlighted
The report detailed specific technical shortcomings that increase risks for children across major platforms.
Companies failed to implement livestream monitoring despite child abuse increasingly moving to real-time video platforms. The absence of proactive detection tools means platforms rely primarily on user reports to identify new abuse material, creating delays that can allow harmful content to spread.
The failure to block known child abuse URLs is particularly concerning given that such links often circulate across multiple platforms, the report noted. Without automatic blocking, users can easily share links to abuse material between services.
Several platforms also lacked tools to detect grooming behavior, where predators build relationships with potential victims through seemingly innocent conversations that gradually become sexual.
Sexual extortion, where criminals threaten to share intimate images unless victims provide money or additional material, was also poorly detected across most services.
Industry Impact
The findings have implications beyond Australia, as the platforms operate globally and face increasing scrutiny from regulators worldwide. The European Union has implemented similar transparency requirements under its Digital Services Act, while the United Kingdom is implementing its own Online Safety Act.
Technology companies have argued that child protection measures must balance safety with privacy rights and that overly aggressive detection systems could lead to false positives affecting legitimate users.
However, child safety advocates have called for mandatory implementation of available technologies, arguing that companies have the technical capability to better protect children but lack sufficient incentive to invest in robust safety systems.
The report's findings suggest that despite years of public pressure and regulatory attention, major platforms continue to operate with significant child safety gaps.
Global Context
Online child sexual exploitation has grown dramatically in recent years, with law enforcement agencies reporting unprecedented increases in abuse material and victims. The COVID-19 pandemic accelerated the trend as children spent more time online while traditional monitoring systems were disrupted.
Australian Federal Police reported a 122% increase in child exploitation investigations between 2019 and 2023, while international organizations have documented similar surges globally.
The persistence of safety gaps on major platforms is particularly concerning given the scale of their user bases. YouTube alone has more than 2.7 billion monthly active users, while Meta's family of apps reaches more than 3.9 billion people monthly.
Enforcement Mechanisms
Australia's Online Safety Act provides eSafety with significant enforcement powers, including the ability to issue civil penalty notices of up to 700,000 Australian dollars ($455,000) for individuals and 14 million Australian dollars ($9.1 million) for corporations.
The regulator can also seek court orders requiring companies to comply with transparency notices or implement specific safety measures.
"We need to keep the pressure on the tech industry to live up to their responsibility to protect society's most vulnerable members from the most egregious forms of harm and that's what these periodic notices are designed to encourage," Inman Grant said.
Looking Ahead
The next transparency report in this series will be published in early 2026, giving companies six months to demonstrate meaningful progress on child safety measures.
eSafety has indicated it will continue using its transparency powers to monitor industry improvements and may consider additional enforcement actions if companies fail to address identified gaps.
The regulator noted that given Skype's consumer service was retired on May 5, 2025, the upcoming report will be the last to include information about Microsoft's video calling platform.
The ongoing transparency reporting represents a significant test of whether regulatory pressure can drive meaningful improvements in child safety across global technology platforms, with implications for similar efforts worldwide.
For companies that have built their business models around user engagement and data collection, the child safety requirements represent a fundamental challenge to prioritize protection over growth metrics.
"This group of eight companies are required to report to me every six months, and in that time, I hope and expect to see some meaningful progress in making their services safer for children," Inman Grant said.
The commissioner's stark assessment reflects growing frustration among regulators globally that voluntary industry initiatives have proven insufficient to address the scale and severity of online child exploitation.
With Australia leading international efforts to regulate social media platforms and enforce child safety standards, the transparency reports serve as a critical test case for whether government intervention can succeed where industry self-regulation has failed.
Counselling and Support Services
1800 Respect, National counselling helpline: 1800 737 732
Bravehearts, counselling and support for survivors of child sexual abuse: 1800 272 831
Child Wise, counselling provider: 1800 991 099
Lifeline, 24-hour crisis support and suicide prevention: 13 11 14
Care Leavers Australia Network: 1800 008 774
PartnerSPEAK, peer support for non-offending partners: (03) 9018 7872