This piece is freely available to read. Become a paid subscriber today and help keep Mencari News financially afloat so that we can continue to pay our writers for their insight and expertise.
Today’s email is brought to you by our sponsor. Empower your podcasting vision with a suite of creative solutions at your fingertips.
Hey Glitchers, it's Friday, October 24.
The cultural backlash against AI is gaining strength. A new open letter signed by prominent figures including Steve Wozniak, Sir Richard Branson, and Geoffrey Hinton calls for a ban on the development of superintelligence.
If you have any thoughts or feedback, our inbox is open; contact us via email, and don't forget to sign up for this newsletter here if you haven't already. Encourage a friend to subscribe as well! - Miko Santos
In today’s There’s a Glitch:
Leading AI Pioneers Call for Superintelligence Ban
Australia Orders AI Chatbot Firms to Detail Child Safety Measures
Australia Launches First ASIC-Powered Sovereign AI Cloud
Deloitte Australia, Credibl Deploy AI Platform for Climate Disclosure Compliance
Truth matters. Quality journalism costs.
Your subscription to There’s a Glitch directly funds the investigative reporting our democracy needs. For less than a coffee per week, you enable our journalists to uncover stories that powerful interests would rather keep hidden. There is no corporate influence involved. No compromises. We provide honest journalism when it's most needed.
Not ready to become a paid subscriber but appreciate the newsletter? Grab us a beer or snag the exclusive ad spot at the top of next week's newsletter.
🔥Leading AI Pioneers Call for Superintelligence Ban
The Breakdown: Nearly 30,000 experts, policymakers, and public figures, including AI godfathers Geoffrey Hinton and Yoshua Bengio, have signed a statement demanding a prohibition on superintelligence development until scientific consensus confirms it can be deployed safely and the public grants explicit approval. The petition directly challenges the tech industry’s stated goal of building AI systems that surpass human cognitive ability across all tasks within a decade.
The Details:
High-profile backing: Turing Award winners Hinton, Bengio, and Stuart Russell headline the effort alongside Nobel laureates, former defense officials (including former Chairman of the Joint Chiefs of Staff Mike Mullen), and public figures from Prince Harry to Steve Wozniak
Public consensus gap: Polling finds that 64% either oppose development until safety is proven or believe superintelligence should never be built, while only 5% support the current unregulated pace; 73% demand robust regulation
Explicit concerns: Statement highlights risks ranging from economic obsolescence and loss of civil liberties to national security threats and potential extinction
Not a blanket moratorium: Signatories emphasize this targets superintelligence specifically, not AI tools that help cure diseases or solve practical problems; as Russell notes, the call is simply for “adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction”
Cross-ideological support: Signatories span political spectrum from Steve Bannon to Susan Rice, evangelical leaders to papal advisors, reflecting broad-based concern
Why It Matters: The statement crystallizes growing unease that AI development has decoupled from safety research and public input. When the field’s most cited researchers—who built modern deep learning—publicly oppose superintelligence timelines set by their former labs, it signals fundamental technical concerns beyond philosophical debate.
The 64% public opposition contradicts industry narratives about inevitable progress, while the diverse signatory list suggests superintelligence concerns transcend traditional political divisions. This positions safety requirements not as innovation barriers but as prerequisite infrastructure—similar to nuclear regulation or drug approval processes—before unleashing systems that could permanently alter human agency and control.
TOGETHER WITH MENCARI
The News Powerful People Don't Want You to Read
Between meetings, emails, and deadlines, who has time to stay properly informed?
The Evening Post solves this. Five minutes each morning gives you everything you need about Australian politics, technology, and finance.
No endless scrolling. No clickbait. Just the essential insights that impact your work and life.
Smart professionals choose efficiency. Join hundreds of subscribers.
🤖 Australia Orders AI Chatbot Firms to Detail Child Safety Measures
The Breakdown: Australia’s internet regulator has issued formal orders requiring four AI chatbot companies to provide detailed explanations of their child protection protocols against sexual content and self-harm material—marking the first regulatory enforcement action targeting conversational AI safety infrastructure. The directive comes as regulators globally grapple with implementing guardrails for technologies that achieved mass adoption before comprehensive safety frameworks existed.
The Details:
Regulatory scope: Four unnamed AI chatbot companies received official orders to document child protection measures, enforcement mechanisms, and content filtering systems
Identified risks: Focus targets exposure to sexual material and self-harm content through conversational interactions—vulnerabilities amplified by realistic dialogue capabilities that can establish rapport with young users
Enforcement context: Australia’s eSafety Commissioner leveraging existing internet safety authority to extend oversight into AI services, establishing regulatory precedent for chatbot platforms
Industry pressure point: Action follows broader concerns about AI services launching with minimal safety testing—realistic conversational abilities created rapid user adoption while outpacing development of age-appropriate content controls
Regulatory momentum: Part of expanded enforcement posture as authorities worldwide seek mechanisms to apply existing child safety frameworks to AI systems
Why It Matters: Australia’s move establishes a regulatory template other jurisdictions will likely adopt, shifting AI safety from voluntary guidelines to mandatory compliance frameworks. The focus on child protection provides regulators with clear legal authority to intervene—existing laws around minor protection are well-established, giving authorities solid footing to demand technical safeguards without navigating complex questions about general AI governance.
For AI companies, this signals the end of self-regulation on safety issues involving vulnerable populations. The enforcement action implicitly acknowledges that conversational AI’s ability to build rapport makes it qualitatively different from traditional content platforms, requiring specialized protections beyond standard content moderation. Companies must now architect child safety measures as core infrastructure rather than post-deployment additions.
🚨 Australia Launches First ASIC-Powered Sovereign AI Cloud
The Breakdown: SouthernCrossAI and SambaNova have deployed Australia’s first sovereign AI cloud infrastructure using purpose-built ASIC chips, delivering onshore processing that keeps all data, models, and computation within national borders while targeting 10× energy efficiency gains over conventional GPU architectures. The first production node is slated to be operational by December 2025.
The Details:
Architecture specifications: Platform built on SambaNova SN40L ASICs delivering approximately 3× higher inference throughput versus leading GPU clouds with significantly reduced power consumption per inference operation
Sovereignty framework: All model execution, data pipelines, and system logs processed exclusively within Australian jurisdiction, meeting national privacy and regulatory compliance requirements without offshore data transfer
Model compatibility: Supports commercial and open-source foundation models, including models from OpenAI, Meta's Llama family, Google's Gemma, and DeepSeek, plus customer-owned fine-tuned models hosted domestically with full IP control
Deployment timeline: Initial New South Wales node operational Q4 2025, South Australia node first half 2026, with nationwide coverage planned across all states through 2026 using existing datacenter infrastructure
Energy efficiency targets: Company projects up to 10× lower energy cost per inference compared to traditional GPU deployments, enhanced through renewable energy partnerships
Why It Matters: The SCX platform addresses a critical gap in enterprise AI infrastructure: organizations requiring both advanced model capabilities and strict data sovereignty previously faced performance compromises or regulatory constraints. Purpose-built ASIC architecture challenges GPU dominance in AI inference workloads, demonstrating viable alternatives to NVIDIA’s ecosystem while delivering measurable efficiency improvements.
For Australian government agencies and enterprises operating under data residency requirements, the platform eliminates the sovereignty-versus-capability tradeoff that has constrained AI adoption in sensitive sectors including healthcare, defense, and public administration. The onshore deployment model positions Australia among the limited jurisdictions operating sovereign AI infrastructure at scale, potentially establishing a template for other nations pursuing technological independence. Energy efficiency claims, if validated in production, could materially impact AI infrastructure economics as inference costs increasingly dominate enterprise AI budgets.
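To make the efficiency claim concrete, here is a minimal back-of-envelope sketch of how a 10× reduction in energy per inference translates into daily electricity cost. All figures (joules per inference, electricity price, inference volume) are hypothetical placeholders for illustration, not SambaNova or SCX numbers:

```python
# Back-of-envelope energy-cost comparison for AI inference.
# All constants below are hypothetical assumptions, not vendor figures.

GPU_JOULES_PER_INFERENCE = 2.0      # assumed GPU energy per inference (J)
ASIC_EFFICIENCY_FACTOR = 10         # the claimed "up to 10x" improvement
PRICE_PER_KWH_AUD = 0.30            # assumed electricity price (AUD/kWh)
INFERENCES_PER_DAY = 100_000_000    # assumed fleet-wide daily volume

def daily_energy_cost(joules_per_inference: float) -> float:
    """Daily electricity cost in AUD for the assumed inference volume."""
    kwh = joules_per_inference * INFERENCES_PER_DAY / 3_600_000  # J -> kWh
    return kwh * PRICE_PER_KWH_AUD

gpu_cost = daily_energy_cost(GPU_JOULES_PER_INFERENCE)
asic_cost = daily_energy_cost(GPU_JOULES_PER_INFERENCE / ASIC_EFFICIENCY_FACTOR)

print(f"GPU:  ${gpu_cost:,.2f}/day")
print(f"ASIC: ${asic_cost:,.2f}/day (saving ${gpu_cost - asic_cost:,.2f}/day)")
```

Because cost scales linearly with energy per inference, any validated 10× efficiency gain cuts the inference electricity bill by the same factor, which is why the claim matters as inference volumes grow.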
👉 If you're looking to get up to speed with podcasting in South-east Asia and around the globe in just five minutes, this is the perfect place for you! Just click here.
🚀Deloitte Australia, Credibl Deploy AI Platform for Climate Disclosure Compliance
The Breakdown: Deloitte Australia and Credibl have launched SustainNext Climate Reporting, an AI-powered platform automating climate disclosure workflows ahead of Australia’s mandatory AASB reporting framework taking effect July 2026. The collaboration integrates Credibl’s sustainability data infrastructure with Deloitte’s assurance methodology to address the resource-intensive compliance burden facing thousands of Australian companies preparing for alignment with IFRS and ISSB standards.
The Details:
Technical architecture: Platform combines Credibl’s AI-driven data aggregation stack with Deloitte’s audit framework to automate manual data collection processes across value chains, generating audit-ready reports aligned with AASB/IFRS requirements
Compliance scope: AASB framework mandates disclosure of climate-related financial risks, governance structures, risk management processes, strategy, and metrics beyond basic carbon accounting—requiring unified data infrastructure for global supply chain visibility
Regulatory timeline: July 2026 implementation deadline affects thousands of Australian entities, with standards mirroring IFRS Sustainability Disclosure Standards and ISSB framework requirements for international alignment
Value proposition: Platform targets reduction in compliance costs and reporting cycle time through automated data integration, risk pattern identification via AI analysis, and benchmark functionality against international frameworks
Market positioning: Tool positioned as strategic infrastructure rather than pure compliance software—embedding sustainability disclosure into core financial systems and decision-making processes
Why It Matters: The platform reflects the operational reality that climate disclosure has evolved from voluntary ESG reporting to auditable financial data requiring the same rigor as traditional accounting. By automating data aggregation and applying AI to pattern recognition across disparate sustainability metrics, the solution addresses a fundamental bottleneck: most organizations lack technical infrastructure to produce audit-grade climate disclosures at the speed and consistency investors now demand.
Australia’s July 2026 deadline creates immediate market pressure that positions early movers to establish competitive advantages through superior data quality and disclosure efficiency. The Deloitte partnership provides assurance credibility essential for investor confidence, while Credibl’s AI stack delivers operational scalability.
This convergence of technical automation and audit integrity mirrors broader industry shifts toward treating sustainability data as material financial information subject to the same verification standards as earnings reports. Success metrics will center on whether the platform demonstrably reduces compliance costs while improving data accuracy—establishing climate reporting as strategic capability rather than regulatory overhead.
Anthropic Claude AI Chatbot Gets Memory Feature for Pro and Max Subscribers in 2025. Anthropic is adding an automatic memory feature to its Claude AI chatbot that allows paid subscribers to save and manage conversation history across multiple chats, bringing it up to par with competitors ChatGPT and Gemini.
Amazon Unveils AI-Powered Smart Delivery Glasses for Drivers in 2025. Amazon is developing AI-powered smart glasses that give delivery drivers hands-free, heads-up navigation and package scanning capabilities through computer vision technology, eliminating the need to constantly check their phones during deliveries.
Microsoft Copilot Adds Group Chat, Real Talk Mode, and Enhanced Memory Features in 2025. Microsoft is rolling out major updates to Copilot AI including group chat functionality for up to 32 users, a “real talk” conversational mode with adaptive personality, enhanced memory capabilities with user controls, and improved health query grounding with trusted medical sources.
Google Willow Quantum Chip Achieves First Verifiable Quantum Advantage, 13,000x Faster Than Supercomputers. Google’s Willow quantum chip has achieved the first-ever verifiable quantum advantage by running the Quantum Echoes algorithm 13,000 times faster than classical supercomputers to compute molecular structures, marking a major step toward practical applications in drug discovery and materials science.
Goldman Sachs Economists Say “Jobless Growth” Is New Normal as Gen Z Faces Hiring Crisis. Goldman Sachs economists predict “jobless growth” will persist as the new normal, with AI-driven productivity gains fueling GDP expansion while job creation remains stagnant or negative outside healthcare, leaving Gen Z workers and recent graduates struggling to find employment in a “low-hire, low-fire” labor market.
Any news tip?
A journalist's credibility rests on their sources and tips. Contact our editor via encrypted Proton Mail, X direct message, LinkedIn, or email. You can also message him securely on Signal using his username, Miko Santos.
🛑 More on Kangaroofern Media Lab
Read our last AU Politics News: Nationals Grapple With Leadership Fracture as Joyce Sits Outside Party
Read our last West PH newsletter: Philippine Officials Warn of Disinformation Campaigns Targeting Military, West Philippine Sea Policy
Read our last Podcast newsletter: College Students Reject AI Podcasts, Citing Lack of Human Connection
Readers of There’s a Glitch receive journalism free from financial and political influence.
We set our news agenda, which is always based on facts rather than billionaire ownership or political pressure. Despite the financial challenges that our industry faces, we have decided to keep our reporting open to the public because we believe that everyone has the right to know the truth about the events that shape their world.
Thanks to the support of our readers, we can continue to provide free reporting. If you can, please choose to support Kangaroofern Media Lab Pty Ltd.
It only takes a minute to help us investigate fearlessly and expose lies and wrongdoing to hold power accountable. Thanks!