📢 Australian Treasury Backs Light-Touch AI Regulation as Productivity Commission Opposes Strict Controls
Today’s email is brought to you by our sponsor: empower your podcasting vision with a suite of creative solutions at your fingertips.
Hey Glitchers, it's Thursday, August 7, and you might also be interested to know that Google has added a new Storybook feature to the Gemini app. The feature lets users create personalised storybooks about anything they like, complete with read-aloud narration, and it's completely free.
If you have any thoughts or feedback, our inbox is open; contact us via email, and don't forget to sign up for this newsletter here if you haven't already. Encourage a friend to subscribe as well! - Miko Santos
In today’s There’s a Glitch:
Australian Treasury Backs Light-Touch AI Regulation as Productivity Commission Opposes Strict Controls
AI Expert Disputes Australia's Hands-Off Regulation Strategy
OpenAI Releases First Open-Weight Language Models Since GPT-2
Claude Opus 4.1 Advances AI Coding Performance to New Benchmark High
Atlassian Chief Warns of AI Job Losses While Company Cuts 150 Positions
Truth matters. Quality journalism costs.
Your subscription to Mencari directly funds the investigative reporting our democracy needs. For less than a coffee per week, you enable our journalists to uncover stories that powerful interests would rather keep hidden. There is no corporate influence involved. No compromises. We provide honest journalism when it's most needed.
🔥 Australian Treasury Backs Light-Touch AI Regulation as Productivity Commission Opposes Strict Controls
The Breakdown: Australia's Treasury just backed the Productivity Commission's rejection of tough AI control laws, choosing a "middle path" that treats artificial intelligence as an economic enabler rather than a threat to regulate heavily — signaling a major policy direction as the government prepares for crucial economic reform roundtables with 75 CEOs this month.
The Details:
Regulatory stance shift: The Productivity Commission's latest draft report explicitly opposes introducing strict AI control laws, marking a departure from the heavy regulatory approaches being considered globally
Worker-centric implementation: Treasury is prioritizing skills training and workforce empowerment over restrictive controls, with dedicated ministerial focus on ensuring technological change creates "beneficiaries not victims"
Economic integration priority: AI development is being positioned as core to addressing Australia's persistent productivity challenges and capital deepening issues, with the technology viewed as essential for lifting living standards
Stakeholder engagement process: The policy direction emerges from extensive consultation including 900 submissions, 41 ministerial roundtables, and direct CEO engagement ahead of major economic reform discussions
Practical timeline: Implementation discussions are happening now, with policy briefs being distributed this week and state treasurer meetings scheduled, indicating rapid movement from consultation to action
Why It Matters: This positions Australia as potentially taking a more business-friendly approach to AI regulation compared to the EU's comprehensive AI Act or other restrictive frameworks being developed globally. By rejecting heavy-handed controls in favor of worker empowerment and gradual integration, Australia could attract AI investment and development while other jurisdictions impose stricter constraints.
The timing is particularly significant as this comes during broader economic reform efforts focused on productivity gains — suggesting AI policy isn't being developed in isolation but as part of a comprehensive economic modernization strategy. This approach could influence how other Asia-Pacific nations handle AI regulation, particularly as Australia positions itself as a technology hub.
The emphasis on making workers "beneficiaries not victims" through skills training rather than regulatory protection suggests a fundamental policy bet that adaptation and education will be more effective than restriction — a gamble that could either accelerate Australia's AI adoption or leave workers more vulnerable to displacement if the approach proves insufficient.
🤖 AI Expert Disputes Australia's Hands-Off Regulation Strategy
The Breakdown: Australia's AI regulation debate just got more complex as leading expert Givens directly challenged the government's hands-off approach, arguing that healthcare diagnostics and mental health chatbots need "really tight regulations". That creates a policy tension as the Treasury pushes for AI-driven productivity gains worth a projected $116 billion over the next decade.
The Details:
High-risk sector concerns: Healthcare applications like AI-powered cancer screening and diagnostic tools require stricter oversight due to immediate patient safety implications, despite their efficiency advantages over human analysis
Mental health chatbot warnings: AI therapy and counseling platforms lack sufficient guardrails globally, with Givens citing widespread concerns about protecting vulnerable users from inadequate AI responses
Creative industry pushback intensifying: Copyright protections are under threat as AI training data requirements clash with creative workers' livelihoods, with industries "saying absolutely not" to unrestricted use of copyrighted materials
Job displacement reality check: Translation services already seeing AI substitution with human oversight, demonstrating the practical timeline for workforce impacts across knowledge work sectors
Economic forecast skepticism: The $116 billion productivity boost projection faces scrutiny over whether freed-up human time leads to meaningful work or simply cost-cutting through job elimination
Why It Matters: This expert pushback highlights a critical fault line in Australia's AI strategy that could derail the government's productivity agenda. While Treasury talks about making workers "beneficiaries not victims," frontline AI researchers are seeing real risks that existing regulations can't address.
The healthcare angle is particularly significant because it represents where AI adoption could deliver massive economic benefits while creating the highest stakes for getting regulation wrong. Givens' concerns about diagnostic AI and mental health chatbots aren't theoretical — these applications are rolling out now while regulatory frameworks lag behind.
The creative industry copyright battle could become Australia's first major AI policy test case. If the government prioritizes AI training data access over creative worker protections, it risks undermining the "middle path" messaging and potentially facing organized resistance from cultural sectors. This tension between innovation velocity and worker protection will likely define how Australia's approach actually plays out in practice, beyond the policy rhetoric.
🚨 OpenAI Releases First Open-Weight Language Models Since GPT-2
The Breakdown: OpenAI released gpt-oss-120b and gpt-oss-20b as state-of-the-art open-weight reasoning models, achieving near-parity with proprietary o4-mini performance while running efficiently on single 80GB GPU configurations — marking the company's first open language model release since GPT-2 and establishing new benchmarks for accessible AI deployment.
The Details:
Architecture Specifications: Mixture-of-experts Transformer models with 117B/21B total parameters activating 5.1B/3.6B parameters per token, utilizing 128/32 experts with 4 active experts per layer and native 128k context support
Performance Benchmarks: gpt-oss-120b matches o4-mini on core reasoning evaluations while gpt-oss-20b exceeds o3-mini performance, with particular strength in competition mathematics (AIME), health queries (HealthBench), and agentic tool use (TauBench)
Deployment Optimization: Pre-quantized MXFP4 format enables operation in 80GB of memory for the 120b model and 16GB for the 20b variant, with reference implementations for PyTorch and Metal plus integration support from Azure, Hugging Face, vLLM, and major hardware vendors (see the loading sketch after this list)
Safety Framework: Comprehensive adversarial fine-tuning evaluation under Preparedness Framework with external expert review, demonstrating resistance to malicious fine-tuning while maintaining unsupervised chain-of-thought transparency for monitoring capabilities
Ecosystem Integration: Apache 2.0 licensing with harmony prompt format, open-sourced tokenizer (o200k_harmony), and $500,000 Red Teaming Challenge to accelerate community safety research and vulnerability discovery
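For teams weighing local deployment, here is a minimal sketch of loading the smaller model with Hugging Face transformers. The openai/gpt-oss-20b repo name, the loading options, and the memory figure in the comments are assumptions drawn from the release details above; check the official model card for the recommended setup.

```python
# Minimal sketch: running the gpt-oss 20b variant locally with Hugging Face transformers.
# The repo id and loading options are assumptions based on the release notes above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # weights ship pre-quantized (MXFP4); roughly 16 GB for the 20b variant
    device_map="auto",    # place layers on the available GPU automatically
)

# The chat template applies the harmony prompt format described in the release.
messages = [{"role": "user", "content": "Summarise mixture-of-experts routing in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```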
Why It Matters: OpenAI's open-weight release fundamentally shifts competitive dynamics in the AI landscape, providing enterprise-grade reasoning capabilities without proprietary API dependencies or usage restrictions.
The models address critical deployment constraints for organizations requiring on-premises inference, data sovereignty, or cost-predictable scaling while maintaining safety standards comparable to frontier models.
This strategic pivot toward open development accelerates AI democratization, particularly benefiting emerging markets and resource-constrained sectors previously excluded from advanced AI capabilities. The timing positions OpenAI to set technical standards and safety frameworks for the open-weight ecosystem, and its safety evaluation methodology establishes a precedent for responsible model release protocols in an increasingly competitive environment.
🚀 Claude Opus 4.1 Advances AI Coding Performance to New Benchmark High
The Breakdown: Anthropic launched Claude Opus 4.1 today, achieving 74.5% on SWE-bench Verified coding benchmarks while advancing real-world programming and reasoning capabilities — positioning the upgrade as a substantial leap in AI-assisted software development precision.
The Details:
Benchmark Performance: Delivers 74.5% accuracy on SWE-bench Verified, representing state-of-the-art coding performance, with a one-standard-deviation improvement over Opus 4 on junior developer evaluations
Code Precision: Excels at pinpointing exact corrections within large codebases without introducing bugs or unnecessary modifications, with GitHub noting particular gains in multi-file code refactoring tasks
Enhanced Capabilities: Improves in-depth research and data analysis skills, especially detail tracking and agentic search, using hybrid reasoning with extended thinking of up to 64K tokens
Deployment Access: Available immediately across paid Claude subscriptions, Claude Code platform, API endpoints, Amazon Bedrock, and Google Cloud Vertex AI with identical pricing structure to Opus 4
Developer Integration: API access through the claude-opus-4-1-20250805 model string, supported by comprehensive system card documentation and updated developer resources (a minimal API sketch follows this list)
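For developers wiring this in, here is a minimal sketch of a request to Claude Opus 4.1 through the Anthropic Python SDK, using the model string above. The extended-thinking budget and token limits are illustrative values only, and the client assumes ANTHROPIC_API_KEY is set in the environment.

```python
# Minimal sketch: calling Claude Opus 4.1 via the Anthropic Python SDK.
# The thinking budget and max_tokens below are illustrative, not recommendations.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-1-20250805",
    max_tokens=16000,  # must exceed the thinking budget when thinking is enabled
    thinking={"type": "enabled", "budget_tokens": 8000},  # extended thinking
    messages=[
        {
            "role": "user",
            "content": "Refactor this function to remove the duplicated branch without changing behaviour: ...",
        }
    ],
)

# With extended thinking on, the response interleaves thinking and text blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```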
Why It Matters: Claude Opus 4.1 addresses critical pain points in AI-assisted development, particularly the precision gap that has limited adoption in production coding environments.
The model's ability to make targeted corrections without cascading errors represents a meaningful advance toward reliable AI pair programming. This release positions Anthropic competitively against recent launches from OpenAI and Google while maintaining focus on practical, immediately deployable capabilities rather than experimental benchmarks. The upgrade signals continued momentum in enterprise-ready AI tooling, with substantial model improvements planned for the coming weeks that could further reshape developer workflows and coding productivity standards.
👉 If you're looking to get up to speed with podcasting in South-east Asia and around the globe in just five minutes, this is the perfect place for you! Just click here.
ElevenLabs Launches AI Music Generator With Strict Commercial Use Restrictions and Artist Protection Rules. ElevenLabs has released terms of service for their new AI Music generation platform that creates sound recordings from text prompts but prohibits users from referencing real artists, song titles, or lyrics, while restricting commercial music library creation and barring access from specific industries including firearms, tobacco, and political organizations.
Australian Advertisers Show Limited AI Adoption With Only 32% Operationalizing Media Planning Technology, IAB Report Finds. Australian digital advertisers are proceeding cautiously with AI adoption, with only 32% operationalizing artificial intelligence in media planning compared to 30% full integration in the US, while 92% consider data critical for success amid ongoing privacy regulation reforms, according to the IAB Australia Data State of the Nation Report 2025.
Google DeepMind Unveils Genie 3 AI World Model That Creates Real-Time Interactive 3D Environments From Text Prompts. Google DeepMind's new Genie 3 world model creates real-time interactive 3D environments from text prompts or images at 720p resolution and 24fps, featuring significantly improved memory retention over previous versions but remaining limited to research applications due to high computational requirements.
UK Ministry of Defence Selects Australian AI Firm Castlepoint Systems for Data Protection After Afghan Breach. The UK Ministry of Defence has contracted Australian AI cybersecurity firm Castlepoint Systems to deploy explainable artificial intelligence technology for automated data classification and breach prevention, marking the company's first British government partnership following a major privacy incident affecting over 18,000 Afghan asylum applicants.
Nine Entertainment Reports OpenAI and AI Firms Scraping Australian News Sites 10 Times Per Second Despite Blocks. Nine Entertainment revealed that OpenAI and other AI companies are scraping their Australian news websites nearly 10 times per second or 25 million times monthly despite explicit robots.txt blocks and legal disclaimers, prompting the media company to deploy TollBit's enforcement technology to prevent unauthorized content harvesting.
Any news tip?
A journalist's credibility is built on their sources and the tips they receive. Contact our editor via Proton Mail encryption, X Direct Message, LinkedIn, or email. You can securely message him on Signal by using his username, Miko Santos.
More on Mencari
Mencari —for nightly bite-sized news around Australia and the world.
Podwires Daily—for providing news about audio trends and podcasts.
There’s a Glitch—updated tech news and scam and fraud trends
Viewpoint 360 - An investigative report based on evidence, produced in collaboration with 360info.
Part8A Podcast features expert interviews on current political and social issues in Australia and worldwide.
Readers of There’s a Glitch receive journalism free from financial and political influence.
We set our news agenda, which is always based on facts rather than billionaire ownership or political pressure. Despite the financial challenges that our industry faces, we have decided to keep our reporting open to the public because we believe that everyone has the right to know the truth about the events that shape their world.
Thanks to the support of our readers, we can continue to provide free reporting. If you can, please support Mencari.
It only takes a minute to help us investigate fearlessly and expose lies and wrongdoing to hold power accountable. Thanks!