AI Expert Calls for Tighter Regulation in High-Risk Areas Amid $116 Billion Economic Forecast
An artificial intelligence expert is urging stronger regulation of AI technology in high-risk sectors, even as Australia's Productivity Commission projects the technology could deliver a $116 billion economic boost over the next decade.
Lisa Givens, Professor of Information Sciences at RMIT University, said AI regulation should not be viewed as opposing productivity gains, particularly in areas where people's health and safety are at stake.
"I don't think that we have to see AI regulation necessarily as in opposition to efficiencies or productivity work," Givens told ABC News Australia. "I think particularly in areas around high risk use of AI contexts where people's health and safety might be at risk, we do need to take a closer look and we should ensure that we have really tight regulations in those areas."
The comments come as organizations across Australia grapple with implementing AI while addressing safety and responsibility concerns. The Productivity Commission report recommends treating AI-specific regulation as a last resort, favoring a lighter-touch approach of identifying and filling gaps in existing regulation.
Givens identified medical diagnostics and mental health chatbots as prime examples of high-risk AI applications requiring stricter oversight.
"The minute you're talking about diagnoses, you're talking about people's health immediately," she said. "I think that that's an area that would be potentially deemed as high risk."
She pointed to global concerns about AI chatbots being used for mental health issues, questioning whether sufficient guardrails exist to protect users of such technologies.
While acknowledging AI's potential benefits in healthcare—including more efficient cancer screening and document analysis—Givens emphasized the need for careful regulation when human welfare is involved.
The debate centers on whether existing regulations can adequately cover AI applications or if new, technology-specific rules are needed. Givens said the rapid pace of AI development complicates this approach.
"AI technologies are moving very, very quickly," she said. "And in some cases, because the technology is adaptive, but in ways that we don't quite always know what the outcome is going to be, there is a belief that in high-risk areas in particular, we do need to take a deeper dive."
The challenge involves balancing innovation with safety while allowing businesses to implement AI freely to boost productivity.
Truth matters. Quality journalism costs.
Your subscription to The Evening Post (Australia) directly funds the investigative reporting our democracy needs. For less than the cost of a coffee per week, you enable our journalists to uncover stories that powerful interests would rather keep hidden. No corporate influence. No compromises. Just honest journalism when we need it most.
Not ready to become a paid subscriber but appreciate the newsletter? Grab us a beer or snag the exclusive ad spot at the top of next week's newsletter.
Creative industries face particular alarm over the report's recommendation to explore whether copyright law barriers hinder AI model development. Givens said concerns have persisted for at least two years about job losses and intellectual property protection.
"Many people that work in the creative industries are really not only just worried about future job loss and what it might mean for the creative industries, but copyright is something that definitely protects creatives in terms of, you know, that's how they're going to pay the bills," she said.
The copyright debate centers on whether companies should freely access copyrighted materials for AI training datasets. Creative industry workers worldwide oppose this approach, Givens said.
"People in the creative industries worldwide are saying absolutely not," she said. "We need to actually look at our copyright laws and assess how do we protect people's creative rights in this new environment."
Several sectors already show evidence of AI-driven job displacement. Givens cited translation services as an example where AI tools are replacing human translators, with companies using AI for initial translation followed by human oversight for quality control.
"So there are some innovations like that that will definitely potentially increase productivity on the one side but could create job losses in another area," she said.
The pattern extends across industries where companies previously outsourced work to human professionals. AI tools now handle many tasks more quickly, efficiently and at lower cost than traditional methods.
Regarding the Productivity Commission's $116 billion economic forecast, Givens expressed caution about such projections.
"I'm not an economist, but at the end of the day, I think a lot of this is really about speculation," she said. "It's a matter of how can we actually integrate these things in ways that will boost productivity?"
She questioned what productivity gains would ultimately mean for workers and whether time saved through AI automation would result in job losses or free employees for more engaging work.
"Many people are looking to AI, particularly around some of the mundane tasks that people do," Givens said. "And there's no doubt in almost every sector, every workforce, we are sometimes buried in paperwork and things that really could be done by a machine."
The critical question, she said, involves how organizations use the time gained through AI efficiency.
"Is that going to simply mean job losses and better bottom line for businesses that might be hoping to boost their bottom line?" she asked. "Or is that going to mean that people are now free to do more interesting, in-depth work that really does require a human being?"
The Productivity Commission's approach favors applying existing regulations to AI rather than creating new technology-specific rules. This reflects an argument that current regulatory frameworks can adequately address AI applications without treating the technology differently.
However, Givens suggested this approach may prove insufficient given AI's adaptive nature and unpredictable outcomes, particularly in high-risk contexts where human safety is paramount.
The debate reflects broader tensions between fostering innovation and ensuring responsible AI deployment as the technology rapidly transforms multiple sectors of the Australian economy.
As organizations continue implementing AI tools across industries, the balance between regulation and innovation remains a central challenge for policymakers and businesses navigating this technological transformation.
Got a News Tip?
Contact our editor via encrypted Proton Mail, X Direct Message, LinkedIn, or email. You can securely message him on Signal using his username, Miko Santos.
As well as knowing you’re keeping Mencari (Australia) alive, you’ll also get:
Breaking news AS IT HAPPENS - Gain instant access to our real-time coverage and analysis when major stories break, keeping you ahead of the curve
Unlock our COMPLETE content library - Enjoy unlimited access to every newsletter, podcast episode, and exclusive archive—all seamlessly available in your favorite podcast apps.
Join the conversation that matters - Be part of our vibrant community with full commenting privileges on all content, directly supporting The Evening Post (Australia)