The Australian government has unveiled new measures to enhance the safe use and regulation of artificial intelligence (AI) in the country, following a year of consultations with the public and industry stakeholders about the technology. According to the Australian Minister for Industry and Science, Ed Husic, businesses sought greater clarity on AI regulations to better harness the opportunities presented by this technology.
The Tech Council of Australia has estimated that generative AI alone could contribute between A$45bn ($30.27bn) and A$115bn ($77.36bn) annually to the Australian economy by 2030.
To address these needs and provide guidance on the next steps, the government appointed an AI expert group earlier this year. The group’s recommendations have culminated in the “Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings”.
This paper outlines a proposed definition of high-risk AI, 10 mandatory guardrails, and three regulatory options for implementing these measures.
The three regulatory options are: integrating the guardrails into existing regulatory frameworks, introducing new framework legislation that adapts current regulations, or establishing a new cross-economy AI-specific law, such as an Australian AI Act.
Pushing for an AI regulatory framework
In addition to these proposals, the government has introduced a new Voluntary AI Safety Standard, with immediate effect. This standard offers practical guidance for businesses engaged in high-risk AI activities, allowing them to implement best practices ahead of mandatory regulations.
The new standard aims to provide businesses with certainty, support their growth, attract investment, and ensure Australians can access the benefits of AI while managing its risks.
Commissioned by the National AI Centre, the Responsible AI Index 2024 highlighted significant gaps in current AI practices among Australian businesses. The report revealed that while 78% of Australian businesses believe they are implementing AI safely and responsibly, only 29% were found to have compliant practices in place.
The index surveyed 413 executive decision-makers across various sectors, including financial services, government, health, education, telecommunications, retail, hospitality, utilities, and transport.
Businesses were assessed against 38 identified responsible AI practices across five dimensions: accountability and oversight, safety and resilience, fairness, transparency and explainability, and contestability. On average, organisations adopted only 12 of these practices.
“We know AI can be hugely helpful for Australian business, but it needs to be used safely and responsibly,” said Husic. “The Albanese government has worked with business to develop standards that help identify and manage risks.
“This is a practical process that they can put into use immediately, so protections will be in place.”
He also noted that AI is anticipated to generate up to 200,000 AI-related jobs in Australia by 2030 and contribute between A$170bn ($114.4bn) and A$600bn ($403.61bn) to gross domestic product (GDP).
Approaching international regulatory alignment on AI
The Australian government said that the Voluntary AI Safety Standard will be updated over time to stay aligned with international best practices, similar to measures taken by the European Union (EU), Japan, Singapore, and the US.
The consultation period for the paper is now open and will close on 4 October 2024.
Last month, Australia’s national science agency, Commonwealth Scientific and Industrial Research Organisation (CSIRO), announced a research partnership with Google to secure the country’s critical infrastructure (CI) from risky software components.
The collaboration is part of CSIRO’s Critical Infrastructure Protection and Resilience programme and Google’s Digital Future Initiative. It aims to address the critical gaps in how Australia’s CI operators identify, understand, and resolve vulnerabilities in their software supply chains.