The Australian AI governance debate starts to heat up


Australian ministers are debating whether regulating AI will address the challenges posed by a lack of skilled AI talent.


A lack of properly defined artificial intelligence governance and policy is causing bipartisan concern in Australian politics, with both major parties recently speaking out about the need to move urgently on the matter.

While regulation is often seen as an inhibitor of innovation, there is a real fear that Australia is falling behind on AI, lacking the resources and skills to manage the technology. Increased government activity will help crystallize a national strategy, which in turn should create better opportunities for AI technologists and companies.

SEE: Explore TechRepublic Premium’s artificial intelligence ethics policy.

As Sally-Ann Williams, CEO of Australian deep-tech incubator Cicada Innovations, recently highlighted at The Australian Financial Review’s Future Briefings event, Australian companies “dramatically overestimate the level of relevant technology expertise they have within their ranks.”

“People say to me, ‘I have 150 machine learning experts in my business’, to which I say, ‘you absolutely don’t,’” Williams said.

Developing regulations and a national vision for AI will help the industry address these challenges.

Australian ministers propose regulatory efforts to capitalize on AI

Writing in The Mandarin in early June, Labor Minister Julian Hill argued for the establishment of an AI commission.

“AI will shape our perception of life as it influences what we see, think and experience online and offline. Our everyday life will be augmented by having a super bright intern always by our side,” Hill noted. “Yet over the next generation, living with non-human pseudo-intelligence will challenge established notions of what it is to be human … Citizens and policymakers have to urgently get a grip.

“AI is bringing super high IQ but low (or no) EQ to all manner of things and will make some corporations a ton of money. But, exponentially more powerful AI technologies unaligned with human ethics and goals bring unacceptable risks; individual, societal, catastrophic — and perhaps one day existential — risks.”

Hill’s sentiments were shared by Shadow Communications Minister David Coleman in an interview on Sky News a day later.

“The laws of Australia should continue to apply in an AI world,” Coleman said. “What we want to do is not step on the technology, not overregulate because that would be bad, but also ensure, in a sense, that the sovereignty of nations like Australia remains in place.”

Both politicians were responding to a report commissioned by the Australian government, which found that the nation is “relatively weak” at AI and lacks the skilled workers and computing power to capitalize on the technology’s opportunities.

Given the urgency, Australia is likely to focus its AI regulatory efforts on two areas: protecting privacy and human rights without inhibiting innovation, and ensuring the country has the infrastructure and skills to capitalize on the opportunities AI presents.

What might a regulated environment look like?

Australia is not the only nation grappling with AI regulation. Japan, for example, is preparing to invest heavily in skills development to promote AI in medicine, education, finance, manufacturing and administrative work as it contends with an aging, shrinking population. Even as it cites concerns about risks to privacy and security, disinformation and copyright infringement, Japan is putting AI at the center of its labor market reform.

SEE: Discover how the White House addresses AI’s risks and rewards amidst concerns of malicious use.

The EU, meanwhile, is leading the way on AI regulation, drafting the first laws specifically governing the application of AI. Under these laws, AI systems will be regulated according to their risk and “trustworthiness,” as follows:

  • “AI systems that are considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI applications that manipulate human behavior to circumvent users’ free will and systems that allow ‘social scoring’ by governments.”
  • High-risk AI applications, a category spanning self-driving cars, exam-scoring and recruitment tools, AI-assisted surgery and legal applications, will be subject to strict obligations, including the provision of documentation, guaranteed human oversight and the logging of activity to trace results.
  • For low-risk systems, such as chatbots, the EU wants transparency so users know they are interacting with an AI and can choose to end the interaction.

China, meanwhile, is another leader in regulating AI. It has moved to build a framework for generative AI: technologies such as ChatGPT and Stable Diffusion that use AI to create text or visual assets.

SEE: G2 report predicts big spending on generative AI.

Concerned with IP holders’ rights and the potential for abuse, China will require providers of generative AI to register with the government and apply a watermark to all assets their systems create. Providers will also bear responsibility for content that others generate with their products, meaning that, for the first time, AI application providers will be obligated to ensure their platforms are used responsibly.

For now, Australia is still formulating its approach to AI. The government has opened a public consultation on responsible AI (closing July 26), and the responses will be used to build on the multimillion-dollar investment in responsible AI announced in the 2023–2024 budget.


