On 17 January 2024, the Australian Government released its interim response (the Response) to consultations on Safe and Responsible AI in Australia. The Response indicates the steps the Government will take to ensure that AI is designed, developed and deployed safely and responsibly in Australia, while maximising the opportunities that AI presents.

The Response notes some of the Government’s existing efforts and work to strengthen existing laws in areas that will help to address known harms from AI – such as through the implementation of privacy law reforms, a review of the Online Safety Act 2021, and the introduction of new laws relating to misinformation and disinformation – while also acknowledging that existing laws do not sufficiently prevent harms from the deployment of AI systems in legitimate but high-risk contexts where harms can be difficult or impossible to reverse.[i]

Some of the most significant, urgent and probable risks of AI identified through the consultation process were broadly categorised as technical risks, unpredictability and opacity, contextual risks, systemic risks and unforeseen risks.[ii]

The Government’s immediate focus will therefore be on the use of AI in “high-risk settings” (such as law enforcement, healthcare and job recruitment), with consultations continuing on possible ‘mandatory guardrails’ (whether through changes to existing laws or the creation of new AI-specific laws[iii]), while also taking some immediate actions, including:

  • working with industry to develop a voluntary AI Safety Standard;
  • working with industry to develop options for voluntary labelling and watermarking of AI-generated materials;
  • establishing an expert advisory group to support the development of options for mandatory guardrails.[iv]

Whilst the definition and ambit of ‘high-risk settings’ require more work and are being further considered – including to define the criteria for risk categorisation generally[v] – the Government’s underlying aim is to ensure that the vast majority of ‘low-risk AI use’ (for example, the optimisation of business operations) continues to flourish in a largely unimpeded manner.[vi]

Next steps also being considered for the safe design, development and deployment of AI systems in high-risk settings include introducing requirements relating to the testing of products (for safety, both before and after release), transparency and accountability (which could include training for developers and deployers of AI systems, and a certification system).[vii]


Disclaimer: This information sheet is not intended to be a substitute for obtaining legal advice.

© Stephens Lawyers & Consultants, 31 January 2024. Authored by Rochina Iannella, Lawyer, Stephens Lawyers & Consultants.

For further information contact:

Katarina Klaric

Principal

Stephens Lawyers & Consultants

Melbourne Head Office

Suite 205, 546 Collins Street, Melbourne VIC 3000

Phone: (03) 8636 9100  Fax: (03) 8636 9199  

Sydney Office

Level 29, Chifley Tower, 2 Chifley Square, Sydney, N.S.W. 2000
Phone: (02) 9238 8028

Email: [email protected]

Website: www.stephens.com.au

All Correspondence to:

PO Box 16010 Collins Street West Melbourne VIC 8007

To register for newsletter updates and to send your comments and feedback, please email [email protected]  


[i] Commonwealth of Australia, Australian Government Department of Industry, Science and Resources, “Safe and Responsible AI in Australia consultation – Australian Government’s interim response”, 2024, at pages 5–6.

[ii] Ibid. at page 11.

[iii] Ibid. at page 15.

[iv] The Hon. Ed Husic MP, Minister for Industry and Science, Media Release, “Action to help ensure AI is safe and responsible” (17 January 2024) – Action to help ensure AI is safe and responsible | Ministers for the Department of Industry, Science and Resources.

[v] Commonwealth of Australia, Australian Government Department of Industry, Science and Resources, “Safe and Responsible AI in Australia consultation – Australian Government’s interim response”, 2024, at page 14 – which also looks at the EU and Canadian definitions of ‘high risk’ in this context.

[vi] Ibid. at page 6.

[vii] Ibid. at page 20.