Katarina Klaric, Principal, Stephens Lawyers & Consultants (April 2024)

AI technologies deployed in Australia largely originate from the United States, Europe, China and Japan, these countries being the leading innovators in the AI field with the highest patent filings globally[i]. The Australian government has recognised that, to take advantage of globally supplied AI technologies and to support safe AI development and adoption, regulatory and governance frameworks are required that are consistent with global regulatory approaches[ii]. Australia is a participant in a number of global forums on AI regulation and governance[iii].

While the Australian government continues its consultative processes on the reforms required to regulate AI technologies, on 13 March 2024 the European Parliament approved the Artificial Intelligence Act. The new laws prohibit certain AI systems considered to contravene the values of the European Union and violate the fundamental rights of its citizens. Prohibited AI systems include those that –

  • deploy subliminal, manipulative or deceptive techniques that distort people’s behaviour, impairing their ability to make informed decisions;
  • exploit the vulnerabilities of people due to their age, disabilities or economic situation;
  • create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • categorise people based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, subject to certain law enforcement exceptions;
  • use ‘real-time’ remote biometric identification systems in public places subject to specific law enforcement exceptions[iv].

The laws also regulate high-risk AI systems, which will be subject to evaluation, conformity assessment and reporting requirements. Generative AI, such as ChatGPT, is not classified as high-risk but will have to comply with transparency requirements, including compliance with EU copyright laws[v].

This Article provides an overview of Australia’s existing regulatory framework for AI technologies, the ongoing government consultations and the proposed law reforms.

The failed Robodebt automated decision-making system is used as a case study in this Article and provides a warning to government agencies and organisations globally of the risks associated with failed AI systems, including claims for compensation in the billions of dollars and the reputational damage that follows.

Overview of Australia’s regulatory framework

Existing legal framework

Australia’s existing legal framework is used to regulate AI technologies across all industries. The laws include:

  1. Australian Competition and Consumer laws – these are administered by the Australian Competition and Consumer Commission (ACCC) and regulate competition, anti-competitive conduct and unfair trade practices.
  2. Corporations laws – these are administered by the Australian Securities and Investments Commission (ASIC) and regulate companies and the financial services market sector.
  3. Data protection and privacy laws – these are administered by the Office of the Australian Information Commissioner (OAIC) and State Privacy Commissioners.
  4. Online safety laws (Online Safety Act 2021) – Australia established the world’s first eSafety Commissioner, who is responsible for the administration of the law. The law includes mechanisms to address online safety issues from cyberbullying to image abuse (including fake images, deepfake pornography and child exploitation material) and other kinds of material that affect online safety, some of which may be generated by the use of AI. The eSafety Commissioner has extensive powers to have illegal and harmful online material removed.
  5. Media and communications laws – these are administered by the Australian Communications and Media Authority (ACMA).
  6. Criminal laws.
  7. Discrimination laws.
  8. Copyright laws.

Australia also has industry-specific laws which cover the use of AI technology and its potential risks in the health, road transport vehicle[vi] and aviation industries. In the high-risk area of health, the Therapeutic Goods Act and regulations were amended in 2021 to cover software (including AI technology) which is used for medical purposes, comes within the definition of “medical device” and is not exempt from the regulations. However, these regulations do not extend to all software used in the health sector. Software excluded from regulation as a medical device includes:

  1. Software intended for self-management of a disease or condition that is not serious (without providing any specific treatment or suggestions);
  2. Consumer health and wellness products, which do not make claims about serious diseases or conditions;
  3. Communications software that enables telehealth consultations, including transmission of patient information for the purpose of supporting health service delivery;
  4. Software intended to administer or manage health processes or facilities rather than patient clinical use cases;
  5. Clinical workflow management software used in the delivery of health services;
  6. Systems intended to only store and transmit patient images;
  7. Software intended to provide alerts or additional information to health professionals in relation to patient care, with the health professional exercising their own judgement in determining whether to action the alert or information[vii].

Artificial Intelligence (AI) Ethics Principles and Standards

Australia is a signatory to the OECD AI Principles, which are designed to encourage organisations to adopt ethical practices and good governance when developing and using AI, and has adopted international standards for the management and governance of AI systems[viii].

To complement the existing regulatory framework, government departments and agencies have adopted a voluntary AI ethics framework – Australia’s Artificial Intelligence (AI) Ethics Principles – designed to ensure AI is safe, secure and reliable. In summary, the Principles are:

  1. Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  2. Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  3. Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  4. Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  5. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  6. Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  7. Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  8. Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled[ix].

Case Example – Robodebt Scheme

Generally, Australians’ trust and confidence in AI technologies and systems is low, with concerns about privacy, safety, bias, fairness, integrity, and the lack of transparency and accountability[x].

The failed Robodebt automated decision-making system, developed and implemented by the Australian Department of Human Services (DHS), illustrates the mistrust and the human and economic costs that result where appropriate legal and governance frameworks are not followed[xi].

The Robodebt scheme was developed by the Department of Human Services (DHS) and put forward by the government as a budget measure in 2015. It began as a pilot in that year and continued until June 2020. The scheme was designed to recover overpayments to welfare recipients going back to the 2010–2011 financial year. Robodebt was an automated system which data-matched the income earned by welfare recipients, as reported by their employers to the Australian Taxation Office (ATO), with what the welfare recipients had declared to DHS. If there was a discrepancy, the system would issue a notice requesting the recipient to explain the discrepancy using the online system. If the recipient did not respond, did not provide details, or agreed with the ATO income data, the system used a process of “income averaging” to calculate overpayments, rather than looking at the actual income earned and welfare payment received over the relevant fortnight as required by the relevant law. The system issued debt notices and debt collectors were engaged.
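
The calculation flaw at the heart of the scheme can be illustrated with a deliberately simplified sketch. The figures, threshold and taper below are invented for illustration only; the real entitlement rules were considerably more complex. Averaging annual ATO income evenly across 26 fortnights manufactures an apparent overpayment for anyone whose income was uneven across the year, even where every fortnightly payment was lawful:

```python
# Toy illustration of the "income averaging" flaw (all figures hypothetical).
# Assume a benefit of up to $600 per fortnight that reduces dollar-for-dollar
# once fortnightly income exceeds a $500 threshold -- a simplification of the
# real entitlement rules.

FORTNIGHTS = 26
THRESHOLD = 500        # fortnightly income free area (hypothetical)
MAX_PAYMENT = 600      # maximum fortnightly benefit (hypothetical)

def entitlement(fortnightly_income):
    """Benefit payable for one fortnight under the simplified rules."""
    return max(0, MAX_PAYMENT - max(0, fortnightly_income - THRESHOLD))

# A recipient who worked half the year: 13 fortnights at $2,000,
# then 13 fortnights with no income while on benefits.
actual_income = [2000] * 13 + [0] * 13
annual_income = sum(actual_income)            # what the ATO sees: $26,000

# Lawful approach: assess each fortnight on the income actually earned.
correct_total = sum(entitlement(i) for i in actual_income)

# Robodebt approach: smear annual income evenly across every fortnight.
averaged = annual_income / FORTNIGHTS         # $1,000 every fortnight
averaged_total = entitlement(averaged) * FORTNIGHTS

print(f"Entitlement on actual fortnightly income: ${correct_total}")
print(f"Entitlement on averaged income:           ${averaged_total:.0f}")
print(f"Phantom 'overpayment':                    ${correct_total - averaged_total:.0f}")
```

In this toy scenario the recipient was lawfully entitled to everything received ($7,800 over the 13 workless fortnights), yet the averaging method infers an entitlement of only $2,600 and a $5,200 “debt” that never existed.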

The Robodebt system was implemented without appropriate design, human welfare and fairness considerations, or testing of the system, including user testing. This resulted in errors, with debt notices being illegally and unfairly issued. Before automation, the process had been undertaken by compliance officers who reviewed each of the files and had personal contact with the recipients. The Robodebt scheme was implemented even though internal lawyers had advised DHS in 2014 that “income averaging” could not be used and that “actual benefits” and “actual income” received during the relevant fortnight had to be used to calculate whether there had been any overpayment.

The system came under significant criticism in the media and was the subject of investigation by the Ombudsman. To support the legality of the Robodebt system, DHS obtained second legal advice from an in-house lawyer in 2017, who expressed the view that it was open to DHS “as a last resort” to act on average income to raise and recover debts from welfare recipients. DHS proceeded to cover up the first legal advice regarding the illegality of “income averaging” and the scheme, and only disclosed the second advice. Class actions followed. Despite criticism of the system, the government continued with it until June 2020, when the Federal Court of Australia declared the system illegal.

Some $721 million was wrongfully taken from around 381,000 people under the scheme[xii].

Robodebt Scheme class actions

In 2020, the government agreed to settle a class action brought on behalf of about 400,000 victims, paying $112 million in compensation in addition to making repayments to individuals who had paid the debts demanded[xiii].

In another class action, the Federal Court of Australia found that the scheme was unlawful and that there was no way Centrelink could have been satisfied the debts were actually correct when issuing debt collection notices to welfare recipients. In June 2021, Justice Murphy approved a settlement of $1.8 billion and described the Robodebt scheme as a “shameful chapter” in Australia’s social security scheme[xiv].

Robodebt Royal Commission

The Robodebt Royal Commission was established on 18 August 2022 to inquire into the establishment, design and implementation of the Robodebt scheme; the use of third-party debt collectors under the scheme; concerns raised following its implementation; and its intended or actual outcomes. On 7 July 2023, Royal Commissioner Catherine Holmes handed down a 990-page report, which includes 57 recommendations. The Robodebt scheme was described by the Commissioner as a “crude and cruel mechanism, neither fair nor legal, and it made many people feel like criminals”. Many people were traumatised, with reported cases of self-harm and suicide. The Commission heard numerous accounts from victims who complained that Centrelink was difficult to engage with after they received a debt notice and aggressive in pursuing the debts. The Royal Commission’s recommendations also included criminal and civil charges against individuals involved. The Commissioner stated the Robodebt scheme was a “costly failure of public administration in both human and economic terms”[xv].

Royal Commission Recommendations

The Royal Commission made a specific recommendation dealing with automated decision-making: that the government should consider legislative reform to introduce a consistent legal framework in which automation in government services can operate. Where automated decision-making is implemented:

  1. there should be a clear path for those affected by decisions to seek review;
  2. departmental websites should contain information advising that automated decision-making is used and explaining in plain language how the process works;
  3. business rules and algorithms should be made available, to enable independent expert scrutiny[xvi].

The Royal Commission also recommended establishing a body, or expanding the powers of an existing body, to monitor and audit automated decision-making by government, with regard to both its technical aspects and its impact in respect of fairness, the avoidance of bias, and client usability[xvii].

Government inquiries, consultations and proposed regulatory reforms

Digital platform inquiries

Since 2017, the ACCC, at the direction of the Australian government, has been conducting inquiries into the competition and consumer impacts of digital platforms, digital platform services and digital markets, and the regulatory reforms required. These inquiries have resulted in eight reports and recommendations for regulatory reforms to deal with anti-competitive practices and unfair trade practices, including those arising from the use of AI systems. The ACCC’s Digital Platform Services Inquiry Interim Report – Regulatory Reform (September 2022) recommended –

  1. Reforms to address the prevalence of scams, fake reviews and harmful applications (some of which originate from the use of AI technology), including the establishment of mandatory notice-and-action processes, reporting processes, verification of certain business users and dispute resolution processes;
  2. Establishment of a new independent Ombudsman Scheme to resolve disputes between digital platforms, consumers and small businesses;
  3. Amendments to the Australian Consumer Law to prohibit economy-wide unfair trading practices including those occurring on digital platforms and/or resulting from the use of Al systems[xviii];
  4. Introduction of service-specific codes of conduct to address anti-competitive conduct engaged in by digital platforms through the use of AI algorithms. Such conduct includes self-preferencing and tying, and setting prices, determining bids or market sharing that results in harmful algorithmic collusion – where competing algorithms simultaneously learn to set higher prices, collectively maximising profit. The codes would impose targeted competition obligations on designated digital platforms that have the ability and incentive to engage in such conduct[xix].
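
The algorithmic-collusion mechanism described above can be sketched with a toy repricing loop. The rule and numbers are invented for illustration; real pricing systems are adaptive learning agents, not fixed rules. Here each seller independently runs the same strategy – match any undercut, otherwise creep upward – and, without any communication or agreement, both prices drift from a competitive level to the ceiling:

```python
# Toy model of tacit algorithmic price drift (hypothetical rule and figures).
COST, CEILING = 10.0, 100.0

def reprice(own, rival):
    """Match a cheaper rival (punishing any undercut); otherwise raise by $1."""
    if rival < own:
        return max(COST, rival)
    return min(CEILING, own + 1.0)

a = b = 20.0                      # identical competitive starting prices
for _ in range(100):              # 100 simultaneous repricing rounds
    a, b = reprice(a, b), reprice(b, a)

print(f"Prices after 100 rounds: ${a:.2f} and ${b:.2f}")  # both reach $100.00
```

Because neither algorithm gains by undercutting (the rival instantly matches), the only stable behaviour is parallel price rises – arriving at the collusive outcome the proposed codes are aimed at, without any human agreement.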

Digital Platforms Forum

Australia’s four main regulators are also working collaboratively, with the shared goal of ensuring that Australia’s digital economy is a “safe, trusted, fair, innovative and competitive space”, through the establishment of the Digital Platform Regulators Forum (DP-REG)[xx]. The Forum is similar to bodies set up in other jurisdictions, such as the Digital Regulation Cooperation Forum in the United Kingdom and the Digital Regulation Cooperation Platform in the Netherlands. The Forum’s strategic priorities for 2023/24 include:

  1. Assessing the impact of algorithms on Australians in areas including algorithmic recommendations and profiling, moderation algorithms, promotion of disinformation, harmful content, and product ranking and displays on digital platforms such as online marketplaces;
  2. Improving the transparency of digital platforms’ activities and how they are protecting users from potential harm, including how consumer data is being handled and the impact of the platforms’ activities to address misinformation. The transparency issues are of particular concern given the power and information asymmetries between digital platforms and users;
  3. Increased collaboration and capacity building between the four members, including joint engagement with stakeholders, submissions and advice to government, training and other capability programs, sharing information, and coordinating on matters relating to digital platform regulation; and
  4. A new focus on understanding and assessing the benefits, risks and harms of generative AI[xxi].

Reforms to Australian Privacy and Data Protection Laws

Australia has also undertaken a review of its privacy and data protection laws, with the release of the Privacy Act Review Report in February 2023, which included recommendations for amendments to Australia’s privacy law to enhance the level of protection given the developments in digital technologies. In September 2023, the Government’s Response to the Privacy Act Review Report was released, accepting the recommendations in the Report. The Government’s response specifically addresses automated decision-making and acknowledges that the safe and responsible development and deployment of automated decision-making technologies “presents significant opportunities for enhancing productivity and facilitating economic growth and improving outcomes across health, environment, defence and national security”.

The Government agreed to amendments to the privacy laws to provide for transparency in relation to the use of automated decision-making technologies and to ensure the integrity of the decisions made. More specifically, the Government agreed that:

  1. Privacy policies should set out the types of personal information that will be used in substantially automated decisions which have a legal or similarly significant effect on an individual’s rights, and high-level indicators of the types of decisions with such an effect should be included in the Privacy Act or supplemented by guidance issued by the OAIC.
  2. Individuals should have a right to request meaningful information about how automated decisions with a legal or similarly significant effect on the individual are made, and this information should be provided jargon-free, comprehensibly and without revealing any sensitive confidential information.

The Government also acknowledged the recommendations made by the Royal Commission into the Robodebt scheme in relation to automated decision-making and is considering how best to implement the Commissioner’s recommendations having regard to ongoing consultations into safe and responsible AI[xxii].

Government Consultation – Safe and Responsible AI in Australia

In June 2023, coinciding with the release of the Royal Commission’s report into the Robodebt scheme, the Australian government released the discussion paper Safe and Responsible AI in Australia and commenced a public consultation process considering the adequacy of the existing legal and governance framework to address the potential risks associated with AI technologies and the safeguards required. As part of the consultation process, the government is looking at the gaps in the existing regulatory regime and how best to respond to them, having regard to global regulatory developments. In response to submissions on the discussion paper, the Australian government in January 2024 established an AI Expert Group to provide advice, by the end of June 2024, on options for the development of “mandatory guardrails to ensure the design, development and deployment of AI systems in high-risk settings is safe”. The Group is also to advise on testing, transparency and accountability measures for AI in legitimate but high-risk settings.

Submissions raised concerns about the use of AI in legitimate but high-risk contexts, where harm may be difficult or impossible to reverse, and the need for mandatory guardrails[xxiii]. The Digital Platform Regulators Forum, comprising the four major regulators, in its joint submission supported the Government considering how the existing framework can be strengthened and enhanced (including through existing regulatory reform proposals) before consideration is given to a separate regime specific to AI technology. The regulators favour reforms to existing laws to address identified gaps, together with the use of mandatory codes to impose specific obligations in respect of the use of AI in an ethical, safe and transparent manner and to address potential harm resulting from the use of AI[xxiv]. Codes form part of existing legislative instruments, and a breach of a code results in a breach of that instrument. Codes are favoured by regulators because they can be easily adapted and changed as issues emerge.

Senate Select Committee on Adopting Artificial Intelligence (AI)

On 26 March 2024, the Australian Government Senate established the Senate Select Committee on Adopting Artificial Intelligence (AI) “to inquire into and report on the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia.”[xxv] Written submissions to the Senate Select Committee can be made until 10 May 2024 with the Committee’s report to Parliament expected by 19 September 2024.

Copyright and AI Reference Group

To complement the government’s consultative process on the “safe and responsible use of AI” and the regulatory reform required, in December 2023 the Australian Government established a Copyright and Artificial Intelligence (AI) Reference Group. The Reference Group is to have ongoing engagement and consultation with stakeholders across sectors – including the creative arts, media, film, education, research and technology – to enable the Government to prepare for and respond to existing and future challenges to copyright from AI. The Government has recognised that AI has given rise to a number of copyright issues, including –

  1. the use of copyright material to train AI models, whether this should be permissible and, if so, the licensing models required to compensate rights holders;
  2. the mining of websites for text, images and data, whether this should be permissible and, if so, how rights holders are to be protected and compensated;
  3. transparency, disclosure and attribution where content has been created by the use of generative AI tools or where existing copyright material has been used to train AI models;
  4. the use of AI to create imitations of existing copyright works;
  5. whether AI-generated works should be given copyright protection[xxvi].

What Next?

The dynamic nature of the digital environment and of the development and use of AI technologies is outpacing, and will continue to outpace, regulatory reform in Australia. Any regulatory reforms must be agile and flexible enough to adapt to the evolution of existing and emerging technologies, providing adequate safeguards against potential harm and ensuring transparency. For the laws to be effective, they must be capable of enforcement in a quick and cost-efficient manner, with appropriate mechanisms for complaint resolution.

Disclaimer: This Article is not intended to replace obtaining legal advice 

Authored by Katarina Klaric, Principal, Stephens Lawyers & Consultants

© 29 October 2023 and 7 April 2024 — Stephens Lawyers & Consultants 

For Further Information contact:

Katarina Klaric
Stephens Lawyers & Consultants

Melbourne Head Office

Suite 205, 546 Collins Street, Melbourne, VIC. 3000
Phone: +61 3 8636 9100   Fax: +61 3 8636 9199

Sydney Office

Level 29, Chifley Tower, 2 Chifley Square, Sydney, N.S.W. 2000
Phone: +61 2 9238 8028

Email: [email protected]

Website: www.stephens.com.au

All Correspondence to:

PO Box 16010
Collins Street West
Melbourne VIC 8007

To register for newsletter updates and to send your comments and feedback, please email [email protected]  


[i] WIPO, Technology Trends 2019, Artificial Intelligence 2019,

[ii] Australian Government- Department of Industry, Science and Resources. “Safe and responsible AI in Australia” – Discussion paper – June 2023. p.3

[iii] In November 2023, Australia together with the EU and 27 countries, including the US, UK, Japan, China, Brazil and Chile signed the Bletchley Declaration affirming that, “AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible”.

[iv] Artificial Intelligence Act, Chapter II, Prohibited Artificial Intelligence Practices, Article 5.

[v] Artificial Intelligence Act, Chapters III and IV. Also see: European Parliament Press Release – https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.

[vi] Before road vehicles can be supplied to the Australian market, they must comply with the Road Vehicle Standards Act 2018 and the Road Vehicle Standards Rules 2019. This regulatory framework was implemented on 1 July 2021.

[vii] Examples of regulated and unregulated (excluded) software-based medical devices, October 2021, Australian Government, Department of Health, Therapeutic Goods Administration, pp 4–6.

[viii] ISO/IEC 5339:2024: Information technology – Artificial intelligence – Guidance for AI applications

ISO/IEC 5392:2024: Information technology – Artificial intelligence – Reference architecture of knowledge engineering.

ISO/IEC 5338:2023: Information technology – Artificial intelligence – AI system life cycle processes

AS ISO/IEC 42001:2023- Information Technology- Artificial Intelligence -Management systems, December 2023.

AS ISO/IEC 23894:2023: Information technology – Artificial intelligence – Guidance on risk management.

ISO/IEC 8183:2023: Information technology – Artificial intelligence – Data life cycle framework.

AS ISO/IEC 23053:2023: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
ISO/IEC 24668:2022: Information technology – Artificial intelligence – Process management framework for big data analytics.
ISO/IEC 22989:2022: Information technology – Artificial intelligence – Artificial intelligence concepts and terminology.

AS ISO/IEC 38507:2022: Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organizations.

[ix] Australian Government, Department of Industry, Science and Resources, “Australia’s Artificial Intelligence Ethics Framework- Australia’s AI Ethics Principles. https://www.industry.gov.au/public.

[x]  Australian Government, Department of Industry, Science and Resources-Safe and Responsible AI in Australia, Discussion Paper, June 2023, p3; Gillespie N, Lockey S, Curtis C, Pool J, Akbari A. (2023), Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia.

[xi] Royal Commission into Robodebt Scheme, Report.

[xii] Ibid.

[xiii] Australian Government, Services Australia, “Information for Customers: Explaining class action settlement payments” (VID1252/2019); https://www.servicesaustralia.gov.au/sites/default/files/2022-09/explaining-class-action-settlement-payments.pdf

[xiv] Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634 (21 June 2021) at [5]; see court orders attached to the Decision.

[xv] Royal Commission into Robodebt Scheme, Report

[xvi] Royal Commission into Robodebt Scheme, Report -Automated decision-making Recommendation 17.1

[xvii] Ibid

[xviii] ACCC, Digital Platform Services Inquiry – September 2022 Interim Report – Regulatory Reform, Chapter 4, September 2022; DP-REG Joint Submission to the Department of Industry, Science and Resources, “Safe and Responsible AI in Australia” Discussion Paper, dated 26 July 2023.

[xix] ACCC, Digital Platform Services Inquiry – September 2022 Interim Report – Regulatory Reform, Chapter 4, September 2022.

[xx] The forum members are: The Australian Competition and Consumer Commission (ACCC), Australian Communications and Media Authority (ACMA), eSafety Commissioner (eSafety) and Office of the Australian Information Commissioner (OAIC).

[xxi] DP-REG Joint Submission to the Department of Industry, Science and Resources, “Safe and Responsible AI in Australia” Discussion Paper, dated 26 July 2023.

[xxii] Government Response – Privacy Act Review Report, Australian Government, September 2023, p11

[xxiii] AI Expert Group terms of reference, https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence/ai-expert-group-terms-reference; https://www.industry.gov.au/news/new-expert-group-will-help-guide-future-safe-and-responsible-ai-australia.

[xxiv] DP-REG Joint Submission to the Department of Industry, Science and Resources, “Safe and Responsible AI in Australia” Discussion Paper, dated 26 July 2023.

[xxv] Parliament of Australia website (www.aph.gov.au, accessed 10 April 2024) – Senate Select Committee on Adopting Artificial Intelligence (AI).

[xxvi] Media Release – Attorney General. The Hon Mark Dreyfus KC MP,5 December 2023. https://ministers.ag.gov.au/media-centre/copyright-and-ai-reference-group-be-established-05-12-2023; Australian Government. Attorney-General’s Department – Issue Summary Paper- Artificial Intelligence (AI) and Copyright.