Hogan Lovells | 2024 Life Sciences and Health Care Horizons | Digital Health and AI
The future of safe and responsible AI in Australia
Currently, no law in Australia deals specifically
with AI. Instead, depending on its use, AI may
be captured under existing laws (such as
privacy and consumer laws) and through
sector-specific regulations in industries such
as health and medical applications, therapeutic
goods, food, financial services, motor vehicles,
and airline safety.
Additionally, there are a number of voluntary
frameworks in place, including the national AI
Ethics Framework, which was released in 2019
to help guide businesses to responsibly design,
develop and implement AI.
In June 2023, the Government released
the Safe and Responsible Use of Artificial
Intelligence in Australia Discussion Paper,
which set out potential mechanisms and
regulatory approaches for regulating AI in
Australia and sought industry input on how
best to implement appropriate governance
mechanisms to ensure AI is used safely and
responsibly. The Government announced that
artificial intelligence will be a priority in 2024,
with a commitment of AU$41.2 million to
support the responsible deployment of AI in
the national economy.
On 1 November 2023, Australia, alongside the
EU and 27 other countries including the U.S.,
UK, and China, signed the Bletchley Declaration
on AI. The Declaration affirms that AI should
be designed, developed, deployed, and used in
a manner that is safe, human-centric,
trustworthy, and responsible.
Mandi Jacobson
Partner
Sydney
On 17 January 2024, the Australian
Government published its interim response to
the consultation on safe and responsible AI in
Australia, which called for further guardrails on
legitimate but high-risk uses of AI, as current
regulatory frameworks do not fully address
potential risks. High-risk settings would include
many health and medical applications, such as
medical devices, AI-enabled surgical robots,
and uses involving data analytics and privacy.
The Government's proposed next steps include
(among other things) consulting on the form of
new mandatory guardrails for organizations
developing and deploying AI systems in
high-risk settings (such as the life sciences
sector).
Life sciences stakeholders should be on the
lookout in the near term for ongoing guidance
from the Government on:
• using testing, transparency and
accountability measures to prevent harms
from occurring in high-risk settings;
• clarifying and strengthening laws to
safeguard citizens;
• working internationally to support the safe
development and deployment of AI; and
• maximizing the benefits of AI.
Angell Zhang
Senior Associate
Sydney
Bonnie Liu
Associate
Sydney
Visit our website to read more on Digital Health and AI.