The rise of the machines –
cinema and AI
By Ben McCann
Remember when robots on film were cute
and non-threatening? The bickering R2-D2
and C-3PO were the Laurel and Hardy of
the Star Wars universe.
All Johnny 5 in Short Circuit (1986) had was
a thirst for knowledge, which he lovingly
called ‘input.’ And let’s not forget the very
helpful Wall-E in 2008 – with his binocular-like eyes and clamps for hands – who sped
around compacting trash and cleaning up
the planet.
All of these robots were curious. They
wanted to know anything about everything
and everything about anything, and they
generated instant audience empathy.
As I write this, the mood around robots,
sentient machines, and the role of artificial
intelligence in the film industry has turned
decidedly darker. While coders, computer
scientists and programmers extol the virtues
of AI in liberating artistic possibilities, others
– including many in Hollywood – are not so
convinced.
In July, the Screen Actors Guild – the union
representing approximately 160,000 actors
in American film and television – went
on strike for the first time since 1980. At
the forefront of their demands were protections against computer-generated voices and faces, and a guarantee that generative artificial intelligence and ‘deep-fake’ technology would not be used to replace actors.
Slowly but surely, SAG argues, AI is
rendering actors redundant. Or, to quote
union boss Fran Drescher, whose blistering
broadside to Hollywood studio bosses on the
eve of the strike went viral: “Actors cannot
keep being dwindled and marginalised
and disrespected and dishonoured. The
entire business model has been changed
by streaming, digital, [and] artificial
intelligence.”
The machines, it seems, are rising.
Indeed, if like me you’ve been watching
Hollywood plotlines closely for the past
three decades, you’ll know that every bone in your body tells you that AI is bad. Everything we’ve been told about AI in cinema is bad, really bad. Robots are bad. Sentient machines are bad. And our confidence in our ability to control and manage these computers is brittle.

We have been warned.

In his 1942 short story Runaround, science fiction writer Isaac Asimov introduced a set of fictional rules known as the Three Laws of Robotics. These laws were designed to govern the behaviour of robots and artificial intelligence in Asimov’s fictional universe, providing a firm moral and ethical framework for their actions.

According to these laws: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; and 3) A robot must protect its own existence, as long as such protection does not conflict with the first or second law.
These laws became a central theme in many
of Asimov’s subsequent works, including his
1950 story collection I, Robot, which was eventually
adapted into the 2004 film. The story, set
in a futuristic world where robots are an
integral part of society, follows Will Smith
as a detective who becomes suspicious that
a highly advanced robot named Sonny may
have been involved in a prominent scientist’s
death, and would thus be in violation of the
three laws.
Countless films about artificial intelligence made since have interfaced with Asimov’s Third Law in particular – namely, ensuring
the self-preservation of robots. Robots
are programmed to safeguard their own
existence and continue to function as long
as that safeguarding does not clash with the
higher need to guarantee human safety and
follow human orders.
Most Hollywood films about robots and
machines follow familiar plot coordinates:
the machines are usually presented as
highly sophisticated computer systems that
exhibit both intelligence and defects. The
interactions between the machines and
the humans in these stories raise profound
questions about the nature of consciousness,
the relationship between technology and
freedom, and the conceivable perils and
philosophical implications of AI.
So, in The Terminator (1984), James
Cameron depicts a future where Skynet –
an AI system – has become self-aware and
launches a war against humanity.
The dystopian noir Blade Runner (1982)
features advanced androids known as
“replicants” who go rogue.
More recently, films such as Ex Machina
(2014) have explored the hazardous
relationships that emerge between humans
and humanoid robots, while Her (2013) features a lonely writer who falls in love with the Siri-like virtual assistant that runs his phone’s operating system. Both raise
weighty ethical questions about the nature
of consciousness and the potential dangers
of AI.
And let’s not forget The Matrix (1999),
where future humanity is trapped
in a simulated reality created by advanced
AI systems. Only a small band of humans
(led, of course, by Keanu Reeves) can rebel
against their machine overlords and fight for
their freedom.
All of these plotlines can be traced back to
1968, and Stanley Kubrick’s astonishingly
influential 2001: A Space Odyssey. This
was the first mainstream film to seriously
tackle the risks AI poses around transparency and accountability in machine decision-making.
Kubrick always had a deep interest in AI,
and told an interviewer in 1969 that one
of the things he was trying to convey in
2001 was the reality of a world soon to
be populated by machines like the supercomputer HAL “who have as much, or
more, intelligence as human beings, and who
have the same emotional potentialities in
their personalities as human beings”.
Kubrick, as James Cameron and the Wachowski sisters would later do with The Terminator and The Matrix, frames 2001 as a cautionary tale. He and co-screenwriter Arthur C. Clarke (himself
an early proponent of the benefits of AI)
explore the boundaries and consequences
of human interaction with intelligent
machines and suggest that machines have
the unfailing capacity to evolve beyond their
programmed intentions and develop their
own motivations and actions.
HAL’s malfunction in 2001 and its subsequent attempts to eliminate
the human crew highlight the potential
threats and ethical considerations associated