Liontrust Global Innovation Report – The Rise of AI, 04.24
INFRASTRUCTURE LAYER – NVIDIA
In the past 18 months, Nvidia’s CEO and
founder, Jensen Huang, has propelled
himself into the league of iconic business
trailblazers such as Steve Jobs and Bill
Gates. Huang has made it clear: Nvidia isn't merely an entrant in
the race to infuse AI into the global computing matrix; it is leading
the pack and arguably dominating the field.
The meteoric surge in Nvidia’s stock price over this period calls
for a deep dive into its fundamentals. Huang has underscored
the vast potential of the data centre market – a $300 billion
opportunity primed for Nvidia’s state-of-the-art, cost-effective, and
blisteringly powerful computing chips and architecture. Nvidia’s
bold undertaking to replace the outdated CPU-driven infrastructure
with its advanced computing stack will be a decade-long
endeavour. As the chart below shows, Nvidia is still in the early
days of this journey, with the company generating $48 billion of
data centre revenue in fiscal 2024. Still, this is only around
15% of the overall potential.
Why not Intel or AMD? By their own admission, both of these giants
are finding it more than challenging to surmount the barrier that
Nvidia’s software offering, CUDA, presents. Over the past decade,
Huang has focused on building a robust software ecosystem that now
boasts over four million developers around the world. This enormous
army of innovators creates software libraries that enable Nvidia’s
cutting-edge chips (the latest being the H200 Hopper, with the B100
Blackwell currently in the pipeline) to be harnessed in industries across
the whole economy, from life sciences and aeronautics to customer
service help desks. Indeed, given this ecosystem, Nvidia’s leadership
in supplying the tools for AI computing may by now be unassailable.
This convergence of hardware and software has led Huang to liken the
current period to an "iPhone moment": a juncture where the supply of
technology meets massive, broad-based demand and creates an
unanticipated revolution.
[Chart: Nvidia Datacentre Revenues ($bn), FY14 to FY24: roughly $0-3 billion a year from FY14 to FY20, rising to $7 billion in FY21, $11 billion in FY22, $15 billion in FY23 and $48 billion in FY24]
Source: Nvidia and Liontrust, as at end of February 2024. Past performance does not predict future returns.
MODELS LAYER – LLMS
The fundamental addition to the AI technology
stack is the large language model layer,
which serves as the brain for generative AI
tasks. Like us, you may have used OpenAI's ChatGPT and other
leading models, such as Google's LaMDA and Gemini, Meta's LLaMA
and Anthropic's Claude, through accessible 'chatbot'-style interfaces.
How LLMs understand and generate language: LLMs learn to
understand and generate language through the training process.
They are fed vast amounts of text data, from which they learn
patterns and relationships between words. This training involves
adjusting model parameters to optimise the model’s accuracy. The
more data they are trained on, and the more parameters they have,
the better they become at predicting and generating coherent and
contextually relevant text.
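The pattern-learning idea described above can be sketched with a toy next-word predictor. Real LLMs tune billions of parameters by gradient descent rather than counting word pairs, so everything here (the tiny corpus, the `predict_next` helper) is purely illustrative:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast amounts of text data"
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows another (a crude stand-in for
# the statistical relationships between words that LLMs learn)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, if any."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

More training text and more expressive models sharpen these predictions, which is the intuition behind the scaling described above.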
The transformer model: The transformer architecture, introduced in a
seminal 2017 paper by Google researchers titled "Attention Is All You Need",
revolutionised the computational analysis and manipulation of natural
language. Unlike previous models that processed words in sequence,
transformers can process all parts of a sentence simultaneously. This
parallel processing capability significantly enhances the model’s ability
to grasp the complexities of language.
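The parallel processing described above rests on scaled dot-product attention, in which every token is scored against every other token at once. A minimal pure-Python sketch of that computation, using toy vectors and omitting the learned weight matrices of a real transformer, might look like:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over a whole sequence at once."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Score this query against every key simultaneously
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Each output is a weighted mix of all the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Tiny 2-token example: each token attends most to itself
out = attention([[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

Because every query is scored against every key independently, these computations can all run in parallel on hardware such as Nvidia's GPUs, which is what makes transformers so well suited to the chips discussed earlier.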
LLMs and context: One of the most impressive aspects of LLMs is their
ability to consider context. Large and growing ‘context windows’,
the amount of text they can consider at a time, are enabling LLMs to
generate responses that are not only grammatically correct but also
contextually appropriate. This ability is crucial for the development
of a form of AI that is much closer to human-like intelligence.
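A context window is, in practice, a budget on how much text the model can consider at once. The hypothetical `build_context` helper below sketches one common coping strategy when a conversation outgrows that budget: always keep the system prompt and fill the remaining space with the most recent turns. Real systems count tokens with a tokenizer rather than list lengths:

```python
def build_context(system_prompt, history, window):
    """Keep the system prompt, then the most recent turns that fit.

    `system_prompt` and each turn in `history` are token lists;
    `window` is the model's context limit in tokens. All names here
    are illustrative, not any particular provider's API.
    """
    budget = window - len(system_prompt)
    kept = []
    for turn in reversed(history):  # walk backwards from the newest turn
        if len(turn) > budget:
            break  # oldest turns are dropped once the budget is spent
        kept.append(turn)
        budget -= len(turn)
    return [system_prompt] + list(reversed(kept))

# A 10-token system prompt plus three turns against an 80-token window:
# the oldest 20-token turn no longer fits and is dropped.
ctx = build_context(["sys"] * 10,
                    [["a"] * 20, ["b"] * 30, ["c"] * 40],
                    window=80)
```

Growing context windows shrink how often such truncation is needed, which is why they matter so much for the contextually appropriate responses described above.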
Practical challenges: While LLMs are powerful, they are not without
challenges. One major issue is the potential for bias in their outputs,
as they can only learn from the data they are trained upon or later
encounter during inference. If that data is biased, the model's
outputs will likely be biased too. Additionally, the environmental
impact of training and running large models is a growing concern
given their substantial energy requirements.
The future of LLMs: The future of LLMs is both exciting and
uncertain in terms of which models and companies will dominate
on performance and adoption and the degree of homogenisation
or diversity of foundational models over the longer term. But as
they continue to evolve, we believe they will become increasingly
integrated into industries and our daily lives.