Education
Shaping the future of education: AI’s impact and the need for
proactive planning and policies
It is indisputable that AI will transform the world of
education; indeed, it already has. Like many technological
advancements before it, generative AI may be used either
positively or negatively in academia. AI presents significant
opportunities, but also challenges and even threats.
Institutions that work proactively to embrace the challenges
and navigate the threats will be in the best position to
continue to thrive. As it stands, guidance is necessary –
at federal, state, and institutional levels – to ensure that AI
works for, rather than against, educational purposes, and
education-sector organizations should consider developing
internal policies sooner rather than later.
To provide just two of many possible examples, we describe below
the risks and benefits of AI for two core academic functions:
admissions and instruction.
Admissions
AI has powerful potential to support colleges and universities
in their admissions processes, enhancing both the efficiency
and fairness of applicant evaluation. Many schools are already
utilizing AI to perform routine tasks – like automating GPA
recalculation – saving admissions teams thousands of hours of
quantitative labor. Schools can also use AI for more subjective
tasks, such as summarizing essays and recommendation letters
to identify personality traits and soft skills. And by analyzing
a broader set of data points, schools can look beyond numbers
and more comprehensively predict an applicant’s likelihood
of success. But AI’s ability to create a more holistic admissions
process requires thoughtful implementation. Because AI models
are typically trained on historical data, it is possible for the outputs
of those models to reflect biases present in the historical training
data. And while it is impossible to remove bias entirely from the
admissions process, thoughtless or unchecked reliance on AI
models opens institutions to legal and ethical risk. Even as AI
companies self-regulate to minimize this bias, institutions may be
well served by internal policies that address how their AI tools
operate, including with respect to bias, and that ensure AI is used
to complement, rather than replace, human judgment in the
admissions process.
Instruction
AI also has powerful potential to support educators and students
inside the classroom. For example, many AI platforms generate
questions that are responsive to students’ individual needs and
performance, thus improving the quality of academic assessment.
But many stakeholders fear the consequences of student
dependence on this technological advancement and the potential
impacts of AI tools on assessment integrity. As institutions
determine whether and to what extent generative AI tools can be
used in the classroom, zero-tolerance policies that provide little
guidance on practical ethical usage may not be the appropriate
approach. AI is here to stay and will be an increasingly important
tool in almost every industry sector; thus, it is critical to train
students to use it ethically and effectively. Rather than resist this
changing landscape, institutions should deploy responsive policies
that encourage responsible integration of AI in instruction and
learning consistent with all applicable privacy, accreditation, and
academic integrity requirements.
Given AI’s transformative influence in the world of education, it
is imperative for education professionals to pay close attention
to how this evolving technology will shape the broader industry.
The institutions best positioned to thrive will be those that find
ways to harness AI to advance their missions without creating
legal liability.
Melody Zargari and Greg Kimak contributed to this trend.
Authors
Joel Buckman
Partner
Washington, D.C.
William Ferreira
Partner
Washington, D.C.
Stephanie Gold
Partner
Washington, D.C.
Vassi Iliadis
Partner
Los Angeles, San Francisco, and Silicon Valley
Lance Murashige
Counsel
Washington, D.C.
Elliot Herzig
Associate
Los Angeles