EOS
Developing and agreeing on ethical
AI and data governance principles
is important to a company’s own
internal understanding of how best
to manage the associated risks.
To protect privacy and freedom of expression, we expect
companies to obtain clear and transparent user consent
for the collection, storage, and use of data, including
for targeted advertising, and to ensure the responsible
use of facial recognition technology. We also
encourage companies to endorse the Global Network
Initiative (GNI),7 a multistakeholder forum for
accountability, collective advocacy and practices at the
intersection of technology and human rights.
We ask that companies seek to understand where their
business models generate or contribute to negative
social impacts and be transparent about the findings.
They should take steps to mitigate negative societal
impacts and cede authority to regulators where
appropriate. We encourage companies
to prioritise children and young people when considering
potential negative societal impacts.
Q. Can you give some examples of positive
engagement outcomes?
A. We have been engaging with Baidu on AI and digital
rights since 2019, when we first encouraged the company
to establish and publish AI governance principles. As a
leading Chinese AI company, with products and services
that reach over a billion devices monthly, we believed
this would provide important reassurance to investors
that the company was appropriately mitigating AI risks.
In 2020, in a letter to the chair and CEO, we outlined our
concerns regarding AI and data governance and the
company subsequently provided us with a detailed
response. It explained its processes for blocking harmful
content, training customers on AI use, and its approach
to collecting user feedback and reducing algorithmic bias.
In 2022, when we published our Digital Rights
Principles, we shared these with Baidu and reiterated
our suggestions regarding AI governance principles.
Again, in 2023, we explained that investors would
welcome comprehensive AI ethical use principles from
the company. In 2024, it published its Baidu AI Ethics
Measures,8 which cover many of the key aspects that we
expect to see, including core principles, oversight by
the technology ethics committee, ongoing AI training,
participation in the development of industry standards,
and stakeholder engagement.
In 2022, we asked Meta to strengthen its child and
teenager safety policy to go beyond the prevention of
exploitation and to make an explicit commitment to acting
in the best interests of children and teens. We repeated
this expectation in our feedback to the company’s first
human rights report, released later in 2022, saying that we
would like it to address mental health, device addiction,
and other emerging issues that more holistically impact
young users’ safety, health, and well-being.
During 2023, the company faced increased scrutiny and
legal action on this issue after the US surgeon general
issued a warning on social media harms to adolescent
mental health. That same year, Meta published its second
7 https://globalnetworkinitiative.org/
8 https://esg.baidu.com/detail/560.html#:~:text=Baidu%20has%20formulated%20its%20AI,equality%2C%20empowerment%2C%20and%20freedom.