NAVIGATING RULES & ETHICS
By: Kiah Lau Haslett
Federal regulators so far have adopted an observational stance toward AI. But its use — and
risks — are on their radar.
The Financial Stability Oversight Council wrote in its 2023 annual report that AI can introduce risks to institutions, including to their models, cyber environments and overall safety and
soundness. AI can also pose specific consumer compliance, consumer protection and fair lending
risks, making it necessary for an institution to explain an outcome when AI is used in credit and
underwriting decisions. A lack of explainability “can make it difficult to assess the system’s conceptual soundness, increasing uncertainty about their suitability and reliability,” and potentially
generate biased or inaccurate results, the report read.
“It is the responsibility of financial institutions using AI to address the challenges related to
explainability and monitor the quality and applicability of AI’s output, and regulators can help
to ensure that they do so,” it added.
In the absence of more direct guidelines from regulators, financial institutions can apply existing
regulatory requirements and guidance like risk management frameworks and fair lending rules
to their AI applications and use cases.
The Office of the Comptroller of the Currency took that approach in its Fall 2023 Semiannual Risk Perspective, highlighting its existing supervisory expectations for banks, no matter the technology they use.
“It is important for banks to identify, measure, monitor, and control risks arising from AI use as they would for the use of any other technology,” the OCC wrote, adding that these activities should be “commensurate with the materiality and complexity of the particular risk of the activity or business process(es) supported by AI usage.”
FSOC recommended that financial institutions consider how they can update and strengthen their oversight structures to keep up with emerging AI risks. It also advised that institutions and their regulators further build their capacity to keep up with AI innovations, usage and risks. The OCC reminded banks specifically of their duty to conduct appropriate due diligence, follow their change management process and engage in risk management as they consider new or changing products, services and operating environments.

Regulatory Risk Emerging From AI
• Data security.
• Consumer protection.
• Regulatory compliance.
• Convincingly delivered output that is erroneous or flawed.
• A lack of consistency in responses over time.
• Unclear sourcing used to generate responses.
Source: Financial Stability Oversight Council
While not subject to European Union regulations, U.S. banks can also adopt a type of AI risk
continuum that the EU articulated in its Artificial Intelligence Act, suggests Capgemini’s Ashvin
Parmar. The continuum sorts use cases by output risk, from limited to high to unacceptable, and specifies the oversight and controls each tier would require. Low-risk applications could be used widely; high-risk ones should be used carefully and sparingly.
“While there’s nothing clear cut and readily available [in the U.S.], there’s enough frameworks
out there [that] clearly articulate your obligations in terms of what you should be ready for,”
Parmar says.
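As one way to picture the continuum Parmar describes, a bank's risk team could encode it as a simple lookup from risk tier to required controls before a use case is deployed. The sketch below is illustrative only: the tier names loosely follow the EU AI Act's broad categories, but the specific controls and the controls_for helper are hypothetical examples, not anything the Act, the OCC or Capgemini prescribes.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely modeled on the EU AI Act's risk continuum."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping of each tier to the oversight a bank might require;
# the control names are examples, not regulatory language.
OVERSIGHT_CONTROLS = {
    RiskTier.MINIMAL: ["standard change management"],
    RiskTier.LIMITED: ["transparency notice", "periodic output review"],
    RiskTier.HIGH: ["model validation", "fair lending review", "human sign-off"],
    RiskTier.UNACCEPTABLE: [],  # prohibited: never deployed
}

def controls_for(tier: RiskTier) -> list[str]:
    """Return the controls required before deploying a use case at this tier."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Use case is prohibited under this risk framework.")
    return OVERSIGHT_CONTROLS[tier]

# Example: an internal FAQ chatbot might sit at 'limited' risk, while
# AI-driven credit underwriting would land at 'high' risk.
print(controls_for(RiskTier.HIGH))
```

The point of such a structure is the one the article makes: the tier assignment, not the technology, drives how much oversight a given AI use case receives.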