Chapter 5: Regulator/Governance Landscape (cont.)

Regulation as a disruptive influence

Regulation is no longer a slow-moving beast that responds after the fact. Today, it is a proactive and increasingly disruptive influence in the market, not afraid to make demands and to punish those who don't meet them. The defence against this remains unchanged – effective compliance – but the reporting obligations make this a far more onerous task than it once was.

AI ethics

At every turn, from the introduction of the GDPR/UK Data Protection Act 2018 to the arrival of Fair Value rules, insurers have turned to technology, and understandably so. AI and machine learning are the perfect tools to identify, gather, clean and report the necessary data, and as those data and reporting demands increase, the expectation is that technology will adapt to respond.

Unfortunately, while technology might secure some crucial breathing space, it only really addresses the problem of scale. As more data is collated and more AI employed to manage it, new regulatory risks are created. The use of technology to meet regulatory requirements is, ironically, creating even greater regulatory scrutiny.

The application of AI to a growing list of activities has raised concerns around the ethics of its use. The fairness of decisions made by AI is a crucial consideration, as baked-in biases could result in adverse outcomes for customers and failure to comply with FCA rules.

"Do no damage to humans." – the overwhelming sentiment from all AI regulatory guidance to date.

Principles:
• Fairness
• Appropriate transparency & explainability
• Accountability & governance
• Safety, security & robustness
• Contestability & redress

Underwriter considerations:
• The application of AI models to support underwriting is corporately governed and reviewed
• Any use of AI in underwriting appetite or decisions must be documented and explainable
• It must be possible to prove that the use of AI does not create anomalies or bias, given consistent underwriting inputs (see the illustrative sketch below)
• The use and application of AI must be able to be defended and adjusted

Figure 5.2: Ethics of Artificial Intelligence
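As one illustration of what "given consistent underwriting inputs" can mean in practice, the sketch below shows a minimal counterfactual consistency check: the model is scored twice on records that differ only in a protected characteristic, and any change in output is flagged, alongside a basic determinism check. This is a hedged sketch, not a prescribed method; the names (Applicant, price_risk, check_consistency) and the placeholder pricing logic are hypothetical stand-ins for an insurer's own governed model and data.

```python
# Minimal sketch of a consistency/bias check for an underwriting model.
# All names and the placeholder pricing logic are hypothetical; a real review
# would call the insurer's own model interface against governed test data.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Applicant:
    age: int
    postcode: str
    claims_history: int
    sex: str  # protected characteristic - must not drive pricing

def price_risk(applicant: Applicant) -> float:
    """Stand-in for the AI/ML pricing model under review."""
    # Placeholder logic for illustration only.
    return 250.0 + 10.0 * applicant.claims_history + max(0, 30 - applicant.age)

def check_consistency(applicants: list[Applicant]) -> list[str]:
    """Flag cases where identical inputs give different outputs, or where the
    output moves when only a protected characteristic is changed."""
    findings = []
    for a in applicants:
        # 1. Determinism: the same inputs must always give the same output.
        if price_risk(a) != price_risk(a):
            findings.append(f"Non-deterministic output for {a}")
        # 2. Protected-attribute invariance: flipping a protected field with
        #    everything else held constant should not move the price.
        counterfactual = replace(a, sex="F" if a.sex == "M" else "M")
        if abs(price_risk(a) - price_risk(counterfactual)) > 1e-9:
            findings.append(f"Price varies with protected attribute for {a}")
    return findings

if __name__ == "__main__":
    sample = [Applicant(34, "SW1A 1AA", 1, "M"), Applicant(52, "M1 1AE", 0, "F")]
    for finding in check_consistency(sample) or ["No anomalies found in sample"]:
        print(finding)
```

A check along these lines also produces the documented, repeatable evidence the other considerations call for: the test cases, the counterfactual pairs and the findings can all be retained as part of the model's governance record.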