The EU AI Act is the first comprehensive piece of legislation to regulate AI systems. Cognizant’s Hellen Beveridge looks at how insurers can innovate with AI as regulations tighten.
AI is fast taking us into new realms of possibility. Each day seems to bring new leaps forward—and new horror stories. For every AI model that can detect a tumour or predict the location of a landmine, there’s one that can fake an insurance claim or misidentify a shopper as a thief.
The potential for innovation may be huge, but so is the potential for harm—which is why we’re starting to see tighter regulation around the use of AI. The European Union AI Act will be the first to impact innovators, but others won’t be far behind.
Balancing innovation and regulation
AI pioneers in the insurance sector will need to find a balance between innovation and regulation: between developing valuable new applications of AI and ensuring those applications meet regulatory standards for safety, security, privacy, inclusivity, ethics and consumer rights.
This needn’t mean pausing innovation initiatives. The COVID-19 vaccines showed how even the most highly regulated products can be launched fast and safely. Insurers developing AI-enabled solutions stand to reap substantial benefits, so it makes sense to keep moving forwards. But to make the most of AI, software engineering processes may have to change. Let’s look at why and how.
Know the risk level of your AI systems
The Act outright bans a small set of AI practices deemed to pose unacceptable risk, such as social scoring. For all other uses of AI, it takes a risk-based approach. This requires developers and implementers to assess, monitor and disclose the risk of any AI system, and especially to identify whether it falls into a ‘high-risk’ category. Insurance is listed as a high-risk area (the Act specifically names risk assessment and pricing in life and health insurance), making AI risk management a non-negotiable activity for insurers.
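To make that classification step concrete, here is a minimal sketch of how a team might record its AI systems against the Act’s risk tiers in an internal inventory. The schema, tier labels and example system are illustrative assumptions, not an official taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # heaviest obligations; includes listed insurance uses
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no extra obligations

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (hypothetical schema)."""
    name: str
    purpose: str
    tier: RiskTier
    owner: str

def requires_conformity_work(system: AISystemRecord) -> bool:
    # High-risk systems carry the Act's heaviest assessment and
    # documentation obligations; banned tiers should never ship at all.
    return system.tier is RiskTier.HIGH

pricing_model = AISystemRecord(
    name="life-risk-scoring-v2",
    purpose="risk assessment and pricing for life insurance",
    tier=RiskTier.HIGH,
    owner="underwriting-analytics",
)
assert requires_conformity_work(pricing_model)
```

An inventory like this gives risk teams a single place to see which systems trigger the heaviest obligations.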
The Act is also concerned with the use of general-purpose AI models as the software foundations of AI products. Developers building on models like OpenAI’s GPT-4 will need to keep detailed documentation, educate partners on the functionality and limits of the tools, and identify and label models that carry ‘systemic risk’ – i.e. those capable of causing harm at a catastrophic scale.
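The sketch below shows one hypothetical way those documentation duties could be captured: a record of a general-purpose model’s intended uses, known limitations and systemic-risk label that downstream teams see before integrating it. The field names are our own assumptions, not the Act’s official templates:

```python
from dataclasses import dataclass

@dataclass
class FoundationModelRecord:
    """Documentation stub for a general-purpose model a product builds on.

    Field names are illustrative, not the Act's official templates.
    """
    model_name: str                 # e.g. "GPT-4"
    provider: str                   # e.g. "OpenAI"
    intended_uses: tuple[str, ...]
    known_limitations: tuple[str, ...]
    systemic_risk: bool             # whether the model is labelled as systemic risk

    def label(self) -> str:
        # Downstream teams see the risk label before integrating the model.
        return "SYSTEMIC-RISK GPAI" if self.systemic_risk else "standard GPAI"

chat_model = FoundationModelRecord(
    model_name="GPT-4",
    provider="OpenAI",
    intended_uses=("claims-triage chat assistant",),
    known_limitations=("may state policy details not found in source documents",),
    systemic_risk=True,  # illustrative: the actual designation rests with the EU
)
print(chat_model.label())  # SYSTEMIC-RISK GPAI
```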
A new approach to risk and compliance
For many digital innovators in insurance, the Act will require a new approach to risk management. A best-practice risk management framework is one that is flexible enough to adapt to different (and evolving) regulatory regimes, and that spans the whole risk management lifecycle, from identifying and assessing risks to mitigating, monitoring and reporting on them.
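As a rough illustration of that lifecycle, the sketch below tracks a single risk through its stages, with reporting looping back into monitoring so the risk stays under continuous review rather than being closed. The class and stage names are assumptions for illustration only:

```python
from enum import Enum

class RiskStage(Enum):
    IDENTIFY = 1
    ASSESS = 2
    MITIGATE = 3
    MONITOR = 4
    REPORT = 5

class RiskItem:
    """Tracks one identified risk through the full lifecycle (illustrative)."""

    def __init__(self, description: str):
        self.description = description
        self.stage = RiskStage.IDENTIFY
        self.history = [self.stage]

    def advance(self) -> None:
        # Reporting loops back to monitoring: under continuous review,
        # a risk is never simply 'done'.
        if self.stage is RiskStage.REPORT:
            self.stage = RiskStage.MONITOR
        else:
            self.stage = RiskStage(self.stage.value + 1)
        self.history.append(self.stage)

bias_risk = RiskItem("pricing model may disadvantage older applicants")
for _ in range(5):
    bias_risk.advance()
print(bias_risk.stage)  # RiskStage.MONITOR: back under review after reporting
```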
Compliance processes may also need an overhaul. The Act emphasises the need for transparency around AI systems, making it vital to maintain detailed documentation and implement robust data governance. That means ensuring that data used to train or operate AI systems is properly managed, stored, and used, and that privacy and security are protected.
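One simple pattern for this kind of data governance, sketched here with invented field names, is to keep a provenance record for every dataset and fingerprint it, so auditors can later verify that the logged record has not been altered:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    """Provenance entry for a training dataset (hypothetical schema)."""
    name: str
    source: str
    collected_at: str
    lawful_basis: str            # e.g. "contract", "legitimate interest"
    contains_personal_data: bool

def fingerprint(record: DatasetRecord) -> str:
    # A stable hash lets auditors verify the logged record is unchanged.
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

claims_data = DatasetRecord(
    name="motor-claims-2023",
    source="internal claims platform",
    collected_at=datetime(2024, 1, 15, tzinfo=timezone.utc).isoformat(),
    lawful_basis="contract",
    contains_personal_data=True,
)
print(fingerprint(claims_data)[:12])  # short audit reference
```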
Innovate fast and safely with ‘agile compliance’
Insurers who start now with AI innovation can refine their models, learn fast, iterate fast and stay ahead of the game. But being fast must be matched with being smart and compliant. No insurer wants to see their investment go down the drain or have a project derailed by a compliance issue.
The way forward is to change the way risk and compliance are involved in the software development process. Today, it works a lot like the waterfall software development model of old: first the product gets built, then it goes ‘over the wall’ to QA – or, in this case, to risk and compliance.
A better way is to move to ‘agile compliance’ – having risk and compliance professionals and developers collaborate from the start, so AI systems are risk-managed at every stage, all the right documentation is produced, and there are no delays or nasty surprises at the end.
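What might that look like day to day? One minimal sketch, assuming a repository layout and artefact names of our own invention, is a ‘compliance gate’ that the CI pipeline runs on every build, failing fast if required documentation is missing rather than at the end of the project:

```python
"""A toy compliance gate a CI pipeline could run on every build.

File paths and checks are illustrative, not prescribed by the Act.
"""
from pathlib import Path

REQUIRED_ARTEFACTS = [
    "docs/risk_assessment.md",
    "docs/model_card.md",
    "docs/data_provenance.json",
]

def compliance_gate(repo_root: Path) -> list[str]:
    """Return a list of missing artefacts; an empty list means the gate passes."""
    return [p for p in REQUIRED_ARTEFACTS if not (repo_root / p).exists()]

if __name__ == "__main__":
    missing = compliance_gate(Path("."))
    if missing:
        raise SystemExit(f"Compliance gate failed, missing: {missing}")
    print("Compliance gate passed")
```

Run on every commit, a gate like this turns compliance from an end-of-project hurdle into just another test that has to pass.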
With regulations tightening, clients are already asking us to work with them like this. It could mean the difference between launching on time or being beaten to it by a competitor.
Hellen is a Fellow of Information Privacy and has a Master’s in technology law, specialising in technoethics and AI regulation. Her expertise lies at the intersection of data privacy, ethics and responsible AI, and she plays a crucial role in ensuring that data practices align with ethical standards and privacy regulations. Her pragmatic approach helps organizations navigate these complex issues while driving innovation and maintaining ethical practices.