Executive Summary: With over 70% of insurers planning to deploy gen-AI tools over the next two years, AI promises much, and the keen adopters likely will be rewarded – but it’s a path that must be navigated carefully, keeping a close eye on regulatory compliance in markets across the globe, says Erez Barak, chief technology officer of Earnix, a software-as-a-service (SaaS) solution provider.
The use of artificial intelligence grew in leaps and bounds during 2024, and regulators have taken note, increasing their oversight across the globe.
It’s a technological tool that holds immense potential for the insurance industry, but companies must navigate the fine line between innovation and regulatory compliance.
A recent survey found that 51% of respondents said their company had to pay a fine or issue refunds due to errors in the use of AI in the past year.
The survey also found that more companies plan to devote greater care and time to regulatory compliance in 2025, both to avoid future fines and to ensure their use of AI meets the guidelines. (See related article: Viewpoint: Why Insurers Should Have Confidence in Machine Learning Capability.)
Interestingly, insurers in Europe and Australia may already be off to a faster start than those in other territories. A majority of European and Australian insurance companies (68%) said they are spending more or significantly more time on regulatory compliance this year than last, compared with 62% of North American firms reporting the same.
In the case of the European Union, this may be because insurers face an increasingly stringent and complex regulatory environment. For example, the Solvency II Directive imposes extensive capital, risk management, and reporting requirements. As a result, insurers in Europe and Australia may have a slight head start when it comes to implementing new technology to help with compliance.
That said, the pace of change in AI makes it difficult to say exactly where the industry stands in getting on top of regulation. The ever-evolving nature of AI means it is barely possible for most regulators to prescribe detailed rules, so most commissioners have turned to a principles-based approach instead.
For example, the National Association of Å˽ðÁ«´«Ã½Ó³» Commissioners’ (NAIC) model bulletin on the use of artificial intelligence systems by insurers, adopted on Dec. 4, 2023, takes a principles-based approach that has been endorsed by the insurance industry. The bulletin creates no new laws, nor does it seek to at this time. It encourages, but does not mandate, testing for unbiased outcomes. It is meant as guidance, not as a model law.
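To make “testing for unbiased outcomes” concrete, here is a minimal sketch of the kind of internal check an insurer might run: comparing a model’s approval rates across groups. It assumes Python with pandas; the data, column names, and tolerance are purely illustrative and are not prescribed by the bulletin.

```python
import pandas as pd

# Hypothetical scored-decision data: one row per applicant, with the
# model's decision and a protected attribute used only for auditing.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,    0,    1,    1,    0,    0,    1],
})

# Approval rate per group: a basic demographic-parity check.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Flag the model for review if the gap between groups exceeds a
# tolerance chosen by the compliance team (0.2 here, purely illustrative).
if rates.max() - rates.min() > 0.2:
    print("Approval-rate gap exceeds tolerance; model needs review.")
```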
Ethical Application of AI
Among the many talking points of AI are the ethics surrounding its outputs and the decisions that are made as a result. This is rightly a live topic of debate and discussion in the insurance sector. The European Commission’s advisory body has determined that for an ethical use of AI, seven key requirements must be met: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.
Such guidelines are helpful for businesses when implementing their own processes around the use of AI, and the more live examples that exist and are shared throughout the market, the stronger these will become.
Significantly, of course, these all require human intervention. Indeed, AI is not yet capable of replacing human employees and is best used to complement employees’ strengths. This is the concept of “human-centric AI”: the idea of using AI to augment human capabilities by automating tasks, improving speed and efficiency, and enabling employees to do their jobs better.
Above all else, human-centric AI recognizes that people are critical to the design, operation, and use of the technology, which should reinforce and even improve employees’ abilities – not look to replace them. It also highlights the importance of regulation with clear guidelines for businesses to follow.
Human-centric AI encompasses human controls, transparency, fairness, explainability, inclusiveness, education, and other related ideas.
Of these sub-topics, “explainability” will be key in helping insurers work proactively with regulators to explain the outputs of their AI tools. Explainability also gives businesses the opportunity to communicate more clearly with their customers – a win-win in all scenarios.
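As an illustration of what explainability can look like in practice, the sketch below uses the open-source shap package (assumed installed) to attribute an individual prediction from a pricing model to its input features – the kind of per-quote breakdown an insurer could share with a regulator or customer. The model, feature names, and synthetic data are hypothetical, not a reference implementation.

```python
import numpy as np
import shap  # open-source explainability library, assumed installed
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic policy features: driver age, vehicle age, annual mileage.
X = rng.uniform([18, 0, 1_000], [80, 20, 40_000], size=(500, 3))
# Hypothetical premium: mileage-driven, with a young-driver surcharge.
y = 300 + 0.01 * X[:, 2] + 500 * (X[:, 0] < 25) + rng.normal(0, 20, 500)

model = GradientBoostingRegressor().fit(X, y)

# SHAP attributes each individual prediction to the input features,
# yielding a per-quote explanation rather than a global summary.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(["driver_age", "vehicle_age", "annual_mileage"],
                       shap_values[0]):
    print(f"{name}: {value:+.2f} contribution to this quote")
```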
Misuse of AI
Many stories already exist about the challenging security landscape we now face, fueled in part by the increased use of AI by developers, and the emergence of equally powerful AI-enabled attackers. We’ve seen a growing number of sophisticated AI-powered attacks matched by AI-based defense mechanisms such as real-time anomaly detection and automated incident response.
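As a concrete example of one such defense mechanism, the sketch below shows a simple anomaly detector built with scikit-learn’s IsolationForest flagging outlying activity records. The simulated traffic data, feature choices, and contamination rate are invented for illustration, not drawn from any particular insurer’s systems.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated activity features, e.g. requests per minute and payload size (MB).
normal_traffic = rng.normal(loc=[100, 2.0], scale=[10, 0.3], size=(1000, 2))
suspicious = np.array([[450, 9.5], [520, 0.1]])  # bursts far outside the norm
events = np.vstack([normal_traffic, suspicious])

# Fit on the event stream; "contamination" is the assumed share of anomalies.
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)

# predict() returns -1 for anomalies; in production these flags would feed
# an automated incident-response pipeline rather than a print statement.
flags = detector.predict(events)
print(f"Flagged {np.sum(flags == -1)} of {len(events)} events for review")
```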
Stories about deepfakes and company-wide cyberattacks make headlines – and for good reason – but for companies to maintain the ethical use of AI with strong morals and responsibility, the rules need to be easy to follow. Hence the need for regulation and clear guidelines.
AI promises much, and the keen adopters likely will be rewarded, but it’s a path we must tread carefully. The doors have been opened to a new world, but only a very small percentage of the landscape has been trodden thus far. Taking guidance on where to move next, and how, is essential.
It’s important to follow the guidelines that exist at present, create your own in-house guidelines that closely mirror those of the regulators, and ensure explainability is at the heart of AI models. Together, these steps will ensure clear communication with regulators and will help with adaptation as regulation evolves.
Indeed, the surge in fines for AI misuse issued by regulators around the world, who are keen to keep pace with technological developments, only serves to further underscore the importance of proactive accountability and ethical practices in AI development. While AI holds immense potential for insurance, companies must navigate the fine line between innovation and responsibility.
By following existing guidelines and focusing on transparency, businesses can foster trust and ensure long-term success in an increasingly regulated landscape, which is set to become even more stringent in 2025.
Regulators Respond
In the EU, the AI Act passed in March 2024. It is the first-ever legal framework on AI, addressing the risks of AI and positioning Europe to play a leading role globally.
The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for businesses. It is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. These measures seek to guarantee the safety and fundamental rights of people and businesses when it comes to AI. The rules also aim to strengthen uptake, investment and innovation in AI across the EU.
Å˽ðÁ«´«Ã½Ó³» Europe, the European insurance and reinsurance federation, was among the first to support the objectives of the AI Act, stating it welcomed “the overall objective of the European Commission to create a proportionate and principles-based horizontal framework of requirements that AI systems must comply with in the EU, without unduly constraining or hindering technological development and innovation.”
The AI Act classifies AI systems based on risk levels—unacceptable, high, limited, and minimal—with stringent regulations applied to high-risk AI technologies used in sectors like healthcare and law enforcement. The act also aligns with the EU’s General Data Protection Regulation (GDPR) to ensure AI systems respect data privacy and individual rights.
In the United States, the government’s efforts include the Blueprint for an AI Bill of Rights, which outlines principles aimed at protecting citizens from algorithmic discrimination while ensuring transparency and human-centric AI design. The National Institute of Standards and Technology (NIST) has also developed frameworks to manage AI risks. The Federal Trade Commission (FTC) is focusing on preventing deceptive AI practices, particularly in consumer products and data privacy.
China, meanwhile, has implemented AI Ethics Guidelines to promote responsible AI development, with a focus on fairness and accountability. Additionally, the country has issued new regulations for generative AI to ensure that AI-generated content aligns with national security and societal values.
The United Kingdom is taking a pro-innovation approach to regulation with its AI Regulation White Paper, promoting safe AI development. The UK also hosted an AI Safety Summit in November 2023 to discuss global standards for AI safety, risk management, and accountability.
Canada introduced the Artificial Intelligence and Data Act (AIDA) to regulate the use of high-impact AI systems, emphasizing transparency and accountability in decision-making processes. Additionally, AI systems must comply with Canada’s existing data privacy laws.
Australia’s AI Ethics Framework promotes fairness, accountability, and transparency in AI, while Japan’s AI Governance Guidelines focus on ensuring AI aligns with human rights and societal values, emphasizing safety and societal benefits.
These global regulatory efforts aim to create a framework that ensures AI technologies are used responsibly, mitigating risks such as bias, discrimination, and privacy violations, particularly in high-impact areas like facial recognition, automated decision-making, and generative AI.
Hefty Fines for AI Misuse
At the same time, a raft of fines have been imposed for the misuse of AI, especially when it involves violations of data privacy, discrimination, or lack of transparency.
In the EU, several companies have been fined under the GDPR for AI-related data misuse. A notable case involved Google, which was fined €50 million in 2019 by French authorities for not clearly informing users about how AI algorithms processed their data for targeted advertising. AI-powered facial recognition systems have also drawn penalties when used in ways that infringe on privacy rights.
In the US, while there is no comprehensive AI-specific regulation yet, existing laws have been used to penalize AI misuse. For instance, the Federal Trade Commission (FTC) took action against Everalbum, a company that used AI-powered facial recognition without proper user consent. Although no monetary fine was imposed, the company was required to delete both the AI models and the data used, highlighting the growing scrutiny over AI applications.
In the UK, facial recognition firm Clearview AI was fined in 2022 by the Information Commissioner’s Office (ICO) for collecting images of UK citizens without their consent. The company was ordered to stop using the data, marking a significant case of regulatory enforcement in the AI space. These examples demonstrate how governments are increasingly holding companies accountable for AI misuse, particularly in areas like privacy and data protection.
It’s hard to remember a more prominent talking point in the global insurance market than artificial intelligence (AI). The sheer curiosity about the applications and potential of AI is palpable, with rarely a day going by without AI featuring in headlines. Exactly what will it enable us to do? To what extent can it impact the customer journey and therefore customer satisfaction? The answers to these questions are constantly changing in line with the evolving technology.
We are already seeing some of the positives that AI can bring, in terms of efficiency savings and speed to market. But at the same time, questions around oversight of AI and its outputs, and the decisions that are made as a result of these outputs, have been furrowing brows among regulators around the world.
In terms of speed to market, ChatGPT was launched by OpenAI in late 2022 and began its global roll-out. This came at a time when insurers were only beginning to think about how they could use large language models to improve business operations and achieve better results. The fact that a free-to-use model would become available globally so rapidly, however, was likely not factored into strategic planning at that point.
We know that insurers plan to use AI to reduce biases in underwriting algorithms, make more data-driven decisions, increase customer loyalty, and align existing algorithms with business objectives.
But amid all of the hype and curiosity, this fast-moving phenomenon is slowly but surely becoming better regulated across the globe, and insurers must switch on to the potential pitfalls or face fines.