How Å˽ðÁ«´«Ã½Ó³»­ Industry Can Use AI Safely and Ethically

By Doug Marquis | October 7, 2024

Several types of artificial intelligence are already being adopted across the insurance industry, and they have the potential to deliver extraordinary efficiency savings, opening the door to greater profitability, innovation, and complex problem solving.

While insurance-industry use cases for AI-based large language models, such as those behind ChatGPT, are still evolving, current examples include summarizing and generating documents, carrying out data analytics, and acquiring data for risk assessment and underwriting. As an insurtech company, we are also exploring how AI can help us write software in an automated way and exchange data between entities across the insurance ecosystem.

AI Risks

There are, however, multiple risks that can arise when using AI, primarily because it can easily generate errors. For example, AI can ingest statute information from one U.S. state and posit that it applies to all states, which is not necessarily the case. AI can also hallucinate, that is, make up facts, by taking a factual piece of information and extrapolating a wrong answer.
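A common mitigation for this kind of jurisdictional error is to fail closed rather than let a model generalize from one state's law. The sketch below is illustrative only and not drawn from the article; the statute entries and the lookup_statute helper are hypothetical.

```python
# Hypothetical guard: only surface statute text whose jurisdiction
# matches the user's state, instead of letting a model generalize it.
STATUTES = {
    ("CO", "ai-disclosure"): "Colorado SB 24-205 requires ...",
    ("CA", "chatbot"): "California's chatbot law requires ...",
}

def lookup_statute(state: str, topic: str) -> str:
    entry = STATUTES.get((state, topic))
    if entry is None:
        # Fail closed rather than extrapolating from another state's law.
        return f"No vetted statute on '{topic}' for {state}; route to counsel."
    return entry

print(lookup_statute("CO", "ai-disclosure"))
print(lookup_statute("TX", "ai-disclosure"))
```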

AI can also be biased if it is trained on data that is inherently prejudiced, producing algorithms that discriminate against a group of people based on, for example, ethnicity or gender. The result could be an AI that recognizes higher mortality rates in one racial or ethnic group and then infers that its members should be charged more for life coverage.
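One way to surface this kind of bias before a model reaches production is to compare its outcomes across groups. The sketch below is illustrative only, not an actuarial method: the sample decisions are made up, and the 0.8 threshold borrows the informal "four-fifths rule" from U.S. employment guidance.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# All data and the 0.8 threshold are illustrative; real reviews
# need legal and actuarial input.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical underwriting decisions: (group label, was the applicant approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

for group, ratio in disparate_impact(sample, "A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```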

AI bias also presents a danger when it comes to recruitment, potentially discriminating against people who are from certain regions or socio-economic backgrounds. For these reasons, there is still a critical need for human oversight of AI decisions to ensure inclusivity, fairness and equal opportunity.

New AI Regulations

AI technology has been moving so quickly over the last two years that regulation has been trailing far behind. Legislators are trying to catch up with the breakneck development of AI and the potential risks it might pose, which means insurers must be prepared for a raft of new regulation.

Earlier this year, Colorado became the first state to pass comprehensive legislation regulating developers and deployers of high-risk AI to protect consumers. High-risk AI systems are those that make, or are a substantial factor in making, consequential decisions relating to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services.

Avoiding Algorithmic Bias

The Colorado AI Act, which goes into effect on Feb. 1, 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination or bias.

This means that developers have to share certain information with deployers, including harmful or inappropriate uses of the high-risk AI system, the types of data used to train the system, and risk mitigation measures taken. Developers must also publish information such as the types of high-risk AI systems they have released and how they manage risks of algorithmic discrimination.

In turn, AI users must adopt a risk management policy and program overseeing the use of high-risk AI systems, as well as complete an impact assessment of AI systems and any modifications they make to these systems.

Transparency Required

The Colorado legislation also has a basic transparency requirement, similar to the recent EU AI Act, the Utah Artificial Intelligence Policy Act, and chatbot laws in California and New Jersey. Consumers must be told when they are interacting with an AI system such as a chatbot, unless the interaction with the system is obvious. Deployers are also required to state on their website that they are using AI systems to inform consequential decisions concerning a customer.

Moving forward, it's likely other states will begin adopting AI regulations similar to Colorado's. However, it's important to note that many governance measures, such as risk-ranking AI systems, controlling test data, and monitoring and auditing data, are already covered by other laws and regulatory frameworks, not only in the U.S. but around the world. Given the expanding layers of legislation at every level, we can expect the AI landscape to become only more complex in the near future. For the time being, there are several actions companies can take to help ensure they are protected.

Five Practical Steps for Insurers

Transparency: With simple disclaimers, insurers can let customers know when they are using chatbots and disclose where AI is being used to inform decisions in certain systems, including recruitment.
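As a minimal sketch of what such a disclaimer could look like in practice, the snippet below prepends a disclosure to the first chatbot reply. The bot_reply stand-in and the wording are hypothetical, not a compliance template.

```python
# Minimal sketch: prepend an AI-interaction disclosure to chatbot replies.
# The bot_reply function is a stand-in for any chatbot backend.
DISCLOSURE = ("You are chatting with an automated AI assistant. "
              "You can ask to speak with a human agent at any time.")

def bot_reply(message: str) -> str:
    # Placeholder for the real model call.
    return f"Echo: {message}"

def disclosed_reply(message: str, first_turn: bool) -> str:
    reply = bot_reply(message)
    return f"{DISCLOSURE}\n\n{reply}" if first_turn else reply

print(disclosed_reply("What does my policy cover?", first_turn=True))
```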

Intellectual property: It's important for insurers to protect customers' data ownership when dealing with AI vendors, and also to protect sensitive personal data, such as medical information. At Zywave, for example, we've seen AI providers with contracts requesting to own the data or modeling they are providing. Companies must be more diligent than ever when reviewing contracts to ensure confidentiality and IP ownership and to protect trade secrets that may be placed into vendors' systems.

The right data: When it comes to ensuring AI bases its decisions on accurate information, it's the company's responsibility to verify that it's giving the AI system access to trusted data. For example, at Zywave, we use our own data repository, comprising proprietary data, data purchased from trusted third parties, and data from public U.S. government agency sites that we have acquired and diligently vetted ourselves. The new Colorado AI regulations state a company must be able to explain how it reached a hiring decision and prove the decision is not biased, which comes back to transparency and logging where the data originated.
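In practice, "logging where the data originated" can start with attaching a source record to every dataset before it feeds an AI system. The sketch below is hypothetical, not Zywave's repository design; the field names and the sample entry are illustrative.

```python
# Minimal sketch of source logging for data fed to an AI system.
# Field names and the sample entry are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass
class DataSourceRecord:
    dataset: str      # internal name of the dataset
    origin: str       # where the data came from
    owner: str        # who holds rights to the data
    acquired: date    # when it entered the repository
    vetted_by: str    # who reviewed it for accuracy and bias

catalog: list[DataSourceRecord] = []

def register(record: DataSourceRecord) -> None:
    """Record provenance before the dataset is used in any model."""
    catalog.append(record)

register(DataSourceRecord(
    dataset="state_statutes_2024",
    origin="public U.S. government agency site",
    owner="public domain",
    acquired=date(2024, 6, 1),
    vetted_by="data governance team",
))

for r in catalog:
    print(f"{r.dataset}: from {r.origin}, owner {r.owner}, vetted by {r.vetted_by}")
```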

Documentation: As increasing numbers of AI products are being used in the insurance industry, it’s crucial to scrupulously document what data is being used, where it comes from, and who owns it. This enables companies to protect themselves from accusations of copyright infringement and intellectual property theft, as well as from AI making mistakes based on inaccurate data lifted from the internet.

Learning new skills: Å˽ðÁ«´«Ã½Ó³»­ companies need a greater understanding of AI to ensure they comply with regulations, which will likely be rolled out across the U.S. and in other countries over the next two years. While new roles have already been created for prompt engineers to ensure AI systems produce the best answers, prompt engineers must still be overseen by other humans in case the information they are inputting is biased or presents a security risk.

Given the increased usage and advancement of AI over the past few years, it’s likely the technology is here to stay.

And although the extra administrative and oversight work required to ensure AI is used safely and ethically may seem daunting, the technology offers tremendous business value, with the potential for automation to drastically improve efficiency and profitability.

There’s no doubt the benefits outweigh the additional work of developing a robust AI protocol. By putting in place stringent guardrails, the insurance industry will reap the rewards of AI while remaining compliant within a quickly evolving regulatory landscape.
