Å˽ðÁ«´«Ã½Ó³»­

AI Used for Good and Bad — Like Making Trickier Malware, Says Report

By | November 26, 2024

While the insurance industry is quickly adopting AI technologies to improve the speed and accuracy of routine functions, analyze data and assess risk, malicious actors use the same tech to disrupt business and profit from cyberattacks.

Recognizing the potential exposure accumulation risk arising from AI, the industry needs to look ahead and forge an analytical pathway to measure the risk AI-powered cyberattacks pose to every facet of the industry, from internal activities to client-facing communications.

AI Speeds Malware Evolution

Just as AI boosts legitimate business efficiencies, it increases malware attacks’ evolution and effectiveness, beginning with the ability to spot and exploit weaknesses, according to Guy Carpenter’s recent report , published in collaboration with CyberCube.

With machine learning capabilities, polymorphic malware can be designed to recursively generate new code variants without human intervention as it calls out to a Gen AI model such as ChatGPT or some more purpose-built utility. The malware itself can periodically create an evolved version of its own malicious code, autonomously generating new variants that are more evasive and difficult to detect, staying one step ahead of safeguards.

Lateral movement and infection propagation capability are particularly applicable to ransomware campaigns attempting to extort wider footprints of systems for higher profits.

Impacts of More Efficient Malware

AI-assisted or generated malware can increase dwell time, mutate often enough to avoid signature detection and automate the learning and command and control processes to spread faster, both externally and internally within networks. Organizations that deploy AI may seek third-party solutions such as ChatGPT, in which the compromise of the vendor model can become a single point of failure for all customers using the model.

AI also presents a new attack surface in which users interact with the model, such as a chatbot, a claims processing tool or a customized image analysis model, a process subject to malicious and sometimes accidental manipulation. Tools like large language models (LLMs) have been demonstrated to allow for higher-quality social engineering at scale (phishing, deepfakes, etc.), quicker identification of vulnerabilities and the possibility of a larger initial footprint. Risk is expanded for companies that deploy customer-facing LLMs. Proof of concepts and reports from threat intelligence company Recorded Future show that LLM usage in phishing and social engineering increases the efficiency and efficacy of the reconnaissance, weaponization, and delivery stages of an attack.

Data privacy is an ongoing target of cyber attacks. When AI is trained through access to large, sensitive datasets, a compromise to the centralized storage for these datasets can have dramatic downstream effects. Research has shown machine learning can allow faster and more stealthy data exfiltration by reducing extraction file sizes and automating mass data analysis to identify valuable information within a sea of worthless data.

One of the highly touted use cases for AI is in cyber security operations, the type of procedures that require high-level privileges. With such critical response decisions given to AI, the potential for errors or misconfigurations may increase, resulting in additional risks.

AI enhancements to attack vectors will increase the efficacy and efficiency of attacks in the pre-intrusion phases of the cyber kill chain. Threat actors will be able to attack a greater number of targets more cost-efficiently, with an expected increase in success rate, resulting in a larger footprint for a given cyber threat campaign.

Fighting Back

All else being equal, we may expect defenders to have an advantage over threat actors, primarily because legitimate developers of defensive tools will have greater access to superior AI technology and training data from user systems. However, the rapid developments in the AI field are dynamic and uncertain, making the influence of any trend challenging to predict.

While larger, more resourced firms have a better chance at reducing their often outsized exposure to cyber risk by deploying AI in defensive mechanisms, smaller, less-resourced or less-prepared firms will likely have increased exposure to novel attack trends and methods.

This also likely increases the variation of possible impacts from one organization to another when other factors, such as size and industry, are the same.

Cyber threat landscape data suggests that trends fluctuate in waves of event frequency as novel attack methods and techniques are countered with advances in defensive methods and capabilities. Time shortens between peaks as attackers and defenders learn and adapt to one another at faster rates.

While both attackers and defenders can leverage AI, we will likely see a more significant difference between companies that employ defensive AI technology and those that do not. AI’s high dependency on the training data available tends to favor defenders and vendors, as they have access to world-leading AI technology and data from within their users’ systems.

While Gen AI’s integration across all industries marks a transformational shift in the current and future landscape of cyber threats, it creates a unique growth opportunity for the (re)insurance industry.

Frameworks for assessing systemic cyber risks must be refined since traditional models built on retrospective data may no longer suffice in a world where AI-driven attacks can evolve and scale at unprecedented rates. As Gen AI advancements increasingly influence the cyber threat landscape, developing an analytical pathway toward quantifying AI’s financial implications is key in helping (re)insurers prepare for a future in which AI technology becomes even more prevalent.

Topics InsurTech Data Driven Artificial Intelligence

Was this article valuable?

Here are more articles you may enjoy.