AI and Insurance – The Awkward Early Days

Blog Post

By: Seth Row

For lawyers like me who practice at the intersection of law and insurance, the swift and widespread adoption of Artificial Intelligence across the business world is bringing new challenges and questions nearly every day. In this blog post I have endeavored to capture the current state of play between insurance coverage and Artificial Intelligence. But check back next quarter – who knows what might have changed!

In the insurance world, the glass is perpetually half-empty: when new technology comes along, we think about the risks and liabilities that the new tech may create. In twenty years of practice as an insurance litigator, I have never seen a piece of technology implicate such a diverse range of risks as Artificial Intelligence.

Liabilities associated with AI can come from all quarters – here are some actual examples ripped from the headlines:

  • Breach of contract (AI chatbot promises to give a customer a discount – company doesn’t honor it)
  • Defamation (generative AI fabricates, or “hallucinates,” accusations of wrongdoing by an individual)
  • Bodily injury (meal planner AI suggests a recipe for chlorine gas; AI in self-driving cars results in accidents)
  • Data breach (employees enter sensitive information into ChatGPT not realizing that makes it effectively public)
  • Discrimination against employees or customers (AI-enabled hiring bot favors white applicants; AI-driven pricing results in Black customers paying more)
  • Workers’ compensation (employer’s AI-driven machine confuses a worker for a box of vegetables and crushes worker)
  • Copyright (generative AI maker uses copyrighted material to “train” ChatGPT)
  • “AI-washing” and securities fraud (company makes inflated claims regarding use of AI for competitive advantage)
  • ERISA (health insurer uses AI to deny claims for healthcare without adequate medical oversight)

This broad range of risks is likewise potentially covered under an equally broad range of insurance policies. Here are three examples:

  • In multiple lawsuits against Character.AI, families have alleged bodily injury (including suicide and attacks on parents) resulting from teenagers interacting with hyper-realistic and hypersexualized AI-powered chatbots that allegedly encouraged the teens to commit violent acts. Bodily injury is covered under Commercial General Liability policies.
  • In several “AI-washing” lawsuits, investors contend that companies inflated projected financial results through false claims about their use of AI or the improvements that AI was making to business processes. Those types of claims are often covered by private-company Directors & Officers policies or Errors & Omissions policies.
  • In lawsuits against video-production companies, individuals contend that AI-generated videos shown on social media are using trademarked or otherwise protected likenesses, words, or images. Display of another’s intellectual property on social media is often within the “media liability” coverage in cyber-risk policies.

Recognizing this (and reverting to the normal insurance-industry approach to risks that are novel and therefore difficult to underwrite), one industry response has been to exclude these risks.

Exclusions are now appearing in coverage for media liability, which has traditionally been very broad, for media that is created using “generative artificial intelligence,” defined as “content created through the use of any artificial intelligence application, tool, engine, or platform.” These exclusions are appearing in both cyber-risk policies and in technology errors and omissions policies.

Very broad exclusions are also now appearing in businessowners (aka “BOP”) policies, titled “Business Use of Artificial Intelligence,” purporting to exclude liability for (among other things) personal and advertising injury (e.g. defamation) “arising out of or related to” any “AI,” defined as follows:

“AI” refers to “artificial intelligence enabled tools or software”, which means anything that uses artificial intelligence in whole or in part, including but not limited to publicly available applications driven by generative artificial intelligence, chatbots and/or image generators. 

As one would expect, along with these exclusions we are seeing questions about the use of AI on insurance applications. These kinds of questions create the risk of the policy being rescinded if the answer is wrong, often because the person filling out the application did not understand or keep up with the company’s use of AI.

On the other hand, some insurers are beginning to offer AI-specific Errors and Omissions coverage (for companies creating AI-enabled products), and endorsements specifically covering AI. One insurer is even offering an endorsement as an add-on to its cyber-risk coverage that specifically covers a “Machine Learning Wrongful Act” (defined as unauthorized use of data to train an AI model) or a “Data Poisoning Wrongful Act” (meaning the introduction of false data into a data set used to train the AI model).

Although we are now seeing the awkward early days of the insurance industry’s grappling with AI (on the one hand excluding it and, in some cases, covering parts of it), it is easy to envision a world in which a company would be penalized for not using AI. The insurance industry is increasingly adopting AI to, among other things, make routine decisions about coverage or underwriting as a cost-saving measure (and in some cases getting sued for doing so). But the industry has also recognized that taking the “human element” out of some processes (including communicating with policyholders) can actually reduce variability and risk.

We could therefore imagine a day when an applicant for property insurance might get a lower deductible if they have implemented AI-enabled systems to monitor the performance of industrial machinery rather than relying on a human, just as companies are now rewarded for having (or simply required to have) an automated sprinkler system in a warehouse. Or an applicant for employment practices liability insurance might be given a lower premium if they use an AI-driven system to rank job applicants rather than a human, with all of a human’s implicit biases.

Certainly, we can envision companies in the tech sector that are developing AI products being rewarded for ditching some aspects of the “black box” approach and adopting governance standards such as the NIST AI Risk Management Framework, or New York City’s Automated Employment Decision Tools (AEDT) law, Local Law 144, which requires bias audits and notification for the use of automated employment decision tools.

But for companies outside of the tech sector (and many within it) one of the hardest parts about risk management when it comes to AI is the fact that AI-enabled products are not home-grown: as with so many facets of business life, they are licensed from vendors. As my colleague John Pavolotsky wrote in this blog post in March, contracting for AI products or services requires investigation into how the AI is trained, on what, and where. Companies are purchasing AI-enabled solutions to enhance their own offerings or business processes, often sourcing the solutions from startup companies with a limited track record and a limited capacity to indemnify their customers if something should go wrong.

This means that companies have to weigh how much they can rely on their own insurance to protect against AI-related risks against the viability of contractual risk transfer to a vendor in a nascent industry, all while trying to stay competitive and not over-pay for or duplicate coverage. As with all new technologies, stakeholders within the company eager to get their hands on a new product, combined with enthusiastic AI salespeople, can leave risk management (not wanting to be the wet blanket) trying to back-fill coverage after the deal is done.

The net take-away for companies investing in AI-enabled technologies will sound familiar to anyone who’s lived through several rounds of the “big new thing” phenomenon: read your policies (all of them); coordinate risk management with product development and IT early, not after the deal is done; and finally, pay attention to risk scenarios that could come out of AI use, and talk to your lawyer (and, perhaps, broker) about insuring against those.
