Insurers launch cover for losses caused by AI chatbot errors

Insurers at Lloyd’s of London have launched a product to cover companies for losses caused by malfunctioning artificial intelligence tools, as the sector aims to profit from concerns about the risk of costly hallucinations and errors by chatbots.

The policies developed by Armilla, a start-up backed by Y Combinator, will cover the cost of court claims against a company if it is sued by a customer or another third party who has suffered harm because of an AI tool underperforming.

The insurance will be underwritten by several Lloyd’s insurers and will cover costs such as damages payouts and legal fees.

Companies have rushed to adopt AI to boost efficiency, but some tools, including customer service bots, have made embarrassing and costly mistakes. Such mistakes can occur, for example, because of flaws that cause AI language models to “hallucinate”, or make things up.

Virgin Money apologised in January after its AI-powered chatbot reprimanded a customer for using the word “virgin”, while courier group DPD last year disabled part of its customer service bot after it swore at customers and called its owner the “worst delivery service company in the world”.

A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up.

Armilla said that the loss from selling the tickets at a lower price would have been covered by its insurance policy if Air Canada’s chatbot had been found to have performed worse than expected.

Karthik Ramakrishnan, Armilla chief executive, said the new product could encourage more companies to adopt AI, since many are currently deterred by fears that tools such as chatbots will break down.

Some insurers already include AI-related losses within general technology errors and omissions policies, but these generally include low limits on payouts. A general policy that covers up to $5mn in losses might stipulate a $25,000 sublimit for AI-related liabilities, said Preet Gill, a broker at Lockton, which offers Armilla’s products to its clients.

AI language models are dynamic, meaning they “learn” over time. But losses from errors caused by this process of adaptation would not normally be covered by typical technology errors and omissions policies, said Logan Payne, a broker at Lockton.

A mistake by an AI tool would not on its own be enough to trigger a payout under Armilla’s policy. Instead, the cover would kick in if the insurer judged that the AI had performed below initial expectations.

For example, Armilla’s insurance could pay out if a chatbot gave clients or employees correct information only 85 per cent of the time, after initially doing so in 95 per cent of cases, the company said.
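As a rough illustration of that trigger, the check amounts to comparing a model’s observed accuracy against its baseline at the time the policy was written. The tolerance figure below is a hypothetical assumption for illustration only; the article does not say what drop Armilla actually tolerates:

```python
# Hypothetical sketch of a degradation-triggered payout check.
# The tolerated_drop threshold is an illustrative assumption,
# not Armilla's actual underwriting logic.

def payout_triggered(baseline_accuracy: float,
                     observed_accuracy: float,
                     tolerated_drop: float = 0.05) -> bool:
    """Return True if the model has degraded beyond the tolerated drop."""
    return (baseline_accuracy - observed_accuracy) > tolerated_drop

# The article's example: accuracy falls from 95 per cent to 85 per cent.
payout_triggered(0.95, 0.85)  # True: a 10-point drop exceeds the 5-point tolerance
```

A small dip, say from 95 to 93 per cent, would fall inside the tolerance and not trigger cover under this sketch.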

“We assess the AI model, get comfortable with its probability of degradation, and then compensate if the models degrade,” said Ramakrishnan.

Tom Graham, head of partnership at Chaucer, an insurer at Lloyd’s that is underwriting the policies sold by Armilla, said his group would not sign policies covering AI systems it judged to be excessively prone to breakdown. “We will be selective, like any other insurance company,” he said.

2025-05-11 04:00:20
