What happens when the software that everyone’s racing to adopt becomes too risky for anyone to insure? According to reporting from the Financial Times, we’re about to find out.
Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from corporate policies. One underwriter described AI models' outputs to the FT as "too much of a black box."
The industry has good reason to be spooked, the story reminds us. Google's AI Overview falsely accused a solar company of legal troubles, triggering a $110 million lawsuit back in March. Last year, Air Canada got stuck honoring a discount its chatbot invented, and fraudsters used a digitally cloned version of a senior executive to steal $25 million from the London-based design engineering firm Arup during a video call that seemed entirely real.
What really terrifies insurers isn’t one massive payout; it’s the systemic risk of thousands of simultaneous claims when a widely used AI model steps in it. As one Aon executive put it, insurers can handle a $400 million loss to one company. What they can’t handle is an agentic AI mishap that triggers 10,000 losses at once.