By Nita Okorie, Associate – Business Development – FINEX PI, WTW
Although the era of Generative AI is still in its early stages, studies suggest that it could contribute to annual productivity growth of 0.1% to 0.6% through 2040. The current impact is already being felt in business growth, with noticeable gains such as increased sales and reduced costs (McKinsey, 2023).
AI is also proving effective in the insurance sector, particularly in improving efficiency, customer satisfaction and risk assessment. It can greatly improve the insurance process for professional indemnity insurance (PII), including claims management, risk evaluation, customer service and engagement, and underwriting. PII is designed to provide third-party cover for negligence arising when firms provide their professional services, across industries from legal and construction to healthcare and other professional fields. AI’s integration extends to sectors such as Construction, where it enhances efficiency and quality at every stage: from feasibility studies at Pre-Design, to validating building code compliance during Permitting & Approvals, to optimising Project Management. By streamlining these tasks, AI makes them more efficient and cost-effective.
While AI has made significant strides in improving insurers' fraud detection and claims processing procedures, it also introduces new risks. Biased or inaccurate claims-handling decisions driven by AI without human oversight could result in increased professional liability claims, not only for the insured but also for the insurer. OpenAI's terms of use, for example, include a limitation of liability clause that restricts the company's financial exposure to claims arising from errors or omissions in content provided by its services, including ChatGPT. For legal professionals who rely solely on the AI's output without review and human oversight, client claims arising from such errors will therefore largely fall on the professionals or their firms (OpenAI, 2025).
Cybersecurity insurance has also evolved with AI, which helps insurers detect cyber threats faster, protect data better and spot patterns and anomalies more quickly. There’s no doubt that cybersecurity professionals are utilising artificial intelligence to combat cyberattacks and threats. According to EY’s (2024) Global Cybersecurity Leadership Insights report, organisations that have responded most effectively to cyberattacks — over 50% faster than others — have integrated AI solutions into their cybersecurity strategies, giving them a significant advantage over organisations relying on traditional cybersecurity measures. However, as AI adoption spreads across industries, it’s important to note that malicious actors such as cybercriminals also have access to AI-driven tools, making organisational AI systems themselves vulnerable to attack. From deepfakes to phishing email techniques, traditional security measures are being challenged.
Claims scenarios: Professional indemnity
A notable example involves a 2019 dispute between Tyndaris SAM, a London-based investment management firm, and its former client, MMWWVWM Limited. Tyndaris developed and sold a trading platform that used AI to make investment decisions, claiming it could outperform traditional asset managers.
The platform was promoted as sophisticated and proven, leading MMWWVWM to invest significant funds. However, the AI operated without human supervision. The client later alleged that Tyndaris had misrepresented the system’s capabilities, in particular the claim that it had been tested and used in real client accounts. Following significant investment losses, MMWWVWM filed a claim alleging lack of transparency and oversight, which ultimately led to legal action against Tyndaris (Pinsent Masons, 2024).
Similarly, in the U.S., two lawyers from Morgan & Morgan faced potential sanctions for including fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted that an AI program had been used, which had "hallucinated" the cases. Hallucination occurs when an AI tool produces content that seems credible and real but is actually fabricated. The lawyer acknowledged it was an error that had not been identified sooner. The judge has yet to rule on sanctions; however, cases like this can lead to professional discipline or job loss.
Role of professional indemnity insurance
There’s no public evidence that professional indemnity insurance (PII) was invoked in the Tyndaris case. However, had the matter proceeded and Tyndaris been found liable for misrepresentation or failure to supervise the AI platform, such claims would typically fall under the scope of PII — specifically, coverage for professional negligence.
PII generally covers:
01 Negligence
Negligent acts or omissions in the delivery of professional services
02 Costs
Legal defence costs
03 Losses
Third-party losses arising from reliance on those services
As long as the use of AI is declared within the scope of professional services covered under the policy, and a relevant claim arises from those services, the policy should respond, subject to specific terms and conditions.
Conclusion
Brokers and insurers must adapt and offer solutions tailored to the challenges AI presents, including biases in AI decision-making, data breaches and AI-driven cyber threats. AI technologies need human knowledge and supervision to support ethical, accountable decision-making and to reduce potential risks.
Human oversight of AI systems also requires far more than technical knowledge. By building human oversight into AI processes, firms can demonstrate that their AI technologies function in a manner that honours human thought, independence, self-determination and empathy. This oversight plays a crucial role in reducing the risks linked to AI, including bias, discrimination and operational mistakes.
As AI continues to shape industries, professionals and organisations must stay ahead of evolving risks. Insurers that can offer new, AI-based policies will protect their clients better and stay at the forefront of this fast-changing world.
Highlights from professional indemnity insurance specialists
“From a PII perspective, we recommend that clients talk to their insurance advisors and evaluate to what extent AI-related risks are covered and how to respond to the emerging exposures of technology.” - David Carragher - PI Professional Services
“The operational efficiencies from the use of AI are undeniable, and the benefits are being experienced by many businesses. However, the utilisation of AI is clearly not without risk; insurers expect their clients to take a responsible and measured approach when utilising AI and ensure policies and procedures governing this risk within the business are monitored appropriately.” - Jonathan Angell - PI Legal Services