Trends in Insurance Analytics


Driven by booming technology and an influx of new talent, tools and ideas, expectations of advanced analytics have reached fever pitch across almost every industry. In general insurance, many actuaries initially reacted to this sudden hype with bemusement and trepidation. After all, applying reason and leveraging data to inform decision making is nothing new. What’s changed?

By Jack Buckley, Chief Pricing Actuary, and Kenny Holms, Head of Predictive Analytics, ArgoGlobal

 Analytics in the past
General insurance has a long history of deploying analytical techniques, to varying degrees, across our constituent markets. For diversified and specialty carriers, the modelling landscape is varied and class dependent, shaped by the competitiveness of the environment, the data assets available and the nature of the products. Many insurers in this bracket have not faced the same degree of commoditisation as their personal lines and mid-market counterparts, and so have not been forced into seeking out an analytical advantage. Pricing via tried and tested rules of thumb often prevailed, with untapped data and loss experience piling up in the corner over the years.

This could never persist in the long term. Sure enough, structural changes in global insurance markets have squeezed returns to the point that many carriers have been forced to re-evaluate how they think about data and quantify risk.

 Progress without baggage
Perversely, for many carriers that are just beginning their analytical journeys, there are fewer impediments to success than for already advanced analytical outfits. New entrants face a blank canvas, unconstrained by preconceptions of what analytics means, how it is done, and what it can and can’t do. Many have opened their minds, and their cheque books, to a broader class of analytics function: one in which actuaries, data scientists and statisticians bring the best of their respective trades and tricks to the table.

First things first
 It can be easy for any analyst to fall into old habits when approaching a new analytics project: the actuaries normally want a method that comes with a story, and the data scientists often want to use whatever algorithm gives the best fit. Often, before the analysis even begins, both camps are already trying to justify their view of the world.

Unfortunately, debating the efficacy of the available tools before understanding the job isn’t the best way to begin any project, so it helps to start at the beginning. It’s often useful to articulate:
 • Aims: What exactly are you trying to achieve? Tell a story, set a strategy, find clusters of business, predict a quantity of interest?
• Measures: How can you differentiate between a good model or analysis and a bad one?
• Physical processes: How are losses generated for the insured, and how do they translate into losses to the insurer? How can censoring, capping, aggregation and so on confound your analysis, and how should they be dealt with?
 • Data: What is your data quality, quantity and relevance? Can you access more, internally or externally?
 • Constraints and timeframe: Does the solution need to be a particular shape or have particular attributes? Are you time constrained?

Answering questions like these, and having a broad bag of analytical tools at your disposal, should enable you to identify a selection of relevant approaches fairly quickly.

 An example from excess professional lines pricing: a 21st century Bayesian approach
Quantifying the expected ground-up loss cost for an excess policy is often of secondary interest. It is typically the tail of the loss process that we are interested in capturing accurately. Such business is often priced using benchmark base rates and increased limit factors (ILFs), or broad-brush stochastic modelling.
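As a rough illustration of that benchmark route (not our approach), the sketch below prices an excess layer from a base-limit loss cost using an ILF curve; the limits, factors and policy terms here are all hypothetical. The layer’s expected loss is simply the base loss cost multiplied by the difference in ILFs at the exhaustion and attachment points.

```python
import numpy as np

# Hypothetical ILF table: policy limits and their factors relative to a 1M base limit.
ILF_LIMITS = np.array([1e6, 2e6, 5e6, 10e6, 25e6])
ILF_VALUES = np.array([1.00, 1.35, 1.80, 2.15, 2.55])

def ilf(limit: float) -> float:
    """Interpolate an increased limit factor for an arbitrary limit."""
    return float(np.interp(limit, ILF_LIMITS, ILF_VALUES))

def expected_layer_loss(base_loss: float, attachment: float, limit: float) -> float:
    """Expected loss to a layer of `limit` xs `attachment`, given the expected loss
    cost at the base limit: base_loss * (ILF(attachment + limit) - ILF(attachment))."""
    return base_loss * (ilf(attachment + limit) - ilf(attachment))

# Example: a 5M xs 5M layer on a risk with a 100k expected loss cost at the 1M base limit.
print(expected_layer_loss(100_000, 5e6, 5e6))
```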

When presented with a ground-up industry loss list, we were able to focus on what we were really interested in: the full predictive distribution of losses, body, tail and everything in between. We chose to model the mean and dispersion of an assumed loss distribution, conditional on the exposure traits of the underlying risk. As such, we could capture not only which segments had higher “average” risk, but also which were more volatile: crucial information when you provide excess cover. The cost of cover is then calculated, trivially, by applying the policy terms to simulations from the predictive distribution of ground-up losses.
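A minimal sketch of that final step, assuming simulated ground-up losses are already available (here replaced by a stand-in lognormal sample) and a simple limit-and-attachment structure; the figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for draws from the predictive distribution of ground-up losses;
# in practice these would come from the fitted Bayesian model described below.
ground_up = rng.lognormal(mean=12.0, sigma=1.5, size=100_000)

attachment, limit = 5e6, 5e6  # hypothetical 5M xs 5M layer

# Apply the policy terms to each simulated loss: the insurer pays the portion
# above the attachment, capped at the layer limit.
layer_losses = np.clip(ground_up - attachment, 0.0, limit)

print(f"Expected loss to the layer: {layer_losses.mean():,.0f}")
```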

We built our loss model with cutting-edge statistical tools, fitting it using Hamiltonian Monte Carlo techniques. The flexibility of such techniques is huge, and allows us to (a simplified sketch follows the list below):
 • Fit a parsimonious double GLM, all at once under one model roof
• Regularise the model by setting a Bayesian prior, and tune this regularisation via cross-validation for maximum predictive performance (see Figures 1 and 2)
 • Incorporate credibility adjustments into our model automatically
 • Directly quantify uncertainty in our estimates, and generate loss realisations ready to feed through our contract terms
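
A simplified sketch of this kind of model, written here in PyMC, whose default NUTS sampler is a Hamiltonian Monte Carlo variant. The lognormal severity, the synthetic features and the prior scale are illustrative assumptions rather than the production model: both the location and the dispersion of log-losses are regressed on risk traits, with normal priors providing ridge-style regularisation whose scale could be tuned by cross-validation.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Illustrative data only: X holds standardised exposure traits, y holds ground-up losses.
n, k = 500, 4
X = rng.normal(size=(n, k))
true_mu = 10.0 + X @ np.array([0.5, -0.3, 0.2, 0.1])
true_sigma = np.exp(0.3 + 0.2 * X[:, 0])
y = rng.lognormal(mean=true_mu, sigma=true_sigma)

prior_scale = 0.5  # regularisation strength; a candidate for tuning by cross-validation

with pm.Model() as double_glm:
    # Regression for the location (mean of log-losses).
    alpha_mu = pm.Normal("alpha_mu", 0.0, 10.0)
    beta_mu = pm.Normal("beta_mu", 0.0, prior_scale, shape=k)
    mu = alpha_mu + pm.math.dot(X, beta_mu)

    # Regression for the dispersion, kept positive via a log link.
    alpha_disp = pm.Normal("alpha_disp", 0.0, 10.0)
    beta_disp = pm.Normal("beta_disp", 0.0, prior_scale, shape=k)
    sigma = pm.math.exp(alpha_disp + pm.math.dot(X, beta_disp))

    # Lognormal severity expressed as a Normal likelihood on log-losses.
    pm.Normal("log_loss", mu=mu, sigma=sigma, observed=np.log(y))

    # PyMC's default NUTS sampler is a Hamiltonian Monte Carlo variant.
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    # Posterior predictive draws give loss realisations ready to be fed
    # through contract terms (attachments, limits and so on).
    ppc = pm.sample_posterior_predictive(idata)
```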

This new model offers significantly greater resolution than broad-brush market rates, and provides our underwriters with a tool to help them navigate an uncertain risk landscape.

 The Future
The future holds many challenges and opportunities for the modern analytics function. From a technical perspective, the greatest challenge isn’t building smarter algorithms; it’s making the ones we already have fit our very specific set of problems. Brute force rarely helps a poorly framed problem.

We also need to be mindful of the world outside our modelling bubble, and of how best to trade off prediction and insight within our work. If we chase predictive power at all costs, our approaches inevitably start to assume levels of complexity similar to the world around us, which can make clear insight difficult to articulate without being disingenuous.

 Perhaps the biggest challenge, however, is sifting through the analytics hype, taking what we need and structuring our teams to make best use of the undoubted talents of a new and varied generation of professionals.
  
