Reserving, machine learning and the myth of the silver bullet


Advancements in technology now provide the tools and resources for a step change in reserving capabilities that was not previously possible. The principal challenge today is inefficiency, caused primarily by the ineffective use of human capital. This creates opportunity costs: resources cannot be redeployed to continuous development, and talent drains away from reserving teams as individual capabilities are underutilised.

 Lewis Maddock leads the Nordic Insurance Consulting and Technology business and global future of reserving intellectual property development at WTW.
 
Too much time is spent on reporting and monitoring, leaving far too little capacity for the innovation and incremental development needed for real change. The reporting process itself and its supporting activities are too onerous, consuming too much human capital to manage operational risks and control the output to meet requirements.

Given this strain on resources, and the continuous compromises made due to deficiencies in the data and information relied upon for effective decision making, two important questions arise: why has there been so little innovation to date, and why do we think things are about to change?

 Uniquely complex
The reporting process is uniquely challenging due to its very different objectives, constraints and timescales compared to typical modelling processes. To begin with, the governance controls and audit requirements are much more onerous. This adds significantly to the difficulty of maintaining the process - let alone considering innovative techniques and new approaches.

For example, in some markets there is a requirement to document the models used and their parameterisation in enough detail for someone to independently recreate the results - a requirement that becomes ever more challenging as complexity is added to the models involved.

 The reserving process itself is an optimisation exercise. We recognise that there are a range of reasonable best estimates at any given valuation date and the challenge is in optimising the selection of our best estimates to meet reserving objectives over time.

 This, in itself, is a non-trivial process control problem and real progress can only be made with a step change in how we think about and perform reserving.

 There is of course more to reserving than the reporting process. But until we address the fundamental issues here, there is little opportunity for growth. This has traditionally led to a common perception that reserving is a statutory requirement and nothing more.

 But, it should be recognised that reserving is a critical part of business intelligence.

 What has changed
 The difference now is that advancements in technology provide the tools and resources necessary for the step change in capabilities not previously possible. For example, workflow management solutions enable the production of much more decision support material in a shorter time-frame, whilst maintaining and improving governance and controls. And the leaders in this space reduce the need for ad hoc analysis, have more data-driven analysis to support decision making, and free up time to embed further capabilities.

 Tools of this type provide a powerful solution for integrating software and systems in an end-to-end process, supporting a best-in-class approach to the architecture. This allows much more flexibility to use the best tool for a particular job in an end-to-end process.

 The availability of cheap computing power is opening up new possibilities to leverage data assets. For example: robotic process automation is being used to produce more for less with increasing granularity; interpretation techniques are helping decision makers identify the pertinent information and prevent this being lost in large volumes of analytical output; and machine learning is being leveraged to unlock value from unstructured data.

The architectural possibilities enabled by integrating a diverse range of systems and applications into a coherent process have expanded what can be achieved well beyond what was possible before - not just within reserving, but across the organisation. For this reason, real change in reserving practices appears feasible in the near future.

 Impact of machine learning
The role of machine learning should primarily be that of an enabler for targeted elements of the reserving solution, rather than a one-stop remedy. Further, machine learning needs to be considered in the context of a wider roadmap towards a future target operating model. Investing in the right development at the wrong time is probably the biggest pitfall when it comes to machine learning and reserving.

Besides operational efficiency and process control, the main benefit of machine learning for reserving is improved insight, and at its core reserving is an optimisation problem. In the reserving process, we are effectively running an optimisation to hit a moving target, where the cost of being wrong must be considered in order to provide meaningful outputs. This is very different from the machine learning currently used elsewhere by insurers, where supervised learning methods are typically applied to a well-defined response simply to minimise the error in fitting to historic data.
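The distinction can be illustrated with a toy example (all figures and weights here are hypothetical, not a calibrated reserving model): a supervised learner typically minimises a symmetric error against historic outcomes, whereas a reserving objective might weight the cost of under-reserving more heavily than over-reserving, pulling the optimal estimate away from the simple best fit.

```python
import numpy as np

def squared_error(estimate, outcomes):
    # Symmetric loss typical of supervised learning: over- and
    # under-estimation are penalised equally.
    return np.mean((estimate - outcomes) ** 2)

def asymmetric_cost(estimate, outcomes, under_weight=3.0, over_weight=1.0):
    # Illustrative "cost of being wrong": under-reserving (outcome above
    # the estimate) is assumed to cost three times as much as over-reserving.
    error = outcomes - estimate
    return np.mean(np.where(error > 0, under_weight, over_weight) * error ** 2)

# Simulated ultimate losses around 100: the symmetric optimum is near the
# mean, while the asymmetric optimum sits above it to guard against
# under-reserving.
outcomes = np.random.default_rng(0).normal(100.0, 10.0, 10_000)
grid = np.linspace(80.0, 120.0, 401)
symmetric_best = grid[np.argmin([squared_error(g, outcomes) for g in grid])]
asymmetric_best = grid[np.argmin([asymmetric_cost(g, outcomes) for g in grid])]
```

The point of the sketch is not the particular weights, but that the objective function itself differs from the error-minimisation used in typical supervised learning.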

 In pricing and risk modelling, for example, an insurer can assume that the risk differentials in their recent historic policy and claims data are representative of experience in the near future. This is a reasonable assumption in practice, such that if they fit a model that is predictive of risk relativities on the recent history, it is sufficient then that they have a predictive model of experience in the near future.

The difficulty in reserving is three-fold. Firstly, the information necessary to inform the answer far in the future may not be in the historic data, so building a predictive model of historic data is not going to provide the answer. Secondly, the estimates will change over time as the insurer generates more information on what the outcome will be. Finally, the reserving process and the way the insurer communicates results are going to be very different from today, supported by many more visualisations than are currently used. Back-testing diagnostics, for example, will be essential. This is why the roadmap is so important for placing developments in the right order to get where they need to be.
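As a sketch of what a back-testing diagnostic might track (the figures below are invented purely for illustration), one can compare the successive estimates of a cohort's ultimate loss against the eventually observed outcome:

```python
import numpy as np

# Invented figures: estimates of ultimate loss for one accident-year cohort,
# made at five successive valuation dates, versus the eventual outcome.
estimates = np.array([95.0, 102.0, 98.0, 100.5, 101.0])
actual_ultimate = 101.2

errors = estimates - actual_ultimate
diagnostics = {
    "mean_error": errors.mean(),                   # persistent bias across valuations
    "max_step": np.abs(np.diff(estimates)).max(),  # largest revision between dates
    "final_error": errors[-1],                     # residual error at the latest date
}
```

Diagnostics of this kind make visible whether estimates were persistently biased, or revised erratically, over the life of a cohort.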

 In terms of a single machine learning method that is going to magically solve all the problems in reserving, there is no silver bullet. Indeed, some developments will simply exacerbate existing problems if the right parts of the wider solution are not in place beforehand.

We are seeing a gradual move towards the use of machine learning in the projection of ultimate losses. In considering the end goal for machine learning in reserving processes, there is a useful analogy in process control applications in robotics. Consider an exercise of programming a quadcopter to fly through a hoop that is thrown through the air at random. This requires the quadcopter to monitor the data feed from sensors tracking the position of the quadcopter and the hoop, to determine the adjustments to speed and direction it should make at any given time, in order to optimise the trajectory needed to hit its target.

 How the hoop is thrown, how the wind blows during flight, and any other numerous variables mean that the quadcopter cannot know where that hoop will be when it eventually flies through it. The algorithms optimise a decision, given all information available at the time, to minimise the probability of missing the target at that point in time, and repeat this at regular intervals until the target is reached. In reserving, we cannot know what the ultimate loss for any given cohort will be, any more than the quadcopter can know where the hoop will be. But we can optimise the output from our reserving processes to acknowledge the fact that we're on a journey and minimise the cost of being wrong at any given time.
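The repeated re-optimisation can be sketched with a simple credibility-style update (the weights and signals below are purely illustrative, not a calibrated reserving model): at each valuation date the estimate is revised in light of the newest information, just as the quadcopter re-plans at each interval:

```python
def reestimate(prior_estimate, prior_weight, new_signal, signal_weight=1.0):
    # One valuation step: blend the running estimate with the latest signal,
    # in the spirit of credibility weighting.
    total = prior_weight + signal_weight
    blended = (prior_weight * prior_estimate + signal_weight * new_signal) / total
    return blended, total

# Illustrative signals emerging at successive valuation dates.
estimate, weight = 100.0, 1.0
for signal in [110.0, 108.0, 107.5, 107.0]:
    estimate, weight = reestimate(estimate, weight, signal)
# The estimate converges towards the emerging experience over time.
```

Each step makes the best decision given the information available at that time, then repeats - the estimate is never "right" until the cohort runs off, but the cost of being wrong at each date can be managed.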

 Roadmap to unlock reserving capabilities
 Insurers that have had the greatest success so far in improving their reserving capabilities are those that have made the most progress on the journey towards their defined future operating model. Strategic planning on developments and clear objectives for these exercises up front is a key differentiator for the market leaders in this space. There is a lot that can be done to realise the benefits of operational efficiency and process control before resorting to machine learning. It has its place, but it's not the complete solution.

Embedding a workflow management solution in the reserving process, such as WTW's Unify technology, is a critical enabler for the journey. The step change in automation will enable insurers to produce significantly more decision support material, whilst easing the burden of maintaining governance, controls and audit, and freeing up resources for development activities. This tooling is essential for any real growth in data-driven analytics, and for supporting reserving without expenses spiralling out of control. Insurers will be able to readily integrate new tooling into the environment, with improvements in governance, control and audit. Robotic process automation will then complement these capabilities by producing increasingly sophisticated data-driven analysis and output in a timely manner for the same headcount.
 
  
