Designing better risk models for the next pandemic


It will take time and resources to build an infectious disease risk model – re/insurers will have to be more innovative in their pandemic coverage and exposure management.

Natural catastrophe risk models have revolutionised the property/casualty re/insurance business over the past 30 years. They have allowed more efficient deployment of capital by providing a rigorous way of estimating potential losses, better quantifying the tail and increasing trust in the probabilities assigned to natural disaster events and the damage and losses they produce.

By Simon Young, Senior Director, Climate and Resilience Hub, Willis Towers Watson

All of these models have been developed from common foundational assumptions: an event happens and produces impacts on a known (although somewhat uncertain) exposure (property or another fixed asset), which has a known (again, somewhat uncertain) vulnerability to the hazard generated by the originating event. Using an intricate mix of physics (viewed through natural science and engineering lenses) and statistics, such models produce insurance loss estimates that are generally robust and defensible.

As new systemic and non-natural risks have emerged, establishing the potential future loss range of perils such as terrorism and cyber has required the introduction of social science disciplines (and greater levels of uncertainty), but it has not greatly disrupted the established logic of the cat model; the components and controls remain familiar.

Not so infectious disease models. First introduced to the insurance sector to capture excess mortality from global pandemics in the life insurance business, they began as a combination of the stochastic elements of natural catastrophe models with a well-established form of epidemiological model: the Susceptible-Infectious-Recovered (SIR) compartmental model and its many and varied siblings.
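To make the building block concrete, a minimal deterministic SIR model can be sketched in a few lines; the parameter values below are purely illustrative and do not come from any particular insurance model.

```python
# A minimal deterministic SIR model with a simple Euler time step.
# All parameter values are illustrative only.

def simulate_sir(beta=0.3, gamma=0.1, population=1_000_000,
                 initial_infectious=10, days=365, dt=1.0):
    """Return the daily (susceptible, infectious, recovered) trajectory."""
    s = float(population - initial_infectious)
    i = float(initial_infectious)
    r = 0.0
    trajectory = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory

if __name__ == "__main__":
    trajectory = simulate_sir()
    peak = max(i for _, i, _ in trajectory)
    print(f"Peak infectious count: {peak:,.0f}")
```

In the hybrid models described above, the stochastic cat-model machinery sits around a deterministic core of this kind.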

 Unknowns
From a traditional cat modelling perspective, many unknowns remained. For example, the two components of “hazard” – location and intensity – were both poorly understood, owing to a sparse and poorly documented event history and only a rudimentary understanding of the zoonotic viruses that are the dominant cause of epidemics and pandemics.

And the model architecture required was more Gaudí than Brutalist. There is no fixed exposure or vulnerability; both are dynamic and feed directly back into the model at its next time step. Nor are exposure and vulnerability controlled by engineering equations; they are driven by assumptions about political decisions and human behaviour, about travel webs and social networks.
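One hypothetical way to express that feedback in code: the effective contact rate at each time step responds to the current infectious count, standing in for behavioural change and policy response. This is a sketch of the concept, not any modeller's actual architecture; the response function and its sensitivity parameter are invented.

```python
# Sketch of an SIR time step in which transmission feeds back on itself:
# the contact rate falls as infections rise, a crude stand-in for changing
# behaviour and policy. The response function is invented for illustration.

def behavioural_beta(base_beta, infectious, population, sensitivity=50.0):
    """Effective contact rate declines as the infectious share grows."""
    return base_beta / (1.0 + sensitivity * infectious / population)

def sir_step(s, i, r, population, base_beta=0.3, gamma=0.1, dt=1.0):
    beta_t = behavioural_beta(base_beta, i, population)  # dynamic, not fixed
    new_infections = beta_t * s * i / population * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)
```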

The SARS-CoV-2 virus has brought epidemiological modelling into our living rooms (many doubling as home offices). Previously obscure epidemiological modellers have become household names, and the concepts of reproduction numbers, non-pharmaceutical control measures and even herd immunity have become all too familiar. Covid-19 is by far the best-documented pandemic ever, yet even after many months of live information being available (albeit to widely varying degrees and with a broad range of quality) to calibrate forward-looking models of case counts and mortality, inconsistencies and uncertainties abound.
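For readers new to those terms, the basic SIR framework gives the two headline quantities simple textbook definitions (estimating them from noisy real-world case data is far harder); the rates below are illustrative.

```python
# Textbook relationships in the basic SIR model: R0 is the ratio of the
# transmission rate to the recovery rate, and sustained spread stops once
# the susceptible share falls below 1/R0. Values are illustrative.

beta, gamma = 0.3, 0.1                    # illustrative rates per day
r0 = beta / gamma                         # basic reproduction number
herd_immunity_threshold = 1.0 - 1.0 / r0  # share that must be immune
print(f"R0 = {r0:.1f}, herd immunity threshold = {herd_immunity_threshold:.0%}")
```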

Epidemic forecasting is, by nature, a tall order. In some cases, model inconsistencies are due to differing assumptions that necessarily change as new information becomes available. Another reason model outputs may not reflect eventual outcomes is a feedback dynamic: models affect reality. If a model predicts a dire outcome, it may prompt decision makers, and even the general public, to change their behaviour, thereby changing the final outcome.

Further challenges arise in converting pandemic model outputs into the short-term economic impacts of interest to P&C re/insurers. The literature on the economic impacts of pandemics is extremely sparse (although this will change) and is dominated by economic simulations that sit on top of epidemic simulations rather than by empirical data. The consequences of government policy responses (such as lockdowns) and sociological dynamics (fear, social distancing) are generally not economic outputs from models but input assumptions driving the direction of the reproduction number and, ultimately, the outcome of the epidemiological event.
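A toy illustration of that layering, with both mappings invented for the purpose: an assumed policy-stringency path suppresses transmission (an input to the epidemic model) while simultaneously carrying an assumed output cost (the economic overlay).

```python
# Hypothetical economic overlay on an epidemic scenario. Both mappings
# below (stringency -> transmission, stringency -> output loss) are invented
# for illustration; in practice they would come from economic simulation
# rather than empirical data, as noted above.

def effective_beta(base_beta, stringency):
    """Assume each unit of policy stringency (0-1) suppresses transmission."""
    return base_beta * (1.0 - 0.6 * stringency)

def weekly_output_loss(stringency, baseline_weekly_gdp=100.0):
    """Assume output loss rises with the stringency of restrictions."""
    return baseline_weekly_gdp * 0.25 * stringency

stringency_path = [0.0, 0.2, 0.8, 0.8, 0.5, 0.2]   # assumed weekly policy path
total_loss = sum(weekly_output_loss(s) for s in stringency_path)
print(f"Illustrative cumulative output loss: {total_loss:.1f} GDP units")
```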

As one moves from modelling a single event to the full probabilistic modelling familiar to the re/insurance industry, additional challenges must be addressed.
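In cat-model terms, that move means replacing a single scenario with a stochastic event set and an exceedance-probability curve. A toy version, with invented frequency and severity assumptions, might look like this:

```python
# Toy probabilistic pandemic model: Poisson spillover frequency, a chance of
# escalation, heavy-tailed loss severity, and return-period losses read off
# the simulated annual-loss distribution. All assumptions are invented.

import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(n_years=50_000, spillover_rate=0.3,
                           escalation_prob=0.05):
    """Annual loss = sum of losses from spillovers that escalate."""
    losses = np.zeros(n_years)
    spillovers = rng.poisson(spillover_rate, size=n_years)
    for year, n in enumerate(spillovers):
        escalates = rng.random(n) < escalation_prob
        severities = rng.lognormal(mean=2.0, sigma=1.5, size=n)
        losses[year] = (severities * escalates).sum()
    return losses

losses = simulate_annual_losses()
for rp in (10, 50, 100, 250):
    print(f"{rp:>3}-year return-period loss: {np.quantile(losses, 1 - 1/rp):.1f}")
```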

We think near misses are frequent in real life and must be captured via counterfactuals in the modelling domain: two coronaviruses with very similar characteristics, emerging in very similar locations, can lead to very different global outcomes at the whim of individual actions – by patient zero, a head of state or many people in between – that are impossible to fully capture stochastically. Big challenges remain in quantifying the public policy and behavioural elements that shape the nature of the risk; these too need to be mapped as they evolve over time and then linked to biological and epidemiological modelling frameworks.

 Lessons to learn
Progress is being made, however, and Covid-19 learnings will help, although the temptation to model to the last big event has to be closely managed. The next pandemic will almost certainly differ in character from the present event.

 There have been significant advances in our understanding of the nature and spatial distribution of zoonotic viruses that pose the greatest risk of spilling into human populations and igniting pandemics. Improvements in biosurveillance have also shed new light on the rate of spillover, which is critical to characterising high-frequency events, as well as the tail.

There are also continuing advances in modelling methodology, ranging from the incorporation of socio-political factors to the capture of population movements. And there is still work to be done. The assumptions required to construct a probabilistic pandemic model are hugely influential on outcomes, yet they currently rest on expert judgments that are as much art as science and vary (often in ways that are not readily quantifiable) from modeller to modeller. The use of structured expert judgment to quantify and constrain uncertainties in such assumptions – and thus in model outcomes – is a promising area of development, given its successful deployment in other contexts. Alongside other innovations, it will help build a level of trust in pandemic models approaching that found in nat cat models.
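As a flavour of the approach, the sketch below pools three experts' quantile estimates for a model parameter using performance-based weights, a simplification in the spirit of Cooke's classical method; the experts, estimates and weights are all invented.

```python
# Simplified structured expert judgment: pool experts' quantile estimates
# for an uncertain model parameter using performance-based weights. In
# Cooke's classical method the weights come from experts' scores on
# calibration questions; here they are simply assumed.

def pooled_quantiles(expert_quantiles, weights):
    """Weighted average of each expert's (5th, 50th, 95th) percentiles."""
    total = sum(weights)
    pooled = [0.0, 0.0, 0.0]
    for estimates, weight in zip(expert_quantiles, weights):
        for k, q in enumerate(estimates):
            pooled[k] += weight * q / total
    return tuple(pooled)

# Invented example: three experts' views of an annual spillover probability (%).
experts = [(0.5, 2.0, 6.0), (1.0, 3.0, 10.0), (0.2, 1.0, 4.0)]
weights = [0.5, 0.3, 0.2]        # assumed calibration-based weights
low, median, high = pooled_quantiles(experts, weights)
print(f"Pooled estimate: 5th={low:.2f}%, 50th={median:.2f}%, 95th={high:.2f}%")
```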

Despite present and future scientific and modelling advances, their full benefits will not be realised if decision makers fail to use data and analytical tools effectively as part of their decision-making, whether to inform preparedness or to guide response activities.

In the context of the global re/insurance market, it must be recognised that while modelling infectious disease risk is challenging, and building the level of trust found in nat cat models will take time and resources, there are already pathways to an understanding of the risk. That present understanding is sufficient to support tangible innovation – policy experiments, insurance structures, refinements to preparedness and mitigation strategies – within both public and private sectors. Ultimately, further innovation will be necessary (and is entirely within our grasp) if we hope to better manage the financial and social consequences of future epidemics and pandemics.
  
