General Insurance Article - Audit capability: How to control your financial models


 By David Gregson, OAC Actuaries and Consultants

As we are all aware, Solvency II has recently come sharply back into focus. While this means renewed attention on insurers' financial models, one constant requirement has been the need for control and auditability of models. Auditing requirements are going to be at least as tight, if not tighter, than under current regimes. For example, the need for models to be demonstrably used across different business areas means the control of models is key.

This reflects a growing shift among users of financial models towards tightening control over their models and how they are used. The balance to strike is ensuring this does not choke innovation. The following article paints a picture of how innovation and control can work together.

 What is Audit capability?

Auditability of a financial model covers a wide range of diverse items, but a few key points a user or auditor might consider are as follows.

 1. Model source control
Auditing and controlling changes to a model, whether in development or in production, is of particular interest as new models are written or existing models evolve. IT development has used full source control for many years (and has recently moved to 'distributed source control'), and it is only relatively recently that financial modellers have been able to catch up.

A modern modelling environment needs to allow for the development of a single model by multiple users. Given a model's size and complexity, different individuals or even teams may be responsible for separate areas of its development. Source control allows these users to work independently without requiring any "locking down" of other sections of the model. Any change is permitted by any team member. This avoids complicated 'per model section' authorisation rights that can hamper individual creativity. When a model release is required, appropriately authorised users can then select exactly which changes, out of those submitted, they would like to go live. Allowing models to be constructed from structures saved in .xml files means that each separate section of a model can be treated as an independent entity and so edited separately; conflicts arise only in the rare instance of the same item being changed by two users.
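The per-section storage idea can be illustrated with a minimal sketch. Here the file names and the `find_conflicts` helper are hypothetical; the point is simply that when each model section lives in its own .xml file, two users' change sets only conflict when they touch the same section.

```python
def find_conflicts(changes_a, changes_b):
    """Return the model sections edited in both change sets.

    changes_a / changes_b map section file name -> new content,
    e.g. one .xml file per edited part of the model.
    """
    return sorted(set(changes_a) & set(changes_b))

# Two developers working on different areas of the same model:
alice = {"mortality.xml": "<basis>AM92</basis>"}
bob = {"lapse_rates.xml": "<rate>0.05</rate>"}
print(find_conflicts(alice, bob))   # -> [] (no overlap, no conflict)

# Only when both edit the same item does a conflict arise:
bob["mortality.xml"] = "<basis>AMC00</basis>"
print(find_conflicts(alice, bob))   # -> ['mortality.xml']
```

Keeping sections independent like this is what allows a release manager to cherry-pick exactly which submitted changes go live.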

As changes are made, users can 'check in' their changes and sync with changes from other users. Each check-in can include a text description along with the auto-generated details of the changes made, together with the user's name and the date and time of editing. Over time this log forms an automatic audit trail of the model.
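A check-in record of this kind is easy to picture. The structure below is a hypothetical sketch (the field names and `check_in` helper are illustrative, not any particular package's API), showing how the user-supplied description, the auto-generated change list and the timestamp accumulate into an audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CheckIn:
    user: str
    description: str        # free-text note supplied by the modeller
    changed_sections: list  # auto-generated list of edited items
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

audit_trail = []  # the accumulated log is the automatic audit trail

def check_in(user, description, changed_sections):
    entry = CheckIn(user, description, changed_sections)
    audit_trail.append(entry)
    return entry

check_in("dgregson", "Updated lapse assumptions", ["lapse_rates.xml"])
for e in audit_trail:
    print(f"{e.timestamp:%Y-%m-%d %H:%M} {e.user}: {e.description} "
          f"({', '.join(e.changed_sections)})")
```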

Source control can be used to compare every aspect of the model, and so this could include results. Hence including a suitable auto-compile/run process within the check-in and check-out process means that the evolution of results, as well as the model's components, can be completely integrated with the model. When a change to results occurs it can be easily traced.
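Storing results alongside each check-in makes tracing movements mechanical. As a minimal sketch (the result keys and tolerance are hypothetical), comparing the result sets attached to two consecutive check-ins shows exactly which outputs moved and by how much:

```python
def diff_results(before, after, tolerance=1e-9):
    """Compare two result sets keyed by output name; report movements."""
    moves = {}
    for key in set(before) | set(after):
        old, new = before.get(key), after.get(key)
        if old is None or new is None or abs(new - old) > tolerance:
            moves[key] = (old, new)
    return moves

# Result sets stored with two consecutive check-ins:
results_v1 = {"BEL": 1000.0, "risk_margin": 60.0}
results_v2 = {"BEL": 1015.0, "risk_margin": 60.0}
print(diff_results(results_v1, results_v2))  # -> {'BEL': (1000.0, 1015.0)}
```

Because each result set is tied to a check-in, the movement in BEL above points straight at the change (and the user and description) that caused it.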

Use of source control is of course not limited to the initial development of a model. Once a model is in production, development continues as regulation or internal requirements change. As this happens, the same process can continue whereby the relevant area(s) can be updated, tested and checked back in, and the audit trail added to the existing documentation (and here the integration of results with the model is of particular importance). Additionally, source control can allow separate regions to be set up. So, for example, a locked-down "central library" of models may be set up, and models taken from there to a separate development area as required. Once development is complete and the agreed standard of testing required for sign-off has been met, only authorised users may be permitted to promote the model back to the central region.
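The promotion gate between regions can be sketched as follows. Everything here is hypothetical (the authorised-user list, the `promote` function and the region names are illustrative): the point is that promotion back to the locked-down central library checks both who is asking and whether sign-off testing is complete.

```python
# Hypothetical promotion gate for a central model library.
AUTHORISED_PROMOTERS = {"chief_actuary", "model_steward"}

class PromotionError(Exception):
    pass

def promote(model, user, tests_signed_off):
    """Move a model from the development region to the central library."""
    if user not in AUTHORISED_PROMOTERS:
        raise PromotionError(f"{user} may not promote to the central library")
    if not tests_signed_off:
        raise PromotionError("sign-off testing incomplete")
    model["region"] = "central"
    return model

model = {"name": "cashflow_v3", "region": "development"}
promote(model, "chief_actuary", tests_signed_off=True)
print(model["region"])  # -> central
```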

A similar system can also be used to support the use of a central model by different business areas. As an example, a central model may exist for generating cashflows. Separate areas would use the same model but apply separate parameters. A financial reporting team may apply realistic parameters while the statutory valuation team (under the current regime) applies a more prudent basis. A pricing team may wish to use the same cashflows but overlay some pricing system on top, e.g. a way of using the cashflows to meet a certain criterion.
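The one-model, many-bases idea can be shown with a toy projection. The function and the parameter values below are entirely illustrative assumptions; what matters is that each team supplies its own basis to the same central model code rather than maintaining its own copy.

```python
def project_cashflow(premium, lapse_rate, discount_rate, years=3):
    """Toy discounted cashflow projection shared by every team."""
    cfs = []
    in_force = 1.0
    for t in range(1, years + 1):
        in_force *= (1 - lapse_rate)                      # survivorship
        cfs.append(premium * in_force / (1 + discount_rate) ** t)
    return cfs

# Each business area supplies its own basis to the same model:
realistic = {"lapse_rate": 0.05, "discount_rate": 0.04}  # financial reporting
prudent = {"lapse_rate": 0.10, "discount_rate": 0.03}    # statutory valuation

reporting_cfs = project_cashflow(100.0, **realistic)
valuation_cfs = project_cashflow(100.0, **prudent)
```

The code, and hence the audit trail, is shared; only the parameter tables differ between runs.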

Using the same central set of cashflow models is of course more efficient than maintaining separate versions, so there is nothing new here. However, the ORSA is likely to make this utopia less of a 'nice to have' and more of an essential.

 2. Audit of runs performed
This is the area currently served by the production of an audit log, or run log, by the model. Typically this will contain details of each component of the run, such as the version of the model used and the input and parameter files, with date and time stamps. This is a standard requirement for modelling packages. A challenge is ensuring the log is concise enough to be digested while retaining sufficient detail to be fit for purpose, which is why customisation of these logs may be useful: for example, allowing a user to specify different levels of audit output depending on the purpose of the run.
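A run log with selectable detail levels might look like the sketch below. The level names and fields are assumptions for illustration: a quick exploratory run produces only a summary, while a year-end run records every input and parameter table used.

```python
from datetime import datetime, timezone

def build_run_log(model_version, inputs, parameters, level="summary"):
    """Assemble a run log at the requested level of audit detail.

    inputs / parameters map file name -> date/time stamp.
    Levels: 'summary' < 'standard' < 'full'.
    """
    log = {
        "model_version": model_version,
        "run_time": datetime.now(timezone.utc).isoformat(),
    }
    if level in ("standard", "full"):
        log["input_files"] = inputs
    if level == "full":
        log["parameter_files"] = parameters
    return log

inputs = {"model_points.csv": "2014-01-31T09:00"}
params = {"mortality.tbl": "2014-01-15T14:02"}
print(sorted(build_run_log("v2.1", inputs, params, "summary")))
# -> ['model_version', 'run_time']
print(sorted(build_run_log("v2.1", inputs, params, "full")))
# -> ['input_files', 'model_version', 'parameter_files', 'run_time']
```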

Used in conjunction with source control, information recorded during a run allows complete visibility of every component of a set of results, such as the model code, the model points used and all parameter tables applied. Most actuaries have, at some point, been frustrated at not being able to understand exactly how a particular value or set of values was produced. Modern audit techniques such as these should mean this is no longer a frustration!

Ultimately, an unaudited run is unusable. The consequences range from frustration at not being able to reproduce or understand results derived previously, through to time being wasted if it is not possible to justify results sufficiently.

 3. Visibility and control of how runs are performed
The need for IT-intensive processing to cope with stochastic runs (RBS) and multiple shocks and stresses (ICAs) is not new. Splitting those runs over multiple PCs, or running them remotely (cloud-type services etc), is now commonplace.

The ability to control these runs and ensure that they are split in the most efficient way is therefore of significant importance. Users also need to know how their run is progressing, and in particular if it has failed. Advanced models allow multiple runs to continue even if one or more fails, for whatever reason, to avoid lost time when this is crucial, such as at year ends. Ideally the model would be smart enough to transfer production to another processor if the failure is due to an IT issue (rather than a problem with the code or parameters).
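A run controller with that behaviour can be sketched as below. The exception names, processor identifiers and executor are hypothetical: the batch carries on when one run fails, an IT failure is retried on another processor, and a code or parameter error is recorded without pointless retries.

```python
class ITFailure(Exception):
    """e.g. a machine dropped offline mid-run."""

class ModelError(Exception):
    """A problem with the model code or parameters."""

def run_batch(runs, processors, execute):
    """Run each job; retry IT failures on the remaining processors."""
    results, failures = {}, {}
    for run_id in runs:
        for proc in processors:
            try:
                results[run_id] = execute(run_id, proc)
                break
            except ITFailure:
                continue   # transfer the run to the next processor
            except ModelError as err:
                failures[run_id] = str(err)
                break      # no point retrying a bad basis elsewhere
        else:
            failures[run_id] = "all processors failed"
    return results, failures

# Toy executor: processor 'pc1' is down; run 'stress_3' has a bad basis.
def execute(run_id, proc):
    if proc == "pc1":
        raise ITFailure()
    if run_id == "stress_3":
        raise ModelError("negative lapse rate in basis")
    return f"done on {proc}"

results, failures = run_batch(["base", "stress_3"], ["pc1", "pc2"], execute)
print(results)   # -> {'base': 'done on pc2'}
print(failures)  # -> {'stress_3': 'negative lapse rate in basis'}
```

Note the failed stress run does not stop the base run from completing, which is the property that matters at a year end.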

 Controlling the future

 A key theme emerging from this article is the repeated use of the word ‘Control’. How does a risk officer, a chief actuary or an internal auditor control their models, and hence the results derived from them? How do they demonstrate that control to their auditor? Are their needs being compromised by uncertainty over control? Hence each stakeholder needs to ask whether their current modelling solution is capable of coping with these demands, and whether it offers the expected capabilities outlined above.

This may mean examining the package itself. It is increasingly important to remember that "model" is a broad-ranging term. It is not just the models contained in the company's chosen financial modelling software but also the many spreadsheets used on a regular basis. Anything that produces or contributes to a result on which a decision could potentially be made is a model, and so should be subject to the levels of control and audit demanded of models in general.

 Alternatively, it may mean examining the models within the package. For example, are they as efficient as they could be? Do they lend themselves well to the concept of being centralised, and used across different business areas?

Extending this, legacy systems mean a company may have a range of different software models in use across its existing book (even though many products will actually be very similar). Alongside the general inefficiency of supporting these models, this clearly increases the audit workload, and so may be another catalyst for consolidating models.

  

 About the author: With an initial background in regulatory reporting gained within one of the UK’s largest mutual insurers, David now specialises in risk management and developing modelling methodologies for clients to achieve a better understanding of their business and in order to meet ongoing regulatory requirements.

 His actuarial experience also includes solvency measurement and estimation, regulatory valuations, and developing and pricing new product lines for life offices.

 David is involved with developing OAC’s Solvency II services, including the design of models within Mo.net.

 He actively participates in thought leadership and has written a number of articles on the subject of Solvency II.
  
