By Alan O’Loughlin, Statistical Modelling and Analytics Lead for Europe, Insurance, LexisNexis Risk Solutions
However, the less glamorous Hollywood truth is that, for decades, actuaries and statisticians have been working with the advanced algorithms on which “artificial intelligence” is based. And so far, no actuary’s computer has ever attempted to destroy or rule mankind – that we know of!
Advanced analytics for insurance has changed almost beyond recognition since its first application and continues to change in profound ways to meet current and future market needs. Test cycles are speeding up, and new models and data are regularly being deployed to improve the accuracy of predicting an outcome. AI is already becoming a key area of competition, having a direct impact on speed of deployment, efficiency, customer experience, targeted pricing and customisation. However, it is not a panacea for all our data problems.
Data Scientists to the rescue
For insurance providers, machine learning cannot effectively manage, cleanse, analyse and deploy data without input from a highly experienced Data Scientist. Algorithms are incredibly helpful and intelligent and are utilised in many ways: for example, learning our shopping habits to suggest add-on purchases, using traffic data to suggest alternative map routes, delivering relevant online advertising, or recognising that telling Siri or Alexa to “shut up” when they creepily start speaking to you means mute. However, without manual intervention by a Data Scientist, algorithms are not able to accurately structure data or apply it with the right business acumen, especially in insurance, where models must fit a loss curve while meeting the relevant regulatory requirements.
The most important step is undoubtedly the preparation. As much as 80% of a Data Scientist’s time is spent connecting the data, cleansing it, normalising and preparing it for different use cases, reviewing code that doesn’t work as it should across multiple programming and modelling platforms, and subsequently questioning their career choice. The remaining 20% is building something new and exciting. Gartner now defines AI as “applying advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take action”. True AI should solve problems you don’t know exist; until that is the case, you can use AI to identify problems better and faster and then solve them faster, but you need a Data Scientist to tie the issue and the solution together.
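To make that preparation stage concrete, here is a minimal sketch of the kind of cleansing and normalising work described above, written in Python with pandas and scikit-learn. The file name and column names (claim_amount, vehicle_age, region) are illustrative assumptions for a motor book, not a description of any real LexisNexis pipeline.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Load raw policy data (file name and columns are illustrative assumptions)
df = pd.read_csv("policies_raw.csv")

# Cleansing: drop exact duplicates and rows missing the modelling target
df = df.drop_duplicates()
df = df.dropna(subset=["claim_amount"])

# Impute missing numeric fields with the median rather than discarding rows
df["vehicle_age"] = df["vehicle_age"].fillna(df["vehicle_age"].median())

# Standardise inconsistent categorical labels before any encoding step
df["region"] = df["region"].str.strip().str.lower()

# Normalising: scale numeric features so they are comparable across use cases
scaler = StandardScaler()
df[["vehicle_age"]] = scaler.fit_transform(df[["vehicle_age"]])
```

Every real dataset needs its own version of these steps, which is precisely why this stage consumes so much of a Data Scientist’s time.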
Building successful Analytics Projects
Once the cleansing and preparation are complete, the next step is to work out which data mining technique will deliver the best value. What data do you need to accurately predict claims or underwriting risk? How do you clean and pipeline the data into the right structure and database? Based on years of experience in data modelling, my data analytics team at LexisNexis Risk Solutions would advocate applying the following principles, with a small illustrative sketch after the list:
Start small, but not trivial. The first proof of concept project should be relatively small, with a timescale of less than six to eight weeks. However, it must still tackle an issue important enough to encourage buy-in from stakeholders as well as end-users if the hypothesis is proven true. They must be able to see the potential benefits. Diving straight in and taking on the firm’s biggest issue without a fundamental understanding of all the links in the chain is too great a risk and investment.
Get the requirements right. It is essential to take enough time to fully understand the project requirements. The analyst needs wide engagement to understand the business problem and how best to tackle it.
Understand the limitations. It is not necessarily helpful to assume that data science and machine learning are going to solve every problem. Remember, 80% of the job is cleansing and normalising the data, and while machine learning can help with some of this, getting it right will require human intervention.
Focus on processes. It can be expensive to implement robust procedures for coding and development standards, managing access keys, testing routines, and QA documentation and review, but getting it wrong can be far costlier.
Maintain perspective. It is common to come up against dead ends, but any error or failure provides invaluable opportunities for learning. Intelligence gained in these situations should be shared across the team, improving productivity across the board. FAIL FAST if you can.
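As referenced above, a first “small but not trivial” proof of concept might look like the following sketch: a simple logistic regression predicting whether a policy will produce a claim. Everything here (the feature names, the had_claim flag, the train/test split) is an illustrative assumption rather than a description of any production model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Illustrative features and target; a real project would use far richer data
df = pd.read_csv("policies_clean.csv")
X = df[["vehicle_age", "driver_age", "annual_mileage"]]
y = df["had_claim"]  # 1 if the policy produced a claim, else 0

# Hold out a test set so the proof of concept can be evaluated honestly
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# AUC gives stakeholders a single, comparable measure of predictive power
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```

A deliberately simple model like this can be built, evaluated and explained within the six-to-eight-week window, giving stakeholders something concrete against which to judge the potential benefits.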
With human involvement playing such a fundamental role in data and analytics for insurance, ‘Applied Intelligence’ or ‘Machine Augmented Intelligence’ are better descriptions than full Artificial Intelligence – no rogue robots plotting to take down the human race here.
Augmented intelligence describes the application of automation within the insurance workflow, alongside the essential human intelligence and business acumen, rather than a fully machine-run operational process. LexisNexis Risk Solutions has been undertaking data science in this way for more than four decades. While AI is increasingly helping with data structuring, cleansing and preparation, some of the biggest successes today come from applications of simple machine learning, utilised by a team of expert data scientists.