Algorithms powerful but not perfect


Intelligent algorithms can make calculations far more accurately than humans can. Governments everywhere are therefore using them to justify decisions that affect people's day-to-day lives, arguing that because the algorithms are far better at analysing a situation than people are, the results of their calculations must be better than human-made ones.

 By Tom Murray, Head of Product Strategy for LifePlus Solutions at Majesco.

The furore that broke out across the UK over the algorithm that determined individual results for this year's A-Level students has brought the use of algorithms to make decisions about individuals back into the news. Applying general results per school to individual students left many feeling cheated: although it may well be true that only a certain number of A grades would be expected from a particular school, each individual student naturally felt that he or she would be one of the outliers, and was therefore devastated to find his or her result marked down.

 As any actuary could have told the government, one has to be very careful in moving from the general to the specific. Actuarial science has for many decades been able to look at a cohort of individuals and estimate their average life expectancy with amazing accuracy. However, this is a far cry from being able to tell individuals in that cohort how long they would live. Both life and pension companies and financial advisers are well aware of these limitations.
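The statistical point here is the law of large numbers: the average of a large cohort can be estimated very precisely even when any one member's outcome remains highly uncertain. A minimal sketch, using invented figures purely for illustration (a hypothetical cohort with mean lifespan 82 years and standard deviation 8 years):

```python
import random
import statistics

random.seed(42)

# Hypothetical cohort: 10,000 lifespans drawn from a normal
# distribution (mean 82 years, standard deviation 8 years).
cohort = [random.gauss(82, 8) for _ in range(10_000)]

# The cohort average is pinned down very precisely: the standard
# error of the mean shrinks with the square root of the cohort size.
mean_life = statistics.mean(cohort)
std_err = statistics.stdev(cohort) / len(cohort) ** 0.5

# ...but any single individual's lifespan still varies widely
# around that average.
individual_spread = statistics.stdev(cohort)

print(f"estimated average lifespan: {mean_life:.1f} +/- {std_err:.2f} years")
print(f"typical individual deviation from that average: {individual_spread:.1f} years")
```

The average is known to within a fraction of a year, while the typical individual deviates from it by several years — which is exactly why a cohort-level estimate cannot be read back as a prediction for any one person.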

Hence recommendations to purchase particular financial products are not automatically calculated from the facts gathered alone but from a more holistic view of the individual's situation and personality. It is ironic that algorithms which give such accuracy at the general level can immediately introduce unfairness when moved to the specific level.

A similar example of the inadvisability of relying wholly on algorithms has occurred in those countries where penal systems use algorithms to calculate the likelihood of an individual re-offending and then use that calculation to decide specific applications for parole. Denying a person their freedom based on what happens to the average person in a particular cohort is fraught with risk. It could mean denying parole to someone whom it would be perfectly safe to allow back into the community or, more dangerously for the community, paroling someone who goes on to re-offend simply because the majority of his or her cohort would not have done so.
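The arithmetic behind this objection is worth making explicit. Even a perfectly accurate cohort-level rate guarantees individual-level errors when applied as a blanket rule — a toy sketch with invented numbers:

```python
# Invented illustration: suppose an algorithm correctly estimates
# that 30% of a 1,000-person cohort would re-offend if paroled.
cohort_size = 1000
reoffend_rate = 0.30

would_reoffend = int(cohort_size * reoffend_rate)  # 300 people
would_not = cohort_size - would_reoffend           # 700 people

# Denying parole to the entire cohort on the strength of the group
# rate keeps the 300 likely re-offenders in custody, but also wrongly
# denies release to the 700 who posed no risk; paroling the entire
# cohort makes the opposite error.
print(f"wrongly denied parole: {would_not} of {cohort_size}")
print(f"wrongly released if all paroled: {would_reoffend} of {cohort_size}")
```

Either blanket decision is wrong for the majority or a substantial minority of individuals, even though the cohort-level figure itself is exactly right.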

The truth is that algorithms are very powerful but need to be used intelligently. In the field of robo-advice, systems use algorithms to make recommendations based on the facts given, but at this point they need human intervention to ensure that the recommendations are optimal for the specific customer. While simple products and straightforward scenarios can currently be handled by algorithms alone, fuller advice scenarios are best served by robo-advisers working in conjunction with the expertise of independent financial advisers (IFAs) – using the robo-adviser to process the data and recommend products, then enabling the IFAs to finesse the result so that the recommendations are a good fit for the specific customer.

Feedback loops are essential to enable algorithms to learn from experience, but until algorithms are as sophisticated as humans at learning from it, robo-advisers will need human input to handle more complex situations. The practical science is not yet at the point where feedback loops are anywhere near good enough for machines to learn like humans.

We can use algorithms to produce unbiased product recommendations and to design new products that will perform better for our clients, but at the same time we must mitigate their limitations by adding human knowledge to the equation. This ensures that recommendations for individuals start from the correct ones for the cohort but are fine-tuned for the specific individual.

Undoubtedly, machine learning will in future lead to algorithms that understand far more about the recommendations for each cohort and go deeper, ensuring that the individual also receives a personalised version of the recommendation in tune with his or her needs. At the moment, however, robo-advisers need to work in conjunction with human experts if they are to provide holistic financial advice to the consumer. Otherwise we will end up with a lot of people just as upset with their financial advice as this year's A-Level students were with their exam results, and that will only damage the idea of robo-advice in the eyes of the consumer.
  
