Policing the robots

Last month’s article, on people’s readiness to trust advice generated by online questionnaires and to assume that information on websites is trustworthy, got me thinking about the problems we risk creating if we rush into the era of robo-advice in the financial sector without ensuring that the quality is good enough.

 By Tom Murray, Head of Product Strategy, Exaxe.
 Are we in danger of reverting to the early days of mass financial product sales, when door-to-door salesmen employed the kind of dubious tactics that eventually forced the government to step in and tighten regulation?
 The need for automated advice solutions is obvious. The cost of financial advice is so high that it would be completely unavailable to a large section of the population on a face-to-face basis. Their only hope of getting help in the complex world of financial products is through the use of automated solutions that can analyse their personal financial situation and recommend options to maximise their financial security.
 Thus, the need to eliminate bias within these systems is great, and meeting it will require imaginative solutions. I suggested last month that there needed to be a certification process to ensure a system is capable of giving independent advice to a consumer, but achieving this requires a far more holistic approach.
 Right through the process, from requirements definition through design and development, the biases of the humans involved are likely to come through. Many companies prefer to buy in software for this very reason: a systems vendor that wants to sell across the market will take an industry-wide view, whereas in-house solutions are often developed very much from the viewpoint of the organisation, with all its historical baggage. External suppliers can therefore introduce new ideas and help prevent inherent bias from being perpetuated.
 But even independent suppliers are exposed to the danger of creating solutions with biases embedded. The challenge is to ensure that these biases, which are natural to all humans, do not ultimately affect either the output of the system or the advice it tenders to the consumer.
 It seems peculiar to imagine automated solutions being biased, but it is obvious when one stops to think about it. Any algorithm will naturally be developed with the biases of its designer built in; for example, if the designer is inclined to believe that protection is more important than investment for lower earners, that algorithm will have a bias towards protection recommendations for people in those circumstances. It is completely natural. It is also very dangerous. The very strength of robo-advisers, their potential to reach vast swathes of the population, also means that their capacity to mis-sell is far greater, and could damage the interests of far more consumers than a standard human adviser ever could.
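To make the point concrete, here is a minimal sketch of how a designer’s assumption becomes an embedded bias. Everything in it is hypothetical: the income threshold, the protection-first rule, and the function itself are invented for illustration, not taken from any real advice engine.

```python
# Hypothetical sketch of an advice rule with a designer's bias baked in.
# The threshold and the rule are assumptions for illustration only.

LOW_INCOME_THRESHOLD = 25_000  # arbitrary choice made by the designer


def recommend(income: float, savings: float) -> str:
    """Toy rule-based adviser: returns a single product category."""
    if income < LOW_INCOME_THRESHOLD:
        # The designer's belief is hard-coded here: lower earners are
        # steered to protection regardless of individual circumstances.
        return "protection"
    return "investment"


# The savings argument is ignored entirely, so two consumers in very
# different positions receive opposite advice purely on income:
print(recommend(20_000, 50_000))  # "protection" despite healthy savings
print(recommend(30_000, 0))       # "investment" despite no safety net
```

A human adviser would weigh the second parameter instinctively; the algorithm simply never consults it, and no consumer interacting with the system would know.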
 The regulators need to get ahead of this industry whilst it is still in the early stages of establishment. After so many financial scandals of the recent past – one thinks in particular of PPI – we should now realise that spending money upfront to avoid financial scandals is far more economic than waiting until the crisis has unfolded and imposing fines and compensation payments to try to sort it out then.
 The adherence of automated advice solutions to the principles and practices of the regulator’s rules is vital to preserving the integrity of our financial services market. However, control in this new era will not be as easily achieved through the FCA’s current principles-based approach. Whilst human advisers keep themselves aware of the market and can adjust their approach based on the latest news and industry discussion, the capacity of machines to learn from events and amend their actions based on principles is very low.
 Instead the regulator is going to have to get its hands dirty and return to a more prescriptive approach to rule making. Rigid rules rather than generic principles are the only way forward to enable a consistent approach to be taken across the next generation robo-advisers. This will mean far greater involvement of the FCA in developing the rules and defining the acceptable outcomes that those developing the automated solutions will have to follow.
 To illustrate the point, consider the difficulties automated cars are having in making the kind of judgements that humans make in a split second. Quite a few accidents have occurred to date as the systems guiding these cars struggle to cope with the less than rational approach that humans generally take to driving.
 This applies ten-fold to the human approach to money, where attitudes diverge hugely across society at an individual level. A bright future awaits with the advent of automated advice solutions, but care also needs to be taken that the problems of earlier, unregulated eras are not replicated.
