Artificial intelligence and pensions: Cyber risk


This is the third in a series taking a deeper look at areas relevant to U.K. pension schemes where artificial intelligence (AI) may have a significant impact. Cyber risk is an increasingly important issue in the pensions sector, and its significance is only expected to grow. Earlier this year, a series of cyber-attacks pushed Australian superannuation schemes into the spotlight: five pension providers were hit, and members of one fund collectively lost $500,000 in retirement savings.

By Adam Boyes, Head of Trustee Consulting; Ian Cairns, Cyber Risk Consultant; and Kiran Mandalia, Director, Retirement, at WTW

 Such incidents can cause concern among members and undermine confidence in the system. Following the event, some criticism was directed at the pension providers for their inadequate security measures.

 What is cyber risk and whose responsibility is it?
 The Pensions Regulator broadly defines cyber risk as "…risk of loss, disruption, or damage to a scheme or its members associated with using information technology systems. Risks can arise not only from the technology itself but also from the people using it and the processes supporting it".

 Under the GDPR, trustees of U.K. pension schemes have a responsibility to safeguard members' personal data, ensuring it remains secure and confidential. This involves putting in place robust protections like encryption, secure access controls, and regular cybersecurity reviews. Additionally, the Pensions Regulator's General Code of Practice requires trustees to actively assess and document how they're managing cyber risks, including maintaining up-to-date incident response plans.

 Pension schemes hold and manage vast amounts of data relating to their members: names, dates of birth, family details, national insurance numbers, bank details and so on. This makes pension schemes attractive targets for cyber criminals - the potential proceeds of the crime are literally people's life savings.

 While cyber threats have been rapidly evolving for many years, AI is accelerating this trend and introducing brand new capabilities. It is a double-edged sword, capable of both creating and mitigating cyber threats. We will examine both sides below.

 AI in attack: how it could worsen cyber risk for pension schemes

 Deepfake impersonations
Generative AI can already create realistic deepfake content, replicating someone's likeness in videos and pictures as well as their voice. It is not much of a stretch to see this technology being used duplicitously to impersonate someone live on a video call.

If criminals were successful in gathering information about members of a pension scheme (either from the scheme itself or through other avenues), they could defraud members by impersonating them and initiating payments. At a higher level in a scheme's operation, processes could be vulnerable to the impersonation of key individuals such as individual trustees, sponsor representatives or key advisers. If this sounds unlikely, consider the chilling tale of a U.K. engineering firm that transferred $25m after a request from a 'digitally cloned' CFO prompted an unsuspecting staff member to make the transfers.

 Enhanced phishing capabilities
For some time, there have been 'tell-tale' signs of a phishing attempt, with emails or websites being a poor approximation of a genuine communication from a trusted source. Of course, even in these circumstances phishing has been an effective attack strategy for cybercriminals, preying on individuals who are unaware of the signs and precautions, time-pressed or just unlucky.

The breakthroughs in generative AI mean that large language models can now generate text that closely mimics other styles and matches the quality of communications from a reputable organisation such as a pension scheme. This increases the likelihood of individuals falling for phishing attempts, making it crucial for pension schemes to clearly convey their communication methods to members. Some of our clients see a role for the pension scheme in educating members about the risks and what to watch out for.

 Automated attacks
Malicious software probes for weaknesses in a pension scheme's or administrator's digital infrastructure. AI makes these attacks cheaper to carry out and therefore more prevalent. Such attacks could lead to:

Ransomware being deployed that 'locks out' the administrator from the data it needs to operate the scheme, with a ransom demanded before access is restored
 Member data being stolen and members or their families defrauded in onward attacks based on the information gathered
Digital vandalism, with data being maliciously manipulated or corrupted, causing significant operational disruption

 AI for defence: how it could help guard pension schemes against cyber risk

 Protecting against phishing threats
AI could be employed in the defence against phishing attacks. While the 'tell-tale' signs of a scam email are getting harder to spot, some will still remain. Unlike humans, AI does not suffer from fatigue, stress or time pressure: it can check every email in detail for any hint of impropriety and warn the individual before they click an offending link.
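To make this concrete, the short Python sketch below shows the sort of scoring an administrator might layer over inbound email. The training messages, labels and threshold are invented for illustration; a real deployment would rely on much larger datasets and dedicated security tooling rather than this toy classifier.

# A minimal sketch, assuming a small labelled set of past emails; the examples
# and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = phishing, 0 = genuine scheme communication
emails = [
    "Urgent: verify your pension account now or it will be suspended",
    "Click this link immediately to release your retirement savings",
    "Act now: confirm your bank details to avoid losing your benefits",
    "Your annual benefit statement is now available in the member portal",
    "Reminder: the trustee newsletter for spring is enclosed",
    "Minutes of the latest trustee meeting are attached for your records",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression give every email a phishing score
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = "Urgent: confirm your account details now to keep your pension"
score = model.predict_proba([incoming])[0][1]
print(f"Phishing score for incoming email: {score:.2f}")
if score > 0.5:
    print("Warn the recipient before any link is clicked")

The point is not the specific algorithm but the ability to screen every message, around the clock, without tiring.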

 Data surveillance for fraud detection
Machine learning applications can be excellent at identifying hidden patterns and subtle cues, which could be used to spot threats quickly. Such technologies could monitor all activity associated with a pension scheme's administration and pick up on concerning threat signatures in a way that would be impossible for human administrators working separately. For example, an AI tool might flag several members altering their bank account details in a similarly suspicious way whilst dealing with different administrative staff.
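As a simplified illustration of the kind of cross-member signature such monitoring might surface, the sketch below applies a plain rule to an invented log of bank-detail change requests. The field names, data and thresholds are hypothetical, and a production tool would combine rules of this sort with learned anomaly-detection models.

# A minimal sketch over an invented log of bank-detail change requests.
import pandas as pd

changes = pd.DataFrame({
    "member_id": ["A101", "B202", "C303", "D404"],
    "handled_by": ["staff_1", "staff_2", "staff_3", "staff_1"],
    "new_account": ["11-22-33 0001", "11-22-33 0001", "11-22-33 0001", "44-55-66 0002"],
    "requested_at": pd.to_datetime([
        "2025-03-01 09:10", "2025-03-01 11:42", "2025-03-02 08:05", "2025-03-02 14:30",
    ]),
})

# Flag any destination account that several different members redirect to within
# a short window, a pattern no single administrator would see in isolation.
window = pd.Timedelta(days=7)
for account, group in changes.groupby("new_account"):
    span = group["requested_at"].max() - group["requested_at"].min()
    if group["member_id"].nunique() >= 3 and span <= window:
        print(f"Alert: {group['member_id'].nunique()} members redirected payments "
              f"to {account} within {window.days} days, refer for review")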

 Vulnerability testing and patching
Just as criminals can use AI to find weaknesses in cyber defences, the same techniques can be used for good, ensuring that vulnerabilities are identified and patches applied. Indeed, AI can even help construct the programming logic needed to fix those vulnerabilities, speeding up the process of addressing them.
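Vulnerability management spans many layers; as one small, deliberately non-AI illustration of the identification step, the sketch below checks installed Python package versions against a hypothetical list of known-vulnerable releases. The entries are invented for the example; real scanning draws on maintained vulnerability feeds and, increasingly, AI-assisted tooling that can also propose the fix.

# A minimal, non-AI sketch of the identification step: compare installed
# package versions against a hypothetical list of vulnerable releases.
from importlib import metadata

known_vulnerable = {            # illustrative entries only, not real advisories
    "requests": {"2.19.0", "2.19.1"},
    "urllib3": {"1.24.1"},
}

for package, bad_versions in known_vulnerable.items():
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        continue                # package not present, nothing to patch
    if installed in bad_versions:
        print(f"Patch needed: {package} {installed} is on the vulnerable list")
    else:
        print(f"{package} {installed} is not on the illustrative vulnerable list")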

 So, what does all this mean?
 Cyber risk already sits in an uncomfortably prominent spot on risk registers because the potential impact is material and the likelihood is higher than anyone would like.

AI appears to have a greater impact on the likelihood of cyber-attacks than on the damage they cause. While it equips bad actors with new modes of attack, it also offers powerful defensive capabilities. Indeed, in the Bank of England's 2024 report on the use of AI within the financial services sector, cybersecurity and fraud detection were the second and third most popular use cases in a list of 26. Our concern, however, is that bad actors might outpace good actors in leveraging AI.

 Given the critical data, operations and finances at stake, the amplified cyber risks heighten the need for robust cybersecurity measures to be taken in pension schemes and for members to be educated and vigilant.
