With these hungry eyes one look at you and I am optimised


As anyone who’s used ChatGPT can tell you, Artificial Intelligence (AI) techniques are constantly pushing the boundaries of what we thought was possible. It’s interesting to think how much this could impact the investment industry. On the more quantitative side, optimisers have been used to help design Strategic Asset Allocations (SAAs) for some time; but they haven’t yet replaced human input, and it’s worth considering why.

 By Alex White, Head of ALM Research at Redington
 
 What do they do?
 In some ways, computers are phenomenally ‘intelligent’; in other ways, it can be helpful to think of them as extremely fast idiots, or savants. Any optimisation is, at its core, maximising a number. When playing games, the goal is well-defined, so clarifying exactly what to optimise is relatively easy; and we know that AI dramatically outperforms anything humans can do. With more layered problems, such as investments, what you’re trying to maximise is more nuanced, and optimisers can be somewhat myopic in these spaces.

 For example, the standard formula in Solvency II does not allow for diversification among straightforward credit assets; an optimiser picking the highest return for a given Solvency Capital Requirement (SCR) may therefore choose a very concentrated credit portfolio for a negligible pickup in expected return over a less risky, more diversified alternative. It finds a solution, but without understanding the true nature of the problem or the potential consequences.
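 As a minimal sketch of this behaviour (the asset names, expected returns and capital-charge factors below are invented for illustration, and the constraint is a single flat capital budget rather than the actual standard-formula aggregation), a simple linear optimiser asked for the highest expected return within a capital budget will push the entire credit allocation into whichever asset offers the best return per unit of capital, with nothing rewarding it for spreading that exposure:

```python
# Stylised sketch only: the returns and capital-charge factors are hypothetical,
# not Solvency II standard-formula parameters. Because the toy capital measure is
# a plain weighted sum with no diversification credit, the optimiser concentrates
# all credit risk in the single best return-per-capital asset.
import numpy as np
from scipy.optimize import linprog

assets = ["Gilts", "IG credit A", "IG credit BBB", "High yield"]
exp_return = np.array([0.040, 0.048, 0.052, 0.065])   # hypothetical annual returns
scr_factor = np.array([0.000, 0.055, 0.075, 0.150])   # hypothetical capital charges

scr_budget = 0.06  # capital budget as a fraction of portfolio value

# Maximise expected return (linprog minimises, so negate it) subject to:
#   weights sum to 1, weights >= 0, and weighted capital charge <= budget.
res = linprog(
    c=-exp_return,
    A_ub=[scr_factor], b_ub=[scr_budget],
    A_eq=[np.ones(len(assets))], b_eq=[1.0],
    bounds=[(0, 1)] * len(assets),
    method="highs",
)

for name, weight in zip(assets, res.x):
    print(f"{name:15s} {weight:6.1%}")
print(f"Expected return {exp_return @ res.x:.2%}, capital used {scr_factor @ res.x:.2%}")
```

 In this toy setup the optimiser ends up holding only gilts and high yield, even though a more diversified gilts-and-BBB mix earns almost the same expected return - which is essentially the concentration problem described above.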

 What about less obviously quantitative inputs?
 It can also be extremely difficult to codify certain factors, with upcoming rule changes a good example. There is a clear regulatory drive to make infrastructure assets more attractive by giving them lower capital charges, so there’s an argument for holding more dry powder; how should an optimiser weight this? On the other hand, at the time of writing, credit spreads are generally quite wide compared to historical levels.

 Should an optimiser factor opportunity cost into a dynamic strategy? There are reasons for and against this, which are broadly the advantages and disadvantages of long-term modelling - you get a more directly relevant answer at the cost of greater dependence on both the assumptions and the interactions between them. In the case of optimisers, though, it is probably easier to find weaknesses or dependencies in a model than to discover truths about the universe, so an optimiser is more likely to do the former.

 Another issue is relative confidence. For example, I believe that techniques such as volatility control are likely to lead to better outcomes, but I have more confidence that equity beta will outperform over the long term; similarly, the potential error in estimating the spread on a liquid asset is lower than on an illiquid one. Both are theoretically quantifiable, but not in any obviously practical way. So how should an algorithm account for them?

 Why use them at all?
 None of this makes optimisers useless - not by a long shot. But having a more grounded view of their strengths and weaknesses makes it clearer how to use them in a more tailored and effective way. For one thing, the fact that optimisers exploit any modelling inconsistencies makes them a good tool for checking models.

 Beyond that though, there are lots of investment questions where optimisers give better answers than humans - for instance:
 • What is possible - what sort of characteristics can I achieve within my constraints? For example, how close is a portfolio to various efficient frontiers?
 • Can I improve my portfolio by adding a new asset class? (A stylised sketch of this kind of check follows the list.)
 • What are the trade-offs involved in various constraints, and how much is a given constraint limiting my portfolio?
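
 As a hedged illustration of the last two questions (all the returns, volatilities and correlations below are invented, and this is a plain mean-variance setup rather than any particular in-house model), an optimiser can report the best achievable return at a target risk level with and without a candidate asset class:

```python
# Illustrative mean-variance sketch: all inputs are hypothetical.
# Compares the best long-only return achievable at a target volatility
# before and after adding a candidate new asset class to the universe.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.040, 0.060, 0.055, 0.070])     # hypothetical expected returns
vol = np.array([0.05, 0.15, 0.08, 0.12])        # hypothetical volatilities
corr = np.array([[1.0, 0.1, 0.3, 0.0],
                 [0.1, 1.0, 0.4, 0.2],
                 [0.3, 0.4, 1.0, 0.1],
                 [0.0, 0.2, 0.1, 1.0]])         # hypothetical correlations
cov = np.outer(vol, vol) * corr
target_vol = 0.08                               # risk budget

def best_return(idx):
    """Max expected return over long-only, fully invested weights with vol <= target."""
    m, c = mu[idx], cov[np.ix_(idx, idx)]
    n = len(idx)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},               # fully invested
        {"type": "ineq", "fun": lambda w: target_vol**2 - w @ c @ w}, # variance cap
    ]
    res = minimize(lambda w: -m @ w, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return -res.fun

without_new = best_return([0, 1, 2])   # existing universe
with_new = best_return([0, 1, 2, 3])   # universe plus the candidate asset class
print(f"Best return at {target_vol:.0%} vol: {without_new:.2%} -> {with_new:.2%}")
```

 The same comparison, run with a constraint switched on and off rather than an asset added and removed, gives a direct read on how much that constraint is costing the portfolio.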

 Where does that leave us?
 Optimisers are useful tools which can help inform better decisions; but they’re not magic pills that solve the problem on their own. With any tool, it’s important to remember what it’s good for and where it might be flawed, and to use it accordingly.
  

 As a more light-hearted example, I asked ChatGPT to help come up with a title that was a pun on a song lyric, replacing a word that rhymed with “optimiser”; it proposed “The club isn't the best place to find a lover, so the optimiser is where I go.”


