By Alex White, Head of ALM Research at Redington
For one thing, some sponsors may be stronger credits than the buyout providers, and are more likely to be bailed out in a crisis. And if the scheme can negotiate contingent calls or extra seniority in the capital structure, this could increase member safety – given the potential accounting benefits of not completing a buyout, the sponsor may be amenable. Equally, schemes offering discretionary increases (such as in times of very high inflation) may wish to retain that ability. Schemes might also simply be too large for buyout.
Every scheme has its own context, goals and constraints, and what works for one may not work for another.
So, if a scheme isn’t targeting a buyout, when does it become ‘self-sufficient’? There are several approaches we can use to answer this question. At one extreme, a scheme could assume its sponsor pays no money in, targeting a funding level that would allow it to pay off its liabilities unaided. This is theoretically viable, but it is inconsistent with regulations and so doesn’t reflect the real-world problem well. Mathematically slightly thornier, but more pragmatic, is to define self-sufficiency as a suitably low probability of triggering contributions (or of triggering substantial contributions). This approach can have counter-intuitive dynamics when the technical provisions (TP) basis changes, but these are resolved if the TP basis is kept consistent between strategies. It also has the advantage of being realistic and in line with the regulatory environment.
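To make that definition concrete, the sketch below shows one way such a probability could be estimated by simulation. It is not the model behind the figures quoted in this article – it ignores mean reversion, scheme run-off and recovery plans – and every parameter (the 5% a year volatility of assets relative to liabilities, the 45-year horizon, the triennial valuations, the 5% ‘substantial’ threshold) is an assumption chosen purely for illustration.

```python
"""
Minimal sketch of the 'low probability of triggering contributions' definition:
simulate the funding ratio on the TP basis between triennial valuations and
count the paths on which the sponsor is ever asked to top up. Illustrative
assumptions only; not the model behind the article's figures.
"""
import numpy as np


def prob_contribution_call(start_funding=1.00,   # assets / TP liabilities today
                           asset_spread=0.005,   # expected asset return over gilts (p.a.)
                           tp_spread=0.005,      # TP discount rate over gilts (p.a.)
                           vol=0.05,             # asset volatility vs liabilities (p.a.)
                           years=45,             # remaining life of the scheme
                           valuation_gap=3,      # years between valuations
                           threshold=0.0,        # 'substantial' = top-up above this fraction of liabilities
                           n_sims=100_000,
                           seed=0):
    """Monte Carlo estimate of the probability that at least one valuation
    triggers a top-up larger than `threshold` (as a fraction of liabilities).
    Working in funding-ratio space nets out the liability roll-up at the TP rate."""
    rng = np.random.default_rng(seed)
    drift = (asset_spread - tp_spread - 0.5 * vol ** 2) * valuation_gap
    shock = vol * np.sqrt(valuation_gap)
    funding = np.full(n_sims, float(start_funding))
    triggered = np.zeros(n_sims, dtype=bool)

    for _ in range(years // valuation_gap):
        funding *= np.exp(drift + shock * rng.standard_normal(n_sims))
        deficit = np.maximum(1.0 - funding, 0.0)   # top-up needed at this valuation
        triggered |= deficit > threshold
        funding = np.maximum(funding, 1.0)          # sponsor restores full funding

    return triggered.mean()


if __name__ == "__main__":
    print(f"Any top-up:           {prob_contribution_call():.0%}")
    print(f"Top-up > 5% of liabs: {prob_contribution_call(threshold=0.05):.0%}")
```

Under these toy assumptions the probability comes out close to certainty for a 100% funded gilts + 0.5% scheme – consistent with the direction, though not the precise figures, of the results discussed below.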
The first and most important takeaway from this definition is that being 100% funded on a prudent basis (we used gilts + 0.5%) doesn’t make a scheme self-sufficient. A fully funded gilts + 0.5% scheme has more than a 50% chance of calling for contributions.
In some ways, this isn’t surprising. Over very short periods (a day, or a second, say), the odds of an asset portfolio rising or falling are basically even. Since the next valuation will be in 1-3 years’ time, the odds of the funding level improving or worsening are not miles off 50-50 for a 100% funded scheme. If the TP is also gilts + 0.5%, and contributions are paid to cover any deficit, then the scheme will receive contributions in the event of the funding level falling.
Nothing is ever perfectly safe; there’s little that a suitably dramatic black swan event couldn’t overturn. But some things are safer than others. In a similar vein to the point above, if a scheme runs a portfolio with an expected return of gilts + 0.5% against liabilities valued at gilts + 0.5%, the assets are unlikely to meaningfully exceed the liabilities. This means each valuation is not too far away from a 50-50 chance of triggering contributions. We modelled a corporate bond strategy, which benefits from mean reversion, and found a 94% chance of triggering contributions at some point over the life of the scheme.
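The step from ‘roughly 50-50 at each valuation’ to a very high lifetime figure is just compounding: if each remaining valuation carried an (assumed independent) probability p of showing a deficit, the chance of at least one contribution call over N valuations is 1 − (1 − p)^N. The numbers below are purely illustrative; the modelled 94% reflects mean reversion and the scheme’s actual run-off rather than this independence assumption.

```python
# Back-of-the-envelope compounding, assuming (unrealistically) that each
# valuation is an independent coin toss with probability p of showing a deficit.
n_valuations = 15  # e.g. a 45-year run-off with triennial valuations (assumption)
for p in (0.5, 0.3, 0.2):
    p_any_call = 1 - (1 - p) ** n_valuations
    print(f"p = {p:.0%} per valuation -> P(at least one call) = {p_any_call:.1%}")
```

Even a modest per-valuation chance of a deficit compounds into a near-certainty of at least one call over a scheme’s remaining life.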
In both senses – funding level and expected return – buffers help. At 110% funded, the odds of triggering further contributions fell to around 1 in 3 (with the size of total contributions relative to the liabilities falling from c.7% to c.2%). Adding a return buffer to the strategy also made a big difference.
Relying less on traditional credit and running a strategy with an expected return 50bps higher than the TP reduced the probability of triggering additional contributions to c.15%, and the size of these contributions relative to the liabilities to c.0.5%. This takeaway is fairly intuitive: institutions, in general, tend to need buffers, and buffers can help DB schemes too.
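As a rough illustration of the funding-level buffer – again not the model behind these figures, and looking only at a single valuation rather than the whole run-off – a simple lognormal approximation shows how the chance of a deficit at the next triennial valuation falls as the starting funding level rises. The 5% a year volatility of assets relative to liabilities is an assumption for this sketch; the return buffer’s benefit accrues over repeated valuations (and, in the modelling above, through mean reversion) rather than showing up strongly at a single one.

```python
from math import erf, log, sqrt


def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def p_deficit_next_valuation(funding, spread=0.0, vol=0.05, years=3):
    """Chance the funding ratio is below 100% at the next valuation, assuming
    lognormal asset returns relative to liabilities with the given expected
    spread over the TP discount rate (all parameters illustrative)."""
    drift = (spread - 0.5 * vol ** 2) * years
    return phi((-log(funding) - drift) / (vol * sqrt(years)))


for funding in (1.00, 1.05, 1.10):
    p = p_deficit_next_valuation(funding)
    print(f"{funding:.0%} funded: {p:.0%} chance of a deficit at the next valuation")
```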
Although, of course, the need for a self-sufficiency buffer doesn’t apply to every scheme in every circumstance, a typical scheme may well benefit from running a higher-returning strategy for longer.