A new generation of catastrophe models is set to transform our understanding of the uncertainty inherent in risk analysis
All catastrophe models are wrong in the sense that they cannot provide accurate estimates of the probability of insurance losses from natural disasters. Until recently, it has been very difficult to estimate how wrong they might be. When investors read the modelling analysis in a catastrophe bond prospectus, they will see the average annual probability of default calculated to seven significant figures or more. But they would be wise to remember that other, equally plausible, approaches could produce numbers half or double that size.
Like the throw of a die, the number of US hurricanes in a year is uncertain. When we roll a six-sided die, we know that the result is random but that the probability of getting any particular number is one in six. Catastrophe models, however, also contain another, less tractable type of uncertainty. Even among experts, there is no consensus on the average number of hurricanes in a given year. It is like trying to calculate the average value of a roll of a die without knowing its shape.
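The distinction between the two types of uncertainty can be made concrete with a small simulation. In the sketch below, annual hurricane counts are drawn from a Poisson process; the rate of six storms a year and the 4–8 range of expert views are illustrative assumptions for the example, not published estimates. When the rate itself is uncertain, the outcomes spread out even though the average stays roughly the same.

```python
import math
import random

rng = random.Random(1)

def poisson(rate):
    """Draw a Poisson-distributed count (Knuth's algorithm)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Aleatory uncertainty only: annual counts from a Poisson process with
# a KNOWN long-run rate (an illustrative figure).
KNOWN_RATE = 6.0
known = [poisson(KNOWN_RATE) for _ in range(100_000)]

# Epistemic uncertainty added: experts disagree, so each year's "true"
# rate is itself drawn from a range of plausible views (4 to 8 a year).
mixed = [poisson(rng.uniform(4.0, 8.0)) for _ in range(100_000)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Both series average about six storms, but disagreement about the rate
# makes the distribution of outcomes noticeably wider.
print(f"known rate:     mean={mean(known):.2f}  variance={variance(known):.2f}")
print(f"uncertain rate: mean={mean(mixed):.2f}  variance={variance(mixed):.2f}")
```

The extra variance in the second series is exactly the kind of model uncertainty that a single point estimate in a prospectus hides.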
In fact, the level of knowledge associated with the frequency of Atlantic hurricanes is high compared to other perils that reinsurers are interested in quantifying. In many parts of the world, earthquake models rely on a handful of data points and little understanding of the physical processes that drive the events. For terrorism, the probability of an attack is not only extremely difficult to pin down, but changes with every news cycle.
And the large uncertainty in the frequency of events is just the beginning of the difficulties in building accurate catastrophe models. It is compounded by uncertainty in the size and severity of events, and then by uncertainty in how a particular wind speed or ground motion will affect a particular building. Additionally, there is often a great deal of uncertainty about how the legal and political environment following a large disaster will affect insurance claims.
Nevertheless, in the last 15–20 years property insurers and reinsurers have become totally reliant on cat models for all the key pricing and capital decisions that drive their businesses. These models have also played a large role in enabling capital market investors to be comfortable with catastrophe risk.
One important reason for their success is that cat models introduce a common yardstick to measure the relative value of alternative decisions. It is often possible to say with some confidence that the risk of a deal is 10% more than that of the same deal last year, or 20% less than that of some other deal. But the absolute value of risk is much harder to specify.
But the practice of ignoring the large uncertainties inherent in quantifying catastrophe risk may be coming to an end. The new generation of catastrophe models has been designed to explicitly expose the variability of results.
Next month, RMS will begin rolling out its new modelling platform, RMS(one). RMS's Ben Brookes explained how sensitivity analysis will be at the heart of the new approach:
“We open up the model settings much more than has been possible in the past so that, as a user, you can decide that you think the RMS default view of risk should be changed by X percent up or down depending on your own personal view. You don't have to rely on a single view of risk which is what’s typically presented in an offering circular.”
The same is true of AIR Worldwide’s new Touchstone system. Brent Poliquin commented that users will have “unlimited access to run as many analyses and sensitivity cases as needed to stress test model uncertainty to truly gain comfort in the robustness of each AIR model.”
Another way of generating a range of results is to use more than one model. This has been common practice in insurance and reinsurance companies for some time and has recently been introduced to the cat bond market. In its recent Tradewynd bond, AIG took the unusual step of releasing exposure information to AIR and EQECAT in addition to the modelling agent, which was RMS.
The modelling firms have embraced the idea that it is in their customers’ interest to use competing models. EQECAT’s Paul Little said, “I do think generally it's good to have multiple views of risk. There are significant differences in the methodologies used by the modeling firms.”
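A minimal sketch of the multi-model approach: collect each vendor's expected-loss estimate for the same bond, blend them, and report the spread rather than a single point. The model names and figures below are hypothetical placeholders, not actual output from RMS, AIR, or EQECAT.

```python
# Hypothetical expected-loss estimates for the same bond from three
# different models (illustrative numbers only).
estimates = {"model_a": 0.0162, "model_b": 0.0121, "model_c": 0.0208}

vals = sorted(estimates.values())
low, high = vals[0], vals[-1]
blended = sum(vals) / len(vals)        # simple equal-weight blend
spread = (high - low) / blended        # disagreement as a % of the blend

print(f"blended expected loss: {blended:.2%}")
print(f"range: {low:.2%} to {high:.2%} (spread {spread:.0%} of blend)")
```

Even this crude equal-weight blend makes the key point of the article visible: the disagreement between equally plausible models can amount to a large fraction of the headline risk number itself.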
When making investment decisions involving hundreds of millions of dollars, it is tempting to look for reassurance from a single ‘accurate’ model. But the reality is that quantifying the risk of infrequent natural disasters relies on making assumptions about things that are unknowable, either in practice or in principle. Understanding a little more about how little we know could teach us a great deal.
Adam Alvarez works as a consultant for insurance linked funds and their investors. He can be reached at firstname.lastname@example.org
Posted: Monday, March 24th, 2014