Let’s Talk: Natural Perils
July 20, 2016
Catherine (Cat) Pigott leads a virtual team responsible for analysing and modelling natural perils. She is an actuary by training and a Fellow of the Institute and Faculty of Actuaries (UK).
Catastrophe (cat) modelling is often regarded as dense and complicated. Could you demystify it?
While the details can sometimes get rather complex, cat modelling is not as dense as many people perceive.
In simple terms, cat models are a tool for estimating potential losses following a natural disaster like a hurricane, earthquake or flood. Where it can get complicated is in the data. Cat models combine two types of input: historical and scientific information on natural perils – frequency, severity, geographic spread and so on – and exposure data such as occupancy, construction details and insured values. While the inputs and outputs of a model can be grasped pretty readily, the mechanics of organising these different data into an informative model can indeed be somewhat elaborate.
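In spirit, a cat model chains a hazard component (how often events occur and how intense they are), a vulnerability component (how much damage a given intensity does to a given building) and the exposure data into a distribution of losses. A deliberately toy sketch of that chain in Python – every number, building and probability below is invented for illustration, not drawn from any real model:

```python
import random

# Exposure data: insured value plus a crude vulnerability factor standing
# in for occupancy and construction detail. All values are hypothetical.
EXPOSURES = [
    {"value": 10_000_000, "vulnerability": 0.6},
    {"value": 25_000_000, "vulnerability": 1.1},
]

def annual_loss(rng, exposures, event_prob=0.3):
    """Hazard component: at most one event per year, with given probability."""
    if rng.random() >= event_prob:
        return 0.0
    intensity = rng.random()  # normalised hazard intensity, 0..1
    # Vulnerability component: damage ratio rises with intensity, scaled
    # by each building's vulnerability and capped at total loss.
    return sum(
        e["value"] * min(1.0, intensity * e["vulnerability"])
        for e in exposures
    )

def exceedance_curve(losses, thresholds):
    """Typical model output: P(annual loss >= threshold) per threshold."""
    n = len(losses)
    return {t: sum(l >= t for l in losses) / n for t in thresholds}

rng = random.Random(2016)
losses = [annual_loss(rng, EXPOSURES) for _ in range(10_000)]
aal = sum(losses) / len(losses)  # average annual loss
curve = exceedance_curve(losses, [1_000_000, 10_000_000, 30_000_000])
```

Real models replace each of these pieces with far richer science – event catalogues, hazard footprints, engineering-based damage functions and financial terms – but the overall input-to-output flow is the same.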
In the late 1980s and early 1990s, several large-scale disasters illustrated the need for and value of modelling catastrophe risk, notably Hurricane Andrew, which devastated southern Florida in 1992. It caused USD 15.5 billion in insured damage and left nine insurers insolvent. Around the same time, major scientific studies of natural hazards, combined with advances in information technology and geographic information systems, made it increasingly possible to measure hazards and map risks.
In the wake of Andrew and other disasters, many (re)insurers began to incorporate cat modelling into their decision-making – especially for pricing and managing accumulations in hazard-prone areas. Since then, cat modelling has become increasingly sophisticated, especially as more research is conducted into the causes, dynamics and impacts of various types of natural disasters. At the same time, modelling tools continue to become more innovative while technology platforms also become faster and more powerful.
What is your team’s role?
We are a small, multidisciplinary team of actuaries, cat modellers, and scientists with backgrounds in relevant areas like atmospheric science. We work across the entire enterprise. We also liaise extensively with other scientists and technical specialists within XL Catlin and externally.
Like other (re)insurers, XL Catlin relies primarily on cat models licensed from commercial companies. So a big part of our role is to evaluate and adapt cat models from different companies covering specific perils and geographies. The aim is to identify the strengths and weaknesses of the model and determine what we can and cannot do about those shortcomings. We also decide on the best uses of the model internally.
We typically start by breaking the model into its constituent pieces and then evaluating each component separately. And when something looks sub-optimal, we work with the developer to understand the issue further. We may then decide to adapt XL Catlin’s copy of the model so that it is more robust and fit-for-purpose.
For example, a commercial model for a particular peril and geography will be built around a hazard or science module. We’ll try to isolate that part of the model and assess it against data from academic organisations, publicly available sources and our own portfolio. We’ll review the scientific methods that have gone into the model and, where possible, make comparisons against other models.
In the process, we often find elements that are particularly uncertain; large, catastrophic events are – fortunately – relatively rare, so the historical data are often fragmented and sparse. Or we might have a different view of the risk compared to the developer’s default view. In either case, we need to justify, internally and externally, why XL Catlin’s view of the peril is more in line with the potential risks and recommend modifications. We also need to develop a roadmap for making the proposed changes.
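One simple version of the validation just described is checking a hazard module’s implied event frequency against a short historical record – and seeing how little the record can actually tell you. A hypothetical sketch, with all figures invented:

```python
import math

# Hypothetical 10-year record of event counts and a model's default
# annual event rate. Both are invented for illustration.
historical_counts = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0]  # events per year
model_annual_rate = 0.55                             # model's assumption

empirical_rate = sum(historical_counts) / len(historical_counts)

# With only 10 years of data the empirical estimate is noisy; a rough
# standard error for a Poisson rate estimate is sqrt(rate / n_years).
std_err = math.sqrt(empirical_rate / len(historical_counts))

# Flag a discrepancy only if the model's rate sits well outside the noise.
discrepant = abs(model_annual_rate - empirical_rate) > 2 * std_err
```

Here the model’s rate differs from the empirical one, but the record is too short for the difference to be significant – exactly the kind of uncertainty that forces a (re)insurer to form and justify its own view rather than simply reading one off the data.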
So it’s a mix between deconstructing models to understand their inner workings, offering recommendations to make the models more reflective of specific perils and XL Catlin’s distinctive view of risk, and educating users about the strengths and weaknesses of particular models.
What do you mean by XL Catlin’s “distinctive view of risk”?
It means having a clearly articulated point of view about how a particular risk is likely to affect XL Catlin’s clients and, in turn, our different portfolios.
Commercial models are designed for everybody to use, and they’re typically based on averages. But our portfolios don’t necessarily match the averages. As a result, it’s important that the models we use reflect XL Catlin’s particular risk profile as well as our knowledge and expertise in, for example, specific occupancies.
XL Catlin, for instance, is the leading insurer for a particular property type in the UK that is susceptible to wind damage. These are unusual structures that are by no means average. We have been writing this business for more than 25 years and have a sophisticated understanding of the windstorm risk for this occupancy. The developer of the UK windstorm model acknowledged that they do not have data on this occupancy; since we’ve been writing it for so long, we do. So we are working to modify our copy of the model to reflect XL Catlin’s distinctive view of this unique risk.
What do you see as some of the most promising developments in this field?
I see two main developments driving the evolution of cat modelling over the next few years: big data, and the emergence of new model developers, including collaborative, open-source initiatives.
The increasing volume of data from a widening variety of sources, flowing with greater frequency – aka big data – has significant implications across all kinds of fields, including cat modelling. For instance, when we evaluate a model, we now have significantly more and better data from academics, government agencies and various other sources. Our science team, for example, is working with several universities to capitalise on new research.
The other piece is exposure data. The more we understand about the characteristics of the buildings we insure, the greater the reliability of the model outputs. However, accurate and up-to-date exposure data have historically been difficult to obtain. Today, many new information sources are available to supplement what we get directly from clients.
Another exciting development is the emergence of smaller model developers offering solutions that provide more flexibility in incorporating the user’s assumptions and data. That includes some open source models that typically are collaborative efforts between private industry, academic researchers and government agencies.
These new approaches could move cat modelling to the point where (re)insurance companies aren’t just using models from the same major developers straight out of the box. That should lead, in turn, to much more divergence of opinion between different (re)insurers. And that’s good because it lessens the possibility of systemic risk within the industry, and it will also create more differentiation between (re)insurers.
Last question. You go by the name Cat. When did you become aware that it was shorthand for catastrophe?
(Laughs.) I’ve been called Cat since I was a child. I didn’t know it was shorthand for catastrophe until I started applying for insurance jobs. I must admit that in my first position, I was tempted to change the spelling to a “K” because it was causing genuine confusion: people would refer to me, the person, and others thought they were referring to cat risk. So, yes, it comes up regularly. And it is a bit ironic that I ended up in this field. On the other hand, I do find that people remember my name!
Want to know more? You can reach Cat at email@example.com