
Are new data sources *sparking joy* for underwriting and claims?
Decluttering expert Marie Kondo asks a single question when deciding whether a piece of clothing or a book should stay in the house or be discarded: “Does it spark joy?”
A very similar question can be asked today in the world of insurance. Is new data positively impacting insurers and sparking joy? Or are these just more data points destined to be thrown out in the end?
The impact of new data sources can mean very different things to different insurance stakeholders. The insurtech movement is, of course, all about making an impact, but the timeline to impact varies greatly depending on which part of the industry is being targeted. This contrast is clearest between claims and underwriting, and in the impact insurtech-generated data has had on each so far.
In underwriting, there has been an explosion of new data sources over the last several years. While this flood of new data promises to significantly shape the future of underwriting, underwriters often lack the ability to pull these new sources into existing processes. Matthew Grant recently wrote about this very problem and the need to create “smart data” for underwriters.
We see this most clearly in the IoT space, where insurtechs have spent the last few years struggling to find a silver-bullet dataset that can tell underwriters something about the risk of a property.
For example, my home IoT sensor may tell me that I leave my garage door open once a week. Does that make me a better or worse risk? Are open garage doors in my underwriting guidelines? The answer is no: it is not yet a relevant variable to underwriters and actuaries, because no one has explored whether there is a risk signal there.
Alternatively, my auto telematics dongle may report that I hit the brakes harder than average. As an underwriter, do I believe hard braking is a loss signal? Quite possibly! Can I prove this at scale? Not yet. Is hard braking in my underwriting guidelines? Also not yet.
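Proving that signal is, at bottom, a statistical exercise: gather enough driver data alongside claim outcomes and test whether the feature actually separates good risks from bad. The sketch below, using entirely synthetic data and an invented effect size (nothing here is an industry figure), shows the shape of that test at its simplest: compare claim frequency between drivers above and below the median hard-braking rate.

```python
# A minimal sketch, on purely synthetic data, of testing whether a
# telematics feature like hard braking carries a loss signal.
import random

random.seed(42)

# Hypothetical portfolio: each driver has a hard-braking rate (events per
# 100 miles) and a binary claim outcome. The 0.02 uplift per braking event
# is an invented effect size used only to make the example concrete.
drivers = []
for _ in range(10_000):
    braking_rate = random.gammavariate(2.0, 1.5)
    claim_prob = min(0.05 + 0.02 * braking_rate, 0.95)
    had_claim = random.random() < claim_prob
    drivers.append((braking_rate, had_claim))

# Split the portfolio at the median braking rate and compare claim
# frequency between the two halves.
rates = sorted(rate for rate, _ in drivers)
median = rates[len(rates) // 2]
high = [claim for rate, claim in drivers if rate >= median]
low = [claim for rate, claim in drivers if rate < median]

freq_high = sum(high) / len(high)
freq_low = sum(low) / len(low)

print(f"claim frequency, high-braking half: {freq_high:.3f}")
print(f"claim frequency, low-braking half:  {freq_low:.3f}")
print(f"relative loss frequency: {freq_high / freq_low:.2f}x")
```

In practice this takes far more than a median split (credible exposure periods, controls for confounders like mileage, and actuarial sign-off), which is exactly why the answer today is “not yet.”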
As you can see, in underwriting the bar for usability is high and the time frame to impact is long: at least a year or two. Insurtechs must therefore design their data with current underwriting guidelines in mind and focus on providing higher-quality data that plugs directly into existing workflows. The success of data in underwriting today rests on easy ingestion, data quality, and predictive power (which you can shortcut by focusing on existing attributes).

This does not mean underwriters won’t use new forms of data. In fact, certain insurtechs, like Cape Analytics and Zendrive, are making major headway in this space. But insurtech vendors need to walk before they run. Over the long term, the ambition should be to evolve the underwriting process with new kinds of loss-predictive data.
In short, data in underwriting is all about proving risk signals and building toward long-term impact.