Enhancing Catastrophe Risk Modeling in Insurance

June 20, 2013

Analytics solutions designed to handle the volume and variety of data available today can also help insurance companies improve catastrophe risk modeling: the process of determining the exposure of current policies and predicting the probable maximum loss (PML) from a catastrophic event. Catastrophe risk modeling is vital for setting policy prices and ensuring that sufficient reserves are available to pay future claims.
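
To make the PML idea concrete, here is a minimal sketch, not any particular vendor's model, of how a PML figure can be read off a simulated annual loss distribution. The event frequency, loss parameters and return periods below are all assumed for illustration.

```python
import numpy as np

# Illustrative sketch only: simulate annual catastrophe losses from a
# hypothetical event catalog, then read PML off the loss distribution.
rng = np.random.default_rng(42)
n_years = 100_000                 # simulated years
event_rate = 0.8                  # assumed average catastrophic events per year
log_mean, log_sigma = 14.0, 1.2   # assumed lognormal event-loss parameters (log-dollars)

annual_losses = np.zeros(n_years)
event_counts = rng.poisson(event_rate, size=n_years)
for year, count in enumerate(event_counts):
    if count:
        annual_losses[year] = rng.lognormal(log_mean, log_sigma, size=count).sum()

# PML at a given return period is a quantile of the annual aggregate loss distribution.
for return_period in (100, 250, 500):
    pml = np.quantile(annual_losses, 1 - 1 / return_period)
    print(f"1-in-{return_period}-year PML: ${pml:,.0f}")
```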

Solutions designed specifically for big data can help insurance companies accelerate catastrophe risk modeling and increase its precision. In the past, actuaries were constrained by models that were too slow to run frequently. Now, companies can use big data solutions to integrate historical event data, policy conditions, exposure data and reinsurance information. With a powerful data warehouse built for big data, they can analyze trillions of data rows and rapidly deliver results that underwriters can use to price policies by street address, proximity to fire stations or other granular parameters, rather than simply by city or state. A big data-enabled solution allows pricing models to be updated more often than a few times a year, and a specific risk analysis can be completed in minutes rather than hours.
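
As a rough illustration of what that granularity means in practice, the following sketch rates a handful of hypothetical policies by flood zone and distance to the nearest fire station rather than by state alone; the table, column names and figures are invented for the example.

```python
import pandas as pd

# Illustrative sketch only: a hypothetical per-policy table with geocoded
# attributes and a modeled average annual loss (AAL) per policy.
policies = pd.DataFrame({
    "policy_id":           [101, 102, 103, 104, 105, 106],
    "state":               ["FL", "FL", "FL", "TX", "TX", "TX"],
    "fire_station_km":     [0.8, 6.5, 2.1, 0.4, 9.3, 3.7],
    "flood_zone":          ["AE", "X", "AE", "X", "AE", "X"],
    "total_insured_value": [350_000, 420_000, 275_000, 500_000, 310_000, 390_000],
    "modeled_aal":         [2_100, 950, 1_750, 600, 3_400, 820],
})

# Rate by granular segments (flood zone x fire-station proximity band)
# instead of by state alone.
policies["station_band"] = pd.cut(
    policies["fire_station_km"], bins=[0, 2, 5, 100], labels=["<2km", "2-5km", ">5km"]
)
segments = policies.groupby(["flood_zone", "station_band"], observed=True).agg(
    total_aal=("modeled_aal", "sum"),
    total_tiv=("total_insured_value", "sum"),
)
segments["loss_cost_per_dollar_tiv"] = segments["total_aal"] / segments["total_tiv"]
print(segments)
```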

A solution built to analyze streaming data enables companies to capture and analyze real-time weather and disaster data that might affect losses. Predictive modeling solutions can help companies anticipate those losses before disasters strike or as events unfold. Insurers gain the ability to model losses at the policy level, calculating the effect of a new policy on the portfolio while it is being quoted.
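
A hedged sketch of that quote-time calculation: if the candidate policy and the existing portfolio are modeled against the same simulated event years, the policy's marginal contribution is simply the change in the portfolio's loss quantile when its losses are added in. The distributions and numbers here are illustrative only.

```python
import numpy as np

# Illustrative sketch only: both the existing portfolio and the quoted policy
# are assumed to be modeled against the same simulated event years, so their
# losses can be added year by year before re-reading the PML quantile.
rng = np.random.default_rng(7)
n_years = 50_000

portfolio_losses = rng.lognormal(16.0, 1.0, size=n_years)   # assumed, log-dollars
new_policy_losses = (
    rng.binomial(1, 0.02, size=n_years)                     # ~2% chance of a loss year
    * rng.lognormal(12.0, 0.8, size=n_years)
)

def pml(losses, return_period=250):
    """Probable maximum loss at the given return period (a loss quantile)."""
    return np.quantile(losses, 1 - 1 / return_period)

baseline = pml(portfolio_losses)
with_policy = pml(portfolio_losses + new_policy_losses)
print(f"Baseline 1-in-250 PML:      ${baseline:,.0f}")
print(f"With the quoted policy:     ${with_policy:,.0f}")
print(f"Marginal PML of the policy: ${with_policy - baseline:,.0f}")
```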

For example, a large global property casualty insurance company wanted to accelerate catastrophe risk modeling in order to improve underwriting decisions and determine when to cap exposures in its portfolio. The existing modeling environment was too slow and could not handle the large-scale data volumes the company wanted to analyze. The goal was to run multiple scenarios and model losses in hours, but the system it used required up to 16 weeks to acquire data, model losses, perform analytics and take action. As a result, the company was able to conduct analyses only three or four times per year.

Working with this company, IBM demonstrated a 100-fold improvement in performance, accelerating query execution from three minutes to less than three seconds. As a result, the company decided to implement IBM solutions for big data.

The company can now run multiple catastrophe risk models every month instead of just three or four times per year. Once data are refreshed, the company can create “what-if” scenarios in hours rather than weeks. With a better and faster understanding of exposures and probable maximum losses, the company is able to take action sooner to change loss reserves and optimize its portfolio.

With solutions for big data, companies can improve the performance and speed of catastrophe risk modeling to increase the precision of policy pricing, optimize reinsurance placement and determine when to cap a book of business based on risk.

To learn more, read the white paper, “Harnessing the Power of Big Data for Insurance.”

See more blog posts, videos, podcasts and reports on big data in insurance.