
How Big Data and Cognitive Computing are Transforming Insurance: Part 2

Worldwide Industry Marketing Manager for Insurance, IBM

In last month’s post, I talked about how cognitive computers such as IBM Watson can do what the earliest underwriters did: approach each risk individually and, based on historical learning, apply reason and judgment to determine a rate. Cognitive computing allows insurers to analyze massive amounts of structured and unstructured information in real time, formulate thousands of hypotheses, test for the best one, determine an optimal outcome and learn from the results.

The promise of cognitive computing can only be realized through the power and promise of big data. While affordable cognitive computing may be a few years away, the big data technology that will enable this shift is available now and can be used with traditional technology to achieve better business outcomes today.

While big data presents a great opportunity, it also poses challenges. To develop strategies that capitalize on the potential gold mine of information that big data represents, many carriers will have to challenge their data-centric, business-as-usual approach and the traditional principles that form the foundation of the industry.

Historically, data was an asset to be collected, stored and maintained as a differentiating advantage—but today, simply having data is no longer an unqualified benefit. Data is abundant in volume and variety. Some data is static while other data is dynamic. Some data is trusted, and some isn’t. With so much data available to anyone with the skills to find and harvest it, the real benefits of big data are available only to organizations with the capability to discern patterns and distill actionable business intelligence.


The explosion of modern data happens along four key vectors.

Beyond volume, velocity and variety, there is a fourth dimension of data: veracity, or how trustworthy the data is. Many insurance companies still believe that all data must be cleansed and polished in a warehouse before it can be used to produce insight.

According to IBM Research, by 2015 a majority of data will be unstructured and uncertain. Much of this growth comes from social media traffic and networked devices with sensors, both of which are uncertain data sources. As the amount of unstructured and uncertain data rises, attempting to structure and cleanse all of it before use will create serious bottlenecks for insurance providers and limit the usefulness of the data.

The notion of “making sense” is a profound and complex problem. In creating the first scalable cognitive computing machine with IBM Watson, IBM engineers could never have structured the data to support every possible question. Similarly, it would be impossible to program a computer to answer every question that might be asked of an underwriter. A better approach is to make sense of the data as it is: break the question down into parts, form hypotheses and test them until a sufficient degree of confidence is reached.
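To make that idea concrete, here is a toy sketch in Python of the hypothesize-and-test loop. The sub-questions, evidence snippets and confidence threshold below are invented for illustration; this is the shape of the approach, not how Watson is actually built.

```python
# Illustrative sketch only: a toy "break down, hypothesize, test, score" loop.
# The sub-questions, evidence snippets and scoring rule are made up for
# illustration; they are not IBM Watson's actual pipeline.

def evidence_support(hypothesis: str, evidence: list[str]) -> float:
    """Score a hypothesis by how many evidence snippets mention its key terms."""
    terms = set(hypothesis.lower().split())
    hits = sum(1 for snippet in evidence if terms & set(snippet.lower().split()))
    return hits / len(evidence) if evidence else 0.0

def answer(question_parts: dict[str, list[str]], evidence: list[str],
           confidence_threshold: float = 0.6) -> tuple[str, float]:
    """Break a question into parts, test each candidate hypothesis against
    the evidence, and return the best one once it clears the threshold."""
    best_hypothesis, best_confidence = "", 0.0
    for part, hypotheses in question_parts.items():
        for hypothesis in hypotheses:
            confidence = evidence_support(hypothesis, evidence)
            if confidence > best_confidence:
                best_hypothesis, best_confidence = hypothesis, confidence
    if best_confidence >= confidence_threshold:
        return best_hypothesis, best_confidence
    return "insufficient confidence", best_confidence

# Toy underwriting question: "Is this roofing contractor a high liability risk?"
parts = {
    "claims history": ["prior liability claims in the last five years"],
    "operations":     ["work performed above two stories"],
}
evidence = [
    "Inspection report: crew observed working above two stories without harnesses",
    "Loss runs show two liability claims in the last five years",
    "Contractor holds a valid state license",
]
print(answer(parts, evidence))
```

The point of the sketch is the loop, not the scoring: each candidate hypothesis is tested against whatever evidence is available, and an answer is only accepted once confidence is high enough.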

Today we assume that the more data we have, the longer it takes to find an answer. But with computing technologies that focus on making sense of data, a larger volume of data can actually speed up the search for answers. Jeff Jonas, IBM Fellow and chief scientist for IBM Entity Analytics, has worked to show that the more data you have, the faster you can generate a result: like a jigsaw puzzle, the more pieces you have already connected, the easier and faster it becomes to place the next one.


A better assessment of the data around and connected to a single piece of information enables a more complete, in-context understanding.
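As a rough illustration of that jigsaw effect, the sketch below keeps a simple index of every attribute value it has seen. Each record it absorbs gives later records more anchors to match against, so they attach to the right entity with a single lookup. The records and field names are invented, and real entity analytics is far more sophisticated than this.

```python
# A minimal sketch of context accumulation (the "jigsaw effect"):
# every absorbed record adds attribute values to an index, so later
# records have more anchors to match against and resolve quickly.
# Records and fields are invented for illustration.

from collections import defaultdict

index = {}                    # attribute value -> entity id
entities = defaultdict(list)  # entity id -> absorbed records
next_id = 0

def absorb(record: dict) -> int:
    """Attach a record to an existing entity if any attribute value is
    already known; otherwise start a new entity. Either way, index the
    record's values so future records can match on them.
    (Toy simplification: takes the first match, never merges entities.)"""
    global next_id
    entity_id = next((index[v] for v in record.values() if v in index), None)
    if entity_id is None:
        entity_id = next_id
        next_id += 1
    for value in record.values():
        index[value] = entity_id
    entities[entity_id].append(record)
    return entity_id

# Each record arrives with a slightly different name, but shared values
# (phone, address) snap it onto the same entity.
print(absorb({"name": "J. Smith",   "phone": "555-0101"}))                    # new entity 0
print(absorb({"name": "John Smith", "phone": "555-0101", "addr": "12 Elm"}))  # joins 0 via phone
print(absorb({"name": "Jon Smyth",  "addr": "12 Elm",    "plate": "ABC123"})) # joins 0 via address
```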

So if an insurance provider identifies a criminal committing fraud, the relevant data can point to other data that reveals people connected to the criminal, as well as similar cases that match the same profile. With enough data, and computing technologies designed to make sense of that data, the pieces of the puzzle fall into place more easily.
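Here is a hedged sketch of that expansion: starting from one confirmed fraudulent claim, it walks shared attribute values such as phone numbers and bank accounts to surface connected claims for review. The claims, fields and values are invented for illustration; a production system would run this kind of traversal over a full entity-analytics graph rather than a handful of dictionaries.

```python
# Illustrative only: starting from one confirmed fraudulent claim,
# walk shared attribute values (phone, bank account, address) to
# surface connected claims. All records below are invented.

from collections import deque

claims = {
    "CLM-001": {"phone": "555-0101", "bank": "ACCT-9", "addr": "12 Elm"},
    "CLM-002": {"phone": "555-0101", "bank": "ACCT-4", "addr": "90 Oak"},
    "CLM-003": {"phone": "555-0777", "bank": "ACCT-4", "addr": "7 Pine"},
    "CLM-004": {"phone": "555-0888", "bank": "ACCT-2", "addr": "3 Ash"},
}

# Invert the claim records into value -> claims links.
links = {}
for claim_id, attrs in claims.items():
    for value in attrs.values():
        links.setdefault(value, set()).add(claim_id)

def connected_claims(seed: str) -> set[str]:
    """Breadth-first walk from a confirmed fraud case across shared values."""
    seen, queue = {seed}, deque([seed])
    while queue:
        current = queue.popleft()
        for value in claims[current].values():
            for neighbor in links[value] - seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {seed}

# CLM-002 shares a phone with CLM-001, and CLM-003 shares a bank account
# with CLM-002, so both surface for review; CLM-004 shares nothing.
print(connected_claims("CLM-001"))   # {'CLM-002', 'CLM-003'}
```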

To learn more . . .