Highlights and success stories from IBM Insight 2015
Opening day of IBM Insight 2015 started with a keynote focusing on IBM Watson and Cognitive Computing. If you’ve never been to Insight, it’s quite an experience to enjoy the keynote with 14,000 other conference goers. If you can’t be in Las Vegas, do check out IBMGo. Even if you’re attending in person, watching IBMGo lets you research some of the companies and software products that are mentioned, take notes more easily and then, after the keynote, rush over to the conference center for the breakout sessions. IBM clients fly in from all over the world to share their success stories and lessons learned. Monday’s sessions provided two noteworthy and well-attended client presentations from Honda and Nationwide.
The Honda presentation, “Honda's Big Data Approach with PMQ, SPSS and WCA,” was delivered by Kyoka Nakagawa, who represents Honda R&D. With 27 years at Honda R&D, Nakagawa was introduced as the “Big Mama of Big Data.” There were some fascinating success stories, but it was even more valuable to hear her candid assessment of what could have been improved with the benefit of hindsight. A particularly interesting example involved trying to distinguish “panic braking” from other braking events using telematics data. When the company discovered that the standard data was not granular enough, it produced a test car to run at its proving ground, gathering more detailed data to work out how to solve the problem.
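To make the braking-classification idea concrete, here is a minimal sketch of what distinguishing panic braking from ordinary braking in telemetry data might look like. The field names, thresholds and rule are entirely hypothetical illustrations, not Honda's actual approach (which, per the talk, required far more granular data than a simple rule like this).

```python
# Hypothetical sketch: labeling braking events from telemetry samples.
# Thresholds and field names are invented for illustration only; they
# are not Honda's actual criteria.

def label_braking_event(decel_g: float, duration_s: float) -> str:
    """Classify a braking event by peak deceleration (in g) and duration.

    A sharp, short deceleration spike is treated as "panic" braking;
    anything gentler is "normal".
    """
    PANIC_DECEL_G = 0.7        # assumed peak-deceleration threshold
    PANIC_MAX_DURATION_S = 2.0  # assumed maximum duration of a panic stop
    if decel_g >= PANIC_DECEL_G and duration_s <= PANIC_MAX_DURATION_S:
        return "panic"
    return "normal"

events = [
    {"decel_g": 0.85, "duration_s": 1.2},  # hard, short stop
    {"decel_g": 0.30, "duration_s": 4.0},  # gradual slowdown
]
labels = [label_braking_event(e["decel_g"], e["duration_s"]) for e in events]
print(labels)  # ['panic', 'normal']
```

In practice this kind of rule would be replaced by a model trained on the detailed proving-ground data, which is exactly why the team needed finer-grained telemetry than the standard feed provided.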
A large number of team members have received training in IBM SPSS Modeler. As they attempted to build models, they ran into some of the classic challenges of predictive analytics, including the data silo problem. Their division, Honda R&D, can’t combine its data with Honda Motor Company’s as freely as it would like in order to examine issues such as warranty claims. They learned that a detailed knowledge of the data must be achieved before models can safely be built, and that models can fail if this step is not done thoroughly. Rushing into modeling is common, and one of Nakagawa’s regrets was waiting until mid-project to consult with the IBM Data Science team instead of drawing on their experience at the very start of the project.
Another lesson learned, in her view, was not involving the end user of the model more closely at the beginning of the project. In some cases, her team brought powerful data to support subject-matter experts, but the experts were skeptical at first and preferred their established approach. Finally, she intends to formalize the test and evaluation process before models go live—something she thinks can be improved. Her candor is sure to help other organizations achieve what Honda R&D has achieved, while avoiding some of these common stumbling blocks.
Alexander Bork and Sam Bass, both of Nationwide, presented “How Nationwide Uses Predictive Analytics” and explained how they leverage SPSS Modeler and Collaboration and Deployment Services to produce what they call a “Model Factory.” Earning some laughs around “Back to the Future Day,” they described what modeling at Nationwide was like back in 2010: each model took 12–20 weeks and cost $250,000. Now they have cut that development time to less than a month. They have truly mastered the art of collaboration between the “SPSS Admin” team, which they represent, and the modelers, who are spread across lines of business.
The list of improvements made to the process through five years of hard work is impressive. First, they made massive upgrades to the development environment for the modelers. Recognizing the large volume of historical data, and the day-to-day importance of the development environment to the modelers, they increased its resources to rival or even exceed those of the production environment. They also formalized the test and evaluation process before models go live in production, with an assembly line–like process that is a driving force in decreasing model development time so effectively.
Their deployment model is interesting, and seems a perfect fit for the problem. In a case study involving the proper routing of IT service center tickets for resolution, they don’t fully automate the process; instead, they provide the human team member with the necessary information at the moment it is needed, so team members can combine tribal knowledge with the model’s predictions, building confidence in the results. They use a similar deployment model to support the underwriting team. As their growth continues, they have established a pipeline of data science talent from nearby Ohio State University, and they have a long list of goals for 2016 and beyond, including streaming source data, Spark and more real-time use cases.
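The human-in-the-loop pattern described above can be sketched in a few lines: the model ranks candidate queues with scores, and a person sees those suggestions alongside the ticket before making the final routing decision. The queue names and scores below are invented for illustration; this is an assumption about the general pattern, not Nationwide's actual implementation.

```python
# Hypothetical sketch of human-in-the-loop ticket routing: the model
# suggests candidate queues with scores, and the agent makes the final
# call. Queue names and scores are invented for illustration.

def suggest_queues(scores: dict, top_k: int = 3) -> list:
    """Return the top-k candidate queues, highest score first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

model_scores = {"network": 0.62, "database": 0.21, "desktop": 0.12, "security": 0.05}
suggestions = suggest_queues(model_scores)

# The agent sees these ranked suggestions next to the ticket and combines
# them with tribal knowledge before routing -- the model informs rather
# than replaces the human decision.
for queue, score in suggestions:
    print(f"{queue}: {score:.0%}")
```

The design choice here mirrors what the speakers emphasized: surfacing ranked predictions at the moment of decision builds confidence in the model without taking the decision away from the team member.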
Keith McCormick will be signing books at the Insight 2015 bookstore on Wednesday and co-presenting with his colleague Ram Himmatraopet at Insight on Thursday. Learn more about predictive analytics, statistical analysis and IBM analytics resources.