From sandbox to massive scale: Getting it right the first time
When IBM announced innovations in information integration and governance for big data last fall, we IBMers believed we were on to something. Experience with clients worldwide suggested that automated, self-service integration for big data repositories would be a big deal. A Forrester Research study at the time underscored the importance of agile information integration and governance (IIG) to big data initiatives. So a giant leap forward in visual context, delivered through a dashboard for information governance, seemed to address a growing need for governance and visibility. But how would potential users respond to these capabilities once they were delivered?
These comments from customers in the insurance industry offer a window into the feedback we’ve received about InfoSphere Data Click, which saves time by enabling self-service integration for data scientists and others, without reliance on an IT team that already has a backlog of projects:
- “This is the way sandboxing is supposed to work.”
- “I have a feeling before long Gartner will be telling us if we’re not doing this, something is wrong.”
As these new capabilities take hold, we’re sometimes asked how IBM can focus on innovations like self-service integration for big (and not-so-big) data and a dashboard for big data governance without worrying about the big data itself: the enormous volume, variety and velocity of data suddenly available to so many organizations. We can do so because of our confidence in the underlying platform, one designed from the start to provide massive scalability and support unlimited volumes of data. While data scientists move easily ahead with their projects in the sandbox, their IT colleagues in the production world are integrating big data on a proven platform known for massive data scalability.
That solid but flexible foundation—now, incidentally, part of Watson Foundations—has enabled the design team to focus on innovations that make big data easier to understand, easier to digest and easier to deploy as part of an ever-changing array of business initiatives, all without re-design, re-work or re-anything. When it’s done right the first time, innovation on a production-proven foundation comes naturally.
- White paper: Who’s Afraid of the Big (Data) Bad Wolf?