The Cloud, Visualizations and Apps: Wikibon’s Big Data Predictions for 2014
2013 certainly was an eventful year for Big Data. Some highlights:
- Over the last 12 months, we saw the technologies that constitute Big Data, namely Hadoop and NoSQL data stores, continue to mature with a special focus on those elusive “enterprise-grade features” and on bringing SQL-style, interactive Big Data queries to these platforms. This includes the release of Hadoop 2.0 and YARN, which promises to enable data warehousing, machine learning, graph analysis and other applications on top of the open source Big Data framework.
- The vendor ecosystem got a lot more interesting in 2013. EMC spinoff Pivotal officially hit the scene, though plenty of questions remain about how the company will stitch together so many disparate parts. Intel unveiled its own Hadoop distribution, promising to increase performance and improve security by embedding capabilities directly into the silicon. IBM announced a partnership with leading NoSQL vendor MongoDB and made Watson available as a cloud service, making cognitive computing and advanced analytics available to SMBs. And GE - yes, the industrial equipment maker GE - announced its own Big Data platform and analytic applications to power the Industrial Internet.
- The cloud became an increasingly important part of the Big Data conversation this year. AWS Redshift, a high-performance data-warehousing-as-a-service offering, became the fastest growing service in AWS’s history. AWS also gained significant traction with its own Hadoop service, AWS Elastic MapReduce. IBM acquired SoftLayer, potentially laying the groundwork for a comprehensive cloud platform for Big Data analytics and applications to compete with AWS. And Pivotal released the first version of its own Big Data Platform-as-a-Service, built on open source Cloud Foundry.
- From a public relations standpoint, Big Data took its fair share of knocks in 2013. Many wondered aloud if Big Data had entered Gartner’s dreaded “Trough of Disillusionment,” when the heady early days of a new technology give way to the realization that there is no silver bullet. There were too many articles and blog posts to count that claimed Big Data was little more than a fad, or buzzword, or marketing spin. That didn’t stop a number of Fortune 1000 companies from diving headfirst into Big Data projects in 2013, however.
So what does 2014 have in store? Here are my predictions for Big Data in the New Year.
The public cloud will become the on-ramp for the majority of new Big Data exploratory analytics projects in 2014.
Critics will tell you there are all sorts of reasons the public cloud is not ideal for Big Data projects. It’s not secure. Moving large volumes of data to the cloud is challenging. Corporate policies and regulatory demands preclude moving many workloads to the cloud. True, true and true. But the benefits of the public cloud for Big Data projects are also many, and often outweigh the negatives. Getting Big Data projects up and running in internal corporate data centers is complicated. You have to procure the hardware, find experienced practitioners and configure clusters of machines - and that’s before you even start working on the analytics and applications that will actually deliver business value. The public cloud removes these bottlenecks, along with the burden of planning for future growth (how many machines will you need next month? next year?). The cloud is also ideal for supporting the iterative nature of Big Data exploratory analytics. With the public cloud, Data Scientists can quickly spin up clusters of machines, load some data and begin exploring. If a project doesn’t lead anywhere - as is often the case in early Big Data analytics projects - he or she can just flip a switch and spin the machines down again. In short, the public cloud allows Data Scientists to focus on what they do best - deriving insights from Big Data - and not worry about the underlying hardware and infrastructure.
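The spin-up, explore, spin-down lifecycle described above can be sketched in plain Python. This is a toy illustration, not a real cloud SDK: `provision_cluster` and `terminate_cluster` are hypothetical stand-ins for whatever API a provider such as Elastic MapReduce exposes.

```python
from contextlib import contextmanager

# Hypothetical stand-ins for a cloud provider's API calls.
def provision_cluster(num_machines):
    """Pretend to spin up a cluster; returns a list of node names."""
    return ["node-%d" % i for i in range(num_machines)]

def terminate_cluster(nodes):
    """Pretend to spin the machines back down."""
    nodes.clear()

@contextmanager
def ephemeral_cluster(num_machines):
    # Pay only while exploring: provision on entry, terminate on exit,
    # even if the exploratory analysis goes nowhere and raises an error.
    nodes = provision_cluster(num_machines)
    try:
        yield nodes
    finally:
        terminate_cluster(nodes)

# A throwaway exploratory run: load some data, look around, walk away.
with ephemeral_cluster(8) as cluster:
    print("exploring on %d machines" % len(cluster))
# After the block, the cluster is gone - no idle hardware to pay for.
```

The context-manager shape is the point: the teardown in `finally` runs whether or not the exploration pans out, which is exactly the flip-a-switch economics the paragraph describes.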
Data visualization finally brings analytics and insights to business users next year.
Traditional business intelligence platforms and applications did not live up to the promise of democratizing data throughout the enterprise. At most enterprises that have deployed traditional business intelligence platforms and apps, adoption rates stall out between 18% and 20%. This is because traditional BI is inflexible and time-consuming to deliver, not always intuitive from a user perspective and often just plain frustrating to work with. As a result, Excel is still the tool of choice for most business users when it comes to working with data, Big or small. But there is a slew of vendors - both large and small - that today offer data visualization tools that put the power in the hands of business users. These modern data visualization tools come in a number of different flavors, but they have one important trait in common: they allow business users to both integrate data sources and visualize the resulting mash-ups through GUIs that employ drag-and-drop interfaces, wizards and other self-service tools. They take the middleman - IT - out of the equation as much as possible. The results are much faster time-to-insight and a lot fewer headaches for business users.
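Under the covers, the mash-up step these self-service tools automate is essentially a join-and-aggregate across disparate sources. A minimal sketch in plain Python - the data sources and field names here are invented for illustration:

```python
from collections import defaultdict

# Two hypothetical sources a business user might drag together:
# CRM records and web analytics, both keyed by sales region.
crm = [
    {"customer": "Acme", "region": "East", "annual_spend": 120000},
    {"customer": "Globex", "region": "West", "annual_spend": 90000},
    {"customer": "Initech", "region": "East", "annual_spend": 45000},
]
web = [
    {"region": "East", "monthly_visits": 5200},
    {"region": "West", "monthly_visits": 3100},
]

def mash_up(crm_rows, web_rows):
    """Join spend to visits by region - the kind of integration a
    drag-and-drop visualization tool performs behind its GUI."""
    visits = {row["region"]: row["monthly_visits"] for row in web_rows}
    spend = defaultdict(int)
    for row in crm_rows:
        spend[row["region"]] += row["annual_spend"]
    return {r: {"annual_spend": spend[r], "monthly_visits": visits.get(r, 0)}
            for r in spend}

result = mash_up(crm, web)
# result["East"] -> {"annual_spend": 165000, "monthly_visits": 5200}
```

The value of the GUI tools is that a business user gets this result by dragging two sources onto a canvas, without writing the join themselves or waiting on IT.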
A plethora of new start-ups focused on Big Data applications will hit the scene in 2014.
Building Big Data applications has proven to be among the biggest Big Data challenges, and it is at the application layer where real business value is delivered. According to a survey we conducted last summer, Big Data early adopters are struggling to realize ROI from their investments, and one of the reasons is a lack of applications that bring vision to reality. And I’m not talking about data visualization applications. By Big Data applications, I’m referring to use-case specific applications that leverage Big Data analytics to suggest next-best actions to end-users and/or automate operational processes based on real-time intelligence. An example of the former would be a Big Data risk analytics application used by a loan originator that analyzes multiple data sources (not just credit score!) to deliver risk profiles and product recommendations for mortgage applicants. In this case, the user is not “playing with the numbers.” Rather, the Big Data application is providing insight and suggested actions for a specific, targeted problem or use case. An example of the latter might be an application that ingests and analyzes real-time temperature data and machine data to determine the optimal settings of a power generator at a given time and then executes the needed changes, all without any human intervention. I believe we’ll see many start-ups (and more than a few mega-vendors) offering these types of new Big Data applications in 2014, for three reasons: the cloud makes it easier than ever to spin up large clusters of machines to develop new applications; Big Data infrastructure technology (both hardware and software) has matured significantly in the last year (e.g. Hadoop 2.0); and (maybe most importantly, depending on your perspective) there is lots of capital available to developers with smart ideas for new Big Data apps.
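The loan-originator example above - blending multiple signals into a risk profile and a suggested action - can be sketched as a few lines of Python. Every weight, threshold and signal name here is invented for illustration; a real application would learn them from historical loan data rather than hard-code them.

```python
def risk_profile(credit_score, debt_to_income, years_employed):
    """Blend several signals into a single risk score between 0 and 1.
    Weights and thresholds are illustrative assumptions, not a real model."""
    score = 0.0
    score += 0.5 * max(0.0, (700 - credit_score) / 200.0)  # weaker credit -> riskier
    score += 0.3 * min(1.0, debt_to_income / 0.5)          # higher DTI -> riskier
    score += 0.2 * max(0.0, (5 - years_employed) / 5.0)    # short tenure -> riskier
    return min(1.0, score)

def next_best_action(credit_score, debt_to_income, years_employed):
    """Turn the risk score into a suggested product for the loan officer -
    the 'next-best action' rather than raw numbers to play with."""
    risk = risk_profile(credit_score, debt_to_income, years_employed)
    if risk < 0.25:
        return "offer 30-year fixed at standard rate"
    elif risk < 0.6:
        return "offer 15-year fixed with larger down payment"
    return "refer to manual underwriting"

# A strong applicant gets a concrete recommendation, not a spreadsheet.
print(next_best_action(760, 0.2, 10))  # prints: offer 30-year fixed at standard rate
```

The point of the sketch is the shape of the application, not the model: the end-user sees a targeted recommendation for their specific use case, with the multi-source analytics hidden behind it.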
Those are just a few predictions for Big Data in 2014. The great thing about covering this market, however, is that things are moving so fast, there are sure to be a number of developments in the coming year that nobody saw coming. Should be another fun 12 months.