The data lake can be considered the consolidation point for all enterprise data that has value across different parts of the business. A typical data lake is likely to include a wide range of different types of data repositories.
It’s easy to be blinded (and impressed) by the rapid innovation and evolution in the arena of big data. Today’s most technically sophisticated companies have the opportunity to exploit big data tools to address mind-numbingly cool use cases and produce very enticing results. However, so many
This is the fourth in a series of blogs on analytics and the cloud. Read our introduction to the series. This blog concerns itself with the rise of open source software and how it is used for a whole host of analytical purposes. However, as will be seen in this blog, there are significant gaps in
Context-aware stream computing helps you become more responsive to emerging opportunities. By using innovative technologies to understand the context of data and analyze data in real time, you can put data to work.
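Real-time analysis of arriving data is often done over a sliding window of recent readings. As an illustrative sketch only (not any specific product's API), a minimal windowed average over a stream can be written in plain Python; the `windowed_average` function and the sample readings are hypothetical:

```python
from collections import deque

def windowed_average(stream, window_size=3):
    """Yield the running average over the last `window_size` readings."""
    window = deque(maxlen=window_size)  # old readings fall off automatically
    for reading in stream:
        window.append(reading)
        yield sum(window) / len(window)

# Hypothetical sensor readings smoothed as they arrive
readings = [10.0, 12.0, 11.0, 50.0, 13.0]
averages = list(windowed_average(readings))
```

Because each reading is processed as it arrives, the consumer can react to a spike (like the 50.0 above) within one window, rather than waiting for a batch job.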
Although NoSQL database technology has been around for a long time (before SQL, actually), not until the advent of Web 2.0, when companies such as Google and Amazon began using the technology, did NoSQL’s popularity really take off. Market Research Media forecasts the NoSQL market to reach $3.4 billion by
Building a data lake is one of the stepping stones toward data monetization and many other advanced revenue-generating and competitive-edge use cases. What are the building blocks of a “cognitive trusted data lake” enabled by machine learning and data science?
In many cases the data lake can be defined as a superset of data repositories that includes the traditional data warehouse, complete with traditional relational technology. One significant example of the different components in this broader data lake is in terms of different approaches to the
This is the second in a series of blogs on analytics and the cloud. We will consider the rise of the Internet of Things (IoT), analytics used on that data and how the cloud can be utilized to drive value out of instrumenting a very wide range of ‘things’.
Fundamentally, machine learning is a productivity tool for data scientists. As the heart of systems that can learn from data, machine learning allows data scientists to train a model on an example data set and then leverage algorithms that automatically generalize and learn both from that example
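The train-then-generalize idea can be sketched with one of the simplest possible learners, a 1-nearest-neighbor classifier written in plain Python. This is an illustration of the concept only, not the method any particular IBM tool uses; the `train`/`predict` functions and the example data set are hypothetical:

```python
def train(examples):
    """For 1-nearest-neighbor, 'training' is just storing the labeled examples."""
    return list(examples)

def predict(model, point):
    """Generalize: label an unseen point by its closest training example."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: sq_distance(ex[0], point))
    return nearest[1]

# Hypothetical example data set: 2-D points labeled "low" or "high"
examples = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
            ((9.0, 9.0), "high"), ((8.5, 9.5), "high")]
model = train(examples)
label = predict(model, (8.0, 8.0))  # an unseen point the model never saw
```

The point of the sketch is the workflow: the data scientist supplies labeled examples once, and the algorithm then assigns labels to new, unseen data automatically.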
In the cognitive computing era, a new revenue generation stream has emerged, with data at the center of the modern digital business model. One of the key capabilities cognitive computing enables for an organization is the ability to generate additional revenue streams by using data effectively. In the big
IBM’s community of big data developers continues to grow. As our Big Data Developer meetup program moves into its fifth year, this worldwide community of customers, partners and IBM developers is on the verge of enlisting its 100,000th member—when we published this blog, we counted 99,100.
The grand finale of the first IBM France Sparkathon invited Apache Spark developers to outthink the frontiers of client insights. Get the details on this event held during the IBM Business Connect conference and the application that took the top prize.
Analyzing streams of big data in real time can have a big impact on competitive advantage. In a world of bewildering stream processing engine choices, explore the use-case-dependent alternatives that can provide well-suited business outcomes, courtesy of expertise from Roger Rea and Jacques Roy.
Internet of Things data, devices and technologies are evolving into a core platform that is expected to impact business flexibility and more. Take a look at some key comprehensive best practices for Internet of Things–enabled application development that can put speed and agility into your business.