Data, insights, cloud, agile, analytics. These terms get thrown around a lot in technology these days. But the truth is that unless you can combine some or all of these concepts, the bottom-line benefit to your business will likely not be as great as you expect.
This is the fourth in a series of blogs on analytics and the cloud. Read our introduction to the series. This blog examines the rise of open source software and how it is used for a wide range of analytical purposes. However, as will be seen in this blog, there are significant gaps in
In the past, the relationship between the different models used to define a data warehouse was strictly linear. Different model artifacts were produced as the team responsible for developing the data warehouse progressed through a usually waterfall-style set of
Although NoSQL database technology has been around for a long time (it predates SQL, in fact), it was not until the advent of Web 2.0, when companies such as Google and Amazon began using the technology, that NoSQL's popularity really took off. Market Research Media forecasts that the NoSQL market will reach $3.4 billion by
IBM Analytics VP of Marketing Jeff Spicer sits down with data scientist and evangelist Dez Blanchfield to recap IBM InterConnect 2017 and share his insights into a few of the announcements from this year's event.
Quite often, we see that the need for data security and governance makes some organizations hesitant to migrate to the cloud. This is perfectly understandable given the types of data businesses gather and use today, the regulations they must adhere to at both the local and global level,
In many cases the data lake can be defined as a superset of data repositories that includes the traditional data warehouse, complete with traditional relational technology. One significant example of the different components in this broader data lake is the different approaches to the
This white paper discusses the advantages of the PySpark API, which enables the use of Python to interact with the Spark programming model. It starts with a basic description of Spark, then describes PySpark, its benefits, and when it is appropriate to use instead of the open source "pandas" library.
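To make the pandas-versus-PySpark distinction concrete, here is a minimal sketch (not taken from the white paper) of the same aggregation expressed both ways: pandas executes eagerly on a single machine, while the equivalent PySpark calls, shown as comments, build a lazy plan that Spark distributes across a cluster. The data and column names are illustrative.

```python
import pandas as pd

# pandas: eager, in-memory, single-machine
pdf = pd.DataFrame({"key": ["a", "b", "a"], "val": [1, 2, 3]})
agg = pdf.groupby("key", as_index=False)["val"].sum()

# The equivalent PySpark is lazy and distributed; nothing runs
# until an action such as .collect() or .show() is called:
#   from pyspark.sql import SparkSession
#   spark = SparkSession.builder.getOrCreate()
#   sdf = spark.createDataFrame(pdf)
#   sdf.groupBy("key").sum("val").show()

result = dict(zip(agg["key"], agg["val"]))
```

The rule of thumb the paper elaborates on: when the data fits comfortably in one machine's memory, pandas is simpler and faster; when it does not, PySpark's distributed execution is what justifies the extra ceremony.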
This is the second in a series of blogs on analytics and the cloud. We will consider the rise of the Internet of Things (IoT), the analytics applied to IoT data, and how the cloud can be used to derive value from instrumenting a very wide range of 'things'.
This is the first in a series of blogs that looks at what is driving analytics onto the cloud, the challenges that will need to be overcome over the next five years, and how they will be tackled.
J White Bear is a data scientist and software engineer at IBM. In this podcast, White Bear discusses simultaneous localization and mapping, an ongoing research area in robotics for autonomous vehicles and well-recognized as a nontrivial problem space in both industry and research.
Seth Dobrin is vice president and CDO, IBM Analytics, platform development, at IBM. In this podcast, Dobrin shares experiences using Apache Spark for data science transformation and some thoughts on a larger vision for data science transformation at scale.
It is said that more data has been created in the past two years than in all of preceding human history. It would be interesting to find out how much of this data has been analyzed and put to good use. Analyzing and harnessing big data is undoubtedly the major challenge of the day for all