Machine learning is being used at the heart of next-generation methods for self-driving cars, facial recognition, fraud detection and much more. At IBM, we’re applying machine learning methods to SQL processing so databases can literally learn from experience.
Why has IBM created its own distribution of Apache Hadoop and Apache Spark, and what makes it stand out from the competition? We asked Prasad Pandit, program director, product management, Hadoop and open analytics systems, at IBM to give us a tour of the reference architecture for IBM Open Platform…
IBM extended Big SQL, which was formerly exclusive to the IBM Open Platform (IOP), to the Hortonworks Data Platform (HDP) in September 2016. I recently spoke with Berni Schiefer, an IBM Fellow in the IBM Analytics group, to learn more about the offering and the ongoing IBM focus on SQL.
IBM Insight at World of Watson 2016, 24–27 October 2016, at Mandalay Bay in Las Vegas, Nevada, is the only place to be for people who work with data. Take a look at this list of the top ten reasons you won’t want to miss out on one of the most intriguing and innovative events of the year.
As a foundation for data lakes and refineries, NoSQL databases provide access, processing and storage for structured and unstructured data for high-performance statistical modeling and exploration. Take a look at the multitude of advantages of NoSQL databases and opportunities to bridge them to open…
Performing programmatic actions on data across services is quite possible in today’s technology ecosystem. And now, transferring data from services such as the dashDB data warehouse and deploying it in new environments is also possible. However, the questions often asked by customers center on…
Spark just seems to be getting big play everywhere in the technology arena. What is Spark? And do you need it? Get a good glimpse into its in-memory execution capabilities, some of its key components, its integrations and its availability as a service.
A new version of the IBM DB2 for Linux, UNIX and Windows (LUW) database was recently released, and it is well fortified with core advances; key management and security features; and deployment and availability enhancements. Discover these advancements and much more in a blog series dedicated to this…
The open source Hadoop framework accommodates distributed storage and processing of large data sets on clusters of computers through the use of programming models. If that description sounds complex, then dig into this breakdown of Hadoop components to gain an understanding of just how flexible…
This short series of blogs for the business user is designed to turn key technologies into easy-to-understand concepts to help explain why they are needed in a modern digital enterprise. When looking at consumer and business transactions in today’s online world, many people may ask, “Why big data…”
On the heels of several key announcements to broaden the IBM Cloud Data Services portfolio, see how a wide range of technologies can be implemented in a cloud-based, data warehouse architecture to support operational and analytical workloads.
An open ecosystem thrives on a mature core platform. It also depends on partnering arrangements that incentivize solution providers to continue developing standards-based interoperability around the shared environment. Take a deeper dive into recent announcements of new open ecosystem milestones.