This white paper discusses the advantages of the PySpark API, which enables the use of Python to interact with the Spark programming model. It starts with a basic description of Spark and then describes PySpark, its benefits, and when it is appropriate to use it instead of the open source pandas library.
This is the second in a series of blogs on analytics and the cloud. We consider the rise of the Internet of Things (IoT), the analytics applied to IoT data and how the cloud can be used to derive value from instrumenting a very wide range of ‘things’.
Fundamentally, machine learning is a productivity tool for data scientists. At the heart of systems that can learn from data, machine learning allows data scientists to train a model on an example data set and then apply algorithms that automatically generalize and learn from those examples.
J White Bear is a data scientist and software engineer at IBM. In this podcast, White Bear discusses simultaneous localization and mapping, an ongoing research area in robotics for autonomous vehicles and well-recognized as a nontrivial problem space in both industry and research.
In the cognitive computing era, new revenue streams have emerged with data at the center of the modern digital business model. One of the key capabilities cognitive computing enables for an organization is the ability to generate additional revenue streams by using data effectively.
IBM’s community of big data developers continues to grow. As our Big Data Developer meetup program moves into its fifth year, this worldwide community of customers, partners and IBM developers is on the verge of enlisting its 100,000th member—when we published this blog, we counted 99,100.
Seth Dobrin is vice president and CDO, IBM Analytics, platform development, at IBM. In this podcast, Dobrin shares experiences using Apache Spark for data science transformation and some thoughts on a larger vision for data science transformation at scale.
In this white paper, discover how programmers and data scientists can use SparkR to transform R into a tool for big data analytics, taking advantage of parallel processing and near-linear scaling to tackle much larger challenges than would normally be possible with other methods.
The grand finale of the first IBM France Sparkathon invited Apache Spark developers to outthink the frontiers of client insights. Get the details on this event held during the IBM Business Connect conference and the application that took the top prize.
Analyzing streams of big data in real time can have a big impact on competitive advantage. In a world of bewildering stream processing engine choices, explore the use-case-dependent alternatives that can provide well-suited business outcomes, courtesy of expertise from Roger Rea and Jacques Roy.
Internet of Things data, devices and technologies are evolving into a core platform that is expected to impact business flexibility and more. Take a look at some key comprehensive best practices for Internet of Things–enabled application development that can put speed and agility into your business
Holden Karau is a software engineer at IBM, an active open source contributor and coauthor of Learning Spark (O'Reilly Media, February 2015) and the soon to be released High Performance Spark (O'Reilly Media, March 2017). In this podcast, Karau examines how to effectively search logs from Apache Spark.
Nick Pentreath is a principal engineer at IBM, a member of the Apache Spark project management committee (PMC) and author of Machine Learning with Spark (Packt Publishing, December 2014). In this podcast, Pentreath covers the basics of feature hashing and how to use it for all feature types in Apache Spark.
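Feature hashing (the "hashing trick" Pentreath discusses) maps arbitrary feature names to indices in a fixed-size vector, so categorical and numeric features can share one code path without maintaining a dictionary. The following is a minimal pure-Python sketch of the idea, not Spark's own implementation; the function name, bucket count and sample features are illustrative.

```python
import hashlib

def hash_features(features, num_buckets=16):
    """Map arbitrary named features to a fixed-size vector using the
    hashing trick: hash each feature name, take the digest modulo the
    vector size, and add the feature's value at that slot."""
    vec = [0] * num_buckets
    for name, value in features.items():
        # Use a stable hash; Python's built-in hash() is salted per process.
        digest = hashlib.md5(name.encode("utf-8")).hexdigest()
        index = int(digest, 16) % num_buckets
        vec[index] += value
    return vec

# Categorical features are encoded as "name=value" strings with weight 1;
# numeric features keep their value. Collisions simply add together.
sample = {"country=US": 1, "device=mobile": 1, "age": 34}
print(hash_features(sample))
```

Because the vector size is fixed up front, new feature values seen at serving time hash into the same space with no retraining of the encoding, which is what makes the trick attractive for high-cardinality features.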
Today’s businesses need a culture of collaboration that empowers knowledge workers to glean cognitive insights from data that help transform and modernize operations. See how cloud-based platforms and solutions enable data scientists and other experts to exploit artificial intelligence and machine learning.