Machine learning enriches the private cloud
Machine learning can infuse every application with predictive power. Data scientists use these sophisticated algorithms to dissect, search, sift, sort, infer, foretell, and otherwise make sense of the growing amounts of data in our world.
Fundamentally, machine learning is a productivity tool for data scientists. As the heart of systems that can learn from data, machine learning allows data scientists to train a model on an example data set and then leverage algorithms that automatically generalize and learn both from that example and from fresh data feeds. With unsupervised approaches, data scientists can dispense with training examples entirely and use machine learning to distill insights directly and continuously from the data.
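The two modes described above can be sketched in a few lines of plain Python. This is a toy illustration, not any vendor's implementation: the supervised function learns a decision threshold from labeled examples, while the unsupervised function finds structure in the same numbers with no labels at all.

```python
def train_supervised(examples):
    """Supervised: learn a decision threshold from labeled (value, label) pairs."""
    pos = [v for v, label in examples if label == 1]
    neg = [v for v, label in examples if label == 0]
    # Place the boundary midway between the two class means.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, value):
    """Generalize from the training examples to fresh, unseen data."""
    return 1 if value >= threshold else 0

def cluster_unsupervised(values, iterations=10):
    """Unsupervised: simple 1-D two-means clustering, no training labels needed."""
    c1, c2 = min(values), max(values)  # initial centroids
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return c1, c2

# Supervised: train on labeled examples, then score a fresh value.
threshold = train_supervised([(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)])
print(predict(threshold, 7.5))  # classifies a new, unseen value

# Unsupervised: two groups emerge directly from unlabeled data.
print(cluster_unsupervised([1.0, 2.0, 8.0, 9.0]))
```

In the supervised case the model only knows what the labeled examples taught it; in the unsupervised case the insight is distilled continuously from the raw values themselves, which is why that approach can run without training examples.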
To achieve machine learning’s full potential as a business resource, data scientists need to train models on the rich troves of data held on the mainframes and other servers in your private cloud. For truly robust enterprise analytics, you need machine-learning platforms that are engineered to deliver the following:
- Automation and optimization: Your enterprise machine learning platform should enable data scientists to automate the creation, training, and deployment of algorithmic models against high-value corporate data. The platform should assist them in choosing the optimal algorithm for every data set, scoring their data against the available algorithms and provisioning the one that best matches their needs.
- Performance and scalability: The platform should be able to continuously create, train, and deploy a high volume of machine learning models against data maintained in vast corporate databases. It should allow data scientists to deliver better, fresher, more frequent predictions, thereby speeding time to insight.
- Security and governance: The system should enable data scientists to train models without moving the data from the mainframe or other enterprise platform where it is secured and governed. In addition to minimizing the latency and controlling the cost of executing machine learning in your data center, this approach eliminates the risks associated with doing ETL on a platform separate from the node where machine learning execution takes place.
- Flexibility and programmability: The platform should allow data scientists to use any language (e.g., Scala, Java, Python), any popular framework (e.g., Apache SparkML, TensorFlow, H2O), and any transactional data type throughout the machine learning development lifecycle.
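The "score data against available algorithms" step in the automation bullet above can be sketched as a simple selection loop. This is a hypothetical, stdlib-only illustration; the candidate models (`mean_threshold`, `nearest_neighbor`) and the holdout-accuracy scoring are illustrative stand-ins, not any product's actual mechanism.

```python
def holdout_accuracy(model_fn, train, test):
    """Fit a candidate algorithm on the training split, score it on the holdout split."""
    predict = model_fn(train)
    correct = sum(1 for x, y in test if predict(x) == y)
    return correct / len(test)

def mean_threshold(train):
    """Candidate 1: threshold at the midpoint of the per-class means."""
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= t else 0

def nearest_neighbor(train):
    """Candidate 2: predict the label of the closest training point."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def select_algorithm(candidates, train, test):
    """Score every candidate against the data; provision the best match."""
    scores = {name: holdout_accuracy(fn, train, test) for name, fn in candidates}
    best = max(scores, key=scores.get)
    return best, scores

train = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
test = [(1.5, 0), (8.5, 1)]
best, scores = select_algorithm(
    [("mean_threshold", mean_threshold), ("nearest_neighbor", nearest_neighbor)],
    train, test)
print(best, scores)
```

A production platform would score far richer candidates (gradient-boosted trees, neural networks, and so on) across distributed data, but the shape of the decision is the same: evaluate each algorithm against the data, then deploy the winner.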
The newly announced IBM Machine Learning for z/OS provides all of these capabilities. The solution incorporates mature machine-learning technology from IBM Watson. Through a wizard-driven development interface, a single management UI, and RESTful APIs, it enables data scientists to create better models in less time, provide optimal parameters for any given model, simplify model creation and management, improve models over time, and integrate easily with existing tools and applications. The solution’s Cognitive Assistant for Data Scientists rapidly selects and tunes the algorithm that best fits the data and the business scenario. The solution will initially be available on the z System mainframe, which can process up to 2.5 billion transactions in a single day. It will be available for other platforms in the future, including IBM POWER Systems.