SparkOscope gives developers more complete insights into Apache Spark jobs

Research Software Engineer, IBM

During the last year, the High Performance Systems team at IBM Research, Ireland has been using Apache Spark to perform analytics on large volumes of sensor data. The Spark applications it has developed revolve around cleansing, filtering and ingesting historical data. These applications run daily, so it was essential for the team to understand Spark resource utilization in order to size infrastructure accurately, control costs and give customers the shortest possible time to insight.

Presently, the conventional way of identifying bottlenecks in a Spark application is to inspect the Spark Web UI, either while the jobs and stages are executing or post mortem. This view is limited to the job-level application metrics reported by Spark's built-in metrics system (for example, stage completion time). The current version of the Spark metrics system supports recording metric values to local CSV files as well as integrating with external monitoring systems such as Ganglia.
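For reference, this is roughly how those two built-in sinks are enabled in Spark's metrics configuration. The snippet below is an illustrative sketch based on Spark's metrics.properties template; the directory, host and period values are placeholders, not settings from the team's cluster.

```properties
# conf/metrics.properties -- illustrative sketch of the built-in sinks mentioned above.

# Write all metric values to local CSV files on each node.
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.directory=/tmp/spark-metrics
*.sink.csv.period=10
*.sink.csv.unit=seconds

# Alternatively, forward metrics to a Ganglia gmond endpoint
# (requires the spark-ganglia-lgpl package because of LGPL licensing).
*.sink.ganglia.class=org.apache.spark.metrics.sink.GangliaSink
*.sink.ganglia.host=ganglia.example.com
*.sink.ganglia.port=8649
```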

The team found it cumbersome to manually collect and inspect the CSV files generated on the Spark worker nodes. Although an external monitoring system such as Ganglia would automate this process, the team was still unable to derive temporal associations between system-level metrics such as CPU utilization and the job-level metrics reported by Spark (for example, job or stage ID). For instance, the team could not trace the root cause of a peak in HDFS reads or CPU usage back to the lines of the Spark application causing the bottleneck.
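To make the missing piece concrete, the sketch below shows the kind of temporal association the team wanted but could not get from Ganglia alone: attributing each system-level sample to whichever Spark job was running when it was taken. The CpuSample and JobInterval types and the example values are hypothetical and not part of any Spark or SparkOscope API.

```scala
// Minimal sketch of the temporal association described above: attributing
// system-level samples to the Spark job that was running when they were taken.
// CpuSample and JobInterval are hypothetical types, not part of any Spark API.

case class CpuSample(host: String, epochMillis: Long, cpuUtil: Double)
case class JobInterval(jobId: Int, startMillis: Long, endMillis: Long)

object TemporalJoin {
  /** Pair every CPU sample with the job (if any) whose run time contains it. */
  def associate(samples: Seq[CpuSample],
                jobs: Seq[JobInterval]): Seq[(CpuSample, Option[Int])] =
    samples.map { s =>
      val owner = jobs.find(j => s.epochMillis >= j.startMillis &&
                                 s.epochMillis <= j.endMillis)
      (s, owner.map(_.jobId))
    }

  def main(args: Array[String]): Unit = {
    val samples = Seq(CpuSample("node1", 1000L, 0.92), CpuSample("node1", 5000L, 0.15))
    val jobs    = Seq(JobInterval(0, 500L, 2000L))
    // First sample falls inside job 0's interval; the second maps to None.
    associate(samples, jobs).foreach(println)
  }
}
```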

To overcome these limitations, IBM developed SparkOscope. The tool builds on the job-level information already available through the Spark Web UI and minimizes source code pollution: the existing Web UI is still used to monitor and visualize job-level metrics of a Spark application, such as completion time. More importantly, SparkOscope extends the Web UI with a palette of system-level metrics about the server, virtual machine or container on which each of the Spark job's executors ran. Users can navigate to any completed application and identify application-logic bottlenecks by inspecting plots of in-depth time series for all relevant system-level metrics of the Spark executors, easily associating them with the stages, jobs and even source code lines incurring the bottleneck. The same system-level metrics are also essential for careful infrastructure planning for the set of Spark applications customers run each day.

In the back end, SparkOscope leverages the open source Hyperic Sigar library to capture OS-level metrics on the nodes where executors are launched. The following screenshot shows how users choose which of the new family of OS-level metrics to visualize for all executors launched by the Spark job under scrutiny.
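As a rough illustration of what such a per-node collector looks like, here is a minimal Scala sketch that samples CPU and memory with Sigar. It is not SparkOscope's actual collector code, and it assumes the Sigar jar and its native library are available on the node's java.library.path.

```scala
// Illustrative sampler of OS-level metrics using Hyperic Sigar.
// Not SparkOscope's actual collector; shown only to make the back end concrete.

import org.hyperic.sigar.Sigar

object NodeMetricsSampler {
  def main(args: Array[String]): Unit = {
    val sigar = new Sigar()
    while (true) {
      val cpu = sigar.getCpuPerc          // overall CPU utilization
      val mem = sigar.getMem              // physical memory usage
      println(f"${System.currentTimeMillis()}%d " +
              f"cpu=${cpu.getCombined * 100}%.1f%% " +
              f"memUsed=${mem.getActualUsed / (1024 * 1024)}%d MB")
      Thread.sleep(10000)                 // sample every 10 seconds
    }
  }
}
```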

The High Performance Systems team used the tool to evaluate the RAM requirements of one of its most frequent workloads. Initially, the team assigned each Spark executor 80 GB of RAM and inspected RAM utilization alongside the kilobytes written to disk.

It became clear that Spark uses all of the RAM given to its executors and, once no more RAM is available, resorts to heavy disk writes of intermediate data. However, the plots also showed that the nodes had more RAM available that could be assigned to the executors, which reduces the need for heavy disk writes and contributes to faster completion times. Users can also plot additional metrics of interest, such as the volume of HDFS bytes read.
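Acting on that observation typically comes down to raising the executor memory setting. The sketch below is only an assumption about how such a change might look; the application name and the 96g figure are placeholders rather than values from the team's experiments.

```scala
// Hedged sketch: raising executor memory after SparkOscope shows spare RAM on
// the nodes. The "96g" value and the application name are placeholders.

import org.apache.spark.{SparkConf, SparkContext}

object TunedJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("daily-ingest")              // hypothetical application name
      .set("spark.executor.memory", "96g")     // raised from the original 80g
    val sc = new SparkContext(conf)
    // ... application logic (cleansing, filtering, ingesting) would go here ...
    sc.stop()
  }
}
```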

Additionally, the SparkOscope project is open source. Take a look.

This contribution is part of an ongoing effort by the High Performance Systems team to better identify Spark performance bottlenecks through visualization of key metrics, a process that can extract useful insights to drive future work. IBM is exploring various potential avenues to make this tool available to the Spark user community.

Be sure to attend Spark Summit Europe—and when you’re there, check out Yiannis Gkoufas’s session on SparkOscope.

Learn more at Spark Summit Europe 2016