
How big data analytics are reinventing IT application management

Senior Technical Staff Member, IBM

When writing software, developers typically instrument their code and use the generated logs to debug the application’s functional and non-functional issues. Once the application software is in production, the IT operations team uses performance and availability monitoring tools to make sure that the application is healthy. These application management tools have traditionally collected performance metrics and some amount of log and trace information, and generated events to help detect and diagnose issues in a production setting.
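As a minimal sketch of that practice, here is what such instrumentation might look like in Python using the standard logging module. The service name, function and log messages are hypothetical; the point is only to show the kind of records that later feed debugging in test and monitoring in production.

```python
import logging

# A hypothetical service instrumented by its developers: the log records
# produced here are the raw material later used for debugging in test
# and for monitoring and diagnosis in production.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("checkout-service")

def process_order(order_id, amount):
    logger.info("processing order %s amount=%.2f", order_id, amount)
    try:
        if amount <= 0:  # stand-in for real business logic
            raise ValueError("amount must be positive")
        logger.info("order %s completed", order_id)
    except ValueError:
        # logger.exception records the stack trace alongside the message,
        # which is what both developers and operators search for later.
        logger.exception("order %s failed", order_id)

process_order("A-1001", 25.00)
process_order("A-1002", -5.00)
```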

The traditional silos between dev/test and production processes are fast disappearing. What’s precipitating that movement? Factors include the rapid rate of change in IT, business demands for greater versatility, and direct user-experience feedback. Cloud-based delivery of applications, where users are not exposed to software versions and production deployments, is another driver.

“DevOps” captures in a word this fundamental shift in how development and operations processes are integrating. As a consequence, developers have more influence over how their applications are managed in production; after all, they know their applications best. Their methods and knowledge for troubleshooting an application in development are also setting the standard for how an ops team manages the production version.

So what does that imply?

  • Monitoring data is no longer restricted to external metrics; it is largely log-based, with instrumentation built in by the developers of middleware stacks and applications
  • This data is not necessarily structured like numeric performance data, and it can contain unstructured text beyond the header of whatever logging format, if any, is used
  • Developers of different software components do not follow a uniform logging standard, and they regularly add new information to their logs as debug scenarios change and new code is added
  • It is hard to fit this data into a fixed schema and process it with relational queries to build dashboards that help detect and diagnose application health problems (the sketch after this list illustrates why)
  • Moreover, by its very nature the data is high volume, running to terabytes per day for a large enterprise, which can stress any traditional management system
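To make the schema problem concrete, here is a small, hedged sketch in Python. The two sample log lines, the component names and the regular expressions are invented for illustration; the point is simply that every log format needs its own parsing rule, and anything that matches no known rule has to be kept as raw text rather than forced into a fixed schema.

```python
import re

# Two hypothetical log lines from different components: one roughly
# syslog-like, one emitted in a custom format by application code.
SAMPLE_LINES = [
    "2014-10-30T10:02:17Z WARN payment-svc response time 4231 ms exceeds threshold",
    "Oct 30 10:02:18 appnode1 db-pool: connection refused, retrying (attempt 3)",
]

# Best-effort patterns; real machine data rarely fits a single one.
PATTERNS = [
    re.compile(r"^(?P<ts>\S+Z)\s+(?P<level>[A-Z]+)\s+(?P<component>\S+)\s+(?P<message>.*)$"),
    re.compile(r"^(?P<ts>\w{3} \d{1,2} [\d:]{8})\s+(?P<host>\S+)\s+(?P<component>[\w.-]+):\s+(?P<message>.*)$"),
]

def parse_line(line):
    """Try each known pattern; keep the raw text if none of them match."""
    for pattern in PATTERNS:
        match = pattern.match(line)
        if match:
            return {"parsed": True, **match.groupdict()}
    # Unrecognized format: retain the whole line so nothing is lost.
    return {"parsed": False, "raw": line}

for line in SAMPLE_LINES:
    print(parse_line(line))
```

Multiply this by hundreds of components and frequently changing log formats, and the appeal of a schema-on-read, big data approach over a fixed relational schema becomes clear.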

Given these shifts, IT application management is reinventing itself around big data technologies to serve new market needs, which means these tools must now be able to:

  • Collect, analyze and store large volumes and varieties of unstructured, machine-generated data with low latency, enabling users to detect and solve application problems in near real time
  • Provide a user experience that processes the data and answers questions from IT users who have no knowledge of big data platforms

We’ll talk about these challenges, and their solutions, this week at IBM Insight. Join us for Session IIH-5264A: Faster Problem Diagnosis of IT Application Infrastructure Using Machine Data Analysis at 10 a.m. on Thursday, October 30. You’ll learn how IBM SmartCloud Analytics, built on big data technologies, will help you process terabytes of log, event and ticket data faster, so you can detect and diagnose your application problems more quickly. Our discussion will cover:

  • IBM’s vision and main use cases for big data
  • Why managing your application is a big data challenge
  • Unique problems in analyzing machine data, the data generated by your IT infrastructure and applications
  • How IBM SmartCloud Analytics is built on a common big data platform, with a user experience and back end customized for IT users

We look forward to seeing you at IBM Insight 2014.