
Week's Worth of Koby Quick-hits: May 28 to June 1, 2012

Big Data Evangelist, IBM

Here are the quick-hit ponderings that I posted on the IBM Netezza Facebook page this past week. After I got back from the Memorial Day break, I had a few additional thoughts on experience (measuring it), Big Data analytics on smartphones (envisioning it), and next best action (modeling it). I closed out the week by putting Hadoop in its place:

 

May 29:

Experience optimization? Measure the love.
 

Customer experience has become the golden currency of business success. Hopefully, your customers love the cross-channel experience you provide. So how do you measure that love, aka quality of experience?
 

Yes, experience is touchy-feely. But, often, customers express quality of experience through measurable actions. Here are some leading metrics for gauging whether you're optimizing your way into their hearts:

  • Are they renewing, extending, and deepening the relationship because you offer new ways for them to satisfy themselves?
  • Are they enjoying the relationship enough to tell the world, or at least their friends and family, about the value they’re receiving?
  • Do they respond to and accept new offers rapidly?
  • Do they visit one or more of your channels frequently?
  • Are they able to find and purchase what they need rapidly through your channels?
  • Are they recommending and influencing other people to become customers, stay with you, and/or extend and deepen their relationships with you?
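
Measurable actions like these can be rolled up directly from interaction logs. Below is a minimal Python sketch that computes a few of the metrics above from a toy event stream; the event names and sample records are hypothetical, not a prescription for how your instrumentation should look:

    from collections import defaultdict

    # Toy interaction log: (customer_id, event). Real instrumentation would also
    # capture timestamps, channels, order values, and so on.
    interactions = [
        ("c1", "renewal"), ("c1", "offer_accepted"), ("c1", "visit"),
        ("c2", "visit"), ("c2", "offer_declined"), ("c2", "referral"),
        ("c3", "renewal"), ("c3", "visit"), ("c3", "visit"),
    ]

    customers = {c for c, _ in interactions}
    who_did = defaultdict(set)       # event -> customers who did it at least once
    visit_counts = defaultdict(int)  # customer -> number of visits

    for customer, event in interactions:
        who_did[event].add(customer)
        if event == "visit":
            visit_counts[customer] += 1

    offers = len(who_did["offer_accepted"]) + len(who_did["offer_declined"])
    print("Renewal rate:         ", len(who_did["renewal"]) / len(customers))
    print("Offer-acceptance rate:", len(who_did["offer_accepted"]) / offers)
    print("Referral rate:        ", len(who_did["referral"]) / len(customers))
    print("Avg visits/customer:  ", sum(visit_counts.values()) / len(customers))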
     

So, you can indeed manage customer relationships by the numbers, aka analytics. But heed the concern I expressed in the preceding quick-hit on this topic: connect and chat with customers periodically. You can't truly know whether you've won their hearts till you hear it from their lips.
 

They won't love you much longer unless you learn how to listen. Are you letting your customer analytics get in the way of listening? Are you inadvertently using analytics to muffle what customers are trying to get you to hear?
 

Respond Here

 

May 30:

Smartphones as Big Data analytics platforms? If history is any precedent, yes.
 

Miniaturization remains the juggernaut uber-trend, and subatomic density is its frontier. We all know that today's handheld consumer gadgets have far more computing, memory, storage, and networking capacity than the state-of-the-art mainframes that IBM and others were selling back in the Beatles era. And we're all starting to get our heads around quantum computing, atomic storage, synaptic computing, and other "scale-in" approaches that will keep pushing Moore's Law forward for the foreseeable future.
 

Today's smartphone is much more than a phone, of course; in fact, fewer and fewer people use these devices to make or take phone calls. What we call a smartphone today may or may not evolve into a wearable, embedded, or some other type of personal tech. But whatever it morphs into, it will almost certainly grow into a dense-packed, cost-effective data analytics platform.

It'll be a Big Data analytics platform by today's standards, just as much as an iPhone is a mainframe by the standards of the "Mad Men" era. Dropping storage costs, as alluded to in the previous quick-hit on this topic, are just one of many factors that will almost certainly bring this vision to fruition by the end of the current decade.
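
A quick back-of-envelope projection shows why that timeline is plausible. The starting capacity (64 GB of flash in a 2012 handset) and the two-year doubling cadence in this Python snippet are illustrative assumptions, not forecasts:

    def projected_capacity_gb(start_gb, start_year, end_year, doubling_years=2.0):
        """Capacity after Moore's-Law-style doubling every `doubling_years` years."""
        return start_gb * 2 ** ((end_year - start_year) / doubling_years)

    # Assumed: a 64 GB handset in 2012, doubling every two years.
    gb_2020 = projected_capacity_gb(64, 2012, 2020)
    print("Roughly {:.0f} GB (about {:.0f} TB) of handheld storage by 2020".format(
        gb_2020, gb_2020 / 1024))

Even if the cadence slips to a doubling every three years, the 2020 figure still lands in the hundreds of gigabytes, which is mainframe-class capacity by "Mad Men"-era standards.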
 

No, you probably won't be carrying a miniature enterprise data warehouse around in your backpack in 2020, but there'll be no technological reason why you couldn't. That's only 8 years from now. Are you thinking this far ahead?
 

Respond Here

 

May 31:

Recommendation engines? Only as useful as the "next best model" powering them.
 

Recommendation engines, as discussed in the previous quick-hit on this topic, are a glob of infrastructure technologies that optimize interactions, both customer-facing and back-office. Their central importance to next best action and decision automation--hence Smarter Planet--is undeniable. And the importance of data scientists in developing the models that power these engines is also well-understood.
 

But are your data scientists plugging the very best models into your recommendation engines, and tuning them with the very best data, at all times? Just as important as the models themselves is the need to keep them continually optimized for maximum business benefit. You should instrument transactional applications so that continuously self-optimizing predictive models are always driving the next best actions in each of the linked processes. Done right, these next-best models would leverage such sophisticated capabilities as strategy maps, ensemble modeling, champion-challenger modeling, real-time model scoring, constraint-based optimization, and automatic best-model selection.
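
As a concrete illustration of champion-challenger modeling with automatic best-model selection, here is a stripped-down Python sketch: two candidate scoring models are evaluated against recent observed outcomes, and whichever predicts them better is promoted to drive the next best action. The model names, features, and scoring functions are illustrative stand-ins, not any particular product's API:

    import math
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Model:
        name: str
        predict: Callable[[dict], float]  # probability the customer accepts the offer

    def avg_log_loss(model: Model, outcomes: List[Tuple[dict, int]]) -> float:
        """Average negative log-likelihood of observed accept (1) / decline (0) outcomes."""
        eps = 1e-9
        total = 0.0
        for features, accepted in outcomes:
            p = min(max(model.predict(features), eps), 1 - eps)
            total += -math.log(p if accepted else 1 - p)
        return total / len(outcomes)

    def select_champion(champion: Model, challenger: Model,
                        recent: List[Tuple[dict, int]]) -> Model:
        """Promote the challenger only if it beats the reigning champion on recent data."""
        if avg_log_loss(challenger, recent) < avg_log_loss(champion, recent):
            return challenger
        return champion

    # Illustrative stand-ins for real predictive models.
    champion = Model("logistic_v12", lambda f: 0.3 + 0.4 * f.get("recent_visits", 0) / 10)
    challenger = Model("ensemble_v13", lambda f: 0.2 + 0.6 * f.get("recent_visits", 0) / 10)

    recent_outcomes = [({"recent_visits": 8}, 1),
                       ({"recent_visits": 1}, 0),
                       ({"recent_visits": 5}, 1)]
    print("Model now driving next best actions:",
          select_champion(champion, challenger, recent_outcomes).name)

In practice, of course, the scoring functions would be real models refreshed by your data scientists, the evaluation would run continuously against fresh outcome data, and promotion would be gated by whatever governance your model-management process requires.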
 

Keeping these process-embedded models fit and fresh is the thankless daily chore of your data scientists. Are they up to it? Do you have all this baked into your data science best practices?
 

Respond Here

 

June 1:

Hadoop uber-allies?
 

In spite of what you may have heard, Hadoop is not the sum total of Big Data. In the larger evolutionary perspective, Big Data refers to a paradigm under which Hadoop, enterprise data warehouses (EDW), in-memory columnar, NoSQL, and other approaches figure into new architectures for comprehensive cloud analytics. The inexorable trend is toward a hybrid environment that has the following architectural features:
 

  • Extremely scalable and fast:
    • Scale-out, shared-nothing massively parallel processing, optimized appliances, optimized storage, dynamic query optimization, mixed workload management
  • Extremely flexible and elastic:
    • Data persisted in diverse physical and logical formats across a seamless grid of interconnected memory and disk, with subsecond delivery to downstream applications
    • Pluggable storage engines: relational, dimensional, columnar, file-based, graph, etc.
    • Advanced analytics massively parallelized across distributed data, content, file, and cache
    • Application service levels ensured through an end-to-end, policy-driven, latency-agile grid
    • Private and/or public clouds
  • Extremely affordable and manageable:
    • Flexible packaging/pricing: licensed software, modular appliances, “pay as you go” subscription-based pricing
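
To make the "policy-driven, latency-agile" idea concrete, here is a rough Python sketch of mixed-workload routing across a hybrid environment like the one outlined above. The platform names, thresholds, and workload attributes are illustrative assumptions, not a description of any specific product:

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        data_tb: float          # volume of data the job touches, in terabytes
        max_latency_ms: float   # service-level target for response time
        structured: bool        # True for relational/dimensional data

    def route(w: Workload) -> str:
        """Pick a target platform from simple, policy-style rules."""
        if w.max_latency_ms < 100 and w.data_tb < 1:
            return "in-memory columnar store"   # subsecond delivery for hot working sets
        if w.structured and w.data_tb < 50:
            return "EDW appliance"              # mixed-workload SQL analytics
        return "Hadoop cluster"                 # batch analytics over raw, multi-structured data

    for w in [Workload("dashboard refresh", 0.2, 50, True),
              Workload("quarterly churn scoring", 20, 60000, True),
              Workload("clickstream sessionization", 400, 3600000, False)]:
        print("{}: {}".format(w.name, route(w)))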

 

Hadoop's a key component of this emerging Big Data cloudscape, of course, but it's not the whole story. And it's still a work in progress: as an open source community, as a commercial market, and as a body of best practices.
 

How far is your enterprise down this Big Data evolutionary path?
 

Respond Here

 

At the end of the week, I recommend putting today's Hadoop mania in its proper context. IBM's "Big Data Platform" graphic does that quite succinctly.

 

See you all at the 2012 Hadoop Summit in San Jose, June 13-14. Anjul Bhambhri, IBM vice president of Big Data, Information Management, will be speaking on the marriage of Hadoop and data warehousing.

 
