In a recent LinkedIn discussion group posting, I sketched out a five-layer framework for low-latency analytics in the cloud. Those layers were:
- data latency
- execution latency
- modeling latency
- insight latency
- results latency
What they all address, taken to the ultimate extreme, is the need for continuous analytic-driven optimization across distributed business and/or consumer environments. Call that "zero latency," or perhaps "predictive," assurance of desired outcomes.
To ensure zero-latency optimization in business operations, what you need, fueling this infrastructure, is a cadre of data scientists and subject-domain experts that collaborates continuously to ensure that best-fit models are driving predictive "next best actions" throughout all infrastructure, middleware and applications. I sometimes refer to this as the "next best model" imperative.
Next best model is founded on the premise that you never know what’s coming at you next—opportunities, threats and other challenges—which is why business process agility is key. Your organization must have a near-immediate data/analytic-driven response for anything that you can reasonably foresee. You must make sure that every stakeholder can identify, at their level, what that response might be, so you can all take appropriate collective action. That’s where “next best action”—aka decision management/automation—comes in.
Ideally, the embedded analytic models and rules that drive next best action must be built from the best collective expertise you can leverage. For every business scenario, you should have multiple alternate models that are adaptive, dynamic and self-learning. At each point in time, every scenario has a "champion" (i.e., best-fit) model in production, with one or more "challenger" models ready to be promoted to production if the champion's predictive power decays. To determine what's best fit at any time, you should continually score all of these models—champion and challenger(s)—against continuous feeds of fresh information from applications, enterprise data warehouses, Hadoop clusters and other sources.
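The champion/challenger promotion loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration—the model names, the simple accuracy metric, and the promotion rule are all assumptions; a production system would score against streaming data with a proper metric such as AUC and add guardrails like minimum sample sizes.

```python
def evaluate(model, batch):
    """Fraction of records the model predicts correctly on a fresh
    labeled batch -- a stand-in for any predictive-power metric."""
    return sum(1 for x, label in batch if model(x) == label) / len(batch)

def rescore(champion, challengers, batch):
    """Score the champion and every challenger on the latest data.
    If the best challenger now outperforms the champion, promote it;
    the decayed champion drops back into the challenger pool so it
    can be re-promoted later if its predictive power recovers."""
    champ_name, champ_model = champion
    champ_score = evaluate(champ_model, batch)
    scores = {name: evaluate(m, batch) for name, m in challengers.items()}
    best_name = max(scores, key=scores.get)
    if scores[best_name] > champ_score:
        challengers[champ_name] = champ_model        # demote old champion
        return (best_name, challengers.pop(best_name))  # promote challenger
    return champion
```

Run against each fresh batch of scoring data; whichever model the function returns is the one currently driving next best actions.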
Let's look at how this works in practice. In customer churn scenarios, for example, your champion and challenger churn models might use different customer data sources, different customer data samples, different customer segmentations, different independent variables associated with customer loyalty, and so forth. If you're continually scoring both your champion and challengers in real time with fresh churn data from your customer service platform, you might find that the champion's power to predict customer churn is starting to wane during particular times of day, or in particular channels or geographies. If you've already built one or more challengers that factor in time, channel or geography as independent variables, and one of those starts scoring higher than the champion, it can automatically be promoted to champion status (and stay there until its own predictive power declines).
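Detecting the kind of localized decay described above requires scoring by segment rather than in aggregate. The sketch below slices scores by channel; the record fields (`channel`, `x`, `churned`) and the accuracy metric are hypothetical, and the same pattern applies to hour-of-day or geography slices.

```python
from collections import defaultdict

def segment_scores(models, records):
    """Score each model separately within each segment (here, channel),
    so decay that only shows up in one slice of traffic is visible
    even when aggregate accuracy still looks healthy."""
    hits = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for rec in records:
        seg = rec["channel"]  # could equally be hour-of-day or geo
        totals[seg] += 1
        for name, model in models.items():
            if model(rec["x"]) == rec["churned"]:
                hits[seg][name] += 1
    return {seg: {name: hits[seg][name] / totals[seg] for name in models}
            for seg in totals}
```

A champion that wins overall but loses in, say, the phone channel would show up immediately in this per-segment breakdown, signaling a segment-specific challenger promotion.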
The next-best-model paradigm falls apart if you can't get the best experts at your disposal together to collaborate on and sign off on the models in the first place. You can never tell where the next best modeling ideas might come from. You should create a collaborative data-science culture and offer incentives that encourage experts to share and reuse each other's best ideas. You should also encourage subject-matter experts in key business areas to undertake predictive modeling projects and to team with modeling experts in other projects, applications and business units. And you should provide incentives for modelers to regularly move between business units and subject domains, thereby spreading their expertise throughout the enterprise.
If you can reduce the latency of smart minds coming together, you have a powerfully low-latency business agility story.
What do you think? Do you see a good place for a next best model in your processes? What challenges do you see in implementing it?