Data Scientists: Run Your Mad Experiments
June 18, 2012
Smarter business is a game of incremental improvements. It depends on your ability to produce a steady stream of innovations in your operational processes.
Incremental tweaking is not a glamorous activity. Minute process adjustments rarely call attention to themselves. And that's a good thing, because you can roll them out in stealth, without competitors suspecting anything or customers detecting any disruption in your quality of service.
Incremental improvements need not be insignificant. In the competitive wars, strategic process tweaks can make all the difference. If we can make our business model a tad smarter - in other words, faster, more responsive, more efficient, or more flexible - we can differentiate where it counts. And if we can keep fresh innovations coming - week after week, quarter after quarter - we can make our competitive advantage durable over the long term. In so doing, we can shift the competitive playing field in our favor through process innovations that competitors can't easily match.
The key to achieving steady incremental improvements is the "real-world experiment." Leading-edge organizations have begun to emphasize real-world experiments as a fundamental best practice within their data-science, next-best-action, and process-optimization initiatives.
At heart, a real-world experiment involves iterative changes to the process logic that is embedded in operational applications, or in the decision automation, recommendation engine, or other runtime platforms that power those applications. Under this practice, key performance metrics are monitored with each run of the process logic. This allows businesses to determine which specific piece of process logic - predictive analytic models, deterministic business rules, process orchestrations, and so on - contributed the most to desired outcomes. In this way, organizations can establish a closed feedback loop in which processes are steadily and systematically improved from run to run, under the oversight of statistical data scientists and process domain specialists.
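The closed feedback loop described above can be sketched in a few lines of Python. Everything here is illustrative - the discount rules, the orders, and revenue as the KPI are assumptions for the sake of the example, not details from any particular production system:

```python
from collections import defaultdict

# Two hypothetical variants of embedded process logic (illustrative rules).
def discount_rule_a(order):          # flat 5% discount for everyone
    return order["value"] * 0.95

def discount_rule_b(order):          # 10% discount, repeat customers only
    return order["value"] * (0.90 if order["repeat"] else 1.00)

VARIANTS = {"A": discount_rule_a, "B": discount_rule_b}
kpi_log = defaultdict(list)          # KPI observations, keyed by variant

def run_process(order, variant):
    """One run of the process logic; its KPI (revenue here) is logged."""
    revenue = VARIANTS[variant](order)
    kpi_log[variant].append(revenue)
    return revenue

def best_variant():
    """Feedback step: the variant with the highest mean KPI so far."""
    return max(kpi_log, key=lambda v: sum(kpi_log[v]) / len(kpi_log[v]))

# Alternate variants from run to run, then steer traffic to the leader.
orders = [{"value": 100.0, "repeat": i % 2 == 0} for i in range(10)]
for i, order in enumerate(orders):
    run_process(order, "A" if i % 2 == 0 else "B")

print(best_variant())
```

In a real deployment the "feedback step" would feed a dashboard or a traffic-routing layer rather than a print statement, but the loop - run logic, record KPI, compare, adjust - is the same.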
Essentially, real-world experiments put the data-science "laboratory" at the heart of the big data economy. Under this approach, business model fine-tuning becomes a never-ending series of practical experiments. Data scientists evolve into an operational function, running their experiments 24x7 with the full support and encouragement of senior business executives.
In this new order, data scientists experiment continuously by deploying new predictive models, business rules, and orchestration logic into front-office and back-office applications. They might experiment with different logic to drive customer handling across different engagement channels. They might play with different models for differentiating offers by customer demographics, transaction history, times of day, and other variables. They might examine the impact of running different process models at different times of the day, week, or month in back-office processes, such as order fulfillment, materials management, manufacturing, and logistics, in order to determine which can maximize product quality while reducing time-to-market and life-cycle costs.
The beauty of real-world experiments is that you can continuously and surreptitiously test diverse scenarios inline within your running business. Your data scientists can compare results across differentially controlled scenarios in a systematic, scientific manner. They can use the results of these in-production experiments - such as improvements in response, acceptance, satisfaction, and defect rates - to determine which models work best in various circumstances.
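Comparing controlled scenarios "in a systematic, scientific manner" usually means a significance test rather than eyeballing the rates. A minimal sketch, using a standard two-proportion z-test on acceptance rates (the counts below are made up for illustration):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: did variant B's acceptance rate
    really differ from variant A's, or is the gap just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; p-value is the two-sided tail probability.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative experiment: 200/1000 acceptances for A vs 260/1000 for B.
z, p = two_proportion_z(200, 1000, 260, 1000)
```

With these made-up counts, B's 26% rate beats A's 20% with a p-value well under 0.05, so the gap is unlikely to be chance at the usual thresholds.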
In assessing the efficacy of models in the real world, your data scientists will want to isolate key comparison variables through A/B testing. They should iterate through successive tests, rapidly deploying challenger models in place of in-production champion models as soon as the champions lose predictive power. The key development approaches that facilitate these experiments include champion/challenger modeling, real-time model scoring, and automatic best-model selection. Data scientists should also use adaptive machine-learning techniques to generate a steady stream of alternate "challenger" models and rules that automatically kick into production when they score higher than the in-production "champion" models/rules in predictive power.
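The champion/challenger promotion logic can be sketched as follows. This is a hedged, minimal sketch: the models are plain predict-functions, the scoring function is holdout accuracy, and all names are illustrative - a production system would score AUC or lift against live outcomes on a real-time scoring platform:

```python
def accuracy(model, holdout):
    """Fraction of holdout examples the model labels correctly."""
    return sum(model(x) == y for x, y in holdout) / len(holdout)

class ChampionChallenger:
    """Keeps one in-production champion; promotes any challenger
    that out-scores it on the agreed metric (automatic best-model selection)."""

    def __init__(self, champion, score_fn=accuracy):
        self.champion = champion
        self.score_fn = score_fn

    def evaluate(self, challenger, holdout):
        """Swap in the challenger only if it beats the current champion."""
        if self.score_fn(challenger, holdout) > self.score_fn(self.champion, holdout):
            self.champion = challenger
        return self.champion

# Toy models: predict 1 when a score crosses a threshold.
champ = lambda x: int(x > 0.7)
chall = lambda x: int(x > 0.5)
holdout = [(0.6, 1), (0.8, 1), (0.4, 0), (0.55, 1)]

registry = ChampionChallenger(champ)
registry.evaluate(chall, holdout)   # challenger wins on this holdout
```

The design choice worth noting is that promotion is driven entirely by the scoring function, so the same registry works whether the "models" are predictive analytics, business rules, or orchestration logic - exactly the mix of process-logic pieces the experiments are meant to compare.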
These same approaches apply whether you're doing real-world experiments inside a private business or in the management of public utilities and other infrastructure. Closed-loop feedback/control systems are essential to dynamic traffic control, waste management, pollution mitigation, smart-grid monitoring, and other Smarter Cities functions.
Real-world experiments are the new application development paradigm for big data. As you develop your data science centers of excellence, be sure to make freewheeling experimentation your standard operating procedure. Inject a little of this "madness" into your business method.