
Recommendations Galore: Part 2

The ubiquity and complexity of recommendation engines can create transparency and accountability challenges

James Kobielus, Big Data Evangelist, IBM

In part 1 of this column, I discussed the ubiquity of recommendation engines, the principal platforms, and the perception that these engines are mysterious “near-invisible slabs of assembled infrastructure.” Here, I will dive a bit deeper into the reasons for this perception.

Indeed, there is a grain of truth to the perception that the workings of recommendation engines can be a bit mysterious. They operate under logic that is so complex, and based on so many variables, that their precise future behavior in all scenarios may not be predictable by any one human in advance.

That unpredictability raises a key question: Who is accountable if the recommendation an engine automatically dishes out in a particular circumstance is wildly wrong, stupid, or counterproductive? In other words, who, if anybody, is responsible when a recommendation lands at the next-worst-action end of the spectrum? For example, does chewing out well-meaning data scientists make sense if their models, deployed into production applications, produce unintended adverse consequences?

I'm not a lawyer, and this installment is not the place to untangle that rat's nest of legal issues. But accountability is a genuine concern. Does next best action, as powered by automated recommendations within a channel strategy, blur accountability to the breaking point? If we grant the people staffing the touch points—call center agents and salespeople—some latitude to deviate from or ignore auto-generated recommendations, does that latitude shift the responsibility for bad recommendations to them? Are they, and the organization as a whole, prepared to accept that responsibility?
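One pragmatic way to keep that question answerable, whatever the legal niceties, is to make every touch-point decision traceable. The sketch below is purely illustrative (the record fields and function names are my own invention, not any particular product's API): it logs whether the delivered action matched the engine's recommendation and rejects a human override that arrives without a stated reason.

  from dataclasses import dataclass, field
  from datetime import datetime, timezone
  from typing import Optional

  @dataclass
  class Decision:
      """Audit record tying a delivered action to whoever chose it."""
      customer_id: str
      engine_action: str              # what the recommendation engine proposed
      delivered_action: str           # what the human agent actually did
      agent_id: str
      override_reason: Optional[str]  # required whenever the agent deviates
      timestamp: datetime = field(
          default_factory=lambda: datetime.now(timezone.utc))

  def record_decision(customer_id: str, engine_action: str,
                      delivered_action: str, agent_id: str,
                      override_reason: Optional[str] = None) -> Decision:
      """Accept an override only when the agent states a reason, so
      responsibility for the delivered action is never ambiguous."""
      if delivered_action != engine_action and not override_reason:
          raise ValueError("Overriding the engine requires a stated reason.")
      return Decision(customer_id, engine_action, delivered_action,
                      agent_id, override_reason)

  # A call-center agent deviates from the engine's next best action:
  row = record_decision("cust-0042",
                        engine_action="offer premium upsell",
                        delivered_action="waive late fee",
                        agent_id="agent-17",
                        override_reason="customer disputing a billing error")
  print(row)

An audit trail like this doesn't settle who should be accountable, but it keeps the question from becoming unanswerable after the fact.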


Anonymous algorithms

And the accountability issue raises another question: How transparent are the complex algorithms that are the very beating hearts of modern recommendation engines? Most educated people know that an algorithm is simply any stepwise computational procedure. And most businesspeople recognize that an algorithm is a business resource that may survive for decades and may have been authored and modified by many diverse programmers over the years. People naturally grow nervous when some human artifact—such as an algorithm—seems to take on a life of its own, surviving its creators.

Algorithms’ seeming anonymity—coupled with their daunting size, complexity, and obscurity—presents a seemingly intractable problem: Can any person be held accountable for the decisions that algorithms make? Put another way: How transparent is the authorship of the data and rules that drive algorithmic decision-making processes? And how can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions, such as those made by recommendation engines?

From a management theory perspective, the accountability issues of algorithmic decision making mirror the issues of personal accountability—or the lack thereof—in complex modern organizations. Much as large bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a particular piece of software to behave as it did under a specific set of circumstances.

Lack of transparent accountability for algorithm-driven decision making tends to breed conspiracy theories. These sorts of concerns underpin the recent article “Rage Against the Algorithms.” That particular discussion focuses on the recommendation-engine algorithms that filter, rank, and display online reviews of restaurants, movies, and so on, which often have direct monetary impacts—positive and negative—on the subjects of those reviews. The author, Nicholas Diakopoulos, expressed the broad concern this way:

  [A]lgorithms are becoming ever more important in society, for everything from search engine personalization, discrimination, defamation, and censorship online, to how teachers are evaluated, how markets work, how political campaigns are run, and even how something like immigration is policed. Algorithms, driven by vast troves of data, are the new power brokers in society, both in the corporate world as well as in government.1

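To make the review-site case concrete, here is a deliberately simplified ranking function of my own devising (the features and weights are invented for illustration; no real review site publishes its formula): it blends star rating, freshness, and reviewer credibility into a single score.

  import math
  from datetime import date

  AS_OF = date(2013, 10, 1)  # fixed "today" so the example is reproducible

  def review_score(stars, review_date, reviewer_review_count,
                   w_stars=0.5, w_fresh=0.3, w_cred=0.2):
      """Toy ranking formula: a weighted blend of rating, freshness, and
      reviewer credibility. The weights are editorial choices that never
      appear on the page."""
      age_days = (AS_OF - review_date).days
      freshness = math.exp(-age_days / 180.0)              # decays over ~6 months
      credibility = min(reviewer_review_count, 50) / 50.0  # cap prolific reviewers
      return w_stars * (stars / 5.0) + w_fresh * freshness + w_cred * credibility

  reviews = [
      ("glowing but stale",  5.0, date(2012, 1, 10),  3),
      ("harsh and recent",   2.0, date(2013, 9, 20), 40),
      ("middling, credible", 3.5, date(2013, 6, 1),  25),
  ]

  # The restaurant sees only the resulting order, never the weights.
  for label, stars, when, count in sorted(reviews,
                                          key=lambda r: review_score(*r[1:]),
                                          reverse=True):
      print(f"{review_score(stars, when, count):.3f}  {label}")

Nudge w_fresh upward and the stale five-star rave sinks below the harsh recent review. A tuning decision like that can move money toward or away from the business being reviewed, yet nothing in the ranked output discloses it.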
Diakopoulos's broader point echoes the concerns I discussed in my recent LinkedIn post on the increasingly opaque statistical models that drive decision automation by recommendation engines:

  The more dimensions of individual preference/taste that your data scientist attempts to capture in their decision-automation model, the more complex the model grows. As it grows more complex, the model becomes more opaque, in terms of any human—including the data scientists and subject-matter experts who built it—being able to specifically attribute any resultant recommendation, decision, or action that it drives to any particular variable. Therefore, in spite of the fact those values—aka numbers—ultimately yield accurate recommendations, the number-driven outcomes become more difficult to understand or explain.  

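The opacity I was describing is easy to demonstrate. Below is a bare-bones latent-factor scorer, a sketch of the general technique with random stand-in numbers rather than a trained production model: every predicted affinity is a sum over k learned dimensions, none of which corresponds to a human-nameable preference.

  import numpy as np

  rng = np.random.default_rng(0)
  n_users, n_items, k = 1000, 5000, 50  # made-up sizes for illustration

  # In a real engine these factors are learned from behavioral data;
  # random stand-ins suffice to show the mechanics.
  user_factors = rng.normal(size=(n_users, k))
  item_factors = rng.normal(size=(n_items, k))

  def predicted_affinity(u, i):
      """Score = dot product over k latent dimensions, none of which
      maps to a nameable preference like 'enjoys spicy food'."""
      return float(user_factors[u] @ item_factors[i])

  u, i = 3, 7
  contributions = user_factors[u] * item_factors[i]  # one term per dimension
  print(f"score for user {u}, item {i}: {predicted_affinity(u, i):.3f}")
  print(f"largest single-factor share of the score: "
        f"{abs(contributions).max() / abs(contributions).sum():.1%}")

Even in this toy, no single factor accounts for more than a sliver of the score, and real engines compound the problem with thousands of interacting features. Accuracy and explainability pull in opposite directions.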

Complex ubiquity

The IBM® Smarter Planet® promise is inconceivable without recommendation engines; they have become an indispensable piece of utility infrastructure. But their very ubiquity and complexity make it increasingly challenging for organizations and individuals to understand the causes of their behavior in every instance. That same complexity will also make full transparency and precise accountability for their actions difficult to ensure.

Please share your thoughts and questions in the comments.

1 “Rage Against the Algorithms,” by Nicholas Diakopoulos, The Atlantic, The Atlantic Monthly Group, October 2013.