
Big Data Ethics for Targeted Segmentation

Organizations should evaluate their intentions when employing models to profile customers

Big Data Evangelist, IBM

All customers are created equal. But often they’re treated separately and differently, depending on who they are. Is that treatment necessarily a bad thing? Not always. To determine whether separate treatment is good or bad, consider customers not just in the context of their commercial relationship with a specific business, but in the context of their status within the overall social fabric.

Another word for differential treatment is discrimination. From a civil rights point of view, discrimination is a pejorative term. But from a commercial perspective, it can have a positive ring. For example, a discriminating consumer is often described as someone who knows precisely what he or she wants, which is usually high-quality goods and services.

However, we rarely hear references to a discriminating business. Instead, the term profiling is more likely to be used. That term often has a neutral ring, referring to the statistical, empirical, data-driven process of distinguishing segments, classes, or categories of customers, both current and prospective. Customer profiling is an essential element of many vital business functions, including marketing, sales, services, product development, and public relations.

From a civil rights perspective, though, profiling usually refers to a systematic, institutionally sanctioned program of unfair discrimination against people of various racial, religious, age, and other demographic groups. Sometimes, selected police agencies come under suspicion of unfairly discriminating against various demographic groups. Similarly, some taxi drivers adopt a policy of neither picking up nor dropping off riders in particular neighborhoods where certain minority groups may reside.

All those profiling practices are anathema in any society that prides itself on providing equal opportunity for all. When unfair profiling threatens to rend or strain the fabric of trust that holds a community together—as in the examples of discrimination practices by some police agencies and some taxi drivers—its eradication is in everybody’s best interest.

Commercial-based profiling

Nevertheless, profiling in its commercial context can be a legitimate practice if every customer segment is content with the value it realizes and believes it is being treated fairly. Consider the main thrust of customer-facing, big data analytics. It often powers profiling applications of astonishing sophistication. Indeed, modern data-driven business rides on a 360-degree view of the customer, fine-grained customer segmentation, and targeted customer engagement.

Businesses, however, can intentionally or inadvertently use these profiling tools in ways that, from the general societal standpoint, many people might regard as discriminatory. To the extent that, say, one racial group feels a particular commercial-targeting strategy is unfair, it may seek legal recourse through regulatory actions or class-action lawsuits. Or, if those avenues are unavailable, the aggrieved group may campaign for change through boycotts and appeals to the court of public opinion.

Financial services and insurance firms face these practical dilemmas more often than businesses in other sectors, perhaps because denying people equal access to insurance, loans, and credit can be an effective tool for disenfranchising specific segments of the population. For example, consider scenarios in which an insurer decides to shunt specific minority groups into high-risk categories that, as a consequence, pay higher premiums or receive lesser benefits than other groups and/or are less likely to receive any coverage whatsoever.*

Intention-based strategies

Whether a profiling decision crosses over from an ordinary business matter to an actionable civil rights issue often comes down to intentions. What specifically were the intentions of a business in defining a particular segmentation strategy? And how did those intentions shape the customer attributes that were factored into the associated statistical models?

The practical difference between intentional and inadvertent discrimination can often be difficult to discern. For example, an insurer’s decision to include a particular demographic group in a particular high-risk segment is usually shaped by myriad statistical variables. Determining which type of discrimination has occurred often comes down to adjudicating the fine line between causation and correlation. Judges, juries, and arbitrators can have a tough time sorting out intention-driven causation from data-driven correlation. For example, consider an insurer refusing coverage because of race versus an insurer refusing coverage because data shows that people of a particular neighborhood, occupation, education, and income level are a bad risk.

Correlations among in-model and not-in-model risk factors can significantly blur any clear determination regarding the intentions that influenced a specific profiling decision. The particular in-production segmentation model that an insurer employs may omit civil rights–sensitive variables—for example, race, religion, age, sexual orientation, and so on. But the model may include many considerations that are significantly correlated with one or more of these variables. If an insurer were required to omit one or more of these correlated variables from its policy underwriting decisions, it might legitimately argue that this requirement forces it to assume greater risk than if the variables were retained. It might then argue that it would need to raise premiums for all customers, not just those in the segments that claim to be victims of unfair profiling practices.
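To make the correlation point concrete, here is a minimal sketch in Python using entirely synthetic data and hypothetical variable names (group, neighborhood, prior_claims). It is not any insurer's actual model; it simply illustrates how a risk model that never sees a protected attribute can still produce systematically different predictions for the affected group when an in-model feature is correlated with both group membership and historical risk.

```python
# Synthetic illustration of proxy correlation in a risk-segmentation model.
# All data and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., membership in a demographic group) -- never shown to the model.
group = rng.binomial(1, 0.3, n)

# Proxy feature: a neighborhood index strongly correlated with group membership.
neighborhood = 0.8 * group + rng.normal(0, 0.3, n)

# A conventional risk feature, independent of group in this simulation.
prior_claims = rng.poisson(1.0, n)

# Historical risk depends on prior claims and on the neighborhood proxy.
risk = 0.5 * prior_claims + 0.4 * neighborhood + rng.normal(0, 0.5, n)

# Fit an ordinary least-squares risk model on the two in-model features only.
X = np.column_stack([np.ones(n), neighborhood, prior_claims])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
predicted_risk = X @ coef

# Although 'group' was excluded from the model, predicted risk still differs
# by group because the neighborhood proxy carries information about it.
print("mean predicted risk, group 0:", round(predicted_risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(predicted_risk[group == 1].mean(), 3))
print("correlation(predicted risk, group):",
      round(np.corrcoef(predicted_risk, group)[0, 1], 3))
```

Running the sketch shows a higher mean predicted risk for the group that was never an explicit input, which is exactly the blurred line between data-driven correlation and apparent discrimination described above.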

From the perspective of society at large, would shifting the costs to society at large be a fair solution? Or is the appearance of intentional discrimination—in every public or private sector context—an evil that must be eliminated whenever and wherever it presents itself, no matter the cost?

Please share any thoughts or questions in the comments.

* “Is Big Data the Next Civil Rights Issue?” by Alex Woodie, Datanami, September 2014.