The Ethical Boundaries Of "Digital Oil"


Like storm clouds on the horizon that blissful sunbathers choose to ignore, the data privacy crisis swirling around Facebook has been a long time coming, yet it was still somewhat shocking when it finally broke. Nevertheless, it has prompted a reckoning between companies whose business model is based on the processing of user data, and those same users, who are often surprised to discover that a free service isn't exactly free after all.

Moreover, the range of companies that hold sensitive data on every aspect of consumers' lives extends far beyond the obvious candidates such as Facebook. Mobile handset manufacturers, mobile network operators (MNOs), Google and app developers all hold masses of data on every one of their users, but how should they use it? This question takes us beyond the usual lines of inquiry: what permissions companies seek when they harvest user data, how securely they hold that data, and whether the uses they put it to are legal. The answers to those questions are still being thrashed out, and they are likely to surface in the form of law and regulation.


"Just Because You Can..."

However, another level of inquiry should be welcomed in the current environment of heightened awareness, one that moves past the mere legal limits on what companies may do with user data. At one point or another, we've probably all been told that just because we can, doesn't mean we should, and this is the ethical dilemma that data-holding companies must confront. When should companies practice prudent self-denial and forgo profits, because it is the "right thing to do?" Or do companies have the right, or even the duty to their shareholders, to use the data at their disposal to maximize profits?

By the time you read this, Mark Zuckerberg will have given testimony to the United States Congress in the wake of the revelations surrounding Cambridge Analytica's use of Facebook user data in the 2016 U.S. presidential election and the U.K.'s Brexit referendum, and it is unlikely that he will satisfy many people with his performance -- and a performance it will be, in the full theatrical sense.

Politicians will line up, lines rehearsed, in a sequence of attempts to create a clip that, ironically, goes viral on social media. "What did you know?", "Why didn't you act sooner?", and "How will you prevent this from happening again?" are all themes that will feature at the heart of the production, but it seems improbable that the inquisition will get at the crucial, existential question: "Should you be doing this at all?"

Facebook is so big, so valuable, and (crucially) such an effective marketing platform that it seems far-fetched that members of the United States Senate will question the principles and practices that underpin its existence. But if the current controversy is going to have a meaningful long-term impact on how companies of all hues -- big or small, tech giant or MNO -- handle customer and user data, then questions that cut to the heart of the ethics of the digital economy must be asked.

Facebook is in the dock simply because it is the biggest and most obvious proponent of practices that have come to define the digital economy. And because of this, large numbers of people passively accept that this is what the digital economy must look like. But instead of accepting the world that Sergey and Mark and Elon and Jack have created for us, we have reached an inflection point at which consumers and politicians can decide the ethical boundaries within which we want these companies to operate, and force them to build their businesses around those boundaries, and not the other way round.


Inferences > Data

The EU, to its credit, foresaw these sorts of problems and took action to forestall them via the General Data Protection Regulation (GDPR), but even that only provides consumers with the raw data that is held on them, which is not where the mischief lies. If a consumer downloads a .ZIP file with all the information held on them by Google, Facebook, Apple or Orange, even if they were able to wade their way through it (and it is likely that no one other than a programmer running a script will be able to make head or tail of it), what consumers really need to know is what all this data tells the company about them. What are the computational inferences that the company's algorithms are making? What boxes are they putting each consumer in? What labels are they attaching? And following on from that, how is that inferred information sold on to advertisers?
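To see why the raw export alone helps so little, consider a minimal sketch of the sort of script a programmer would need just to get a crude overview of one. Everything here is an assumption for illustration -- real exports differ from vendor to vendor in layout, format and naming:

```python
import json
import zipfile
from collections import Counter

def summarize_export(path):
    """Count records per file in a hypothetical GDPR data export.

    Assumes the export is a ZIP of JSON files, each containing a list
    of records -- real exports vary wildly from one vendor to the next.
    """
    counts = Counter()
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if not name.endswith(".json"):
                continue  # skip media, HTML and other non-JSON content
            with archive.open(name) as f:
                try:
                    records = json.load(f)
                except (json.JSONDecodeError, UnicodeDecodeError):
                    continue  # skip malformed files
            if isinstance(records, list):
                counts[name] = len(records)
    return counts

for filename, n in summarize_export("my_data_export.zip").most_common():
    print(f"{filename}: {n} records")
```

Even with such a summary in hand, the consumer sees only counts of raw records; the labels and categories inferred from those records remain entirely invisible.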

This, of course, is the secret sauce that big companies would be loath to hand over. But by forcing them to explicitly reveal the inferences that they make about their customers, and having that information available to the consumer every time they see an advert targeting them on that basis, we would take enormous strides towards solving the fundamental ethical questions about what is proper or improper use of our personal information. What's more, it would likely end up being a market-led approach.

Consider this scenario: an MNO, mobile money deployment or handset manufacturer has compiled and/or purchased data that, based on the consumer's location, keywords used, phone activity and spending patterns, leads to the inference that they are in financial distress.

The customer is then targeted by adverts offering "payday loan"-style, short-term, high-interest loans that in many instances can lead to borrowers getting stuck in a debt trap. If users have access to both the data and the inferences drawn from it, they can discover that they were targeted because, in our hypothetical example, they visited a hospital and an unemployment benefit office, ran a low balance on their account, were late with a payment, and used the keywords "broke" and "desperate" in text or instant messages.
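A deliberately simplified sketch of what such a rule-based inference might look like follows. The signal names, weights and threshold are all invented for illustration, and bear no relation to any real company's (opaque, proprietary) models; the point is that an explainable inference can return not just a label but the signals that produced it:

```python
# Hypothetical, rule-based "financial distress" inference.
# Every signal, weight and threshold below is an invented assumption.
DISTRESS_SIGNALS = {
    "visited_hospital": 0.10,
    "visited_unemployment_office": 0.30,
    "low_account_balance": 0.25,
    "late_payment": 0.25,
    "used_keywords_broke_desperate": 0.20,
}

def infer_distress(user_signals, threshold=0.5):
    """Return (label, contributing_signals) so the inference is explainable."""
    contributing = [s for s in DISTRESS_SIGNALS if user_signals.get(s)]
    score = sum(DISTRESS_SIGNALS[s] for s in contributing)
    return score >= threshold, contributing

label, reasons = infer_distress({
    "visited_unemployment_office": True,
    "low_account_balance": True,
    "used_keywords_broke_desperate": True,
})
print(label, reasons)
# True ['visited_unemployment_office', 'low_account_balance',
#       'used_keywords_broke_desperate']
```

Crucially, the second return value is exactly what consumers lack today: the list of signals that triggered the label attached to them.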

Clearly, any company that targets this individual under these circumstances runs the risk of exploiting a person when they are weak and vulnerable. The risk of market backlash would be significant, meaning the market would police the activity of both the lender and the company that harvested the data and made the inferences. But even before arriving at this stage, the user should also have the right to select which data points about them are used, which inferences are permitted, and with which companies their information is shared.


Data Dashboards

Not a fan of Wells Fargo? Then don't click the button that allows them to receive data on you. Don't want insurance adverts either? Choose not to see them. Uncomfortable with inferences being made about your sexual orientation? Don't click the button that allows such an inference to be made. Don't want your political likes forming part of the picture built up about you? Then don't opt that data into your profile, and so on.

If this sounds like a lot of work, there's no denying that it is. But in the same way that password managers are increasingly accepted as a wise investment, given the prevalence of hacking and data breaches and the limits on the human brain's capacity to store a variety of passwords, so too could there be a future for data dashboards, from which consumers can control what information they want to give away and what they want to keep private.
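As a thought experiment, the record behind such a dashboard need not be complicated. Here is a minimal sketch of a default-deny consent profile, in which every field name and category is an invented assumption rather than any existing standard or vendor API:

```python
from dataclasses import dataclass, field

# Hypothetical consent record behind a "data dashboard".
@dataclass
class ConsentProfile:
    shared_data_points: set = field(default_factory=set)    # e.g. "location"
    permitted_inferences: set = field(default_factory=set)  # e.g. "creditworthiness"
    blocked_recipients: set = field(default_factory=set)    # e.g. "Wells Fargo"

    def may_share(self, data_point, recipient):
        """Default-deny: only explicitly opted-in data, never to blocked firms."""
        return (data_point in self.shared_data_points
                and recipient not in self.blocked_recipients)

    def may_infer(self, inference):
        """Inferences are also opt-in, one by one."""
        return inference in self.permitted_inferences

profile = ConsentProfile(
    shared_data_points={"location"},
    blocked_recipients={"Wells Fargo"},
)
print(profile.may_share("location", "Wells Fargo"))     # False: blocked recipient
print(profile.may_share("spending_patterns", "AnyCo"))  # False: never opted in
print(profile.may_infer("sexual_orientation"))          # False: not permitted
```

The design choice that matters here is the default: nothing is shared, and nothing is inferred, unless the consumer has switched it on.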

Europe has led the way in this area. Until recently, GDPR was often portrayed in the U.S. as a massive overreach and an unnecessary, expensive compliance burden dispensed by bungling bureaucrats in Brussels. But the "Privacy by Design" principles that underpin GDPR, once seen as a stealth attack by the EU on American tech companies, are now more likely to be seen as essential building blocks for a user-controlled, market-led ethical use of customer data by private enterprises (if U.S. politicians can overcome their own self-interest in having access to such data).

These days, the questions increasingly being asked are how do we make it work better, and does it even go far enough? Should we wait for private companies to create the intermediating tools we need to manage our "data dashboards," or is there a case for a "Data Protection Bureau" through which citizens can manage how, and to what extent, they interact with the digital economy?


Show Us The Secret Sauce

Clearly, there should be ethical limits that govern what companies do with consumer data, at both the individual and aggregate levels, but precisely what those limits are is currently almost impossible to discern. Consumers, advocates and even regulators still know far too little about what companies are doing with our data, how they are processing it, and what inferences they are drawing. Only by shining a bright light on their algorithms -- their "secret sauce" -- will we be able to determine where the ethical boundaries on their proper use lie. And that will require action and bravery on the part of consumers and their public representatives; otherwise tech companies will slide back behind their shields of apologies and promises to do better next time. Until the next time something goes wrong.

© Mondato 2018



