Julia Inselseth (Law LLB, 2nd Year) discusses the notion of data harvesting in a post-Cambridge Analytica world.
The phrase scientia potentia est (“knowledge is power”) is gaining a new meaning in the age of Big Tech. In a market where some of the most valuable and powerful companies are technology companies (Google, Amazon, Facebook, Apple, Microsoft, and so on), personal information has been dubbed the ‘new oil’ – or, more crudely put, the creator of wealth and power.
The rising use and value of personal data has been making antitrust authorities uneasy, and a push to break up Big Tech is underway. That push is largely fuelled by public fear and outrage following the Cambridge Analytica scandal: people feel uneasy about big companies having access to their data. But what if you could actually treat your personal data as an asset? Would you sell it for something of more value than just personalised ads on your Facebook feed?
But First, What Do Companies Collect Exactly?
Ad clicks, your phone number, email address, search queries, any personal information on your profile, IP addresses, location, third-party connect data – anything you can think of!
So, how do they do it? Companies collect data in three main ways:
- By asking customers directly.
- By tracking it indirectly.
- By acquiring it from third parties.
Firms usually ask for information from customers directly at the beginning – for example via a registration form or a survey.
Companies also have a plethora of online tracking tools – ever wonder why you see ads for the shoes you were looking at on a completely different website? It’s because these technologies, such as tracking ‘cookies’ stored in your browser, enable companies to follow your browsing history even after you leave their website.
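The cross-site tracking described above can be sketched with a toy model. Everything here – the class, the site names, the cookie format – is invented purely for illustration; real ad networks are vastly more sophisticated:

```python
# A minimal, hypothetical sketch of third-party cookie tracking.
# All names and sites are invented for illustration only.

class AdNetwork:
    """Simulates a third-party ad network whose tracker is embedded on many sites."""

    def __init__(self):
        self._profiles = {}   # cookie_id -> list of (site, page) visits
        self._next_id = 0

    def track(self, cookie_id, site, page):
        """Called whenever a page embedding the network's tracker loads.
        Returns the (possibly newly assigned) cookie id."""
        if cookie_id is None:                     # first visit: set a new cookie
            cookie_id = f"uid-{self._next_id}"
            self._next_id += 1
        self._profiles.setdefault(cookie_id, []).append((site, page))
        return cookie_id

    def pick_ad(self, cookie_id):
        """Choose an ad based on the browsing history tied to this cookie."""
        history = self._profiles.get(cookie_id, [])
        for site, page in reversed(history):      # most recent interest first
            if "shoes" in page:
                return "ad: shoes"
        return "ad: generic"

network = AdNetwork()

# You browse shoes on one site; the embedded tracker sets a cookie.
cookie = network.track(None, "shoe-shop.example", "/products/shoes")

# Later you visit an unrelated site that embeds the same tracker:
# the cookie is sent along, so the network recognises you...
network.track(cookie, "news.example", "/articles/today")

# ...and serves you a shoe ad on the news site.
print(network.pick_ad(cookie))  # -> ad: shoes
```

The point of the sketch is simply that the tracker, not the website you are visiting, holds the profile – which is why the shoes follow you from site to site.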
Then, Why Is This Personal Information Valuable?
Why does Company X or Y care that you were looking at shoes at 10pm yesterday? There are several reasons.
First – companies want to give their consumers the most enjoyable experience possible, tailoring their online displays to each consumer (so, for example, showing you a similar or the exact same shoe you were looking at before).
Second – consumer data is used to analyse how consumers respond to and engage with a company’s adverts. This allows companies to refine their marketing strategies and generate more revenue.
Third – probably the most controversial way personal data is used and turned into wealth: selling it to third parties.
Having a specific company’s customer data is invaluable for advertisers – they can analyse customers’ behavioural patterns and use them to shape our opinions and choices: the shoes we buy, the coffee we drink, the issues we care about, or sometimes even the politicians we vote for (remember the Cambridge Analytica scandal?). This is why personal data is called the new oil.
Sell Your Information: The ‘Data Dividend’ Proposal
If data is the new oil, why are companies profiting from it and not the general public? Politicians have been catching on to the surge of public discontent with the way Big Tech monetises consumer data, and this is the question California’s governor posed when proposing a peculiar solution: a new ‘data dividend’ bill.
The idea is that Californians would be compensated for giving Silicon Valley companies – Google, Facebook, and so on – access to their personal data. The reasoning behind the proposed bill is that these California-based companies have made a fortune from collecting, analysing and selling their customers’ data, and the public deserves a share: ‘consumers should be able to share in the wealth that is created from their data’.
But is treating personal information as an asset the right route to take? What are the ethical implications?
Reliving the Cambridge Analytica Scandal
Selling your own data (data you have created) is legal. So is selling someone else’s data that you have acquired, as long as you have the necessary permissions (remember all those terms and conditions you never read?). However, if the public could literally start treating personal information as a currency – say, if countries actually began replicating the proposed ‘data dividend’ bill, or creating systems where personal information could be traded for money or goods as an asset – demand-and-supply forces could create a great deal of unfairness in the market.
Many do not realise the sheer volume of data they already give up for free. Think about all the terms and conditions you accept when starting to use services or apps; if companies started paying for access to user data, it would only further incentivise people to give it up – why not, if you already gave it up for free a hundred times before?
Also, if personal information were treated as a currency or asset, this could give an unfair market advantage to the main Silicon Valley-based tech companies: whoever can pay the highest price can acquire the most sought-after data and use it to their advantage, deploying targeted ads to generate revenue.
But when does shaping consumer behaviour go too far? In my opinion, when it turns into psychographic targeting.
Psychographics are the ‘dark arts’ of marketing: the study of consumers based on their activities, interests, and opinions (AIO). This goes beyond classifying consumers by gender, age, race, and ethnicity; instead, it focuses on our internal attributes such as emotions, values, triggers and fears. The approach is also common in warfare: the field is deeply connected to so-called ‘PSYOPS’ (psychological operations), a term describing everything in warfare that is not physical combat – propaganda, for example.
These were exactly the methods used by Cambridge Analytica to influence Americans during the 2016 presidential election. Cambridge Analytica used a personality quiz on Facebook to harvest the data of 87 million users without their consent (most users would not have read the terms and conditions, and so blindly signed away their data rights without knowing it) and used it for targeted political advertising.
The simplest explanation of how Cambridge Analytica operated is this: the company grouped people into different personality groups, and then, using this information and a variety of testing, figured out how to shape and change the opinions of each group most effectively. The company even openly bragged that it had 5,000 data points on every voter in the United States. In 2016, Cambridge Analytica was hired by the Republican party to lead Donald Trump’s political marketing campaign (it was also reportedly hired to work on the Brexit campaign in the U.K., though this is more complicated, as the Brexit campaign’s leaders deny any involvement).
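The group-then-target logic described above can be reduced to a toy sketch. Every label, threshold and ad variant here is invented for illustration only – nothing reflects Cambridge Analytica’s actual models or data:

```python
# A hypothetical, simplified illustration of psychographic group-and-target logic.
# All data, labels and thresholds are invented for illustration.

from collections import defaultdict

# Step 1: assign each user to a personality group based on quiz answers.
def personality_group(quiz_answers):
    # Invented rule: a high total score -> "fearful", otherwise "optimistic".
    return "fearful" if sum(quiz_answers) > 10 else "optimistic"

# Step 2: record which ad variant each group engages with, as revealed by testing.
engagement = defaultdict(lambda: defaultdict(int))

def record_click(group, ad_variant):
    engagement[group][ad_variant] += 1

# Step 3: target each group with its best-performing variant.
def best_ad(group):
    variants = engagement[group]
    return max(variants, key=variants.get) if variants else "default"

# Simulated test results from showing different variants to one group:
record_click("fearful", "security-themed ad")
record_click("fearful", "security-themed ad")
record_click("fearful", "upbeat ad")

print(best_ad("fearful"))  # -> security-themed ad
```

The sketch shows why the 5,000-data-points claim matters: the more signals feeding step 1, the finer the groups, and the more precisely step 3 can aim at each one.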
The presidential marketing campaign was fuelled by unethical data harvesting and manipulation, prompting a huge political scandal. And then, well… Trump won the election. Of course, it would not be logical to attribute Trump’s victory wholly to Cambridge Analytica; other economic and political trends also influenced the nation’s decision. But the statistics are telling: Trump’s campaign outspent Clinton’s on Facebook ads by millions. To quote Brittany Kaiser, the ex-Director of Business Development for Cambridge Analytica’s parent company: ‘oops, we won!’.
This scandal is a perfect example of why treating personal information as an asset and implementing ‘data dividends’ could give some an unfair market advantage, or even infringe the basic principles of democracy. Of course, in this case the data was obtained without many voters’ overt consent; it still illustrates the point well, however: Trump was willing and able to pay a higher price than Clinton (and, arguably, to bend moral codes of conduct) and to use voters’ data to manipulate them with psychographics.
Regardless of whether Trump would have won the 2016 presidential election without Cambridge Analytica’s tactics, I still view those tactics as highly unethical. They show that if we could treat our personal data as an asset and sell it at the most competitive market price, a dystopian future could follow: a world in which whoever can pay the highest price has the power to shape our decisions. Is that ethical or fair?
A Wake-Up Call for Policymakers
Following the Cambridge Analytica scandal, Big Tech has been painted as the ‘great evil’ of the 21st Century, especially by politicians and the media. It is understandable that people feel uneasy about companies using their data for anything that goes beyond improving customer service.
However, it is not fair to condemn Big Tech as the only evil. Politicians and policymakers should realise that the problem of data misuse arises from the speed at which technology has been evolving: laws are outdated, and a new solution is needed. Nor should we diminish the responsibility we ourselves bear – we have a duty to protect our own data and our democracy. This should be a wake-up call for each and every one of us, including the ‘bigwigs’ in government and the judiciary.
But, at the end of the day, those personality tests are fun, and you need a bit of spare cash…so, if personal information were an asset, would you sell yours?