*Terms and Conditions apply.
Cambridge Analytica was engulfed in a firestorm recently following reports, spearheaded by The New York Times, that it had obtained and used data on over 50 million Facebook users with the goal of promoting the presidential campaign of Donald Trump. Part of its pitch to brands and candidates has been its expertise in ‘psychographic profiling’, meaning it could build models of users with a near-mythic ability to hyper-target messaging on social networks. This work is based not just on traditional demographics like age, gender, and location; the firm touts its ability to map people’s habits, values, political inclinations, and the best way to manipulate each individual segment, all extrapolated from their data with astounding accuracy. Naughty. The revelation shattered users’ sense of privacy and heightened their sense of vulnerability, triggering debate as to how aware Facebook were of the endeavour, whether it matters how aware they were, and indeed the level of culpability they share in the whole debacle. In social media, however, perception is often more important than reality…
There is an expected duty of care on Facebook (or any digital provider) when looking after the data of its users. The fact that user data from Facebook allegedly contributed to the election of an incredibly divisive figure certainly fanned the flames. If Cambridge Analytica had used the same methods to farm data with the goal of creating a House for Orphaned Puppies, would the outcry be as loud or as angry? Unlikely. People on the whole understand at some level – even if they’ve not read the tome that is Facebook’s Terms and Conditions – that in exchange for the free use of the platform they are offering access to some of their private data, an exchange consistent with the Privacy Calculus model, and they’re OK with that. But when does this get untenably creepy? It depends who you ask: perceived vulnerability and the perceived control users have over their data shape opinion, and a significant majority of the SNS user base don’t understand this dynamic with any real fidelity.
An increasingly vocal contingent has been born of this and similar incidents, creating noise outside the usual echo chamber of digital privacy advocates. This occasion has certainly amplified their message and seen a sharp rise in consumer empowerment – Aggregation as Power specifically – notably on the very platform they’re rallying against, though self-liberation is a significant factor when an issue like this becomes public. What I term ‘The Melbourne Cup Effect’ sees people become experts on a topic they’ve just encountered for the first time, as they attempt to improve their social capital by taking to the digital pulpit.
When The Facebook was first founded in 2004, Zuckerberg made comments – recorded at the time and reported in 2010 by Business Insider – that paint the picture of a cavalier attitude toward user data. Granted, the landscape for privacy has changed significantly since then, but the recalcitrance Zuckerberg and Facebook as a whole have shown in the face of privacy concerns sits uneasily alongside his recent testimony…
The sum of human information recorded from the dawn of civilisation to 2004, when Facebook materialised, has been estimated at somewhere around the equivalent of one billion gigabytes. Way back in 2010, Google acknowledged that it alone catalogues over two billion gigabytes of information every two days, much of it personal data. Like a toddler with a handgun, we as a species don’t yet know how to handle what we have, and have no concept of just how dangerous it might be. We’re all learning – even the SNSs. But what we have learned is that the genie doesn’t go back easily into the bottle, so we should tread very carefully and err on the side of prudence.
GDPR, the right to be forgotten, and similar measures are starting to become mainstream. But policy always lags behind innovation, and until it catches up – with user education in lock-step – some duty of care must remain on the service provider to keep user data safe.
Looking at this incident, how much was driven by a profit agenda? How much was error? How much was poor judgement? A more cynical me would point out that Facebook’s multi-billion-dollar valuation hinges almost entirely on a userbase remaining willing to hand over the keys to their data in exchange for pictures of cats looking grumpy. It’s in their pecuniary interests to propound data safety even while ignoring it. If SNSs want us to believe their messaging on data hygiene and remain on their platforms, they have to be transparent and provide the tools (with secure settings as the default), or we’ll go and be commoditised elsewhere, thank you very much!
What do you think – who has the ultimate responsibility for the security of user data?