Will privacy survive a Digital Age of corporate surveillance?
Companies such as Google, Facebook, and Twitter have developed business models that rely on users being subjected to constant surveillance, argues Jacob Silverman, author of the new book, “Terms of Service: Social Media and the Price of Constant Connection.”
In order to avoid giving them too much power, Mr. Silverman says we need to better understand how tech companies convince consumers to give up control over their digital selves. I recently spoke with him about corporate dynamics that can threaten privacy. Edited excerpts follow.
Selinger: You’ve got a chapter in your book called “The Myth of Privacy,” and I’d like to start our conversation by addressing a longstanding misconception. Have we moved past the idea that privacy doesn’t matter to people who have nothing to hide?
Silverman: No. Many still have a hard time understanding that privacy is a collective good that should be affirmed because of its shared social and political importance.
Selinger: And what about the recurring mantra that privacy is dead?
Silverman: This remains a popular meme, but I think it’s an affectation or pose. There’s a sense in which people profess it because it’s cool and shows an independent, laissez-faire attitude. If you say “I don’t have anything to hide” or “privacy is outmoded, so why worry?”, you’re trying to demonstrate that you can operate in the world as a fully modern citizen.
Selinger: What motivates people to think this way?
Silverman: The attitude comes from a social media culture that revolves around transparency, openness, self-promotion, and branding. Perhaps declaring privacy dead is a way of coping with that culture. It’s easy to believe we’re all famous public figures, in one sense or another.
Selinger: Are you saying that social media companies push an ideology along with their tools that specifies who we should be when connected to the Internet?
Silverman: Yes. [Facebook CEO] Mark Zuckerberg has made comments about how people should act now that social media is the new normal. Users are encouraged to toughen up and accept that this is how things are. They’re told that if you’re progressive, you’ll embrace the openness. [Facebook COO] Sheryl Sandberg and others also speak about what makes an “authentic” social media user, and it usually boils down to being online, using your legal name, and other measures that allow companies like Facebook to better monetize your data.
There’s also a new part of the story. In the post-Snowden world, companies are conveying a sense that they should be trusted with our data because their platforms are safe, in comparison to what the government’s up to. Facebook now comes across as a guardian of our privacy. You can expose yourself on Facebook; it’s other places – and other actors, like the NSA – that you have to worry about.
Selinger: From this perspective, do you think there isn’t enough attention being paid to companies’ self-interest in the current debates on encryption?
Silverman: People have a sense that there’s a lot of data being collected about them, and that they’re vulnerable to it being exploited, but they often lack more specific information about what is being collected and how it might be used or sold. Silicon Valley sees this insecurity and is running with the paternalistic idea that its companies have the corporate muscle and cryptographic tools to protect everyone from threatening forces. I have a problem with this vision of reform. As long as collecting personal data is essential to the digital economy and to how these companies are run, they’re going to continue acquiring it, and the government is going to continue being interested in it. A potential false dichotomy is being set up. Both government and industry want the same information, and there’s no guarantee that companies will be a solid bulwark in the long run. By collecting information, these corporations are actually producing a strong desire on the part of the government to get it.
Selinger: Are you saying companies are over-selling their power to give customers a false sense of confidence?
Silverman: I think so. But I also think they’re trapped in a way. They’ve developed business models that rely on consumers feeling open and safe with them. Whereas originally consumers shared their information out of naivety, now even cynical ones will do so in order to feel protected or simply because they feel there's no better alternative.
Selinger: How else are companies convincing consumers to turn over their data?
Silverman: Lots of apps and services are being rolled out that are designed to act on our behalf. We’re told they need lots of data to perform well, and that the more you give yourself over to them, the more you’ll get out of using them. Essentially, in order to embrace how this technology operates, you’ve got to give up control over your digital self. Siri, Cortana, and Google Now, for example, are being sold with practical arguments about the importance of feeding them information so they know what we like and can do our bidding.
Selinger: Does this mean companies are getting consumers to lower their guard by emphasizing that predictive technology needs lots of data to be optimized?
Silverman: Saying that these services need your data, and that you’ll benefit from sharing it and deepening your relationship with the company, helps foster trust. And if you trust Google Now with your scheduling, why shouldn’t you trust Google with other services, and the prospect of continually giving it your data as the corporation expands its reach across industries?
Selinger: Why does this matter? How would you respond to the typical person who says something like the following: “Yes, data is the new oil and that’s why I get targeted ads. But they don’t impact what I purchase. And, sure, I recognize that data brokers sell sensitive information, but that probably won’t impact me. At the end of the day, the cost-benefit analysis is in my favor. I get lots of free and useful stuff.”
Silverman: This is the typical outlook, and it lacks any sense of social responsibility. It acknowledges that other people, such as marginalized citizens, may suffer, but it minimizes that reality, if it doesn’t dismiss it entirely. It also minimizes the amount of self-harm that’s at stake.
We’re in the early days with things like real-time health information and smart cars. To get ahead of the curve and anticipate some of the problems down the road, you’ve got to expect that things will get worse, that corporations will find more ways to extract value from this information and to manipulate consumers in ways that may be subtle, if not invisible. I expect that we’ll see more Uber-type pricing for insurance – some kind of real-time or surge-pricing model. Social Security numbers and credit scoring are useful historical examples to consider. They were supposed to have limited uses, but over the past decades their use has exploded. Most people find credit reports opaque and don’t know what purposes they serve; they have no idea how to change them, and notice them only when they get in the way of purchases and jobs. We’ve got to imagine these barriers and bureaucratic frustrations cropping up in more aspects of our data-driven lives, without our knowing how the power flows.
We should also recognize that collective solutions – like government regulation over the collection, storage, usage, and trade of personal data – would help the most marginalized, yes, but they would also help even the more technologically savvy among us. Who wants to spend time, energy, and concern trying to monitor a dozen different app permissions, or correcting false data held by [data broker] Acxiom, or any of the other activities that are required to be in control of one's data – especially reputational data? I would hope that even privacy agnostics would see the advantages in being proactive about investigating and responding to these new data regimes.
Evan Selinger is an associate professor of philosophy at Rochester Institute of Technology. Follow him on Twitter @EvanSelinger.