Facebook is a specific, known threat to democracy, not a general unknown threat to our capacity for rationality

Zeynep Tufekci has a TED talk, ‘We’re building a dystopia just to make people click on ads’. In it she talks about the power of Facebook as a ‘persuasion architecture’, and she makes several true, useful points about why we should be worried about the influence of social media platforms, platforms which have as their raison d’être the segmentation of audiences so they can be sold ads.

But there’s one thing I want to push back on. Tufekci’s argument draws some of its rhetorical power from a false model of how persuasion works. This is a model in which persuasion by technology or advertising somehow subverts normal rational processes, intervening on our free choice in some sinister way ‘without our permission’. I’m not saying she would explicitly endorse this model, but it seems latent in the way she describes Facebook, so I thought it worth bringing into the light, pausing just for a moment to look at what we really mean when we warn about persuasion by advertising.

Here’s Tufekci’s most worrying example: targeted Facebook ads aimed at mobilising or demobilising voters, which are effective enough in changing voter turnout to swing an election. She reports an experiment which tested a fairly standard ‘social proof’ intervention, in which some people (the control group) saw a “get out and vote” message on Facebook, and others (the intervention group) saw the same message but with extra information about which of their friends had voted. People who saw this second message were more likely to vote (0.4% more likely). Through the multiplier effect of the social networks they were embedded in, the researchers estimate that 340,000 extra people voted who otherwise wouldn’t have.
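As a back-of-envelope check on those two reported numbers (the implied reach is my own derived illustration, not a figure from the study): a 0.4% uplift producing 340,000 extra votes implies that, once the social-network spillover is counted, the message effectively touched something like 85 million people.

```python
# Back-of-envelope: how many people must a 0.4% turnout uplift
# reach (directly or via friends) to yield 340,000 extra votes?
uplift = 0.004          # 0.4% increase in turnout (reported)
extra_votes = 340_000   # researchers' estimate (reported)

implied_reach = extra_votes / uplift
print(f"Implied effective reach: {implied_reach:,.0f} people")
# roughly 85 million people once the multiplier is included
```

The point of the arithmetic is just that the headline number comes from a tiny per-person effect applied at enormous scale, not from any one person being strongly swayed.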

Now 340,000 votes is a lot, enough to swing an election, but it would be a mistake to think that these people were coerced or tricked into acting out of character by the advert. These were people who might have voted anyway, and the advert was a nudge.

Think of it like this. Imagine you offer someone an apple and they say yes. Did you trick them into desiring fruit? In what sense did you make them want an apple? If you offer apples to millions of people you may convert hundreds of thousands into apple-eaters, but you haven’t woven any special magic. At one end, the people who really like apples will have one already. At the other, people who hate apples won’t ever say yes. For the people in between, something about your offer may speak to them and they’ll accept. A choice doesn’t have to originate entirely from within a person, completely without reference to the options presented to them, to be a reasonable, free choice.

No model of human rationality is harmed by the offer of these apples.

Our choices are always codetermined by ourselves and our environment. Advertising is part of the environment, but it isn’t a privileged part — it doesn’t override our beliefs, habits or values. It affects them, but no more so, and in no different way, than everything else that affects us. This is easy to see when it is offers of apples, but something about advertising obscures the issue.

Take the limit case — some political candidate figures out the perfect target audience for their message and converts 100% of that audience from non-voters into voters with a Facebook advert. Would we care? What would that advert — and those voters — look like? They would be people who might have voted for the candidate anyway, and who could be persuaded to vote for someone else by all the normal methods of persuasion that we already admit into the marketplace of ideas / clubhouse of democracy. They wouldn’t vote for a candidate they didn’t sincerely believe in, and the advert wouldn’t mean that their vote couldn’t be changed at some later point, whether by another advert, by new information, by arguing with a friend, or whatever.

There are still plenty of reasons to worry about Facebook:

  • Misinformation — how it can embed and lend velocity to lies.
  • Lack of transparency — in who is targeting, who is targeted, and why.
  • Lack of common knowledge — consensus politics is hard if we don’t all live in the same informational worlds.

Tufekci covers these factors. My position is that it hasn’t been shown that there is anything special about Facebook as a ‘persuasion architecture’ beyond these. Yes, we should worry about something with the size and influence of Facebook, but we already have frameworks for thinking about ‘persuasional harm’: falsehoods are not a legitimate basis for persuasion, for example, so we are particularly concerned to hunt down fake news; and it is worrying when one interest group controls a particular media form, such as newspapers. Yes, Facebook persuades, but it doesn’t do so in a way that is itself pernicious. Condemning it in general terms would be misplaced, a harm to any coherent model of citizens as reasonable agents, and a distraction from the specific and novel threats that Facebook and related technologies pose to democracy.