Part of the problem with the whole debate around the role of ‘fake news’ in contemporary politics is that we sometimes have difficulty distinguishing between purposeful fakery and sloppy reporting: thus, Trump can claim Clinton got 3-5 million ‘illegal’ votes while lambasting reporters who mistakenly wrote that he had removed the bust of Martin Luther King Jr from the Oval Office.
Unfortunately, the problem is even worse in the world of academic research: while careful reading and a bit of thought can often uncover bad or disingenuous work, most non-academic readers – including a lot of the journalists who report on it – don’t go to those lengths.
A case in point is a Bloomberg article making the rounds, citing academic research purporting to prove that fake news had little or no effect on the US election. I’ll leave the journalist out of it (though he should know better, about which more later) and focus instead on the study, by economists Hunt Allcott and Matthew Gentzkow. It’s timely, important and methodologically advanced. It’s also spectacularly bad social science.
Allcott & Gentzkow collect a list of 156 reliably tagged fake news stories, from which they select a series of sub-sets, including pro-Trump and pro-Clinton fakes. A sample of these is then presented at random to a more-or-less representative sample of survey respondents, to measure the degree to which these stories were seen, received and remembered. The authors then use a complex (and, as best I can tell, generally valid) decision model to test the degree to which these fake stories – alongside more traditional news stories and campaign ads – may have contributed to voting decisions. Their headline conclusion is that, in their words, “for fake news to have changed the outcome of the election, a single fake article would need to have had the same persuasive effect as 36 television campaign ads.”
Now, the authors of this paper are smart people, and they know both the demands of this particular field, and the limitations of their method. Perhaps some of what I’m going to say here will be corrected in a revised version of the paper (in which case, on the off chance that they read this, notch one up for nil-blind peer review). But for whatever reason, Allcott & Gentzkow operate on a number of assumptions – some explicit, but most not – that are problematic in the extreme. While there are probably more, four come to mind.
The first of these problematic assumptions is that message reception equates to message retention: in other words, that in order for a message to have had an effect on someone, they have to remember it. In their study, respondents are only recorded as having received a fake news story if they report remembering it when the survey is conducted. Leaving aside the fact that memories are imperfect and subject to change, this approach assumes that we have to have a conscious memory of a message to have been affected by it. (On the problems of measuring media message reception, see, among many others, Bartels 1993, Geiger & Newhagen 1993, Price & Zaller 1993).
The second problematic assumption is that the effects of media messages are accumulative, as opposed to merely cumulative: in other words, Allcott & Gentzkow are equating the quantity of media messages received roughly to the quality of the media effect produced. When constructing formal models of human behavior, it’s not hard to see why this is an easy and attractive abstraction to make – but it’s still a significant departure from reality. One of the reasons that researchers often find seemingly minimal effects of mediated messages is that the effects they look for are too simply and directly conceived: did you hear about x, and if so, what did you think about it? In real life, we’re constantly exposed to a multitude of competing and overlapping messages, and to varying and inconstant interpretations and re-interpretations of those messages. The end result is that we come to conclusions that are very rarely shaped directly by the messages themselves; instead, these things accumulate in our minds, are processed and re-processed, and they emerge not as a sequence of discrete positions and responses, but as a more or less coherent world view. (On this, to start, see: Bennett & Iyengar 2008, and Beck et al 2002.) This world view then shapes what scholars have called our ‘motivated reasoning’: our propensity to resolve conflicting or uncertain messages in such a way as to reinforce the way we already understand things (about which, Redlawsk 2002 and Robertson 2015). In other words, people believe fake news before they even receive it – in which case, measuring the reception of individual messages rather misses the point.
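The difference between these two assumptions can be made concrete with a toy sketch (entirely hypothetical, and not Allcott & Gentzkow’s actual model or numbers): compare a persuasion effect that cumulates linearly, the very assumption the authors themselves flag, with a saturating alternative in which each additional message can only move those not already persuaded.

```python
# Hypothetical illustration, NOT Allcott & Gentzkow's actual model:
# contrast linear accumulation of persuasive effects with a saturating
# alternative where each message moves only the not-yet-persuaded share.

def linear_effect(n_messages, per_message=0.02):
    """Every message adds the same fixed effect, without bound."""
    return n_messages * per_message

def saturating_effect(n_messages, per_message=0.02):
    """Each message persuades only the share not already persuaded,
    so the effect levels off instead of growing without limit."""
    effect = 0.0
    for _ in range(n_messages):
        effect += per_message * (1.0 - effect)
    return effect

for n in (1, 10, 50, 100):
    # Note: the linear model reaches 1.0 at n=50 and keeps climbing --
    # an impossible 'probability of persuasion' greater than one.
    print(n, round(linear_effect(n), 3), round(saturating_effect(n), 3))
```

Under linear cumulation, a hundred messages are simply a hundred times as persuasive as one, which can exceed any plausible bound; under the saturating version, extra exposure buys less and less. The per-message rate here is an arbitrary placeholder, but the divergence between the two curves is the point: equating the quantity of messages received with the magnitude of the effect is a strong modeling choice, not a neutral one.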
The third problematic assumption is that media messages are themselves un-mediated, and that there is no interaction effect between media. To a certain extent, Allcott & Gentzkow try to solve this problem by asking about specific stories, rather than about sources – thus, if a fake news story started online but made its way to television or advertisements and only from there to a recipient, it would still register, so long as respondents remembered it. And it’s good that they do that. But they have no way of dealing with the broader media effects, which have to do with agenda-setting and framing. Even the non-fake stories that people do remember are produced, transmitted and received in an environment that has itself been transformed by fakery, which crowds, couches and colors the ‘real’ news in ways that are not necessarily overt but are no less real for that.
And lastly, Allcott & Gentzkow assume – because almost all quantitative research assumes – that the effects we’re interested in are ‘normally’ distributed throughout society. They’re not. Political behavior is ‘lumpy’: people’s behavior is shaped not only by their individual characteristics (income, education, age, personality traits, etc, which are more or less normally distributed), but also by their social surroundings: we are likely to behave in ways that are similar to the behavior we observe around us, for a whole range of socially meaningful reasons. We congregate in networks of affinity, and we reinforce our belonging in part by choosing to receive and retransmit certain messages, while blocking out others; we also choose what to believe. While this has always been part of politics, the emergence of online social media has made it easier for campaigns (and their ancillaries) to tap into these ‘lumps’ of society, motivating and mobilizing them and, effectively, priming them to receive and assimilate the campaigns’ core messages. (See Vaccari 2012.) This is one of the reasons why opinion polls miss outcomes like, well, Trump: they’re looking for a normal distribution when, in fact, the pockets of support that matter are hidden away and hard to model.
There is a lot more that could be said, but I’ll close with a word on that journalist. As he says in his article, the Allcott & Gentzkow argument is one to which he is inclined to be sympathetic – a bit of ‘motivated reasoning’ that may have led him to ignore the rather significant caveats that the economists (to their credit) listed on page 5 of their study. Among these, Allcott & Gentzkow write, is this: “our calibrated election model relies on a number of other strong assumptions, including a persuasion rate of fake news that is the same for all stories and voters and that cumulates linearly.”
While this doesn’t stop them from making some rather sweeping (and, in my view, poorly supported) conclusions, it is in large measure an admission of some of the problems I’ve addressed above. But, for whatever reason, the journalist chose to ignore it. In the interests of charity, I’ll assume he just skipped that page. That’s unfortunate, though. In this day and age, we can ill afford to ignore the caveats.