18th August 2015
Why did the polls get it wrong at the general election? Because they lied
Far from it being ‘too close to call’, the British polling industry collectively, deliberately, and cynically manipulated its own findings
British newspapers carrying headlines dominated by exit poll forecasts in favour of the Conservative Party
In this morning’s Times, Danny Finkelstein has a fascinating piece on opinion polls. Or, rather, he has a fascinating piece on the total, calamitous, ignominious failure of the polling companies at the last election.
In it, Danny lists some of the explanations put forward to him for this failure by the pollsters themselves. One was that they simply did not have enough traditional Conservative voters in their polling samples. Another was that they misjudged turnout. A third was that they got the basic demographic composition of their samples wrong.
There is talk of Shy Tories. Of Lazy Labourites. Of a good, old-fashioned, last-minute swing.
In Danny’s view, all, some or none of these theories may be correct. But at the end of the day, he concludes, “we’ll never know why the polls were wrong”.
This conclusion seems to me to be primarily based on two things. One: Danny’s legendary generosity of spirit. Two: a desire to head off calls – currently being led by Labour peer Lord Foulkes – for regulation of the polling industry.
I bow to the former, and have some sympathy for the latter. But Danny is wrong in one important respect. We do know why the polls were wrong. And it has nothing to do with sampling or turnout or demographics or Shy Tories or Lazy Labourites or a last minute swing. The polls were wrong because the polling industry collectively and deliberately and cynically manipulated its own findings. In other words, the pollsters lied to us all.
Let’s take a trip back in time. Several trips, in fact. On April 13 the Guardian published an article asking: “General election 2015: why do phone and internet polls give different results?”. A day later the website UK Polling Report ran a piece on “phone and online differences”. On May 2 the New Statesman’s website published its own piece headlined: “The Tories are 3 points ahead in phone polls, but tied with Labour in online polls”.
Since the election, the narrative has been “the polls were wrong”. But what people are conveniently forgetting is that during the election the narrative was different. Back then, it was “why are the polls all over the place?”. More specifically, it was “why do online polls show a dead heat, but phone polls show the Tories beginning to build a quite significant lead?”.
But then something strange happened. Over the final 72 hours this discrepancy between the phone polls and the online polls began to vanish. The polls suddenly began to cluster. The wide variation between the polling companies – variation that we had been witnessing for the best part of five years – mysteriously stopped. All the pollsters were suddenly, and miraculously, in agreement. The election would be a tie, give or take a single point in either direction.
I say “strange”, but it wasn’t strange. It was wholly predictable. Cynical, disreputable, despicable. But predictable.
It’s what’s known in the polling industry as “herding”. And herding, not to put too fine a point on it, is when pollsters cheat. Each polling company knows that however accurate its results are, it will ultimately be judged on only one poll. The final poll before the election.
Which presents them with a dilemma. What if their poll is at variance with all the other polls? What if that vital final poll looks like a rogue? What if they, alone, stand out from the crowd? What, God forbid, if they alone are wrong?
The commercial and reputational risk is too great. So they herd. They deliberately manipulate their results to bring their own figures back into the pack. There is, they believe, safety in numbers. What’s more, there is even greater safety in being able to say an election is too close to call. “It was too close to call. How were we supposed to know? No one knew. We couldn’t call it. No one could call it.”
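To make the mechanism concrete, here is a toy illustration of what “bringing figures back into the pack” amounts to – a hypothetical blend of a pollster’s raw finding with the published industry average. No polling company publishes its adjustment rules, so the weighting below is purely an assumption for illustration, not a description of how any named firm operates.

```python
# Hypothetical illustration of herding: nudging a raw result towards the pack.
# The 40% weight on the industry average is an assumption, not a real figure
# used by any polling company.

def herd(raw_lead: float, pack_average: float, pull: float = 0.4) -> float:
    """Blend a pollster's raw lead with the published average of rival polls."""
    return (1 - pull) * raw_lead + pull * pack_average

# A firm whose raw fieldwork shows the Tories 4 points ahead, while the
# published polls average a dead heat, would report a lead of just 2.4 points.
print(herd(raw_lead=4.0, pack_average=0.0))   # 2.4
```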
Back in November, the US statistics guru Nate Silver – who famously called the 2012 election correctly – wrote a piece on this very phenomenon. In it he provided statistical proof of the way polling companies, in his words, were “putting a thumb on the scale”.
The example he used was the Iowa senate race between Republican Joni Ernst and Democrat Bruce Braley. At the start of the race there was significant diversity between the polls. One had Braley up by 5. A week later a different company had Ernst up by 6. But by the end, the polls had suddenly clustered, down to a range running from a 1 point lead for Braley to a 4 point lead for Ernst. Ernst won by 8.5 points.
Silver then went and looked at the polling averages for senate races across the country. What he found was that at the start of a race, a typical poll would deviate from the polling average by around 3.5 points. This is exactly what should happen, given the basic statistical variation – we know it as “margin of error” – that is an inherent part of any individual poll. But by the eve of the election, that deviation had shrunk to 1.7 points. Such a change is not statistically credible. Which is why Silver branded the end of the campaign “CYA – cover your ass time” for the polling companies.
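To get a feel for the arithmetic behind Silver’s point, here is a minimal simulation – a sketch only, with made-up numbers rather than anything from Silver’s data. It assumes a dozen pollsters each drawing an independent sample of 1,000 voters from the same electorate, with the “true” Conservative and Labour shares fixed at illustrative values. Even under those idealised conditions, with no methodological differences at all, sampling error alone should keep the published leads further apart than the clustering Silver observed.

```python
# A rough sketch, not Silver's method: simulate the spread that sampling error
# alone would produce among independent polls of the same race. All figures
# below (vote shares, sample size, number of polls) are assumptions chosen
# for illustration.
import numpy as np

rng = np.random.default_rng(0)

TRUE_CON, TRUE_LAB = 0.38, 0.31   # assumed "true" vote shares
SAMPLE_SIZE = 1_000               # typical published poll size (assumption)
N_POLLS = 12                      # polls feeding the final average (assumption)
N_TRIALS = 10_000                 # simulated "final weeks"

spreads = []
for _ in range(N_TRIALS):
    # Each pollster draws its own independent sample of respondents.
    counts = rng.multinomial(
        SAMPLE_SIZE,
        [TRUE_CON, TRUE_LAB, 1 - TRUE_CON - TRUE_LAB],
        size=N_POLLS,
    )
    leads = (counts[:, 0] - counts[:, 1]) / SAMPLE_SIZE * 100  # Con lead, in points
    # Average absolute deviation of each poll's lead from the polling average.
    spreads.append(np.mean(np.abs(leads - leads.mean())))

print(f"Typical deviation from the average, sampling error only: "
      f"{np.mean(spreads):.1f} points")
# With these assumptions the answer comes out at roughly 2 points – already
# wider than the 1.7 Silver found on the eve of polling, and that is before
# any genuine differences between phone and online methodologies are added in.
```

The exact figure moves around with the assumptions, but the direction of the argument does not: the more tightly the final polls cluster beneath that statistical floor, the harder the clustering is to explain without herding.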
Silver also pointed out something else that those debating the late errors in our own general election polling have conveniently ignored:
It’s not the inaccuracy of the polling average that should bother you – Iowa was one of many states where the polls overestimated how well the Democrats would do – so much as the consensus around such a wrong result. This consensus very likely reflects herding. In this case, pollsters herded toward the wrong number.
That’s the clincher. Different companies. Different sample sizes. Different methodologies. And yet we are expected to believe all the polls coincidentally and entirely independently happened to herd to exactly the same place. The same wrong place.
But does any of this actually matter? As Danny Finkelstein correctly and subversively notes: “we don’t actually have to be certain. We can always, you know, just wait for the result.”
Yes, it does matter. For these reasons.
The polling companies cost a lot of people a lot of money. Not just the papers and broadcasters who literally bought into their fictitious predictions. But a lot of ordinary members of the public who placed bets on the election outcome. “More fool them,” you might say. And perhaps you’d be right. But they trusted the pollsters, and the pollsters deceived them in a bid to protect their own narrow commercial interests.
The political parties believed them too. I’ve spoken to several senior Labour strategists who have confirmed their tactics over the final 72 hours were in part governed by what the polls were telling them. Again, more fool them. But if polls are going to be part of the political landscape, then it is inevitable they are going to influence the calculations of the politicians.
And this is the final reason it matters. Polls aren’t just used to predict election results, they’re also used to try to influence election results. A few days before polling day the BBC produced an all-singing, all-dancing full-page online interactive graphic detailing the “Close constituency battles” it said were “being fought with less than a week to go before the general election”. At the bottom, it carried this small disclaimer: “Most of the polling shown was commissioned from independent polling companies by former Conservative party deputy chairman and donor Lord Ashcroft. Seven were conducted by Survation on behalf of UKIP donor Alan Bown or trade union Unite. One was carried out by ICM for former Lib Dem peer Lord Oakeshott.”
If the polling companies don’t want to be regulated, fine. But let’s not pretend we don’t know why they got it wrong on 7 May. Because we do.