The polls were correct, we just didn't understand the question.
Numbers are authoritative – this is probably a perception picked up in our school years, when mathematics seems so black and white: the answer to a sum or calculation is either right or wrong; there is no scope for debate or subjective judgement. Only those who go on to degree level or beyond see that subjectivity creeps back into mathematics.
It is no surprise, therefore, that Douglas Adams chose a number to be the answer to the question of life, the universe and everything.
So, the fact that the numbers which constitute the polls, which dominated the media during the election campaign, so dramatically failed to predict the outcome has created a small storm. Some have started to blame this on people changing their minds, or simply lying to the pollsters, although why this should be more prevalent in this election than in others is not clear.
However, looking closely, the polls actually seem to be pretty accurate – the problem, like the answer to life, the universe and everything, is not being clear about the question.
Looking at the various polls included in the BBC Poll of Polls, all work in very similar ways. They select a group of 1000 people at random and ask them how they will vote. The differences between the polls include the wording of the question asked, how the group is selected, and any normalisation to remove bias in the group selection.
This means, of course, that the poll is indicative of how many people will vote for a given party.
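To make the normalisation step concrete, here is a minimal sketch in Python of how raw responses might be re-weighted to correct for a biased sample. The respondents, groups and weights are all invented for illustration – real pollsters use far more elaborate demographic models:

```python
from collections import defaultdict

# Invented raw responses: (stated vote, demographic group of respondent)
responses = [("Con", "65+"), ("Lab", "18-24"), ("Con", "65+"),
             ("Lab", "25-64"), ("UKIP", "65+"), ("Con", "25-64")]

# Invented correction weights: groups over-represented in the sample
# relative to the population are down-weighted, and vice versa.
weights = {"18-24": 1.4, "25-64": 1.0, "65+": 0.7}

# Tally each party's support using the weighted, not raw, responses
totals = defaultdict(float)
for vote, group in responses:
    totals[vote] += weights[group]

grand_total = sum(totals.values())
for party, w in sorted(totals.items()):
    print(f"{party}: {100 * w / grand_total:.1f}%")
```

The headline percentages each pollster reports are the output of some such weighting step, which is one reason the houses differ slightly even when asking similar questions.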
Comparing the various polls (using the values from the 6th May, just before the election; source: http://www.bbc.co.uk/news/politics/poll-tracker) with the actual results is interesting:
| Poll | Conservative | Labour | UKIP | LibDem | Green |
| --- | --- | --- | --- | --- | --- |
| BBC Poll of Polls | 34% | 33% | 13% | 8% | 5% |
| Survation | 33% | 34% | 16% | 9% | 4% |
| TNS-BMRB | 33% | 32% | 14% | 8% | 6% |
| Opinium | 35% | 34% | 12% | 8% | 6% |
| Populus | 34% | 34% | 13% | 9% | 5% |
| ICM | 35% | 35% | 11% | 9% | 3% |
| Ipsos Mori | 35% | 30% | 10% | 8% | 8% |
| ComRes | 35% | 32% | 14% | 9% | 4% |
| YouGov | 34% | 34% | 12% | 9% | 5% |
| Result | 36.9% | 30.4% | 12.6% | 7.9% | 3.8% |
Or, as a graph:
[graph of the poll figures against the actual result omitted]
To me that looks pretty accurate – certainly well within the expected margins of error.
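That impression can be checked against the usual sampling error. For a simple random sample of 1,000 people, the 95% margin of error on a share near 35% is roughly ±3 points – this is the standard normal-approximation calculation, nothing specific to these pollsters:

```python
import math

n = 1000                      # typical poll sample size
for p in (0.35, 0.13, 0.05):  # shares around the Con/Lab, UKIP and Green levels
    # 95% margin of error under the normal approximation to the binomial
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"share {p:.0%}: margin of error \u00b1{100 * moe:.1f} points")
```

The largest gaps in the table above (around 2.5–3 points for the Conservatives and Labour) sit within, or at the edge of, that band.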
The problem is, of course, that in our First Past the Post system, seats aren't allocated according to the total percentage, but to the winner in each constituency. The relationship between the percentage vote and the actual number of seats a party wins is both complex and messy. As such, the polls are (still) a good predictor of a party's overall vote, but a poor indicator of how many seats it will win.
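A toy example makes the vote/seat disconnect obvious. The three constituencies below are entirely invented; the point is only that seats go to plurality winners, so national vote share and seat share can diverge badly:

```python
# Each constituency is decided by plurality; the national vote share plays
# no direct role in the seat count. All figures are invented.
constituencies = [
    {"Con": 40, "Lab": 35, "LibDem": 25},   # Con wins narrowly
    {"Con": 40, "Lab": 35, "LibDem": 25},   # Con wins narrowly
    {"Con": 10, "Lab": 60, "LibDem": 30},   # Lab wins by a landslide
]

seats, votes = {}, {}
for c in constituencies:
    winner = max(c, key=c.get)               # plurality winner takes the seat
    seats[winner] = seats.get(winner, 0) + 1
    for party, v in c.items():
        votes[party] = votes.get(party, 0) + v

total = sum(votes.values())
for party in votes:
    print(f"{party}: {100 * votes[party] / total:.0f}% of votes, "
          f"{seats.get(party, 0)}/{len(constituencies)} seats")
```

Here Labour piles up wasted votes in one safe seat, winning 43% of the vote but only a third of the seats, while the Conservatives take two-thirds of the seats on 30% of the vote.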
This seems pretty obvious, but given that it has always been the case, why has it suddenly become an issue in this election? Not least because we've moved away from a two-party system: with various smaller parties becoming more significant, the discrepancy between the overall percentage vote and seats is much greater this time, as many commentators have observed:
[chart omitted] (source: http://i100.independent.co.uk/)
Some will argue that this is necessary to ensure we have definitive winners and strong parliaments, whilst others will argue that it is unfair and our electoral system needs reform, and that coalition parliaments are more democratic rather than weak.
However, the real problem lies in a misunderstanding of what the polls tell us when there is no clear leader in the overall percentage vote.
If a party has a 20% lead in the polls, it doesn't mean it will get 20% more seats – it is instead a good indicator that the party will get a majority of the seats.
If the party had a 40% lead, rather than indicating a larger majority, it really indicates that you can be more confident that the party will win a majority of seats; on the other hand, a 10% lead means the reliability of the prediction is lower.
Hence, a poll with the two leading parties running neck and neck does not tell us that we will get a hung parliament (as the media constantly reported), but rather that the poll's ability to make a prediction is extremely weak (admittedly the media also talked about the uncertainty of the outcome, but not quite for the right reasons).
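A crude Monte Carlo sketch illustrates this reading of the polls. All the numbers here are assumptions for illustration (a shared national error of ±5 points on the polled lead, seat-level noise of ±8 points, identical constituencies), not a model any pollster actually uses:

```python
import random

def prob_majority(poll_lead, poll_error=5.0, local_noise=8.0,
                  n_seats=650, trials=1000):
    """Crude estimate of P(the leading party takes most seats).
    poll_error: shared national-level error on the polled lead (points);
    local_noise: extra seat-by-seat variation (points). All invented."""
    wins = 0
    for _ in range(trials):
        # One shared draw per trial: the whole country's polls are wrong together
        national = poll_lead + random.gauss(0, poll_error)
        seats = sum(1 for _ in range(n_seats)
                    if national + random.gauss(0, local_noise) > 0)
        wins += seats > n_seats // 2
    return wins / trials

for lead in (1, 5, 10, 20):
    print(f"{lead:2d}-point lead -> P(majority) ~ {prob_majority(lead):.2f}")
```

With these invented error sizes, a 1-point lead is barely better than a coin toss, while a 20-point lead is near certainty: a bigger lead buys confidence in the prediction, not a proportionate number of extra seats.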
It remains to be seen what influence the polls (or rather the misinterpretation of the polls) had on how people voted, and whether this had a significant impact – for example, how many people voted tactically on the assumption that the outcome would be a hung parliament.
One notable feature of all the polls, however, is that the SNP was lumped together in the “Others” category, which may explain why its performance came as such a surprise to the media and politicians.
The pollsters take this factor into account; they pronounce on the number of seats (as that is what matters), not just the share of the vote. The exit poll is still a poll, and it was very close.
The exit polls work because they poll the “right” thing. The polls leading up to the election poll quite a different thing, and then the pollsters extrapolate a guess in terms of the seats. My main point is that when there is a large difference between the leads in the (non-exit) polls, the confidence is higher than when there is a small lead. An insignificant lead indicates not that the parliament will be hung, but that the number of seats cannot be predicted from that type of poll.
The fact that the pollsters failed to predict the number of seats supports my point.
The parties poll on a seat by seat basis too, and (apparently) didn’t see the result either, so I’m not sure your main point carries?
Do the parties poll all the seats or just key marginals to decide where to focus their campaign efforts? If the latter, then they too would be reliant on the % polls for the overall picture.
If the parties poll all the seats (or a large majority), do they do so in one go, or gradually over the campaign? If the latter, then as it is a flawed method (even if it might have given a correct answer in this case), any discrepancies with the % polls and the general consensus would probably be discounted as anomalous.
They focus on the marginals – there are only around 100 usually (more like 150 this time) – and do so on a rolling basis. There were (I hear) still lots of surprise results, even in well-polled seats like Cardiff North (I live there): Labour, Tory and public polls all gave it to Labour, but it went Tory (it was a slim Tory majority last time). The same thing happened in Ed Balls’ former seat (although obviously this changed hands). Pollsters already adjust for shy Tories etc., so it is still (to me) surprising they were quite a bit out in quite a number of seats. CCHQ thought they would get 290, and they are normally seen as being overly optimistic. It is true that this was a much harder one to call than most since 1992, but even that poll (which predicted a small Labour majority) was closer to getting the number of Tory seats right than this one (and the pollsters thought they had learnt their lesson then!).
How did the BBC Poll of Polls give a higher percentage to UKIP than any of the others? Or was it based on a longer time period?
Nope, just a transcription error on my part – now fixed.
Having worked with market research data for nearly 20 years, I have a pretty good eye for spotting those kinds of typos! I personally don’t think the polls were that great – generally understating the Tories by 3 points and overstating Labour by 3–4. The Ipsos Mori wasn’t too bad, apart from the Greens.