Variation in polling response

Andrew Gelman and others wrote:

apparent swings in vote intention represent mostly changes in sample composition, not actual swings. These are phantom swings arising from sample selection bias in survey participation. Previous studies have tended to assume that campaign events cause changes in vote intentions, while ignoring the possibility that they may cause changes in survey participation. We will show that in 2012, campaign events more strongly correlated with changes in survey participation than vote intentions.

Cited in Hugo Mercier’s book.

The hypothesis is that when your candidate is perceived as doing poorly, you are less likely to respond to a poll. If pollsters had a consistent panel of responders, they would see that people are not changing their minds. So, if you see Mr. Biden moving up or down in the polls as the campaign goes on, it is likely that the swing is over-estimated. But I wonder if being discouraged about answering a poll is also a predictor of being discouraged about voting on election day.
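The mechanism can be seen in a toy simulation (all numbers below are invented for illustration, not taken from the paper): hold every voter's intention fixed and let only the response rate depend on the news cycle, and the poll's topline still "swings."

```python
import random

random.seed(0)

# Fixed electorate: exactly 50% support D, 50% support R,
# and nobody ever changes their mind.
electorate = ["D"] * 5000 + ["R"] * 5000

def poll(electorate, d_news_good):
    """Simulate one poll with differential nonresponse.

    Supporters of the candidate having a bad news cycle are less
    likely to answer (hypothetical response rates: 60% vs 40%).
    """
    responses = []
    for voter in electorate:
        favored = (voter == "D") == d_news_good
        p_respond = 0.6 if favored else 0.4
        if random.random() < p_respond:
            responses.append(voter)
    return sum(v == "D" for v in responses) / len(responses)

# True D support is 50% in both polls, yet the toplines differ sharply:
# a phantom swing produced entirely by who chooses to respond.
after_good_week = poll(electorate, d_news_good=True)
after_bad_week = poll(electorate, d_news_good=False)
print(f"D share after good news: {after_good_week:.1%}")
print(f"D share after bad news:  {after_bad_week:.1%}")
```

With these assumed response rates the topline moves by roughly twenty points between the two polls, even though not a single voter changed their vote intention.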

5 thoughts on “Variation in polling response”

  1. USC ran a unique kind of poll during the 2016 election: they polled the same people over and over, though, if I remember correctly, each day they cycled through part of a larger identified pool, something like 500 out of a pool of 5,000. You didn’t see enormous swings in that poll after early September; Trump regularly led it by about 4-6%, evidently reflecting the initial pool selection. In other words, after that initial selection you could follow trends in voter preference, but you couldn’t say much about whether or not Trump was actually ahead. Still, the stability of the results after the conventions and once the debates started does suggest Gelman is correct.

    On your last question: I would say you are right. The same dynamic that changes who responds applies to turnout. It is one reason I no longer trust most of the pollsters: they recognize their power to suppress turnout and are quite willing to use it, though they may have shot themselves in the foot in 2016 by proclaiming the election over before it was run. People can fail to turn out if they think their vote for the winning candidate doesn’t matter.

  2. I don’t doubt that political polling is fraught with sampling issues, but I don’t think this establishes that there is no such thing as an undecided ticket-splitter, independent, or even swing voter, as the excerpt would seem to imply. To the extent that this paper supports the “polarization is everything” meme: polarization certainly exists, but not necessarily beyond the population of major-party die-hards, which may not be as large as one would expect.

    And while it is no surprise that individuals aligned with a major party didn’t swing much, does this paper actually establish that there is no swinging among the “moderate” and “other party affiliation” individuals?

    The authors readily admit that Xbox Live subscribers are not representative of the electorate, but say they are able to “correct” for the demographics. They do not claim, however, that the independents and other voters not aligned with the two major parties in the Xbox Live sample are representative of independents and other-party voters in the electorate as a whole.

    I am not sure how they can conclude “that real swings were quite small,” even though they find that “Not surprisingly, the small net change that does occur is concentrated among independents, moderates, and those who did not vote in 2008,” if they cannot demonstrate that their corrections make this group reflective of its share of the broader electorate.

    Pew Research, cited by the paper, and others try to argue that there is no such thing as independents and ticket-splitters, only “leaners,” but that is just what you get with a two-party system. The fact remains that people do in fact vote across party lines.

    Alexander Agadjanian and others, in a 2019 paper, examined anonymized cast ballots and found:

    “split-ticket rates range from 5 percent to 27, with a median of 12 and mean of 13. The survey rates vary a bit more, going from a minimum of 4 to as high as 42, with a median of 13 and mean of 15.”

    And ticket-splitting is even more pronounced at the state and local level. Shiro Kuriwaki published a paper in 2019 that is summarized:

    “Although many believe that party loyalty in US elections has reached heights unprecedented in the post-war era, this observation relies on evidence only from presidential, congressional, and gubernatorial elections. Theories of partisanship as a heuristic would predict that voters would be more reliant on party labels and more often cast a straight party ticket in less salient races such as those for state legislatures, county executive, and county council. But neither surveys nor election returns can reliably measure individual vote choice in such races. I create a database from voting machines that reveals the vote choices of 6.6 million voters for all offices on the long ballot. I find that voters defect from their national party allegiance in state and county offices at similar or higher rates than they do in congressional ones. These defections systematically favor the incumbent, and candidates who receive more newspaper coverage. Even in a nationalized politics, voters still cross party lines to vote for the more experienced candidate.”
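    The demographic “correction” discussed above is, in spirit, poststratification: reweight each demographic cell of the sample to its known share of the electorate. A minimal sketch with invented numbers (the groups, counts, and population shares below are hypothetical, not from the paper):

```python
# Hypothetical poll: young respondents are overrepresented relative
# to the electorate, and the two groups differ in candidate support.
poll_cells = {
    # group: (respondents, respondents supporting D)
    "young": (800, 480),   # 60% D among young respondents
    "old":   (200, 80),    # 40% D among old respondents
}
# Assumed true population shares (illustrative only).
population_share = {"young": 0.4, "old": 0.6}

# Raw topline: dominated by the overrepresented group.
raw = sum(d for _, d in poll_cells.values()) / sum(n for n, _ in poll_cells.values())

# Poststratified topline: each cell's support rate weighted by its
# population share rather than its sample share.
weighted = sum(population_share[g] * (d / n) for g, (n, d) in poll_cells.items())

print(f"raw D share:      {raw:.1%}")       # 56.0%
print(f"weighted D share: {weighted:.1%}")  # 48.0%
```

    Note that this repairs the observed demographic mix but cannot repair unrepresentativeness within a cell: if the independents in the sample differ from independents generally, no amount of demographic weighting detects it, which is exactly the objection raised above.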

  3. The sample set is not a well-rounded bell curve, so their statistical test is no good.

    I have the same problem with any price index: until I know the mean, variance, and skew, I know nothing.
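    The point about moments can be made concrete with a quick check (the price data below are invented): a long right tail shows up as positive skew, which the mean and variance alone do not reveal.

```python
import statistics

# Hypothetical right-skewed price sample: a few expensive outliers.
prices = [10, 11, 12, 12, 13, 13, 14, 15, 40, 95]

mean = statistics.mean(prices)
var = statistics.pvariance(prices)

# Skewness as the third standardized moment:
# positive means a long right tail.
n = len(prices)
m3 = sum((x - mean) ** 3 for x in prices) / n
skew = m3 / var ** 1.5

print(f"mean={mean:.1f}  variance={var:.1f}  skew={skew:.2f}")
```

    Here the mean sits well above the typical price, and the strongly positive skew flags the asymmetry that a bell-curve assumption would miss.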
