If Polls Can’t Predict a Presidential Election, Why Should I Trust Marketing Research?


Without a doubt, polls did a spectacularly poor job of predicting Trump’s success in the Electoral College. And the unreliability of polls is not new. As 538’s Nate Silver said, “As Americans’ modes of communication change, the techniques that produce the most accurate polls seems to be changing as well. In last Tuesday’s presidential election, a number of polling firms that conduct their surveys online had strong results. Some telephone polls also performed well. But others, especially those that called only landlines or took other methodological shortcuts, performed poorly…” What’s noteworthy about that quote is that he said it in 2012, not about this year’s election polls.

And in 2008, Pew Research reported that “The consequences of a poor performance were dramatically demonstrated in the reaction to the primary polls’ inaccurate prediction that Barack Obama would win in New Hampshire, portrayed as one of polling’s great failures in the modern political era.” However, in the same article, Pew indicates that 8 out of 17 national polls accurately predicted the outcome of the presidential election, comparable to previous presidential elections. (Forgive us if we are not impressed with 8 out of 17 being “accurate”!)

It is no surprise that the public is questioning the value of polling. But the problem worsens when the public compares polls to all marketing research. Also from Pew, “The performance of election polls is no mere trophy for the polling community, for the credibility of the entire survey research profession depends to a great degree on how election polls match the objective standard of election outcomes.”

Yes, marketing research and polls use many of the same techniques. But the application is vastly different. Here are the main differences between polls and marketing research:

  • Behavior vs. attitudes. It has been well documented in marketing that human beings are very bad at predicting how they will behave in the future. So most marketing research projects (with the exception of ad claim substantiation research) include many questions about perceptions, opinions, beliefs, and attitudes, as well as questions about current and past behaviors. Polls are focused on one question: for whom will you vote? This is a significant weakness, and it grows the farther the poll is from Election Day.
  • Sample source. A representative sample is one where every member of the target population has an equal chance of being included in the survey. When we had 95% household penetration of landline telephones, or when we collected data door-to-door, that worked fairly well. But in today’s fragmented sampling landscape, we cannot be certain a sample is representative. Landline penetration is only about 40% of households. Cell penetration is about 65% of individuals. Add in social media polling, the proliferation of panels, etc., and it is simply not clear how well poll samples reflect the population of “likely voters.”
  • Data weighting. We (pollsters and marketing researchers) often attempt to rectify a non-representative sample by weighting the data to reflect actual population proportions. However, without a large enough sample size to begin with, you end up weighting too few individuals. When we assume one or two individuals’ responses reflect a larger segment of society, we run the risk of creating bias and error in our findings.
  • Methodology effects. Related to sample source, the methodologies used by different polls can themselves yield different results. People who will speak to a telephone interviewer may differ (or be more reluctant to express their true views) from those who respond to an IVR call or an online survey invitation. And both may differ from those who visit a website and decide to complete a passive poll.
  • Over-reliance on a single tool. Most pollsters choose one sampling method and one methodology and repeat it over time between primaries and elections, in order to preserve their ability to report trends and changes in those trends. While marketing researchers also use tracking studies, we recommend additional research to clarify and expand on what we see in the trends.
  • Over-reliance on quantitative survey data. Business decision makers rarely rely on one type of information for decision-making. We look at internal data, secondary industry analysis, and behavioral data, as well as qualitative and observational data. This is perhaps the biggest weakness of poll data: the inability to take in other information sources to validate or temper poll findings. While campaigns typically do qualitative research to identify issues, those findings are rarely shared with the public. But if pollsters had considered qualitative measures (e.g., the size and passion of crowds at campaign events, social media activity, etc.), they might have had less confidence in their numbers.

At the end of the day, data – whether from marketing research or polls – is simply information. And that information should be combined with other information sources (observation, qualitative research, text analytics, etc.) to form insights. Surveys are not decisions, but they yield and nourish insights, which are invaluable inputs to the decision-making process.


Lenni Moore

Lenni Moore is the Director of Operations at Infosurv. She’s always been passionate about fostering strong professional relationships. It’s precisely these relationships that allow her to exceed her clients’ expectations because she knows exactly what they want and then leverages her experience to get it for them.