Improve Marketing Research by … Being Human?


Big Data. Predictive modeling. Social media analytics. If you read the marketing research news, you might think that we don’t do surveys anymore. However, while marketing researchers have many qualitative and quantitative methodologies available to them, surveys remain the primary source of marketing information. Surveys let us collect opinions and behavioral statistics about defined topics using standard questions and answers, thereby avoiding the biases inherent in some other methodologies.

Unfortunately, surveys have been suffering from low response rates and a general lack of respondent engagement. Many solutions have been offered, including gamification, creative respondent incentive programs, and even embedding “fun” tasks in the survey design to keep respondents engaged and deliver higher-quality data.

Perhaps the solution is simpler than we think. Marketing researchers, in their attempt to be as accurate and precise as possible, tend to write questions that are so specific and unambiguous that they may be difficult for the respondent to read and understand. Consider this recent example:

In the past two months, between October 7, 2018, and December 7, 2018, do you recall seeing or hearing any advertising for any of these brands? This includes television or radio, roadside billboards, receiving something in the mail, or seeing an ad on the internet.

That is not an easy question to answer! By asking respondents to think back to something as unimportant (to them) as billboard advertising, could we be setting up barriers to completing the survey? And if respondents don’t break off at that point, do we really think they are sticking to the specific timeframe we asked about? And finally, must our questions be so boring?

Do we really believe we get better data with that question rather than something simpler and more conversational, such as:

In the past two months, have you seen any advertising for these brands?

We know that this type of question will deliver different data than the example above. But is it so different that it would lead us to different action?

A Humanizing Experiment

Annie Petit, writing in Quirks, describes research-on-research undertaken to explore this issue. Two surveys were fielded with respondents from an online panel, with the control and test samples matched demographically: one written in the traditional marketing research style and one written in a more contemporary, conversational style. The results were then compared on five data-quality metrics, including straight-lining, open-end quality, speeding, and acquiescing (choosing too many of the multiple-choice options). The humanized survey delivered better data quality than the more traditional survey.

Next, the research looked at respondent engagement. In this test, more respondents broke off and did not complete the humanized version, but we cannot know (without further research) whether this was by chance or a result of the survey design. Another measure of engagement is the quality of responses to open-end questions, and by this measure the humanized survey produced slightly longer open-end responses than the traditional survey. Finally, a question at the end asked respondents to rate the survey itself, and the humanized version scored much better on fun, perceived complexity, and willingness to recommend participation, as well as in the open-end question about the completion experience.

If this research were replicated on another day, or with a different online panel, the results could very well be different. What matters here, however, is whether the actions taken as a result of the survey would differ. The questionnaires therefore also included metrics to evaluate purchase intention, value perceptions, advertising approval, and brand preference. In all four cases, the humanized instrument and the traditional instrument would not have led to different marketing decisions, even though the raw scores might differ.

Summary

This experiment demonstrates that writing in a more human, contemporary, and conversational style does not impact marketing outcomes, and may contribute to better data quality. And yes, there should probably be more research on this topic. Considering the decline in response rates, the difficulty of recruiting respondents to panels, and the variety of surveys being delivered to respondents through social media, it may be time to rethink our traditional approach to survey design. Media – both online and offline – have been talking to consumers as if they were human for years. Why can’t surveys do the same? We may benefit, and the best part is, there is no additional cost!


Kyle Burnam

Kyle Burnam is the CEO of Infosurv and the leader of its sister company, Intengo, where he oversees all client research and R&D projects. Having been in the industry since 2005, Kyle brings a wealth of experience to the table and an innovative eye to every project.