Why You Should Leave Questionnaire Design to Professionals [Part 2]


In Part 1, we took on questionnaire design with an emphasis on improving the respondent experience. By adopting a disciplined approach to the content of the questionnaire and focusing solely on the business challenge at hand, we can ensure that the questionnaire is the most efficient tool for collecting the necessary data. This week, we consider questionnaire structure and the array of question types, scales, and survey tools available.

Variety is the Spice of Questionnaires

One of the key decisions in writing your questionnaire is the type of question you use. There are many types of questions available for your survey. The question type must first and foremost correctly gather the information that is needed, but mixing up question formats is also beneficial. Here are several types of questions to keep in mind:

  • Dichotomous: as the name suggests, this question has two responses. Frequently, these are “Yes-No”, or “True-False” questions.
  • Multiple Choice: these questions offer several options to the respondent. In multiple choice questions, the response categories must be mutually exclusive and collectively exhaustive (abbreviated MECE and pronounced “mee-cee”). If the responses are MECE, the respondent should be able to easily select the one answer that best answers the question for them. If the researcher is not certain the response categories are MECE, or cannot practically list a set that is, it is important to include a response for “other” (with a request to “specify” if that information is needed). And, to make these questions truly MECE, one should also include a response for “none” or “don’t know”. In asking multiple choice questions, you also need to determine whether the respondent should be allowed to provide one answer or more than one. For example, if you asked, “What is the most important reason that you shop regularly at Costco?”, you would obviously require only one answer. But for the question, “Which of the following reasons describe why you shop regularly at Costco?”, you would allow the respondent to provide multiple answers.
  • Rank Order: Questions that ask the respondent to put a list into some rank order are very popular, especially with the “drag and drop” capability of online surveys. It is important to keep the list of items short (5 or 6 items) so that there is reliability in the order produced. Research has shown that when asked to rank many items, respondents do a good job only on the top and bottom of the scale (those items about which they have the strongest opinions). If you have a long list of items to rank, one option is to ask respondents to rank their top and bottom choice only or to ask them to rank order just the top five items.
  • Constant Sum: In constant sum questions, respondents are asked to allocate a set number of units across several items. For example, respondents might be asked to allocate 100 percentage points across a list of items. This technique is also often used to allocate dollars as a surrogate for value: you “give” the respondent 100 virtual dollars and ask them to allocate those dollars to alternative product concepts according to how important or valuable each product is. (A small sketch of how such answers might be represented and checked appears after this list.)
  • Scaled: These questions are frequently used in survey research to measure the strength of a respondent’s opinions, attitudes, experiences, behaviors, or situations. There are several different types of general rating scales that you can use. Here are some general guidelines for selecting a scale:
    • Keep it simple: A five-point response scale typically provides a respondent with a sufficient range of evaluation to give reliable answers. A 10-point or 100-point scale may be too complex, and a three-point scale may be too limiting.
    • Keep it balanced: It is important to make sure that your scale does not present a bias.  For example, the scale, “Excellent, Very Good, Good, Fair, and Poor” is out of balance and presents a positive bias.  A better, more balanced scale would be, “Very Good, Good, Fair, Poor, and Unacceptable”.
    • Provide a midpoint: Some respondents may not have a strong opinion one way or another on every question.  Therefore, it is important to provide a midpoint option in your scale, such as “Very Satisfied, Satisfied, Neither Satisfied Nor Dissatisfied, Dissatisfied, and Very Dissatisfied”.
    • Provide Anchors: Adding a clear verbal description, or anchor, to each scale point gives clarity and meaning to the scale so there is no ambiguity in each respondent’s mind about what each answer choice means. So, it is better to use a scale such as “5 = Very Satisfied, 4 = Satisfied, 3 = Neither Satisfied Nor Dissatisfied, 2 = Dissatisfied, and 1 = Very Dissatisfied” than a scale such as “1 to 5, where 1 = Very Dissatisfied and 5 = Very Satisfied”.
  • Categorical Questions: Demographic, or in the case of business respondents, firmographic questions are used to describe and classify respondents. For consumers, these questions include age, gender, education, income, ethnicity and household size. For businesses, revenue, number of employees, industry, and years in business are very common questions.
  • Open-Ended: These questions give respondents a free-form way to respond; the respondent answers the question in their own words. To get the most information out of open-ended questions, keep two things in mind. First, limit the number of open-ended questions that you ask. Including too many will wear out your respondents, and they will either give short, incomplete answers or skip the questions entirely. Second, make sure the question is important enough to respondents for them to invest the thought and time in answering it.
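
The list above stays at the conceptual level, but most of these formats reduce to simple structures once they reach a survey tool or analysis script. Here is a minimal Python sketch, purely illustrative and not tied to any real survey platform, of how an anchored five-point scale, a multiple-choice list padded toward MECE with “other” and “none” options, and a constant-sum check (points must add to 100) might be represented; all names and structures are assumptions for the sake of example.

```python
# Illustrative sketch only: hypothetical structures, not a real survey platform's API.

# A balanced five-point scale with a midpoint and a verbal anchor on every point.
ANCHORED_SCALE = {
    5: "Very Satisfied",
    4: "Satisfied",
    3: "Neither Satisfied Nor Dissatisfied",
    2: "Dissatisfied",
    1: "Very Dissatisfied",
}

# A multiple-choice question padded toward MECE with "Other" and "None" options.
MULTIPLE_CHOICE = {
    "question": "Which of the following reasons describe why you shop regularly at Costco?",
    "allow_multiple": True,
    "options": [
        "Price",
        "Product selection",
        "Location",
        "Other (please specify)",
        "None of these",
    ],
}

def validate_constant_sum(allocations, total=100):
    """Check that a constant-sum answer allocates exactly `total` points/dollars."""
    return sum(allocations.values()) == total

if __name__ == "__main__":
    answer = {"Concept A": 50, "Concept B": 30, "Concept C": 20}
    print(validate_constant_sum(answer))    # True: the 100 virtual dollars are fully allocated
    print(ANCHORED_SCALE[3])                # the labeled midpoint
    print(MULTIPLE_CHOICE["options"][-1])   # the "none" catch-all that closes out the list
```

A sketch like this is mainly useful as a checklist: if a scale point has no label, a choice list has no catch-all, or a constant-sum answer doesn’t total 100, the question (or the data it produces) needs another look.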

Caution! Danger Zone: Error and Bias

Think of an interview as a conversation, and your questionnaire as your side of that conversation. Except you’re having the same conversation over and over with hundreds, or even thousands, of individuals, and you are not personally present in any of them. Each respondent is unique and brings their specific understanding to the conversation. In order to get the best possible data, free from error and bias, every single question has to be written so that all of these diverse respondents understand it in exactly the same way, because you can’t be there to clarify.

Here are some guidelines that Marketing Researchers should follow when developing questions:

  • Ask about information respondents can remember. If the event you are asking about is not very memorable or happened too far in the past, you won’t get very good data. For example, a respondent will probably easily recall specifics about the purchase of a car last month, but probably won’t be able to answer questions about how many soft drinks they consumed last year (or even last month).
  • Word the question so that respondents feel comfortable answering honestly. Sometimes, respondents want to give the socially acceptable answer, or the one they believe the researcher is seeking. Again, this will skew the information and lead to bad decision-making. For example, asking people how much money they donate to charities will probably result in positively biased answers.
  • Ask the respondents for information they know. Avoid second- and third-hand information, as it can be wildly inaccurate. Don’t ask a doctor what their depression patients want; go to the patients for that information. Also, don’t ask respondents questions that are difficult or impossible to answer. For example, respondents may not be able to tell you about their future behavior, such as the brand of refrigerator they will buy next (unless they are currently in the process of buying one).
  • Be clear and specific. Here’s an example: “Generally, how often do you exercise in a week?” vs. “In the last seven days, how often have you exercised vigorously enough to raise your heart rate?” You will probably get very different answers, so you must avoid imprecise modifiers.
  • No double-barreled questions: Don’t ask questions that refer to two independent attributes. For example, “How satisfied were you with the courtesy and knowledge of your CSR?” may be hard to answer and hard to interpret. The CSR may have been perceived as courteous but not knowledgeable, or vice versa. So, split this into two separate questions.
  • Avoid complexity. Respondents will not answer a long or complex survey. So keep the questions – and the overall questionnaire – short and simple. Also, keep wording as simple as possible.  Some words are needlessly long and difficult to understand, and substitutes can be found (a thesaurus is a handy tool).

There are many sources of bias and error that can be inadvertently introduced into the study through question-wording. Indeed, it can be easy to influence respondents’ answers through the questionnaire, so care must be taken to word questions neutrally and accurately. Consider this experiment: Pew Research asked a matched set of respondents two questions:

  1. Do you favor or oppose taking military action in Iraq to end Saddam Hussein’s rule?
  2. Do you favor or oppose taking military action in Iraq to end Saddam Hussein’s rule, even if it meant that U.S. forces might suffer thousands of casualties?

Not surprisingly, respondents were much more likely to say they favored military action when they weren’t reminded about U.S. casualties! Other sources of bias in questionnaire design include the order of the questions in the survey, the order of responses to a particular question, and the type of questions used.

Writing great questions and creating a well-developed questionnaire isn’t as easy as one would think. As much art as science, good questionnaire design balances the needs of the research project with the capabilities of the respondent and the survey methodology. All of the pieces must fit together, or the data will be much less reliable – and you don’t want to hang your hat or your reputation on sketchy results. Questionnaires should probably come with this warning: “Consult with a Trained Professional! Do Not Attempt Alone!”


Christian Wright

Christian Wright is the VP of Client Services at Infosurv. With a master’s in marketing research, he’s equipped to design actionable research that yields impactful insights and drives change.