What You Don’t Know Can Hurt You [Part 3]


Published October 2005 in Quirk’s Marketing Research Review, pg. 48-53.

We’ve all had this experience: You walk into the store to buy something, say, a small appliance for your kitchen. You walk up and down the aisles looking for this item, but have a hard time locating it due to poor signage. You ultimately find the product you are looking for, but aren’t sure which is the best brand for your needs. Although you would have appreciated the help of a salesperson, no one is around, so you just choose the lowest-priced offering.

Your wait in the checkout line takes just a couple minutes longer than you think it should. Once it’s your turn at checkout, the cashier is neither rude nor polite. You pay for your new product and carry it out the door.

What just happened? From the perspective of the store, another satisfied customer just made a routine purchase. Everything went smoothly and this customer will surely come back in the future…why would they think otherwise?

From your perspective, the experience was very mediocre. It wasn’t bad enough to complain to management, but just disappointing enough to make you hesitant to come back. Maybe next time you’ll try the competing store down the road.

This unfortunate scenario is repeated every day in every industry – not just retail. In fact, studies show that only 5 percent of dissatisfied customers file a complaint. What happens to the other 95 percent?

The “silent majority” is as powerful a force in the business world as it is in politics. But despite their enormous power, most companies allow this 95 percent to remain unheard. Many operate under the assumption that they know their customers, and that when a customer is dissatisfied they will swiftly hear about it and be able to correct it.

That assumption is just plain wrong.

Dissatisfied customers are a slow, silent killer. They defect one by one, often without making a sound. What’s worse, they often bring other customers or potential customers down with them – slowly, silently and viciously.

Luckily, there is a way to give voice to this otherwise silent majority and fix many problems before they cause customers to defect: customer satisfaction surveys.

Our firm has distilled the survey process into five easy steps that any company can take to design and execute an effective customer satisfaction survey program. The process is quick, inexpensive, and guaranteed to yield invaluable results.

Step one: the buy-in phase

An otherwise well-planned and -executed customer survey program can be rendered useless without the proper level of organizational buy-in. Everyone in your organization must be made aware of the upcoming customer survey program, educated as to its benefits, and committed to acting upon the results.

Experience shows that a memo released from the company’s CEO touting the importance of the customer satisfaction survey program is a fast and effective way to get employees on board. Not only does this demonstrate upper management’s commitment to the survey program, but it also alerts salesmen and account representatives that the silent majority is about to be heard.

As you may have guessed, the boost to customer service begins almost immediately.

Step two: survey design

A well-designed customer satisfaction instrument is essential to the collection of valid, reliable, actionable data. Typical customer satisfaction surveys are designed to measure overall satisfaction, product-specific satisfaction, timeliness of delivery, customer service process satisfaction, returns and exchanges process satisfaction, and interest in new products and services. Naturally, these topic areas will vary by industry. It’s important to hit upon all aspects of the customer experience which might affect a customer’s likelihood to remain loyal.

The best-designed surveys measure the importance of each item along with satisfaction. By isolating the most important issues, companies can prioritize the areas for improvement.

A survey instrument is a measuring tool – and as with any measuring tool, the measurements it takes are only as accurate as the tool itself. Since measuring customer sentiments can be tricky, we consider the questionnaire design stage the most sensitive in the process.

Most customer satisfaction surveys are composed of a series of positive statements in which customers rate their level of agreement on a five-point Likert scale (i.e., agree strongly, agree somewhat, neither agree nor disagree, disagree somewhat, disagree strongly). Typical statements might include, “The time I spent on hold waiting for a sales rep was acceptable,” or “My product was delivered in a timely manner.” Notice how these statements measure subjective, rather than objective, matters. It is not the place of a customer satisfaction survey to ask about actual hold times or delivery times – you can glean this data from your phone and shipping logs. A customer satisfaction survey should measure perceived hold times and delivery times. As the saying goes, perception is reality.
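To make the analysis concrete, here is a minimal sketch of how the five-point Likert labels above are commonly coded as numbers. The 1-to-5 mapping is a widespread convention rather than anything prescribed in this article, and the sample responses are invented:

```python
# Numeric coding for the five-point Likert scale described above.
# The 1-5 mapping is a common convention, not mandated by the survey itself.
LIKERT_SCALE = {
    "disagree strongly": 1,
    "disagree somewhat": 2,
    "neither agree nor disagree": 3,
    "agree somewhat": 4,
    "agree strongly": 5,
}

# Invented responses to "My product was delivered in a timely manner."
responses = ["agree strongly", "agree somewhat", "neither agree nor disagree"]
codes = [LIKERT_SCALE[r] for r in responses]
print(codes)  # [5, 4, 3]
```

Once responses are coded numerically like this, the statistical techniques discussed later (means, standard deviations, correlations) can be applied directly.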

Common pitfalls to avoid in the questionnaire design stage are vague or double-barreled questions/statements. A vague question is one that may be interpreted differently by different respondents, and therefore yields less reliable data. An example is, “Company X sells high-quality products.” Let’s assume that Company X sells disposable pens. Does quality refer to how sturdy the pen feels in your hand? Or how long the pen will last? Or does it mean that the pen supplies a smooth flow of ink?

Since every customer’s perception of quality is different, Company X is better served by asking specific questions that are less open to individual interpretation. Better-designed survey statements may include, “Company X pens last longer than similar brands,” or, “Company X pens provide a smooth flow of ink.”

A double-barreled statement is one that asks respondents to react to more than one thing at the same time. A classic example is, “Invoices are clear and free of errors.” How does one respond to this statement if invoices are clear, but tend to have lots of errors? What if they never have errors, but are nearly impossible to decipher?

The statement above should be separated and the vague term “clear” should be replaced by something more specific. Better statements are, “Invoices are easy to interpret,” and, “Invoices are free of errors.”

Step three: survey administration

The main goals of the survey administration stage are to achieve the maximum response rate possible while avoiding the introduction of respondent bias. Response rate is calculated by dividing the number of completed surveys by the total number of potential survey respondents. If a company invites 100 customers to take a survey and 25 of them complete it, the company has achieved a 25 percent survey response rate.
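The response-rate arithmetic above can be sketched as a small helper (the function name is my own, for illustration):

```python
def response_rate(completed: int, invited: int) -> float:
    """Percentage of invited customers who completed the survey."""
    if invited <= 0:
        raise ValueError("invited must be positive")
    return 100.0 * completed / invited

# The example from the text: 25 completions out of 100 invitations
print(response_rate(25, 100))  # 25.0
```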

Though customer survey response rates will vary depending on the nature of your client interactions (transactional vs. relationship) and the degree to which clients feel they have a stake in your organization, experience shows that any survey response rate can be boosted through the use of respondent incentives, regular survey reminders, and a few other techniques.

Offering survey respondents something for their valuable time is important, even if it’s of relatively little monetary value. Our customer survey clients are often surprised by how something as simple as entry into a $50 cash prize drawing can dramatically increase survey response rates. Other effective respondent incentives include a free sample of your own product or service, or a small donation to the customer’s charity of choice.

Regular survey follow-ups are also important, as they remind the customer how much you value and anticipate their feedback. They also carry your survey back to the top of the customer’s ever-growing to-do list. These follow-ups can come in the form of an e-mail reminder for online surveys or a postcard reminder for paper-based surveys.

Respondent bias is introduced when those customers who complete the survey are not representative of the customer population in general. A classic example of respondent bias is the story of a market research firm in the late 1930s that was charged with measuring consumer interest in a new high-end automobile. The firm administered a well-designed questionnaire via telephone to a large sample of U.S. consumers. The survey results analysis concluded that consumers had a strong interest in the new automobile, and would likely buy it in droves.

Based on the strong survey results, the automobile manufacturer produced and launched its new product, which proceeded to flop miserably in the market. What happened? The research firm failed to recognize that U.S. consumers with a telephone were not representative of the population overall. In the 1930s, unlike today, only the most affluent households could afford a phone. Despite everything else going smoothly, this survey failed because respondent bias was introduced.

The modern equivalent to this dilemma is when companies try to administer a customer survey online to save cost, even when only 10 percent of their customers have Internet access. My firm specializes in online surveys, and we still dissuade clients from doing a survey online if the majority of their customers don’t have Internet access and a base level of computer proficiency.

Step four: survey analysis

The survey results are in. Now what? Blending the feedback of hundreds or thousands of customers into a single coherent voice can be tricky, but is of vital importance if meaningful conclusions are to be reached. The most common method of identifying trends in customer survey responses is through a range of mathematical techniques called statistical analysis. We are all familiar with the statistical concept of a mean or average – a basic statistical value calculated by adding a series of numbers and dividing by their quantity.

Mean analysis is useful, but represents only one tool in a good tackle box of statistical analysis methods. Other handy statistical techniques include standard deviation analysis, frequency distribution analysis and regression analysis. Although the mathematics behind these techniques goes beyond the scope of this article, I will briefly explain what each is good for in the realm of customer satisfaction survey analysis.

Standard deviations are useful for calculating the level of agreement that respondents exhibited on each survey question. The lower the standard deviation, the closer responses tended to hover around the mean. For example, if Question X had a mean response of 3 with a standard deviation of .5, and Question Y also had a mean response of 3 but with a standard deviation of 1, we know that customers felt a similar level of satisfaction with both issues but there was more agreement when it came to Question X.
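The Question X / Question Y comparison can be reproduced with Python’s standard library. The two response lists below are invented for illustration; they share the same mean but differ in spread:

```python
from statistics import mean, stdev

# Invented 1-5 Likert responses for two questions with the same mean
question_x = [3, 3, 2, 3, 4, 3, 3]  # tightly clustered around 3
question_y = [1, 5, 3, 2, 4, 3, 3]  # more spread out

print(mean(question_x), mean(question_y))     # same average satisfaction (3)
print(stdev(question_x) < stdev(question_y))  # True: more agreement on X
```

Even though both questions score a 3 on average, the lower standard deviation for Question X tells you customers are more unanimous about it.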

Frequency distribution analysis simply counts the number of times respondents answered each question each possible way. This technique is useful not only for delving deeper into Likert scale responses but also for analyzing verbatim questions. One little-known use of the frequency distribution technique is to code verbatim comments into categories and then count the number of times each type of comment occurred. This presents a handy way to convert cumbersome qualitative data into succinct statistical conclusions.
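The verbatim-coding technique described above can be sketched with a standard-library counter. The comment categories and counts here are invented:

```python
from collections import Counter

# Invented verbatim comments, already coded into categories by an analyst
coded_comments = [
    "shipping delay", "rude staff", "shipping delay",
    "great product", "shipping delay", "rude staff",
]

# Frequency distribution: how many times each category occurred
frequency = Counter(coded_comments)
print(frequency.most_common())
# [('shipping delay', 3), ('rude staff', 2), ('great product', 1)]
```

The ranked counts turn a pile of open-ended comments into a succinct statement such as “shipping delays were the most common complaint.”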

Regression analysis measures how strongly certain survey responses tend to correlate. Looking at such correlations can tell us things about respondents’ sentiments that they may not have even been consciously aware of. For example, if we notice that respondents who are very satisfied overall tend to also indicate an excellent relationship with their sales consultant (and vice versa), we might infer that positive salesperson relationships lead to satisfied customers. Of course you must bear in mind the old statistical adage, “Correlation does not imply causation,” and test such correlations against other survey data before drawing conclusions.

Step five: implementing the results

Far too many well-designed, -administered and -analyzed customer surveys fail to yield positive change due to a simple lack of results implementation. We recommend that if an organization is not ready to make changes for the better, it should save time and money by avoiding the customer survey process altogether. The results implementation stage is where the proverbial rubber meets the road. Every report summarizing survey findings must end with an action plan, including a list of actionable recommendations along with the names of those responsible for each change and a deadline for their completion. Intended changes must be realistic, achievable in the short-to-medium term, and most importantly, backed by data.

Many organizations find that survey results confirmed their existing suspicions. Even if this is the case with 100 percent of the results, however, the survey project was not done in vain. Quantitative results in the form of graphs and numbers are much more compelling and actionable than hunches or anecdotal evidence.

It is important to remember that a customer satisfaction survey is not an isolated event, but rather the beginning of a continual improvement cycle. Even while the results of the first customer survey are being implemented, plans must be made for the next survey administration. I recommend that the survey be re-administered at least every six months. Luckily, the first survey administration is always the hardest, as subsequent administrations are simply a matter of repeating an already-established process.

In addition to testing your organization’s success in acting on the survey results, subsequent survey administrations also take the pulse of customer satisfaction over time. By plotting how satisfaction measurements change over months and years, organizations can identify trends and plan accordingly. If customer satisfaction levels are decreasing, you know that action must be taken quickly to avoid a future decrease in sales. On the other hand, customer satisfaction measurements increasing over time imply an impending rise in sales – especially once you consider the many positive referrals that satisfied customers tend to provide.

Companies that have committed to an ongoing customer satisfaction survey process will often tell you that it’s one of the best investments they have ever made. Many can’t imagine how they would get by without the instantaneous and actionable feedback that their surveys provide. Those companies that fail to give a voice to their own silent majority do so at their peril.


Christian Wright

Christian Wright is the VP of Client Services at Infosurv. With a master’s in marketing research, he’s equipped to design actionable research that yields impactful insights and drives change.