
On a scale from 1 to 5

Three types of people who will skew your survey results, and what to do about it

Asking for feedback on a scale from one to five is a common tool among researchers of all types. Requiring respondents to indicate their opinion from a 1 (“very dissatisfied”) to a 5 (“very satisfied”) allows scientists, political pollsters, and market researchers to quantify people’s subjective preferences and beliefs. The problem is that the scale itself isn’t as objective as we would like to think.

As a market researcher who focuses on identifying the Voice of the Customer for organizations of all sizes, I’ve consistently seen these three types of people. All are well-meaning and have no intention of tampering with survey results. They simply have their own beliefs that can’t be expressed on a numerical scale. Understanding that these folks exist is key to understanding how to get at the truth the numbers don’t reflect. (Hint: qualitative data! But I’ll come back to that later…)


 

Type One: The Perfectionist Rater

Did you ever have a professor who refused to give perfect scores on papers, or a manager who refused to dole out the highest performance review ratings, simply because they didn’t believe in perfection? This persona exists in the world of opinion research as well. Some people will simply never choose the “highly satisfied” or “5” rating, because to do so would suggest they see no room for improvement. Having someone like this (or, more likely, many of them) in your respondent pool can skew your results lower for no explicable reason. People, products, or services that would earn perfect scores from many respondents will receive lower scores from those who simply don’t give top ratings, ever.


Type Two: The Precision Rater

This person takes his or her role as a survey participant very seriously and doesn’t want to exaggerate or lie. Understandably, a five-point scale feels limiting to someone whose opinion falls somewhere between two of the listed numbers. This can go both ways: someone who is highly satisfied but still had a couple of small complaints might say, “I would have given a 4.5, but that wasn’t an option, so I had to choose 4.” Meanwhile, someone who was highly dissatisfied may hesitate to score a 1 if they got what they needed in the end. Without the option of a 1.5, they may choose a 2 and mask a truly terrible experience with a “somewhat dissatisfied.”


Type Three: The Safe, Middle-Ground Rater

If you’ve ever studied for a large, multiple-choice exam, you may have heard the advice to always choose “C” when you don’t know the answer. The idea is that if you pick the same letter every time, you’re more likely to get a few questions right than if you guess a different random answer each time. While this may not be a true advantage in terms of statistical probability, there’s something about the third option (and sets of three in general) that is very appealing to our brains. When it comes to Likert scales, the middle answer, 3, is equally safe for those who just don’t know what to say.


A “true” three expresses that someone is neither satisfied nor dissatisfied: simply neutral. But, thanks to our innate mental propensity towards the third option, this answer can also serve as a kind of default. Three is a go-to for someone who feels they don’t have enough information to give an opinion but has no option to skip the question. For example, if a hair salon customer is being asked to rate the quality of their color job but they only got a simple haircut, they’ll likely choose a 3 out of 5 even though they really have no information or opinion on the matter. The inclusion of these default raters can drag down high scores or boost low scores from those with real knowledge, thus diluting the accuracy of the entire data set.
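To put some made-up numbers on it: if twenty clients who actually had their hair colored all rate the color service a 5, and ten haircut-only clients default to a 3, the average drops from 5.0 to roughly 4.3 ((20 × 5 + 10 × 3) ÷ 30), even though every person with real experience of the service was delighted.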


 


As these examples demonstrate, a scale from one to five, common as it is, won’t be a completely accurate reflection of respondents’ opinions. Don’t get me wrong: Likert scales, in either five-point or seven-point form, do have advantages when it comes to research and should not be thrown out entirely. My point is that they don’t tell the whole story and, as such, should be taken as a starting point, not the final word.


How to solve the problem of skewed survey data:


It probably comes as no surprise that I am a huge advocate for qualitative data. Surveys with numeric ratings are definitely a great starting point: they’re quick and easy, which makes it possible to gather large numbers of responses and spot general trends. But if you really want to learn the opinions behind the numbers, you have to ask real people real questions.


An in-person (or phone, or video chat) interview is the very best way to understand what the numbers in your survey results mean. Speaking to someone and being able to ask follow-up questions can reveal so much, from “Actually, that rating was because I was thinking about my experience with a partner company, not you,” to “I rated a 5 because it all turned out okay, but there were actually these issues I would like to address,” and everything in between. Using a professional researcher to learn why your respondents rated things the way they did is the best way to ensure you don’t draw false conclusions from your quantitative results.


But what if you just can’t do that? Cost, logistics, and time are just a few of the barriers that might prevent you from digging deeper into your data with follow-up interviews. In that case, there are still a few things you can do to help your participants avoid using numbers in ways that misrepresent their opinions.


  • Use a 10-point scale: Instead of a 3-point, 5-point, or even 7-point scale, a 10-point scale allows for more variation and finer degrees of accuracy. The Perfectionist Rater may still give a 9 out of 10 when they are highly satisfied, but that is closer to the truth than a 4 out of 5. The Precision Rater can give a 3, 6, or 7 out of 10 to accurately reflect their opinion without having to round up or down.


  • Provide an “N/A” option: To avoid a cluster of mediocre ratings from people without enough information to form an opinion, provide the option to choose “Not Applicable.” Your overall results will be more accurate when they don’t include people who select the middle option only because they have no real choice.


  • Design your survey with built-in follow-up questions: It won’t replace a live human who can dig deeper into each response, but providing an open text field with a prompt to elaborate on why a rating was given will give you insight you wouldn’t otherwise have (see the sample question after this list).


  • Ask if it’s OK to contact your participants for more information: This may not be an option if your survey requires complete anonymity, but giving participants the option to provide a phone number or email if they are interested in telling you more is one way to learn details the numbers simply can’t show. I can’t stress enough the importance of actually following up by phone or email if someone gives you the chance. In my experience, if someone volunteers to give more information, it’s because they have something they’d like to say, so don’t ignore their offer!
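To pull a few of these suggestions together, here’s a purely hypothetical example of a single survey item (not a template from any particular survey tool): “On a scale of 1 to 10, how satisfied were you with your color service? Select N/A if you did not have a color service. Optional: tell us more about why you chose that rating, and leave your email if you’re open to a quick follow-up conversation.”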


These are just a few of the reasons a simple 5-point customer satisfaction survey doesn’t tell the whole story. Our goal is to help you obtain the best-quality input you can get from your pool of customers, prospects, users, buyers, employees, or any other persona you are seeking feedback from.


If you’re interested in building a research plan to meet your needs and help you solve business problems by listening to the voice of your customers, get in touch with us now!


When you need insights powered by people, we deliver custom solutions.
