Reducing Bias

How To Reduce Bias in Your Survey Questions



When we talk about data quality in online market research, fraud — including duplicate responses, gibberish answers, or gaming the reward system — often goes hand in hand with bias. But while preventing fraud is a focal point for researchers, bias is an equally important part of the equation, and it often gets overlooked.

Outside of fraud, the vast majority of respondents mean well when they answer questions and try to offer their honest opinions. When someone selects an option that is blatantly wrong, it’s likely not intentional. After all, there are over 100 documented kinds of bias that can affect human decision-making.

By making a few adjustments to the way you ask questions, you can greatly reduce respondent bias in your data sets and improve the quality of your results. Combined with Suzy’s best-in-class audience quality measures, these adjustments go a long way toward keeping bias out of your data.

Select All That Apply

We typically use “select all that apply” questions to segment users and determine their product awareness, usage, purchasing behaviors, and engagement. In practice, researchers often find that users select more answers than they should, or even every answer.

“Select all that apply” questions can also trigger the following biases: 

  • Misinformation effect: A person’s memory of an event becomes less accurate over time as it’s reshaped by information encountered afterward.

  • Serial position effect: People are more likely to recall items at the end of a sequence, followed by items at the beginning. Items in the middle are the hardest to recall. (This is why you need to randomize your options!)

  • Satisficing: Sometimes when there are no exact options to describe a person’s experience, they’ll select whatever’s available or satisfactory to them. 

To minimize these effects, consider the following guidelines:

  1. Ask users to select a specific number of options. Don’t ask them to “select all that apply.”

  2. Be precise in your answer options. Limit the list to a reasonable number of options (between 5 and 10), and keep it consistent: either all brands or all product categories, not a mix of both.

  3. Limit the time frame. It can be hard to ask someone to think about all the brands of paper towels they’ve ever seen. Ask them to think about the last six months or less.

  4. Always include “None of the above.” Respondents are often forced to lie if they don’t see any answer options that speak to them. Give them a way out.

  5. Randomize answer options. This reduces the bias effect of someone selecting the first option given to them.

  6. Introduce a “ghost” or “red herring” option. This should be an obviously non-existent product that can’t be confused with any existing products or brands.
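As a rough illustration of guidelines 5 and 6, here is how randomization with a pinned “None of the above” option might look in survey-building logic. This is a minimal Python sketch; the function name and option list are hypothetical, not part of any survey platform’s API:

```python
import random

def prepare_options(options, anchored=("None of the above",)):
    """Shuffle answer options (guideline 5) while keeping anchored options
    such as "None of the above" pinned to the bottom of the list."""
    shuffled = [opt for opt in options if opt not in anchored]
    random.shuffle(shuffled)  # in-place shuffle, uniform over orderings
    pinned = [opt for opt in options if opt in anchored]
    return shuffled + pinned

# Hypothetical option list, including a "ghost" brand (guideline 6)
options = ["Brawny", "Scott", "Bounty", "Augusta (Ghost)", "None of the above"]
print(prepare_options(options))
```

Each respondent sees a freshly shuffled order, which spreads serial-position effects evenly across options instead of concentrating them on whichever answer happens to be listed first.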

Below is an example of a question that does NOT minimize bias:

Which of the following have you purchased in the last year? (Select all that apply.)

  1. Paper towels

  2. Kleenex

  3. Paper Napkins

  4. Bounty

  5. Chinet

  6. Kirkland brand

  7. Toilet paper

  8. Charmin

  9. Scott paper towels

So if we reformulate this question using best practices, it would look like this:

In the last three months, which brand of paper towel(s) have you personally purchased? (Select up to your top three most frequent.)

  1. Brawny

  2. Scott

  3. Bounty

  4. Augusta (Ghost)

  5. Kirkland

  6. Cottonelle

  7. Viva

  8. None of the above
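Once responses come back, the ghost option doubles as a data-quality filter: anyone who claims to have purchased a brand that doesn’t exist, or who ignores the selection limit, can be flagged for review. A minimal sketch, assuming responses keyed by respondent ID (the function name and data shapes are illustrative, not a Suzy feature):

```python
def flag_suspect_responses(responses, ghost="Augusta", max_allowed=3):
    """Return IDs of respondents whose answers suggest low-quality data:
    they picked the ghost brand, or exceeded the stated selection limit."""
    flagged = []
    for respondent_id, selections in responses.items():
        if ghost in selections or len(selections) > max_allowed:
            flagged.append(respondent_id)
    return flagged

responses = {
    "r1": {"Bounty", "Scott"},
    "r2": {"Augusta", "Viva"},                    # picked the ghost brand
    "r3": {"Brawny", "Scott", "Bounty", "Viva"},  # over the 3-option limit
}
print(flag_suspect_responses(responses))  # ['r2', 'r3']
```

Flagged respondents don’t necessarily need to be discarded outright; their other answers can be reviewed before deciding whether to exclude them from the data set.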


Brand Familiarity

Often, researchers want to find out whether consumers are familiar with a certain brand or product. But the way brand options are presented can skew results toward the positive. On some occasions, brand names are introduced in earlier parts of the survey, which can lead a respondent to claim familiarity with a brand they wouldn’t otherwise have known about.


Other bias effects include:

  • Acquiescence bias: When given a binary choice between a positive and negative response, people tend to select the positive one. 

  • Mere-exposure effect: People tend to develop a preference for things solely because they’re familiar.

Instead, here’s what you should do:

  1. Make the initial question more concrete. Limit the time frame and be specific. Do you want them to be familiar with a recent ad they saw? Or a product they saw in a grocery aisle?

  2. Don’t ask about a single brand or product in a question. A yes/no question often ends up telegraphing to respondents which response you want to hear.

  3. Include a “ghost” option. This should be an obviously non-existent product that can’t be confused with any existing products or brands.

Here are some examples of what NOT to do when asking familiarity questions:

Are you familiar with Google Drive?

  1. Yes

  2. No

Which brand of potato chips have you eaten in the past six months?

  1. Lay’s

  2. Pringles

  3. Ruffles

  4. Utz

  5. Kettle

  6. Spuds

  7. Muchos

  8. Cape Cod

Instead, ask familiarity questions like this:

Which cloud storage option do you currently use most frequently? (Select one.)

  1. Box

  2. Dropbox

  3. iCloud

  4. StoragePlus (Ghost)

  5. Google Drive

  6. Microsoft OneDrive

  7. pCloud

  8. I don’t use any of these cloud storage options.

Which of the following potato chip brands have you personally purchased in the past month? (Select up to your three most frequently bought brands.)

  1. Utz

  2. Kettle

  3. Spuds

  4. Eight Eyes (Ghost)

  5. Cape Cod

  6. Miss Vickie’s

  7. I haven’t purchased any of these potato chip brands in the past month.

Minimizing Bias to Find Niche Audiences

There are plenty of reasons why a researcher might want to tap into a niche audience, such as health-conscious consumers or video game enthusiasts. But these consumers can be difficult to find, and it’s tempting to boil your criteria down into a single question to streamline the process, which can lead to false positives. Instead, you often need a series of questions to find these users, including some with a definitive right or wrong answer to validate a person’s responses.

Let’s go through some general best practices:

  1. Know the demographics of your intended audience. A basic understanding of what types of people fall into the category you’re looking for will help you spot outliers and determine whether they fit your criteria.

  2. Ask multiple screener questions. The more data points you have, the easier it becomes to filter out respondents who don’t exactly match what you’re looking for.

  3. Find ways for your segment to identify themselves. Let’s face it, people like to talk about themselves — otherwise, market research wouldn’t exist! However, take care to make sure that your criteria aren’t obvious to your survey respondents, who might feel incentivized to tell you what you want to hear. 

  4. Ask about things that only your desired segment would know. This includes testing them on knowledge, but it also includes asking them to describe their unique experiences in ways that will make it easier for you to pick out and remove bad actors.

So how would this work in the real world? Let’s think through an example. Say you want to target an audience of veterinarians... 

  1. According to Data USA, veterinarians are likely to be over the age of 30 and to have a decent income. You might consider setting your initial demographic targeting to reflect that, maybe by filtering out anyone between the ages of 18-21. (If they’re still undergrads, they can’t be veterinarians yet, right?) 

  2. Ask them to select from a list of several different professions that includes veterinarians. This does two things: it gives them the opportunity to self-identify, and it doesn’t make it obvious that you want to hear from veterinarians, so respondents will be more likely to tell the truth. 

  3. Test them on information that veterinarians would know better than other respondents would, using multiple-choice questions like “How many years are required for veterinary school?” (Answer: four years) or “What is the governing body of veterinarians in the U.S.?” (Answer: the AVMA). This can increase the quality of your retargeting sample.

  4. Make sure you use more than one screener question, and that not every one of them is an easy Google search away. While asking if they would like to participate in future studies for veterinarians, you might also inquire about which veterinary school they graduated from. There are only 30 of these schools in the United States, so it’s easy to verify — and, because claiming to have attended a school you did not is a much more active lie than simply looking up a multiple-choice answer, non-vet respondents will be more likely to trip up on it or filter themselves out by not finishing.
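The screening steps above can be sketched as a simple validation function that only passes respondents who clear every check. All field names, answer values, and the school list below are hypothetical stand-ins for your actual screener data:

```python
def passes_screener(respondent, valid_schools,
                    years_answer="4", governing_body="AVMA"):
    """Combine several screener checks: demographic fit, self-identification,
    knowledge questions, and a verifiable detail only the target segment
    should reliably get right."""
    checks = [
        respondent["age"] >= 22,                         # demographic fit
        respondent["profession"] == "Veterinarian",      # self-identified
        respondent["vet_school_years"] == years_answer,  # knowledge check
        respondent["governing_body"] == governing_body,  # knowledge check
        respondent["school"] in valid_schools,           # verifiable detail
    ]
    return all(checks)

# Illustrative subset of accredited schools, not the full list
valid_schools = {"Cornell", "UC Davis", "Colorado State"}
respondent = {"age": 34, "profession": "Veterinarian",
              "vet_school_years": "4", "governing_body": "AVMA",
              "school": "Cornell"}
print(passes_screener(respondent, valid_schools))  # True
```

Requiring every check to pass is deliberately strict: a respondent who guesses one multiple-choice answer correctly still fails if they can’t name a real school or self-identify consistently.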

While you can’t control the answers you get back when you screen and survey your consumers, you can control the way you frame your questions. This does more than just improve your data quality: it also gives you the best chance of understanding what your consumer is trying to tell you, leading to more innovation, better products, and deeper trust in your brand’s reputation. 


Click here to learn more about Suzy’s consumer network, or get in touch with us for a demo.