How survey quality control checks support accurate insights

Explore the value of quality assurance measures and learn about eight types of survey data quality checks that flag unreliable respondents and improve data quality.
13 April 2022
Anna Shevchenko

Research Director, US


In today’s cacophony of cell phone beeps and buzzes, maintaining a survey taker’s attention is no small feat. Now more than ever, respondents are prone to survey fatigue, which can introduce non-sampling survey errors and skew statistical accuracy.

Fortunately, market researchers have a tool in their arsenal to use during data collection to support unbiased and accurate survey results.

In-survey quality checks typically add a second layer of quality control on top of pre-survey checks, which reduce the number of potential poor responders entering the survey in the first place.

What are in-survey quality checks and why do they matter?

In-survey quality checks are a way of gauging whether respondents are mindfully completing a survey. A quality check can be as simple as a filter that eliminates respondents who complete a survey too quickly.

Or it can be more complicated, like asking survey takers to confirm an earlier response.

Since a survey is only as good as the data it generates, quality checks are considered a survey design best practice. However, they shouldn’t be your only line of defense against straight-lining and other types of response bias.

What do quality checks say about survey design?

Generally, market researchers follow the ‘baseball rule’ when analyzing ‘tripped’ quality checks. That means if a survey taker raises any three quality check flags — perhaps taking the survey too quickly, answering a knowledge check incorrectly, and entering gibberish in an open-ended field — they are automatically removed from the data set.
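
As a rough sketch of how the 'baseball rule' might be applied in practice (the flag names and table layout here are invented for illustration, not a Kantar specification):

```python
# Illustrative respondent-level flag table; each check records 1 if tripped.
respondents = [
    {"id": "r1", "speeder": 1, "red_herring": 0, "gibberish": 0},
    {"id": "r2", "speeder": 1, "red_herring": 1, "gibberish": 1},
    {"id": "r3", "speeder": 0, "red_herring": 0, "gibberish": 0},
]

FLAG_COLS = ["speeder", "red_herring", "gibberish"]
MAX_FLAGS = 3  # three strikes and the respondent is removed

def clean(rows):
    """Keep respondents who tripped fewer than MAX_FLAGS quality checks."""
    return [r for r in rows if sum(r[c] for c in FLAG_COLS) < MAX_FLAGS]

kept = clean(respondents)
print([r["id"] for r in kept])  # ['r1', 'r3'] -- r2 tripped all three checks
```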

It’s not unusual to remove a portion of completes from the data set – typically less than 5%. The removal of these respondents tends to make little difference to the overall results. However, if a higher portion (10%+) of responses is flagged, it could indicate survey design mistakes. Maybe the survey doesn’t follow best practices for length, or the questionnaire design isn’t mobile-friendly.

It’s important to remember that no amount of quality checks will make up for poor questionnaire design. To ensure quality in your survey research, always follow survey design best practices.

What are the most popular types of survey quality checks?

1. Speeder Flag

This is typically an automated check with an overall goal of eliminating respondents who complete the survey in an unrealistically short amount of time. The survey taker may be rushing off to work or distracted by a child tugging at their sleeve. Or, in the case of internet bots, the survey taker may not be a person at all.

When designing a speeder flag, remember to account for varying survey lengths (e.g., in the case of branching, some survey takers only answer a subset of questions). Otherwise, you may screen out valid responses and, in turn, compromise the quality of your data. Also consider the scripting platform’s ability to manage such checks.

Kantar finds it is best practice to soft launch the survey and review any threshold that has been applied before moving into a full launch. If the speeder flag is removing more than 5% of completes, the data and the threshold should be re-evaluated.
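
A minimal sketch of such a check, assuming completion times in seconds and an illustrative cut-off at 40% of the median duration (the 0.4 fraction and 5% review rate are assumptions for the example, not a Kantar standard):

```python
import statistics

def speeder_flags(durations, fraction=0.4):
    """Flag completes faster than a fraction of the median duration.

    Using a fraction of the median (rather than a fixed number of seconds)
    helps the threshold adapt to varying survey lengths.
    """
    threshold = fraction * statistics.median(durations)
    return [d < threshold for d in durations]

def needs_review(flags, max_rate=0.05):
    """Signal that the threshold should be re-evaluated if the flag
    rate exceeds ~5%, as suggested by the soft-launch guidance above."""
    return sum(flags) / len(flags) > max_rate

# Toy soft-launch data: nine normal completes and one 95-second speeder.
times = [600, 540, 610, 95, 580, 620, 570, 605, 590, 615]
flags = speeder_flags(times)
print(sum(flags), needs_review(flags))  # 1 True (10% flagged -> re-evaluate)
```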

2. Straight-Lining Flag

When a survey drags on for too long or becomes redundant, respondents can sometimes be tempted to give nearly identical answers to different questions. They may, for example, select the same response for all statements. Many in the industry call this “straight-lining”.

However, it’s best to be cautious when applying “straight-lining” checks. Sometimes, responses that appear to be straight-lining are actually valid. Research Kantar has conducted indicates that the most egregious grid responses tend to be random rather than straight-lined.

A better indicator of attention is to look for data points that aren’t logical in combination (e.g., agreeing that the weather today is too hot and also agreeing that it is too cold).
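
One way to sketch that kind of logical-consistency check, using the hot/cold example above and an assumed 5-point agreement scale (the statement names and top-two-box definition are illustrative):

```python
# Pairs of grid statements that a mindful respondent should not both agree with.
CONTRADICTORY_PAIRS = [("weather_too_hot", "weather_too_cold")]
AGREE = {4, 5}  # top-two-box on an assumed 5-point agreement scale

def inconsistent(answers):
    """True if the respondent agrees with both sides of a contradictory pair."""
    return any(
        answers.get(a) in AGREE and answers.get(b) in AGREE
        for a, b in CONTRADICTORY_PAIRS
    )

print(inconsistent({"weather_too_hot": 5, "weather_too_cold": 4}))  # True
print(inconsistent({"weather_too_hot": 5, "weather_too_cold": 1}))  # False
```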

3. Red Herring

A red herring is a type of survey question that incorporates a fake option among a set of valid ones. For example, a questionnaire may ask: ‘What is your favorite type of cereal?’ The answer choices may include Cheerios, Lucky Charms, Frosted Flakes, and Baseball Crispies. Obviously, the latter is not something you would find in a grocery store and you can assume that respondents who select it aren’t paying attention.

Though red herrings may seem silly, they can be helpful. Just make sure the fake answer choice is actually fake and doesn’t sound too similar to an actual product, company, or service. You may also consider asking the question in the context of usage. It’s one thing to say you have heard of Baseball Crispies, but another to say you eat them every morning.

It’s important to note that Kantar has found that there is a halo effect around non-existent brands. Apply these checks with caution and in combination with other quality checks.

4. Knowledge Check

‘What color is the sky?’ This question may seem simple enough. But when respondents are speeding through an online survey or selecting answers at random, it’s likely they will select ‘purple’ or even ‘green.’

This type of quality assurance measure is known as a knowledge check.

Knowledge checks ask respondents to answer a question that tests basic knowledge, like what year it is or what ‘UK’ stands for. These questions should be factual in nature and have a clear answer. The respondent is either right or wrong; there’s no room for interpretation or debate. This can be challenging, so ensure you’re not frustrating or confusing the panelist with these questions.

5. Attention Check

Unlike a red herring or knowledge check, an attention check asks respondents to select a specific option. For example, survey takers may be shown a grid of photos and then asked to select all pictures featuring motorcycles. They may then be prompted to select photos of rabbits or lakes.

Attention check questions in surveys should measure attention, not other constructs related to memory, education, or specific cultural knowledge. These questions should also allow for a few accidental selections (e.g., selecting a photo of a car instead of a motorcycle).

6. Duplicate Check

When respondents take a survey multiple times, either on purpose or by accident, they can compromise the quality of a data set. Duplicate detection typically relies on some of the device information that’s captured when someone visits a website.

Fortunately, at Kantar, most duplicates are caught by checks before a respondent even enters the survey. So, if you’re working with us, this is something you can worry less about.

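
A toy sketch of fingerprint-based de-duplication; the device attributes and the hashing choice here are assumptions for illustration, and real panel systems use far richer signals:

```python
import hashlib

def fingerprint(device_info):
    """Hash a few captured device attributes into a comparable key.

    The attribute names are illustrative; production fingerprinting
    combines many more signals than this.
    """
    raw = "|".join(device_info.get(k, "") for k in ("ip", "user_agent", "screen"))
    return hashlib.sha256(raw.encode()).hexdigest()

seen = set()

def is_duplicate(device_info):
    """True if this fingerprint has already entered the survey."""
    key = fingerprint(device_info)
    if key in seen:
        return True
    seen.add(key)
    return False

first = {"ip": "1.2.3.4", "user_agent": "Mozilla", "screen": "1920x1080"}
print(is_duplicate(first), is_duplicate(first))  # False True
```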

7. Open-Ended Validation

Open-ended validations can alert market researchers to respondents who are mindlessly blowing through questions. They may enter gibberish into the open-ended question text field or offer an irrelevant answer.

However, just because a survey taker typed a string of random letters doesn’t mean they aren’t paying attention. Rather, the question may be confusing or irrelevant, they may not care about the topic, or they may be experiencing survey fatigue. Whatever the case may be, review the respondent’s open-ended data along with the rest of the responses before removing them from the data set.
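
As an illustration only, a heuristic gibberish screen might look like the sketch below; the rules and thresholds are assumptions, and — as noted above — flagged verbatims should still be reviewed by hand before any respondent is removed:

```python
def looks_like_gibberish(text, min_len=3):
    """Heuristic flag for likely-gibberish open-ended answers.

    Flags very short strings, strings with no vowels, and long repeated
    character runs. This is a screen for human review, not a verdict.
    """
    t = text.strip().lower()
    if len(t) < min_len:
        return True
    letters = [c for c in t if c.isalpha()]
    if letters and not any(c in "aeiou" for c in letters):
        return True  # e.g. "sdfghj"
    if any(t[i] == t[i + 1] == t[i + 2] for i in range(len(t) - 2)):
        return True  # e.g. "aaaaaa"
    return False

print(looks_like_gibberish("sdfghj"))                  # True
print(looks_like_gibberish("I like the crunchy taste"))  # False
```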

8. Conflicting Answers Check

Checking for conflicting answers often involves asking the same question twice or asking similar questions and looking for conflicting responses. However, this is another check that requires careful consideration and proper implementation.

A conflicting answers check may be designed with good intentions but not work well in reality, leading to a high disqualification rate. Convoluted questions can also confuse survey takers, causing them to offer dissimilar responses.

A quick caveat: Are quality checks always accurate?

Quality checks can vastly improve the reliability of a survey, weeding out bots and various forms of bias. However, if these flags are too stringent, you risk losing valuable data.

Remember: Screening out too many respondents may, in some cases, be worse than screening out too few. You can always clean the data afterward, but you can’t recover data you never collected. As previously noted, consider terminating respondents only if they fail multiple checks.

You should also consider the mental load a survey places on a respondent. Respect the person on the other side of the screen by applying an empathetic approach to survey design.

Learn more

Good data starts with good survey design, which is why it’s always best to consult with a successful survey partner. At Kantar, we’re dedicated to helping clients produce questionnaires that yield accurate data sets. To learn more about how we have transformed the world of digital marketing research, reach out to our experts today.

For monthly survey design and sampling tips direct to your inbox, fill out the form below to subscribe.
