Click Happy

This week, David Ward draws on his data processing expertise to show us how we can spot and weed out ‘rogue respondents’ and get the most reliable and valuable data from our online surveys.

The internet is a very useful tool in market research. We can reach a far larger audience than with traditional pen-and-paper or CATI interviewing (spam filters allowing). We can make the interviewing experience more visual, route respondents to the questions relevant to them, view results in real time before tabulation has begun, and program logic checks into the survey to catch errors before the analysis stage… the list of positives goes on. Of course, as with any method of interviewing, there are pros and cons to weigh up.

On our travels around the internet we’re never far away from someone wanting to collect our opinions on this and that. For example, when I was looking online for a new car recently, nearly every website I visited had some sort of pop-up window asking me to take part in a survey and, more often than not, offering an incentive to entice me to spend my time completing it. Personally I’ve never been tempted to complete one of these pop-up surveys, but that’s just me. However, what is to stop someone seeing the incentive, thinking ‘I’d quite like a new iPod’, and just randomly clicking through the survey in double-quick time with no thought to their responses? For that respondent, the incentive is there to get the survey completed and have a chance of winning the prize, not necessarily to give each question the thought it requires. There isn’t a lot we can do to stop this happening; however, there are things we can do once the data have been collected to spot these rogue respondents.

The scope for logic checks in online surveys would be vast if you took it to the nth degree. Do that, though, and we might lose a large number of respondents through frustration at constantly having their answers questioned, so there is a balance to be found. As Head of Data Processing at B2B International, what steps can I take to ensure the quality of our data? There are no guarantees, but we can build safeguards in when setting up online surveys and then review the data to highlight suspect records. As I have said, we can program logic checks into our online surveys to make sure they are filled in correctly: that numbers add up to 100% where needed, for example, or that respondents select the correct number of items from a list.
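As a rough illustration, checks of this kind might look like the following sketch in Python. The field names, tolerance and required selection count are all hypothetical, not part of any particular survey platform.

```python
def check_allocation(percentages, tolerance=0.5):
    """Return True if a set of percentage allocations sums to (roughly) 100."""
    return abs(sum(percentages) - 100) <= tolerance

def check_selection_count(selected_items, required=3):
    """Return True if the respondent picked the required number of items."""
    return len(selected_items) == required

# Hypothetical respondent record
record = {"budget_split": [40, 35, 20], "top_suppliers": ["A", "B"]}

print(check_allocation(record["budget_split"]))        # False: sums to 95, not 100
print(check_selection_count(record["top_suppliers"]))  # False: only 2 of 3 items picked
```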

One of the easiest ways to catch potentially bad records is to check the time stamps for each interview. Did someone manage to complete a 20-minute interview in 5 minutes? That would suggest they have simply raced through the survey with a click-happy mouse finger, without giving due consideration to their answers.
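A minimal sketch of such a “speeder” check follows; the expected interview length and the one-third threshold are illustrative assumptions, not fixed rules.

```python
from datetime import datetime

EXPECTED_MINUTES = 20       # hypothetical expected interview length
SPEEDER_THRESHOLD = 0.33    # flag anyone faster than a third of the expected time

def is_speeder(start, end):
    """Return True if the interview was completed suspiciously quickly."""
    minutes_taken = (end - start).total_seconds() / 60
    return minutes_taken < EXPECTED_MINUTES * SPEEDER_THRESHOLD

start = datetime(2011, 5, 3, 10, 0)
end = datetime(2011, 5, 3, 10, 5)   # finished in 5 minutes
print(is_speeder(start, end))       # True: worth a closer look
```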

Another telltale sign to look out for is something known as “straightlining”. Has a respondent gone through a grid or battery of questions and given exactly the same response every time?
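Detecting the most blatant form of this is straightforward; here is a small sketch, with the minimum grid size chosen purely for illustration.

```python
def is_straightliner(grid_answers, min_questions=5):
    """Return True if every answer in a grid/battery of questions is identical."""
    return len(grid_answers) >= min_questions and len(set(grid_answers)) == 1

print(is_straightliner([4, 4, 4, 4, 4, 4]))  # True: same rating every time
print(is_straightliner([4, 3, 5, 4, 2, 4]))  # False: answers vary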

We can also look for inconsistencies in the logic of the answers and for unlikely values in numeric questions. Some of this can be done during the survey itself, and further checks can be run once the data have been collected.
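For instance, a post-collection consistency check might look something like the sketch below; the questions, field names and upper limit are hypothetical examples of the kinds of cross-checks that could be run.

```python
def flag_inconsistencies(record):
    """Collect reasons to doubt a record, based on simple cross-checks."""
    reasons = []
    # Logic check: spend reported on a product the respondent claims not to buy
    if record["buys_product"] == "No" and record["annual_spend"] > 0:
        reasons.append("spend reported despite not buying")
    # Range check: an implausibly large numeric answer
    if record["employees"] > 1_000_000:
        reasons.append("unlikely employee count")
    return reasons

print(flag_inconsistencies({"buys_product": "No", "annual_spend": 5000, "employees": 120}))
# ['spend reported despite not buying']
```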

We could also add questions purely for verification purposes, allowing us to judge whether or not the respondent is actually reading the questions. For example, in a grid of questions we could add one that simply says “for verification purposes please answer strongly agree”. Along the same lines, we could use data from any panels we have purchased and ask respondents to verify certain details. Comparisons can then be made between the original panel data and the data we collect online, and any differences between the two can be treated as suspect.
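Both checks reduce to simple comparisons once the data are in; a sketch follows, with the trap-question wording and the panel field chosen purely as assumptions for the example.

```python
def passed_trap_question(answer, expected="Strongly agree"):
    """Return True if the respondent gave the instructed answer to the trap item."""
    return answer.strip().lower() == expected.lower()

def matches_panel(panel_value, survey_value):
    """Return True if a detail captured in the survey matches the purchased panel data."""
    return panel_value.strip().lower() == survey_value.strip().lower()

print(passed_trap_question("Agree"))                    # False: missed the instruction
print(matches_panel("Manufacturing", "manufacturing"))  # True: details agree
```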

Finally, we can look at responses to open questions, checking that fields do not contain random characters or single-character answers. If we find this, can we be sure we can trust the other answers the respondent has given?
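A crude screen for this kind of open-ended “noise” might look like the sketch below; the no-vowels test is just one simple, assumed heuristic for keyboard mashing, not a definitive rule.

```python
import re

def looks_like_gibberish(text, min_length=2):
    """Flag open-ended answers that are empty, a single character, or keyboard mashing."""
    cleaned = text.strip()
    if len(cleaned) < min_length:
        return True
    if not re.search(r"[aeiouAEIOU]", cleaned):  # no vowels at all is a crude mashing test
        return True
    return False

print(looks_like_gibberish("x"))           # True: single character
print(looks_like_gibberish("dfghjk"))      # True: no vowels
print(looks_like_gibberish("Good value"))  # False
```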

Failing one of these checks does not necessarily mean the data cannot be trusted, but failing two or more may be grounds for removing that respondent. Perhaps the strongest single guide for the decision is the time taken to complete the survey, but whichever method or combination of methods is used, having the checks in place gives us added confidence in the findings we present to our clients.
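Pulling the individual checks together is then just a matter of counting failures per record, as in this sketch; the particular checks, field names and thresholds here are hypothetical.

```python
def count_failures(record, checks):
    """Run each check against a record and count how many it fails."""
    return sum(1 for check in checks if not check(record))

# Hypothetical checks, each returning True when the record looks fine
checks = [
    lambda r: r["minutes_taken"] >= 7,             # not a speeder on a 20-minute survey
    lambda r: len(set(r["grid_answers"])) > 1,     # not straightlining
    lambda r: len(r["open_comment"].strip()) > 1,  # open answer has some substance
]

record = {"minutes_taken": 5, "grid_answers": [3, 3, 3, 3, 3], "open_comment": "x"}
failures = count_failures(record, checks)
print(f"Failed {failures} of {len(checks)} checks")  # 3: a strong candidate for removal
```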

I’m not sure we will ever be able to completely stop respondents clicking through an online survey and giving responses that are illogical, of poor quality and of little use to market research. However, knowing that there are telltale signs we can look out for, pointing to respondents we may need to exclude, is certainly reassuring.
