


Questionnaire Design – Top 6 Tips from a Data Management perspective!

6th January 2022 in Blogs by Andy Madeley

When I first entered the world of Market Research (back in the nineties), the research and data teams’ roles were very well-defined – ruthlessly well-defined, in my view. The researcher focussed on client management, questionnaire design and the report, and the data processor focussed on the collection script, interview management and the analysis. I played along with this black-and-white rule book for a few years, whilst employed at MR agencies in the UK and in Europe, but I didn’t like it – because I didn’t get it! I’d even go so far as to say this is why I created SwissPeaks… curious? I hope so… please read on and see if you start nodding your head in agreement!

Over the many years I have earned my living and my reputation in Market Research (and in M&E across Africa and Asia), one thing has remained evident – the great divide between “the researcher” and “the data processor” still exists. I never got it 30 years ago and I still don’t get it now! These days, though, I just smile and reach for one of my favourite sayings to capture the essence of it: “c’est la vie!”. In a previous article I talked about the “data management role” and how data management people are at the heart of any MR project.

Over the last 10 years, I have been less involved with scriptwriting and crosstab analysis work and more involved with pre-field activities, like questionnaire design. So, where the previous article praised and explained the wider value of data management people across the whole project, this article focusses specifically on our value within the questionnaire design activity and some of the “tips and tricks” I would recommend employing on any questionnaire build! Here goes…

Tip 1 – font types, size and colour

Here is an example of a question construct we have put together in a “Core” questionnaire template for one of our clients:

We are trying to add as much information within the question construct as possible so that all “stakeholders”, as I call them, are well informed. A “stakeholder”, as I see it, is anyone who has access to the questionnaire or any spinoffs from it; e.g. client, researcher, collection scriptwriter, interviewer, supervisor, data collator (QC), etc.

We have used font characteristics to highlight filter instructions, interviewer instructions and even data instructions. It is also clear that this question is single-coded (categorical): the positive answer categories, which the interviewer reads out, sit above the thick black line, and the negative answer categories, which are not read out but are still “must have” categories, sit below it. Even using the “circle” to denote a “single” answer option is key. And now we keep these same font characteristics in all questions, irrespective of question type.
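As a side note for anyone curious about automating these conventions: the same font characteristics can be applied programmatically. Below is a minimal sketch, assuming the python-docx library, with entirely hypothetical colours and wording – an illustration of the idea rather than our actual template.

```python
# Minimal sketch: build a single-question table with distinct font styling
# for filter, interviewer and data instructions. Assumes the python-docx
# library; the colours and wording are hypothetical, for illustration only.
from docx import Document
from docx.shared import RGBColor

doc = Document()
table = doc.add_table(rows=4, cols=1)

def styled_row(cell, text, *, bold=False, italic=False, colour=None):
    run = cell.paragraphs[0].add_run(text)
    run.bold = bold
    run.italic = italic
    if colour is not None:
        run.font.color.rgb = colour

styled_row(table.cell(0, 0), "FILTER: ASK ALL", italic=True,
           colour=RGBColor(0x00, 0x00, 0xC0))   # filter instruction in blue italics
styled_row(table.cell(1, 0), "C1. How often do you watch live TV?", bold=True)
styled_row(table.cell(2, 0), "INTERVIEWER: READ OUT. SINGLE ANSWER.",
           colour=RGBColor(0xC0, 0x00, 0x00))   # interviewer instruction in red
styled_row(table.cell(3, 0), "DATA: single categorical (codes 1-6)",
           italic=True)                         # data instruction in italics

doc.save("core_question_example.docx")
```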

Tip 2 – use MS Word “tables”

We also use MS Word “tables” for each and every question. This approach not only keeps the structure uniform, it also makes copying and pasting easy. The biggest advantage we gain from maintaining the structure is that we can build automated applications that utilise the tables-based document. While “codebooks” are not used much in ad hoc projects, they are far more common in trackers and multi-national projects. We have implemented our own toolkit, and one of its tools can take this “structured” questionnaire format and, at the click of a button, build a codebook. We also have another tool that can create the same questionnaire in a local language – all because we have applied a structured template to the MS Word questionnaire. Our point is – even if you do not need such a toolkit – MS Word is still the go-to application for questionnaire design. No problem with this, just try to move away from free-format questionnaire building please – and at the very least, apply some formatting guidelines!
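To give a flavour of what the tables-based structure makes possible, here is a minimal sketch of a codebook extractor. It assumes the python-docx library and a hypothetical two-column table layout (question ID and label in the first row, answer codes and category texts in the rows below); it is an indication of the principle, not our actual toolkit.

```python
# Minimal sketch: pull a codebook out of a tables-based questionnaire.
# Assumes the python-docx library and a hypothetical two-column layout in
# which row 1 of each question table holds the question ID and label, and
# every following row holds an answer code and its category text.
from docx import Document

def build_codebook(path):
    doc = Document(path)
    codebook = []
    for table in doc.tables:
        first_row = table.rows[0].cells
        entry = {
            "id": first_row[0].text.strip(),
            "label": first_row[1].text.strip() if len(first_row) > 1 else "",
            "categories": [
                (row.cells[0].text.strip(), row.cells[1].text.strip())
                for row in table.rows[1:]
                if len(row.cells) > 1
            ],
        }
        codebook.append(entry)
    return codebook

if __name__ == "__main__":
    for question in build_codebook("questionnaire.docx"):
        print(question["id"], "-", question["label"])
        for code, category in question["categories"]:
            print("   ", code, category)
```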

 

Tip 3 – use “INBOUND” filter logic at all times (and if necessary “OUTBOUND” skip logic to support the “INBOUND” filter logic)

We make sure every question has an “INBOUND” filter definition applied. We refer to filters as “INBOUND” instructions as they state “WHO” should answer the question and under what rule. If every question has such a filter instruction applied, then all stakeholders can more clearly see the rules by which a respondent can “ENTER”. In CAPI, CATI and CAWI projects you should never need to use SKIP instructions. For PAPI (pen-and-paper) there is merit in using SKIP instructions – that is, an “OUTBOUND” instruction that advises the interviewer where to “JUMP” to if a certain answer is given. The interviewer then “EXITS” from that point in the questionnaire and “JUMPS” to the location as instructed. Please be kind to all stakeholders on your projects and use “INBOUND” filter logic at all times!

In addition, strong filter logic definition is key to a successful project. The reason the focus of my QC work has shifted towards pre-field activity is clear, at least to me! As much as I enjoy firefighting and problem solving, it costs a lot of time and money to clean data and to find and fix survey issues. I have found it is less expensive and less time-consuming to tackle filter logic problems at the questionnaire design stage. A clear filter has two parts: an English definition part and a logic definition part.

For example:

FILTER: ASK IF C1=6 (Never)

FILTER: ASK IF C1=(3,4) OR IF C2=(3,4) [Tablet OR Mobile Phone]
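For the scriptwriter, an “INBOUND” definition like the ones above translates almost directly into collection-platform logic. Here is a minimal sketch in Python, assuming a hypothetical respondent record held as a dictionary; the function names are illustrative only, and the variables C1 and C2 follow the examples above.

```python
# Minimal sketch: INBOUND filter definitions expressed as simple predicates.
# The respondent record and function names are hypothetical; C1 and C2 mirror
# the example filter definitions above.

def enters_question_a(respondent):
    # FILTER: ASK IF C1=6 (Never)
    return respondent.get("C1") == 6

def enters_question_b(respondent):
    # FILTER: ASK IF C1=(3,4) OR IF C2=(3,4) [Tablet OR Mobile Phone]
    return respondent.get("C1") in (3, 4) or respondent.get("C2") in (3, 4)

respondent = {"C1": 3, "C2": 5}
print(enters_question_a(respondent))  # False - this respondent skips question A
print(enters_question_b(respondent))  # True  - this respondent enters question B
```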

 

Tip 4 – use shading in grid-type questions

If a grid needs to be employed in the questionnaire, then use shading to visually show the flow of the questions to all stakeholders. Normally, grids are employed when a battery of statements shares the same question label. Let’s say the question is as simple as “In the last 7 days, have you watched any of the following TV programs?”. There is a list of 10 TV programs, each with the same answer category options: “Yes”, “No”, “DK/NR”.

For such a question type, a grid is normally used. And if a grid is used, then think about adding some shading to emphasise the direction of the intended question flow. I know this might seem like overkill, but remember, once you have set it up, it’s really easy to copy, paste and re-use. Any and all such techniques are useful tools for allowing all stakeholders to clearly see the question as designed.

The most appreciative stakeholder when it comes to shading is probably the scriptwriter. Their job is effectively to take the “research design” instructions as detailed in the questionnaire and “transpose” them into the chosen collection platform. Help them!

 

Tip 5 – use circles and squares

It’s all about clarity and cross-stakeholder support. The CAWI collection applications dragged circles (radio buttons) and squares (checkboxes) into our scope a little while ago. They are used predominantly in CAWI self-completion surveys and now in CAPI interviewer-completed surveys too, so if they are used in the collection software, why don’t we use the same notation in our questionnaires? We should…

And the more visual aid you can deliver in the questionnaire, the more support you are giving to the other stakeholders associated with the project.

 

Tip 6 – use tracked changes sparingly

I have lots of tips, but this is the last of the “top 6” I can share with you in this small document. As much as I am not a big fan of the MS Word questionnaire document, I live with it because it is what I term the lowest-common-denominator document type – one that can serve as a communication medium between the end client and the research project implementer. That makes sense – but what doesn’t is this: once the end client supplies “tracked changes” in the questionnaire document, why does the research implementer then simply keep adding to them?

Those of you who have commissioned SwissPeaks or collaborated with us might have seen our QC review documents. We effectively use these review documents to gatekeep the changes made. We take our client’s feedback and push it into these review documents. We then supply our client with a new “untracked” version of the questionnaire together with the review document. We know this works, as we rarely miss anything, and as far as an audit trail goes, this is a much better way of tracking changes.

Overall, we have data that shows our tips work. We have halved the delivery time from the point at which the survey finishes in field to the point at which the report writer can start exploring the project insights. We have halved it because we have shifted our QC focus to start with questionnaire design. We pick our clients based on their willingness to let us work directly with the researcher from this start point. We have a voice, we have an abundance of experience, and voicing our expertise when designing the questionnaire pays dividends.

Over the next two articles I will share more tips, focussed on other pre-field and during-field activities. If you’ve made it to this point of the article, then I hope you enjoyed it! My e-mail address is andy.madeley@tabx.online. Drop me an e-mail at any time!



