Sep 03 2014
 

Last week I posted an article looking at the decline in survey research, which included some data from ESOMAR and some predictions.

This week, ESOMAR posted the latest Global Market Research Report, and it includes some interesting figures on data collection modes – figures which are broadly in line with my predictions.

The table below is mostly a repeat of the one I included in my previous post. It shows the data from the ESOMAR reports for 2007, 2010, and 2013, along with my forecasts for 2016 and 2019.

In this version, I have added the data from the 2014 ESOMAR Global Market Research report at the bottom.

[Table: Surveys 2014 – data collection mode shares from the ESOMAR reports for 2007, 2010, and 2013, forecasts for 2016 and 2019, plus the figures from the 2014 report]
Note, the ESOMAR data refer to the final figures for the previous year, so the 2014 report is based on the completed returns for the whole of 2013.

The decline in research spending on projects where the data were collected via surveys, from 53% in the 2013 report to 48% in 2014, is a very large drop, even faster than my predictions implied. The ESOMAR Pricing Study would suggest that some of the drop is due to falling costs for online research and a continued switch to online from face-to-face and CATI. However, the ESOMAR Global Market Research report also highlights the growth of non-survey alternatives.

The change in other quant is broadly in line with my predictions, and the 1% change in qual could be more wobble than message. The climb in Other, however, is large, larger than my prediction, and is one of the drivers of the fall in survey research as a proportion of the total. The key elements in Other are desk research and secondary analysis, an indication of the move away from data collection towards analysis.

BTW, if you are interested in this topic you might want to read Jeffrey Henning’s riposte, Surveys A Century From Now.


 

Aug 26 2014
 
No More Surveys

Back in March 2010, at the UK’s MRS Conference, I caused quite a stir with a prediction when I said that in 20 years we would not be conducting market research surveys. I followed my conference contribution with a more nuanced description of my prediction on my blog.

At the time, the fuss came mostly from people rejecting my prediction. More recently, there have been people saying the MR industry is too fixated on surveys, and some now think my predictions were too cautious. So, here is my updated view on why I think we won’t be conducting ‘surveys’ in 2034.

What did I say in 2010?
The first thing I did was clarify what I meant by market research surveys:

  • I was talking about questionnaires that lasted ten minutes or more.
  • I excluded large parts of social research; some parts of which I think will continue to use questionnaires.

Why no more surveys?
In essence, there are three key reasons why I think surveys will disappear:

  1. The decline in response rates means that most survey research is being conducted with an ever smaller proportion of the population, who are taking very large numbers of surveys (in many cases several per week). This raises a growing number of concerns that the research is going to become increasingly unrepresentative.
  2. There are a growing number of areas where researchers feel that survey responses are poor indicators of true feelings, beliefs, priorities, and intentions.
  3. There are a growing number of options that can, in some cases, provide information that is faster, better, cheaper – or some combination of all three. Examples of these options include: passive data, big data, neuro-stuff, biometrics, micro-surveys, text processing of open-ended questions and comments, communities, and social media monitoring.

Surveys are the most important thing in market research!
There is a paradox about surveys in market research, a paradox highlighted by the fact that both of the following statements are true:

  1. The most important data collection method in market research is surveys (because over half of all research conducted, in terms of dollars spent, is conducted via surveys).
  2. The most important change in market research data collection is the move away from surveys.

Because surveys are currently so important to market research, there is a vast amount of work going on to improve them, so that they can continue to deliver value even whilst their share of MR declines. The steps being taken to improve the efficiency and efficacy of surveys include:
  • Mobile surveys
  • Device agnostic surveys
  • Chunking the survey into modules
  • Implicit association
  • Eye-tracking
  • Gamification
  • Behavioural economics
  • Biometrics
  • In the moment research
  • Plus a vast range of initiatives to merge other data, such as passive data, with surveys.

How quickly will surveys disappear?
When assessing how quickly something will disappear we need to assess where it is now and how quickly it could change.

It is hard to know exactly how many surveys are being conducted, especially with the growth of DIY options. So, as a proxy I have taken ESOMAR’s figures on market research spend.

The table below shows the proportion of global, total market research spend that is allocated to: Quant via surveys, Quant via other routes (e.g. people meters, traffic, passive data etc), Qual, and Other (including secondary data, consultancy and some proportion of communities).

The first three rows show the data reported in the ESOMAR Global Market Research reports. Each year reflects the previous year’s data. The data show that surveys grew as a proportion of research from 2007 to 2010. This was despite a reduction in the cost of surveys as F2F and CATI moved to online. From 2010 to 2013 there was indeed a drop in the proportion of all research spend that was devoted to surveys. However, given the falling cost of surveys and the continued growth of DIY, it is likely that the absolute number of surveys grew from 2010 to 2013.

Other quant, which covers many of the things that we think will replace surveys, fell from 2007 to 2010. In many cases this was because passive collection techniques became much cheaper, for example the shift from expensive services to Google Analytics.

The numbers in red are my guess as to what will happen over the next few years. My guess is based on 35 years in the industry, on talking to the key players, and on applying what I see around me.

I think surveys could lose 9 percentage points in 3 years – which is a massive change. Does anybody seriously think it will be much faster? If surveys lose 9 percentage points they will fall below 50% of all research, but still be the largest single method.

I am also forecasting that they will fall another 11 percentage points by 2019 – trends often accelerate – but again, does anybody really think it will be faster? If that forecast is right, by 2019 about one-third of paid-for research will still be using surveys. Other quant will be bigger than surveys, but it will not be a single approach; there will be many forms of non-survey research.

I also think that Other (which will increasingly mean communities and integrated approaches) and qual will both grow.

What do you think?
OK, I have nailed my flag to the mast; what do you think about this issue? Are my forecasts too high, about right, or too low? Do you agree that the single most important thing about existing data collection methods is the survey process? And, that the most important change is the movement away from surveys?


 

Apr 07 2014
 

Last week Jeffrey Henning gave a great #NewMR lecture on how to improve the representativeness of online surveys (click here to access the slides and recordings). During the lecture he touched lightly on the topic of calculating sampling error from non-probability samples, pointing out that it did not really do what it was supposed to. In this blog I want to highlight why I recommend using this statistic as a measure of reliability, but not validity.

If we calculate the sampling error for a non-probability sample, for example from an online access panel, we are not representing the wider population. The population for this calculation is just those people who might have taken the survey. For example, just those members of the online access panel who met the screening criteria and who were willing (during the survey period) to take the study. The sampling error tells us how good our estimates of this population are (i.e. those members of the panel who met the criteria and who were willing to take a survey at that particular time).

If we take a sample of 1000 people from an online access panel and we calculate that the confidence interval is +/-3% at the 95% level, what we are saying is that if we had done another test, on the same day, with the same panel, with a different group of people, we are 95% sure that the answer we would have got would have been within 3% of the first test. That is a measure of reliability. But we are not saying that if we had measured the wider population the answer would have been within 3%, or 10% or any other number we could quote.
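For readers who want to reproduce the arithmetic, here is a minimal sketch in Python (my illustration, not part of the original post) of the standard margin-of-error calculation for a proportion; the 1,000-person example above gives roughly the +/-3% quoted.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion.

    n: sample size, p: observed proportion (0.5 is the worst case),
    z: z-score for the confidence level (1.96 for 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1000 panellists, worst-case proportion of 50%
print(round(margin_of_error(1000) * 100, 1))  # approx 3.1 percentage points
```

As argued above, this number only describes how much a repeat study on the same panel, with the same specification, would be expected to vary; it says nothing about how far the panel is from the wider population.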

The sampling error statistic from a panel is not about validity, since we can’t estimate how representative the panel is of the wider population. But, it does give us a statistical measure of how likely we are to get the same answer again if we repeat the study on the same panel, with the same sample specification, during the same period of time – which is a pretty good statement of reliability.

Note: to researchers, reliability is about whether something measures the same way each time, while validity relates to whether what is measured is correct. A metal metre ruler that is 10 cm short is reliable (it is always 10 cm short), but it is not as valid as we would like.

My recommendation is to calculate the sampling error and use it to indicate which values from the non-probability sample are at least big enough to be reliable. But let’s not claim it represents the sampling error of the wider population, nor that it directly links to validity.

I would recommend adding text something like: “The sampling reliability of this estimate at the 95% level is +/- X%, which means that if we used the same sampling source 20 times, with the same specification, we would expect the answers to be within X% 19 times.”

Total Survey Error
Another reason to be careful with sampling error is that it is only one source of error in a survey. Asking leading questions, asking questions that people can’t answer (for example because we are poor witnesses to our own plans and motivations), or asking questions that people don’t want to answer (for example because of social desirability bias), can all result in much bigger problems than sampling error.

Researchers can sometimes be too worried about sampling error, leading them to ignore much bigger sources of error in their work.

 

Nov 24 2013
 

To help celebrate the Festival of NewMR we are posting a series of blogs from market research thinkers and leaders from around the globe. The posts will come from a range of contributors, from some of the most senior figures in the industry to some of the newest entrants into the research world.

A number of people have already agreed to post their thoughts, and the first will be posted later today. But, if you would like to share your thoughts, please feel free to submit a post. To submit a post, email a picture, bio, and 300 – 600 words on the theme of “Opportunities and Threats faced by Market Research” to admin@newmr.org.

Posts in this series
The following posts have been received and posted:

Aug 02 2013
 

This post has been written in response to a query I receive fairly often about sampling. The phenomenon it looks at relates to the very weird effects that can occur when a researcher uses non-interlocking quotas, for example with an online access panel: effects that I am calling unintentional interlocking quotas.

In many studies, quota controls are used to try to achieve a sample to match a) the population and/or b) the target groups needed for analysis. Quota controls fall into two categories, interlocking and non-interlocking.

The difference between the two types can be shown with a simple example, using gender (Male and Female) and colour preference (Red or Blue). If we know that 80% of Females prefer Red, that 80% of Males prefer Blue, and that there are equal numbers of Males and Females in our target population, then we can create interlocking quotas. In our example we will assume that the total sample size wanted is 200.

  • Males who prefer Red = 50% * 20% * 200 = 20
  • Males who prefer Blue = 50% * 80% * 200 = 80
  • Females who prefer Red = 50% * 80% * 200 = 80
  • Females who prefer Blue = 50% * 20% * 200 = 20

These quotas deliver the 200 people required, in the correct proportions.
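As a worked illustration, here is a minimal sketch in Python (my own, using only the figures from the example above): the interlocking cell targets are simply the joint proportions multiplied by the total sample size.

```python
total = 200
gender_share = {"Male": 0.5, "Female": 0.5}
# Colour preference within each gender: 80% of Females prefer Red,
# 80% of Males prefer Blue.
colour_given_gender = {
    ("Male", "Red"): 0.2, ("Male", "Blue"): 0.8,
    ("Female", "Red"): 0.8, ("Female", "Blue"): 0.2,
}

quotas = {
    cell: round(gender_share[cell[0]] * share * total)
    for cell, share in colour_given_gender.items()
}
print(quotas)
# {('Male', 'Red'): 20, ('Male', 'Blue'): 80,
#  ('Female', 'Red'): 80, ('Female', 'Blue'): 20}
```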

The Problems with Interlocking Quotas
The problem with the interlocking quotas above is that they require the researcher to know the colour preferences of Males versus Females before doing the research. In everyday market research the quotas are often more complex, for example: 4 regions, 4 age breaks, 2 gender breaks, 3 income breaks. This pattern (of region, age, gender, and income) would generate 96 interlocking cells, and the researcher would need to know the population data for each of these cells. If these characteristics were then to be combined with a quota related to some topic (such as coffee drinking, car driving, TV viewing etc) then the number of cells becomes very large, and it is very unlikely the researcher would know the proportions for each cell.

Non-Interlocking Quotas
When interlocking cells become too tricky, the answer tends to be non-interlocking cells.

In our example above, we would have quotas of:

  • Male 100
  • Female 100
  • Prefer Red 100
  • Prefer Blue 100

The first strength of this route is that it does not require the researcher to know the underlying interlocking structure of the characteristics in the population. The second strength is that it makes it simple to design the sample around the researcher’s needs. For example, if we know that Red is preferred by 80% of the population, a researcher might still collect 100 Red and 100 Blue, to ensure the Blue sample was large enough to analyse, and the total sample could then be created by weighting the results (to down-weight Blue and up-weight Red).
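Here is a minimal sketch of that weighting step in Python (my illustration, assuming 100 completes per colour cell and the known 80/20 population split):

```python
population_share = {"Red": 0.8, "Blue": 0.2}
sample_counts = {"Red": 100, "Blue": 100}
total_sample = sum(sample_counts.values())

# Weight = population share / sample share, so the weighted totals
# reproduce the 80/20 split seen in the population.
weights = {
    group: population_share[group] / (count / total_sample)
    for group, count in sample_counts.items()
}
print(weights)  # {'Red': 1.6, 'Blue': 0.4}
```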

Unintentional Interlocking Quotas
However, non-interlocking quotas can have some very weird and unpleasant effects if there are differences in response rates in the sample. This is best shown by an example.

Let’s make the following assumptions about the population for this example:

  • Prefer Red 80%
  • Prefer Blue 20%
  • No differences in colour gender preferences, i.e. 80% of males and females prefer Red
  • Female response rate 20%
  • Male response rate 10%

The researcher knows that overall 80% of people prefer Red, but does not know what the figures are for males and females; indeed, the researcher hopes this project will throw some light on any differences.

The specification of the study is to collect 200 interviews, using the following non-interlocking quotas.

  • Male 100
  • Female 100
  • Prefer Red 100
  • Prefer Blue 100

A largish initial sample of respondents is invited, let’s assume 1000 males and 1000 females. Note that 1000 males at a 10% response rate should deliver 100 completes.

However!!!
After 125 completes have been achieved, the pattern of completed interviews looks like this:

  • Female Red 67
  • Female Blue 17
  • Male Red 33
  • Male Blue 8

This is because the probability of each of the 125 interviews can be estimated by combining the chance that it is male or female (a 10% male response rate and a 20% female response rate mean that each complete is one-third likely to be male and two-thirds likely to be female) with the preference for Red (80%) or Blue (20%). To the nearest whole percentages, that gives the following probabilities: Female Red 53%, Female Blue 13%, Male Red 27%, Male Blue 7%.
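To make the arithmetic explicit, here is a minimal sketch in Python (my own working of the example above, not code from the original post) that derives those cell probabilities and the expected mix of the first 125 completes:

```python
response_rate = {"Male": 0.10, "Female": 0.20}
colour_pref = {"Red": 0.80, "Blue": 0.20}

# With equal numbers invited, the chance the next complete comes from a
# given gender is proportional to that gender's response rate
# (1/3 Male, 2/3 Female).
total_rate = sum(response_rate.values())
gender_prob = {g: r / total_rate for g, r in response_rate.items()}

completes = 125
for gender, g_prob in gender_prob.items():
    for colour, c_prob in colour_pref.items():
        cell_prob = g_prob * c_prob
        print(gender, colour, f"{cell_prob:.0%}", round(cell_prob * completes))
# Male Red 27% 33, Male Blue 7% 8, Female Red 53% 67, Female Blue 13% 17
```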

The significance of 125 completes is that the Red Quota is complete. No more Reds can be collected. This, in turn, means:

  • The remaining 75 completes will all be people who prefer Blue
  • 16 of the remaining interviews will be Female (we already have 84 Females, so the Female quota will close when we have another 16)
  • 59 of the remaining interviews will be Male; Male Blues will be the only missing cell left to fill
  • The rapid filling of the Red quota, especially with Females, has resulted in interlocking quotas being created for the Blue cells.

The final result from this study will be:

  • Female Red 67
  • Female Blue 33
  • Male Red 33
  • Male Blue 67

Although there is no gender bias to colour preference in the population, in our study we have created a situation where two-thirds of Males prefer Blue, and two-thirds of the Females prefer Red.

In this example we are going to have to invite a lot more Males. We started by inviting 1000 Males, and with a response rate of 10% we might expect to collect our 100 completes. But we have ended up needing to collect 67 Male Blues, because of the unintentional interlocking quotas. We can work out the number of invites it takes to collect 67 Male Blues by dividing 67 by the product of the response rate (10%) and the incidence of preferring Blue (20%), which gives us 67 / (10% * 20%) = 3,350. The 1000 male invites need to be boosted, by another 2,350, to 3,350 to fill the cells. Most researchers will have noticed that the last few cells in a project are hard to fill; that is because they have created unintentional interlocking quotas, locking the hardest cells together, which makes them even harder to fill.
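The invite arithmetic generalises to a one-line rule: invites needed = target completes / (response rate x incidence). A minimal sketch in Python (mine, using the example’s figures):

```python
def invites_needed(target_completes, response_rate, incidence):
    """Expected number of invites required to fill a cell,
    assuming response rate and incidence are independent."""
    return target_completes / (response_rate * incidence)

# The 67 Male Blue completes still needed, at a 10% male response rate
# and a 20% incidence of preferring Blue.
print(round(invites_needed(67, 0.10, 0.20)))  # 3350
```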

This, of course, is a very simple example. We only have two variables, each with two levels, and the only varying factor is the response rate between Males and Females. In an everyday project we would have more variables, and response rates will often vary by age, gender, and region. So, the scale of the problem in typical projects using non-interlocking quotas is likely to be larger than in this example, at least for the harder cells to complete.

Improving the Sampling/Quota Controlling Process
Once we realise we have a problem, and with the right information, there is plenty we can do to remove or ameliorate it.

  • Match the invites to the response rates (see the sketch after this list). If, in the example above, we had invited twice as many Males as Females, the cells would have completed perfectly.
  • Use interlocking cells. To do this you might run an omnibus before the main survey to determine what the cells targets should be.
  • Use the first part of the data collection to inform the process. So, in the example above, we could have set the quotas to 50 for each of the four cells. As soon as one cell fills, we look at the distribution of the data and amend the structure of the quotas, making some of them interlocking, perhaps relaxing (i.e. making bigger) some of the others, and inviting more of the sorts of people we are missing. This does not fix the problem, but it can greatly reduce it, especially if you bite the bullet and increase the sample size at your own expense.
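As a minimal sketch of the first suggestion (Python, my illustration; the response rates are the example’s, not real panel figures), invites are scaled inversely to the expected response rate so the gender cells fill at roughly the same pace:

```python
targets = {"Male": 100, "Female": 100}          # completes wanted per cell
response_rate = {"Male": 0.10, "Female": 0.20}  # expected rates, e.g. from the panel company

# Invite enough of each group to hit its own target at its own response
# rate, so the cells advance in step.
invites = {g: round(targets[g] / response_rate[g]) for g in targets}
print(invites)  # {'Male': 1000, 'Female': 500}
```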

Work with the panel company. Tell the panel company that you want them to phase their invites to match likely response rates; they will know which demographics respond better. For the demographic cells, watch to see that they are advancing in step. For example, check that Young Males, Young Females, Older Males, and Older Females are all filling at the same rate, and shout if this is not happening.

It is a good idea to make sure that the fieldwork is not going to happen so fast that you won’t have time to review it and make adjustments. As a rule of thumb, you want to review the data when one of the cells is about 50% full. At that stage you can do something about it. This means you do not want the survey to start after you leave the office, if there is a risk of 50% of the data being collected before the start of the next day.


Questions? Is this a problem you have come across? Do you have other suggestions for dealing with it?

May 19 2013
 

1 It’s not your classic textbook
This book focusses on the questions that are part of the everyday practicalities of market research, the advice you don’t typically get from a textbook – the type of advice researchers would ideally ask a mentor or more experienced colleague about – but unfortunately not everyone has these support networks.

2 The contributors are practitioners
The content has been prepared by a team of experienced researchers, so the advice is relevant for researchers who are talking to clients, writing proposals, managing projects, developing questionnaires, analysing data, reporting results, etc.

3 A great resource for the generalist or research all-rounder
(Thanks to Sue Bell for emphasising this point.)
Many conferences and events, social media forums, and journals focus on specialist areas. This book doesn’t cover everything, but it aims to give a solid grounding in the basics, written and reviewed by experienced market and social research industry heavyweights who know what you need to know.

4 A balance between traditional and new techniques
The book covers the traditional areas – questionnaire design, qualitative, pricing research, B2B – as well as the emerging techniques, for example, communities and social media research.

5 A variety of views is expressed
In some areas of our profession there is not a consensus view – particularly in new and rapidly developing areas. This book highlights areas where consensus does not exist and presents the differing viewpoints.

6 The Client perspective is explored
Special attention is paid to one of the key relationships in market research, that of client and research provider, with an emphasis on the points of tension.

7 A Global Perspective
Unlike some textbooks, which focus on specific markets or regions, this book recognises that many researchers operate in international markets, and it addresses the issues and challenges faced by those working in markets with different levels of economic and technological development.

8 Ethics, Laws, Codes and Guidelines
As could be expected of a book put together by ESOMAR, the book explains in simple and clear terms why we have these and how to fit them into everyday research.

9 Advice for both new researchers and more experienced researchers who are new to a topic
Thanks to Phyllis Macfarlane for emphasising this point.

10 It’s great value, at 20 Euros (including postage and packaging)
And, if you like it so much you want to bulk order for colleagues, clients, or students – better prices are available via ESOMAR!

Join us at the book launch
On Wednesday, 22 May, ESOMAR and NewMR are holding a virtual book launch, where contributors to the book will explain the book’s mission, its content, and more about how you can be involved. Click here to find out more details and to register to attend.

So what do you think?

Declaration of interest: I am one of the Editors and Curators of the project (as was NewMR’s Ray Poynter) – Sue York

Jan 16 2013
 

One of the questions I get asked fairly often is when an answer list in a survey should be randomised and when it should be presented in the same order to everybody.

In my opinion, the key issue is to think about how the respondent is answering the question in terms of:

  1. Does the respondent ‘know’ the answer? In which case the questionnaire needs to help them find their answer.
  2. Is the respondent looking at the answer list and picking the most applicable option or options? In which case randomising the list is highly desirable.

Examples of the first category, where people already know the answer, are: how old are you (show the answers in ascending age order), gender (nobody randomises male/female, do they?), and where do you live (organise the list alphabetically or regionally). I would also include in this category questions like what make of car do you own and what type of phone do you have, and I might include which of these supermarkets is your main one – if the list is short.

Examples of the second category, where the list guides the selection, include: which of these statements best describes your attitude to …., which of the following beers have you seen advertising for in the last month, and which supermarkets do you ever use?

However, randomising can require more thought than simply clicking the randomise button in a survey scripting tool. If I want to ask which soft drinks have you drunk at least once in the last month, I want Coke to appear next to Coke Light, Coke Zero etc. Sometimes, to help the participant, it is necessary to break the list into several lists, each randomised. So, when asking which magazines somebody has read in the last three months, you might show a list of weekly titles, monthly titles, and online titles, rather than randomising them as one list.

Of course, even in a randomised list, key options such as Other and None of these should remain fixed at the bottom of the list.
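For scripters who want to go beyond the basic randomise button, here is a minimal sketch in Python (purely illustrative; the Coke variants come from the example above, while the other drink names are hypothetical placeholders) of randomising a list while keeping related items together and the fixed options anchored at the bottom:

```python
import random

# Related options stay together: the groups are shuffled, and the items
# within each group are shuffled too.
grouped_options = [
    ["Coke", "Coke Light", "Coke Zero"],
    ["Pepsi", "Pepsi Max"],
    ["Lemonade"],
]
fixed_tail = ["Other", "None of these"]  # never randomised, always last

def build_answer_list(groups, tail):
    groups = [group[:] for group in groups]  # copy so the master list is untouched
    random.shuffle(groups)
    for group in groups:
        random.shuffle(group)
    return [item for group in groups for item in group] + tail

print(build_answer_list(grouped_options, fixed_tail))
```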

One pedantic point is that researchers should avoid saying they have randomised the list to remove order effects. Every randomised list (that is, the list as seen by a specific respondent) has order effects. The items near the top of that version of the list will be more likely to be picked, and specific adjacency effects will exist (where seeing one item next to another changes the chance of it being selected). However, randomising flattens the order effects out, distributing them across all the items in the list and across all the participants, instead of focusing them on the items at the top of the list.

Do you have any guiding rules for when to randomise and when not to?