Aug 26, 2014
No More Surveys

Back in March 2010, I caused quite a stir with a prediction, at the UK’s MRS Conference, when I said that in 20 years we would not be conducting market research surveys. I followed my conference contribution with a more nuanced description of my prediction on my blog.

At the time the fuss was mostly from people rejecting my prediction. More recently there have been people saying the MR industry is too fixated on surveys, and my predictions are thought by some to be too cautious. So, here is my updated view on why I think we won’t be conducting ‘surveys’ in 2034.

What did I say in 2010?
The first thing I did was clarify what I meant by market research surveys:

  • I was talking about questionnaires that lasted ten minutes or more.
  • I excluded large parts of social research; some parts of which I think will continue to use questionnaires.

Why no more surveys?
In essence, there are three key reasons why I think surveys will disappear:

  1. The decline in response rates means that most survey research is being conducted with an ever smaller proportion of the population, who are taking very large numbers of surveys (in many cases several per week). This raises a growing number of concerns that the research is going to become increasingly unrepresentative.
  2. There are a growing number of areas where researchers feel that survey responses are poor indicators of true feelings, beliefs, priorities, and intentions.
  3. There are a growing number of options that can, in some cases, provide information that is faster, better, cheaper – or some combination of all three. Examples of these options include: passive data, big data, neuro-stuff, biometrics, micro-surveys, text processing of open-ended questions and comments, communities, and social media monitoring.

Surveys are the most important thing in market research!
There is a paradox in market research about surveys, and the paradox is highlighted by the following statements both being true:

  1. The most important data collection method in market research is surveys (over half of all research conducted, in terms of dollars spent, is conducted via surveys).
  2. The most important change in market research data collection is the move away from surveys.
Because surveys are currently so important to market research there is a vast amount of work going on to improve them, so that they can continue to deliver value, even whilst their share of MR declines. The steps being taken to improve the efficiency and efficacy of surveys include:
  • Mobile surveys
  • Device agnostic surveys
  • Chunking the survey into modules
  • Implicit association
  • Eye-tracking
  • Gamification
  • Behavioural economics
  • Biometrics
  • In the moment research
  • Plus a vast range of initiatives to merge other data, such as passive data, with surveys.

How quickly will surveys disappear?
When assessing how quickly something will disappear we need to assess where it is now and how quickly it could change.

It is hard to know exactly how many surveys are being conducted, especially with the growth of DIY options. So, as a proxy I have taken ESOMAR’s figures on market research spend.

The table below shows the proportion of global, total market research spend that is allocated to: Quant via surveys, Quant via other routes (e.g. people meters, traffic, passive data etc), Qual, and Other (including secondary data, consultancy and some proportion of communities).

The first three rows show the data reported in the ESOMAR Global Market Research reports. Each year reflects the previous year’s data. The data show that surveys grew as a proportion of research from 2007 to 2010. This was despite a reduction in the cost of surveys as F2F and CATI moved to online. From 2010 to 2013 there was indeed a drop in the proportion of all research spend that was devoted to surveys. However, given the falling cost of surveys and the continued growth of DIY, it is likely that the absolute number of surveys may have grown from 2010 to 2013.

Other quant, which covers many of the things that we think will replace surveys, fell from 2007 to 2010. In many cases this was because passive collection techniques became much cheaper. For example the shift from expensive services to Google Analytics.

The numbers in red are my guess as to what will happen over the next few years. My guess is based on 35 years in the industry, talking to the key players, and applying what I see around me.

I think surveys could lose 9 percentage points in 3 years – which is a massive change. Does anybody seriously think it will be much faster? If surveys lose 9 percentage points they will fall below 50% of all research, but still be the largest single method.

I am also forecasting that they will fall another 11 percentage points by 2019 – trends often accelerate – but again, does anybody really think it will be faster? If that forecast is true, by 2019 about one-third of paid for research will still be using surveys. Other quant will be bigger than surveys, but will not be a single approach; there will be many forms of non-survey research.

I also think that Other (which will increasingly mean communities and integrated approaches) and qual will both grow.

What do you think?
OK, I have nailed my colours to the mast. What do you think about this issue? Are my forecasts too high, about right, or too low? Do you agree that the single most important thing about existing data collection methods is the survey process? And that the most important change is the movement away from surveys?


Jul 16, 2014

Sometimes when I run a workshop or training session people want detail, they want practical information about how to do stuff. However, there are times when what people want is a big picture, a method of orientating themselves in the context of the changing landscape around them. Tomorrow I am running a workshop for #JMRX in Tokyo and we are looking at emerging techniques, communities, and social media research – so a big picture is going to be really useful to help give an overview of the detail, and to help people see where things like gamification, big data, and communities all fit.

So, here is my Big Picture of NewMR (click on it to see it full size), and I’d love to hear your thoughts and suggestions.

Big Picture

The Big Picture has five elements

The heart of the message is that we have reached an understanding that surveys won’t/can’t give us the answers to many of the things we are interested in. People’s memories are not good enough, many decisions are automatic as opposed to thought through, and most decisions are more emotion than fact. Change is needed, and the case for this has been growing over the last few years.

The four shapes around the centre are different strands that seek to address the survey problem.

In the top left we have big data and social media data, moving away from working with respondents, collecting observations of what people say and do, and using that to build analyses and predictive models.

In the top right we have a battery of new ways of working with respondents to find out why they do things, going beyond asking them survey questions.

In the bottom left we have communities, which I take as a metaphor for working with customers, co-creating, crowdsourcing, treating customers as insiders, not just users.

The bottom right combines elements from the other three. ‘In the moment’ is perhaps, currently, the hottest thing in market research. It combines the ability to watch and record what people do with the ability to interact with them, exploring why they did it and what they would do if the options changed.

So, that is my big picture. Does it work for you? What would you add, change, delete, or tweak?


Jun 22, 2014

We like to think of ourselves as rational creatures and we like to think we can trust our ears. However, watch the video below and be ready to change your mind.

The McGurk effect, the understanding of which dates back to 1976, shows how hearing and vision interact with each other. One of the interesting things about this effect is that even once you are aware of it you still experience it.

From a marketing and market research point of view key messages are:

  1. Changing the sound can change the perception, which means that the real sound should be tested as part of the research.
  2. More generally, the behavioural sciences, such as behavioural economics and neuromarketing, are changing our understanding of how marketing works and how it should be evaluated.
  3. Perception is not reality, which in terms of persuasion means that reality is not always relevant.
  4. People exposed to this sort of effect may be tricked, but if they are they are likely to be angry once they become aware – so include checking for post-purchase remorse as part of the research.

Can you suggest other similar effects that help remind marketers and market researchers that they can’t trust their model of the rational consumer?


Jun 21, 2014
Nissan Small Car

A very large part of market research is based on asking people questions, for example in surveys, focus groups, depth interviews, and online discussions. In general, people are very willing to answer our questions, but the problem is that they will do it even when they can’t give us the right answer.

At IIeX last week Jan Hofmeyr shared the results of some research where respondents had been asked about which brand they buy most often and he compared it to their last 3 and last 6 purchases from audit data. He found that in the last 3 purchases 68% of people had not bought the product they claimed to buy ‘most often’, and in the last 6 purchases 58% of people had not bought their ‘most often’ brand.

The video below is designed for entertainment, but it illustrates the bogus answer problem really well:

There are two key reasons why asking questions can produce bogus answers:

  1. Social desirability bias. People are inclined to try to show themselves in the best possible light. Ask them how often they clean their teeth and they are going to want to give an answer that makes them look good, or at least does not imply they are lazy or dirty. In the video, many of the people know that music fans are supposed to know about music, so they don’t want to appear dumb.
  2. Being a poor witness to our own motivations and actions. Writers like Daniel Kahneman, Dan Ariely, and Mark Earls have written about how people tend to be unaware of how they make decisions. Some of the people in the video, primed by the question to assume that they know about the band, may be deceived by their own thought processes, with what they do know being used as a pattern generator to produce plausible thoughts.

Of course, in addition to these two reasons, some people simply lie – but in my experience that is a tiny proportion (when seeking the views of customers and the general public) compared with the two reasons listed above. However, the problem of conscious lies increases if incentives are offered.

One way to reduce the number of false answers is to make it much easier for people to not answer a question, ideally by not having to say “I don’t know”, and letting people guide you to the strength of their answer. Look at the video again and you will see that many of the people being interviewed are trying to signal they don’t really know about the bands, for example “I don’t know any of their music but I’ve heard from my friends that ….”. For the sake of the interview and the comedy situation the interviewer presses them into appearing to know more. In an information gathering process we should take that as a cue to back off and make it safe or even ‘wise’ to avoid going any further.

Another important step is to avoid asking questions that most people won’t ‘know’ the answer to, such as “What is the most important factor to you when selecting a grocery store?”, “How many cups of coffee will you drink next week?”, “How many units of alcohol do you drink in an average week?”.

If you’d like to know more about asking questions, check out this presentation from Pete Cape.

The problems with direct questions are one of the major reasons that market researchers are looking towards techniques that use one or more of the following:

  • Implicit or ‘neuro’ techniques, such as facial coding, implicit association, and voice analytics.
  • Passive observations, i.e. recording what people actually do.
  • In the moment research, where people give their feedback at the time of an event, not at a later date via recall.

Jun 17, 2014

Most samples used by market research are in some sense the ‘wrong’ sample. They are the wrong sample because of one or more of the following:

  • They miss people who don’t have access to the internet.
  • They miss people who don’t have a smartphone.
  • They miss the 80%, 90%, or 99% who decline to take part.
  • They miss busy people.
Samples that suffer these problems include:
  • Central location research misses the people who don’t come into central locations.
  • Face-to-face, door-to-door struggles with people who tend not to be home or who do not open the door to unknown visitors.
  • RDD/telephone misses people who decline to be involved.
  • Online access panels miss the 95%+ who are not members of panels.
  • RIWI and Google Consumer Surveys – misses the people who decline to be involved, and under-represents people who use the internet less.
  • Mobile research – typically misses people who do not have a modern phone and who do not have a reliable internet package/connection.

But, it usually works!

If we look at what AAPOR call non-probability samples with an academic eye we might expect the research to usually be ‘wrong’. In this case ‘wrong’ means gives misleading or harmful advice. Similarly, ‘right’ means gives information that supports a better business decision.

The reason that market research is a $40 billion industry is that its customers (e.g. marketers, brand managers, etc.) have found it is ‘right’ most of the time. Which raises the question: “How can market research usually work when the sample is usually ‘wrong’?”

There are two key reasons why the wrong sample gives the right answer and these are:

  1. Homogeneity
  2. Modelling

If different groups of people believe the same thing, or do the same thing, it does not matter, very much, who is researched. As an experiment look at your last few projects and look at the data split by region, split by age, or split by gender. In most cases you will see there are differences between the groups, often differences big enough to measure, but in most cases the differences are not big enough to change the message.

The reason there are so often few important differences is that we are all more similar to each other than we like to think. This is homogeneity. The level of homogeneity increases if we filter by behaviour. For example, if we screen a sample so that they are all buyers of branded breakfast cereal, they are instantly more similar (in most cases) than the wider population. If we then ask this group to rank 5 pack designs, there will usually be no major differences by age, gender, location etc (I will come back to this use of the word usually later).
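The homogeneity check described above can be sketched in code. This is an illustrative example with made-up preference shares (the subgroup names and percentages are hypothetical), showing how a demographic split can reveal measurable differences that still leave the overall message unchanged:

```python
# Hypothetical pack-design preferences (% preferring each design) for two
# age subgroups of screened cereal buyers. The figures are invented.
ranks = {
    "under_35": {"design_A": 42, "design_B": 31, "design_C": 27},
    "35_plus":  {"design_A": 47, "design_B": 29, "design_C": 24},
}

def top_choice(prefs):
    """Return the most preferred option for one subgroup."""
    return max(prefs, key=prefs.get)

winners = {group: top_choice(prefs) for group, prefs in ranks.items()}

# Differences between subgroups are measurable (42% vs 47%), but the
# headline message - which design wins - is the same in both groups.
same_message = len(set(winners.values())) == 1
print(winners, "message unchanged:", same_message)
```

The point of the sketch is the final check: subgroup differences only matter to the recommendation when they flip the winner.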

In commercial market research, our ‘wrong’ samples usually make some effort to reflect the target population: we match their demographics to the population, and we screen them by interest (for example heavy, medium, and light users of the target brand). The result of this is that, surprisingly often, an online access panel or a Google Consumer Surveys test will produce useful and informative answers.

The key issue is usually not whether the sample is representative in a statistical sense, because it usually isn’t; the question should be whether it is a good proxy.

The second way that market researchers make their results useful is modelling. If a researcher finds that their data source (let’s assume it is an online access panel) over-predicts purchase, they can down-weight their predictions; if they find their election predictions understate a specific party, they can up-weight the results. This requires having lots of cases and assumes that something that worked in the past will work in the future.
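A minimal sketch of this kind of down-weighting, with assumed numbers (the purchase rates below are invented for illustration, not taken from any real panel):

```python
# Calibration sketch (assumed figures): if a panel historically
# over-predicts purchase, scale new predictions by the observed ratio
# of past actual outcomes to past panel predictions.
past_predicted = 0.40   # stated purchase intent in earlier panel waves
past_actual    = 0.25   # purchase rate actually observed in the market
calibration = past_actual / past_predicted  # ~0.625

def adjust(predicted_share):
    """Down-weight a raw panel prediction using the historical ratio."""
    return predicted_share * calibration

# A raw 32% stated intent becomes a calibrated ~20% prediction.
print(adjust(0.32))
```

As the post notes, the approach stands or falls on the assumption that the historical bias is stable: the calibration factor learned from past waves must still apply to the new study.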

So, what’s the problem?

The problem for market research is that there is no established body of knowledge or science to work out when the ‘wrong’ sample will give the right answer, and when it will give the ‘wrong’ answer. Some of the cases where the wrong sample gave the wrong answer include:

  • The 1936 US presidential election, when a sample of 2 million people failed to predict that Roosevelt would beat Landon.
  • In 2012 Google Consumer Surveys massively over-estimated the number of people who edit Wikipedia – perhaps by as much as 100% – see Jeffrey Henning’s review of this case.

My belief is that market researchers need to get over the sampling issue, by recognising the problems and by seeking to identify when the wrong sample is safe, when it is not safe, and how to make it safer.

When and why does the right sample give the wrong answer?

However, there is probably a bigger problem than the wrong sample. This problem is when we use the right sample, but we get the wrong answer. There are a wide variety of reasons, but key ones include:

  • People generally don’t know why they do things, and they don’t know what they are going to do in the future, but they will usually answer our questions.
  • Behaviour is contextual, for example choices are influenced by what else is on offer – research is too often either context free, or applies the wrong context, or assumes the context is consistent.
  • Behaviour is often not linear, and quite often does not follow the normal distribution – but most market research is based on means, linear regression, correlation etc.

A great example of the right sample giving the wrong answer is New Coke. The research, evidently, did not make it clear to participants that this new flavour was going to replace the old flavour, i.e. they would lose what they saw as “their Coke”.

In almost every product test conducted there are people saying they would buy it who would certainly not buy it. In almost every tracker there are people saying they have seen, or even used, products they have not seen – check out this example.

The issue that researchers need to focus on is total error: not sampling error, not total survey error, but total error. We need to focus on producing useful, helpful advice.

Apr 21, 2014
Path into salt

“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” is one of the most commonly quoted comments about advertising, being variously attributed to John Wanamaker and William Lever. Perhaps as a consequence, one of the key uses of market research is to test, monitor, and track advertising. However, it might well be that half of the money spent on testing and tracking advertising is also wasted.

How does advertising work?
In the distant past we used to think advertising worked along the lines of the AIDA model: it helped create Awareness/Attention, Interest, Desire, and Action. However, more recent research, including behavioural science, econometrics, and media mix modelling, has shown that the picture is much more complex.

One of the best studies of how advertising works is one carried out for the IPA by Les Binet and Peter Field, which produced the report “The Long and the Short of It”.

Short Term and Long Term?
One of the key findings in the work by Binet and Field is that short-term success is not a good or reliable indicator of long-term success. Rational measures, such as standout and attention, are quite good at predicting short-term effects (such as whether people will try something, click on it, etc). However, these measures are not good predictors of long-term success.

What is long-term success? Perhaps the best way of encapsulating long-term success is to say that it reduces price elasticity. If we become more attached to a product or service we keep buying it, even if the price goes up, i.e. we become less price elastic. This is directly related to the ability to make more net profit, not just to the ability to move the volume of sales.
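Price elasticity can be made concrete with the standard formula: percentage change in quantity demanded divided by percentage change in price. The numbers below are purely illustrative, showing why an attached customer base is the more profitable one:

```python
# Standard price elasticity of demand, with illustrative numbers.
def price_elasticity(q0, q1, p0, p1):
    """% change in quantity divided by % change in price."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Price rises 10%; an attached customer base only drops volume by 3%...
attached = price_elasticity(q0=100, q1=97, p0=1.00, p1=1.10)
# ...while a weakly attached base loses 15% of volume for the same rise.
weak = price_elasticity(q0=100, q1=85, p0=1.00, p1=1.10)

print(round(attached, 2), round(weak, 2))  # -0.3 vs -1.5
```

The brand with elasticity near -0.3 can raise prices and keep most of its revenue; the one near -1.5 cannot, which is why reduced elasticity translates into net profit rather than just sales volume.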

If the short term is so bad at predicting the long term, why do we focus on measuring short-term predictors?
In my opinion, the key reason for focusing on short-term predictors is that we can measure them, and they tend to correlate well with short term results – and in today’s short-term world an immediate correlation is quite reassuring.

The problem with long-term measures is that there is no clearly established research technique that predicts the long-term effects of advertising, although many agencies are working hard to find solutions. There is a feeling that focusing on emotional messaging might be capable of being predictive of long-term results, but the jury is out at the moment.

Watch the video
You can hear Les Binet and Peter Field talk about their findings in the video below.



Nov 24, 2013

To help celebrate the Festival of NewMR we are posting a series of blogs from market research thinkers and leaders from around the globe. These posts will be from some of the most senior figures in the industry to some of the newest entrants into the research world.

A number of people have already agreed to post their thoughts, and the first will be posted later today. But, if you would like to share your thoughts, please feel free to submit a post. To submit a post, email a picture, bio, and 300 – 600 words on the theme of “Opportunities and Threats faced by Market Research” to

Posts in this series
The following posts have been received and posted:

Nov 11, 2013

There seems to be broad agreement that in the large developed markets, for example the US, about 80% to 90% of new products fail. The definitions of success vary but they tend to centre around things like achieving good sales and distribution in year 2.

The key point about new products is that there is a relatively fixed limit to how many products can be successful in a given year, and there is no practical limit to the number of new products that can be launched. The reasons for this are covered below in this post, but the consequence of a relatively fixed number of successes and a growing number of new products is a declining success rate for products.

Behavioural Economics and New Products
One of the things we have been reminded of by the likes of Daniel Kahneman and Dan Ariely is that making choices takes effort, changing behaviour takes effort, and the energy for these choices and changes is in limited supply. Consider products like your regular/normal brand of coffee, detergent, or toothpaste: changing requires a decision; not changing simply requires the heuristics of repeat behaviour.

Households don’t buy an endless supply of brands/products
In ‘Differentiate or Die’ Jack Trout estimated that most US households cover about 85% of their needs by repeatedly buying 150 SKUs. Unless consumers are willing to buy a growing range of products, the growing number of SKUs will increase faster than people can adopt them, and the success rate will fall.

According to Insight Out of Chaos, 2001, in 1980 there were 2,689 new food launches; by 2000 this had grown to 16,390, more than a six-fold increase in 20 years. Growth in GDP and waistlines can increase the number of food products bought, but there is still a limit to what consumers can buy and consume. New non-food categories, such as male grooming, will increase the absolute number of successful products/brands in shoppers’ repertoires, but the impact is modest in absolute terms.

How many times do we change brands/products?
In order to explore product/brand changes in the UK, I partnered with Vision Critical’s Springboard panel to collect a broadly representative sample of 2,004 adults (in October 2013). Using a shopping basket of frequently bought branded products, we asked the sample whether they had changed the brand, flavour, or size they buy in the last month. For example, had they changed their brand/flavour/size of Tea or Toilet roll.

The table below shows the findings.

63% of the sample said they had not made any changes, on these nine products, in the last month. The lowest figure was for Frozen food, with 7% saying they had made a change in the last month, a category where shops’ own brands are very strong. The highest category was Toilet roll, which might be a result of the substantial price promotions taking place in UK retail in this field, which impact both brand and size.

The data showed very few demographic differences. The main difference was between the younger respondents (aged under 25 years) where 53% said they had not changed any of their products, and people 25 and over, where 64% said they had not changed any of these nine in the last month.

When considering these changes, we need to remember that many of the changes will be to an existing brand, not necessarily to a new brand.

The table below shows how many people made 0 changes, 1 change, through to 9 changes.

The data shows that there is a power law type distribution for making changes. Most people don’t change, of the rest, 40% make just one change, of people who make more than one change, over 30% made two changes. The slight blip in the size of the blue bar at ‘9 changes’ probably represents the limited range used in the study, some of these people would have made 10 or more changes.

In total, 53% of the product changes in the study came from the 9% of people who made four or more changes in the last month. However, might these ‘easy win’ people also be ‘easy lose’ people?

The table below stretches the data quite a bit, but it makes an interesting picture and suggests potential for further research. The table shows the results of multiplying the number of changes in one month by 12 to get an annualised figure.

The table suggests that the average consumer is likely to make just under one change to the Frozen food brands they buy in a typical year, and just under two changes to the regular brand/size of toilet paper. The changes are likely to be distributed with some people making very few changes, and a few people making many changes. If these numbers are broadly correct, then a typical product category is going to have about one change per consumer per year, which means the number of changes is limited by the number of people in the market. The proportion between changes to existing brands and new brands will, in turn, govern the number of new product launches that will be successful.
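The annualising step is simple arithmetic: multiply the share of people changing in one month by 12. A sketch using the 7% Frozen food figure reported above and an assumed 16% figure for Toilet roll (the latter is my illustrative number, chosen to match the ‘just under two changes a year’ description):

```python
# Annualising sketch: monthly change rate x 12 approximates changes per
# consumer per year. Frozen food (7%) is from the study; the Toilet roll
# figure (16%) is assumed for illustration.
monthly_change_rate = {"Frozen food": 0.07, "Toilet roll": 0.16}

annualised = {cat: round(rate * 12, 2)
              for cat, rate in monthly_change_rate.items()}
print(annualised)  # roughly 0.84 and 1.92 changes per year
```

As the post notes, this stretches the data: it assumes the surveyed month is typical, and it averages over a distribution where most people change rarely and a few change often.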

There are good theoretical reasons for believing that the number of changes in the brands/products purchased that most people will make in a year has an upper limit, based on the cognitive effort required to make changes. There is data to suggest that the number of products any individual household uses is relatively low (most needing only around 150 SKUs to meet most of their needs). There is plenty of data showing that the number of new products launched is large and has been growing over the decades.

These factors suggest that the reason that 80% to 90% of new products fail is a result of the way markets work, rather than to do with the products themselves. Even if every new product was yummy or effective, and well-designed, only relatively few could be successful.

When we tested this hypothesis, with the help of Vision Critical’s Springboard panel, we found, as predicted, that relatively few changes were made by most people. And, many of those changes would be to existing brands, not to new brands.

The quality of a new product and its marketing is going to have an impact on whether it is one of the 10% which are successful, rather than the 90% which are less successful (more on this in another post). But, the overall ratio of successful to unsuccessful product launches does not have to be related to the quality or marketing of those products. The absolute number of successes is likely to be related to the size of the market in people and GDP terms, and the maturity of the market. If the number of new products launched keeps increasing, then the failure rate will (all other things being equal) increase.

Thoughts? Comments? Data?

Nov 3, 2013

As part of the preparation for the Festival of NewMR (2-6 December), we are running a study looking at the different sources of inspiration that contribute to market research thinking and innovation. The study is being supported, programmed, and fielded by Festival Gold Sponsor Survey Analytics.

Being co-creational by nature, and given that there is no good current research to ‘borrow’ from, the draft questions are set out below in this post – or you can download them from here. We’d love to hear your suggestions.

We are aiming to program the study Saturday 9th November, so suggestions before then would be greatly appreciated.

Draft Survey

What are the sources of market research inspiration?
This short survey has been sponsored and programmed by Survey Analytics, a Gold Sponsor of The Festival of NewMR 2013. The study looks into the places where market research draws its ideas and inspiration. The results will be presented at the Main Stage of the Festival and published via the NewMR website.

This study is purely about your opinions; there are no right or wrong answers, which is why there are no ‘don’t know’s. Nobody ‘knows’ – we want opinions.

We are going to start the study thinking about books.

1) Recent Books
Which one of these recent books do you think is having the most impact on market research practice and thinking? (Select one)

  1. Predictably Irrational – Dan Ariely
  2. Switch – Chip and Dan Heath
  3. The Signal and the Noise – Nate Silver
  4. Thinking fast and slow – Daniel Kahneman
  5. To Sell is Human – Daniel H Pink
  6. Other (please specify)

2) Older Books
Which one of these slightly older books do you think has had the biggest impact on market research thinking? (Select one)

  1. Herd – Mark Earls
  2. The Long Tail – Chris Anderson
  3. The Tipping Point – Malcolm Gladwell
  4. The Wisdom of Crowds – James Surowiecki
  5. Wikinomics – Don Tapscott & Anthony Williams
  6. Other (please specify)

3) Wider Books
And, which one of these books do you think is having the biggest impact on the way companies are doing business? (Select one)

  1. Lean In – Sheryl Sandberg
  2. Nudge – Richard Thaler & Cass Sunstein
  3. Steve Jobs – Walter Isaacson
  4. The New Digital Age – Eric Schmidt & Jared Cohen
  5. To Sell is Human – Daniel H Pink
  6. Other (please specify)

4) Business Thinkers
Which one of these business thinkers, writers, and bloggers do you think is most relevant to today’s market researcher? (Select one)

  1. Warren Buffet
  2. Guy Kawasaki
  3. Rosabeth Moss Kanter
  4. Seth Godin
  5. Tom Peters
  6. Other (please specify)

5) Information Sources
Thinking about how you get your information about new market research, which one of these do you find most useful? (Select one)

  1. Blogs
  2. Company websites
  3. Facebook
  4. LinkedIn
  5. Twitter
  6. Other (please specify)

6) Presentation Thinkers
Which of the following would you most recommend to somebody wanting to improve their presenting? (select one)

  1. David McCandless
  2. Edward Tufte
  3. Presentation Zen
  4. Nancy Duarte
  5. TED Talks
  6. Other (please specify)

7) Key Region
Which region do you think will lead the way in new MR over the next five years? (Select one)

  1. Africa
  2. Asia Pacific
  3. Europe
  4. Middle East
  5. North America
  6. South & Central America
  7. None of them

8) Drivers of Change
Which one of the following is the most likely to improve the research we do over the next ten years? (Select one)

  1. Advances in technology
  2. Changes in the business landscape
  3. New thinking from business
  4. New thinking from mathematics, statistics, analytics & computing
  5. New thinking from psychology and the social sciences
  6. New thinking from market researchers
  7. Left field unknowns

We will also ask four demographic questions: age, sex, country, and relationship to the research industry (e.g. buyer, seller, academic).

HT (hat tip) to Jon Puleston: the idea for this study came from his 2011 presentation at the Festival of NewMR, where he created his own awards for transformative events, sources, and technologies.

Oct 062013

I quite often hear somebody say that X is the best research approach, where X might be eye-tracking, ethnography, behavioural economics, discrete choice models, nano surveys, or any one of twenty other contenders. However, any answer that starts with an approach is, in my opinion, wrong.

The best market research approach starts from a specific research question and then trades off three elements – quality, speed, and cost – typically by looking for something that is good enough, fast enough, and cheap enough. Assessing the speed and the cost of an approach is normally straightforward. In terms of cost, if everything else is equal, the lowest price is best. In terms of speed, some speeds are too slow, some are acceptable, and beyond a certain point extra speed adds no value.

Quality is based on supplying something which meets the needs of the client, and it is this element that guides the researcher to determine the best approach, i.e. to recommend the cheapest/fastest solution that provides what is needed.

The seven questions below suggest a possible hierarchy for assessing what is likely to be the best research approach in a given situation. If level 1 answers the research question, it is likely to be the best answer, i.e. the best trade-off of quality, speed, and cost.

1. Does the answer (data) already exist? All too often research is conducted when the answer is already sitting on a shelf (although these days the shelf is typically virtual).

2. Can we just ask people? For many research problems, a simple question is the best way of finding out an answer. What is your address? What type of car do you drive? Do you use Facebook? In most of these cases, subject to issues like social desirability bias, simple questions, asked via a survey or form, work well.

3. Do we need to quantify it? If we want to know what people think an ad means or whether they understand how to use a website, then a qualitative piece of research, for example focus groups, is quick, easy, and effective. If the research questions are relatively simple, an online focus group is likely to be sufficient.

4. Can we ask people questions and model the results? Asking people which party they are going to vote for or whether they will buy this new type of breakfast cereal leads to answers that do not directly relate to what people do (partly because of bias, partly because people don’t know what they will do), but in many cases the answers can be modelled, weighted, or compared with benchmarks to give guidance on the likely outcome, and about the probability of that outcome.
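
The modelling step above can be as simple as re-weighting answers against a known benchmark. The sketch below shows a minimal post-stratification calculation – all the sample counts, population shares, and answer rates are invented for illustration, not real data:

```python
# Post-stratification sketch: adjust raw survey answers so the sample's
# age mix matches a known population benchmark. Figures are illustrative.

sample_counts = {"18-34": 300, "35-54": 500, "55+": 200}     # respondents per group
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # known benchmark
yes_rates = {"18-34": 0.60, "35-54": 0.45, "55+": 0.30}      # share saying "will buy"

n = sum(sample_counts.values())

# Raw estimate: each respondent counts equally, so younger (over-sampled)
# groups pull the figure up.
raw_yes = sum(sample_counts[g] * yes_rates[g] for g in sample_counts) / n

# Weighted estimate: each group's answer rate is scaled by its share of
# the population rather than its share of the sample.
weighted_yes = sum(population_share[g] * yes_rates[g] for g in sample_counts)

print(f"raw estimate: {raw_yes:.1%}, weighted estimate: {weighted_yes:.1%}")
```

The gap between the raw and weighted figures is exactly the kind of guidance the modelling step provides: the unadjusted answers overstate purchase intent because the sample skews young.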

5. Can we modify the questioning to get people to reveal their inner motivations? If people can’t tell us how they make decisions, and if their answers to simple questions are not useful for modelling, then the research can be extended. For example, in qualitative research projective techniques can be used, in quant we can use tools such as choice experiments or prediction markets. There are a growing number of ways of modifying the questioning, for example virtual environments, implicit association, gamification, and other techniques including neuroscience and facial coding.

6. Do we need real life observations? If we can’t find out the answers in a laboratory setting (such as a survey, a central location test, a focus group, or a depth interview), then observations need to be gathered from real life, for example by asking people to collect slices of their own life via a smartphone, or by recruiting people to visit homes, workplaces etc to gather information.

7. Do we need ethnographers, anthropologists, ethnomethodologists etc? If collecting data about people’s everyday lives is not enough, then the next level up (in terms of taking time and spending money) is sending trained researchers into real life situations, to seek out the clues, to interact with people, and to find the hard to reach answers. For example, to really understand kitchen hygiene practices, perhaps to find out about gaps in processes, questions and observation are unlikely to be enough, researchers will need to be there, and be there long enough for people’s behaviour to return to normal. Ethnomethodology, for example, might employ breaching to gain a deeper understanding, perhaps by wiping the bench with the hand towel and watching what happens.

These seven levels do not list every approach, but other approaches can be assessed against them to see if they provide a faster, better, or cheaper solution. For example, if a specific question can be answered by social media monitoring, it is likely to be at the faster/cheaper end of the spectrum. By contrast, even if semiotics can answer a particular research problem, it is unlikely to be very cheap or fast, so it tends only to be used when cheaper and/or faster methods can’t deliver the results needed.

It should be noted that the answer to a market research question does not have to be market research. If something else provides a better solution to the three-way trade-off of quality, speed, and cost, then that is the best solution. For example, when websites first burst onto the scene, the best way to find out the basics of who was visiting was market research, often using pop-up surveys. However, the market developed and the ‘best’ solution for many situations came to be provided by analytics. Market research has no automatic right to exist; it should only be used when it is the best solution. For example, many online service providers are finding that A/B testing is a faster/cheaper/better way of testing service and offer variations, making research redundant in some cases.
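
The A/B testing mentioned above boils down to a simple statistical comparison. As a minimal sketch (the traffic and conversion numbers are made up for illustration), a two-proportion z-test tells you whether variant B’s conversion rate genuinely differs from variant A’s:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is B's conversion rate
    different from A's, beyond what chance would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical 50/50 split: 120/2400 conversions on A vs 160/2400 on B
z, p = two_proportion_z(120, 2400, 160, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference is real rather than noise – which is why, for simple offer variations, running the experiment directly can replace asking people what they would prefer.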

If you want to know more about answers to contemporary market research questions, check out ESOMAR’s new book, edited by NewMR’s Sue York and Ray Poynter.