Oct 27, 2013

In 2013, ESOMAR published Answers to Contemporary Market Research Questions, a book that seeks to answer the questions somebody new to a topic would often like to ask, but may be too embarrassed to raise. The book can be purchased from the ESOMAR website here.

In 2014, new chapters are being added to the book, and one of the new chapters will cover international market research. At this stage we are identifying the ten or so questions that the chapter should answer. Below are our initial thoughts.

  1. What is meant by international research?
  2. Can the same questionnaire be used in every country?
  3. Can I use the same data collection method in every country?
  4. Can I use English in every country if there are ‘enough’ English speakers?
  5. How is multi-country research commissioned and organised?
  6. How is international qualitative research conducted?
  7. Does market research cost the same in each country?
  8. What are the differences in laws and ethics around the world?
  9. What are the key challenges in analysing international data?
  10. ?

We would welcome your suggestions for changes, additions, or deletions.

We are also consulting on questions for a chapter on mobile market research; you can see the current suggestions here.

Oct 24, 2013

In 2013, ESOMAR published Answers to Contemporary Market Research Questions, a book that seeks to answer the questions somebody new to a topic would often like to ask, but may be too embarrassed to raise. The book can be purchased from the ESOMAR website here.

In 2014, new chapters are being added to the book, and one of the new chapters will cover mobile market research. At this stage we are identifying the ten or so questions that the chapter should answer. Below are our initial thoughts.

  1. Can I assume that my research can be conducted entirely via smartphones?
  2. What are feature phones and how are they used in mobile research?
  3. When should I use mobile only and when should I use mixed-mode research?
  4. What is a research app and when are they used?
  5. What is passive data collection?
  6. Does mobile research give the same answers as online research?
  7. What are the key uses of mobile in qualitative research?
  8. How is geolocation being used in mobile research?
  9. What are the key legal and ethical issues for mobile research?
  10. ?

We would welcome your suggestions for changes, additions, or deletions.

Oct 20, 2013

Tens of thousands of new products are tested each year, as part of concept screening, NPD, and volumetric testing. Some products produce a positive result, and everybody is pretty happy, but many produce a negative result. A negative result might be that a product has a low stated intention to purchase or it might be that it fails to create the attitude or belief scores that were being sought.

Assuming that the research was conducted with a generally accepted technique, what might the negative result mean?

A bad product/idea
One possibility is that the product is simply not good enough. This means that if the product is launched, as currently envisaged, it is very likely to fail. In statistical terms this is the true negative.

The false negative
The second possibility is that the result is a Type II error, i.e. a false negative: the product is good, but the test has not shown this. Designers and creatives often seem to believe this is what has happened, and there are many ways a false negative can occur.

The test was unlucky
If a test is based on a procedure with a statistical power of 80% (i.e. an 80% chance of detecting a genuinely good product), then one in five good products will be recorded as failures. A recent article in New Scientist (19 October 2013) pointed out that since most tests focus on minimising Type I errors (false positives), typical power is often much less than 80%, meaning unlucky results will be more common.
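
To make that concrete, here is a minimal sketch in Python; the 80% power figure comes from the paragraph above, while the simulation itself is purely illustrative:

    import random

    # If a screening test has 80% power (an 80% chance of passing a genuinely
    # good product), how many good products get recorded as failures?
    random.seed(1)
    POWER = 0.80
    N_GOOD = 100_000

    wrongly_failed = sum(random.random() > POWER for _ in range(N_GOOD))
    print(f"Good products recorded as failures: {wrongly_failed / N_GOOD:.1%}")
    # Prints roughly 20% - one in five good products fails through bad luck alone.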

The sample size was too small
If a stimulus produces a large effect, it will be obvious even with a small sample, but if the effect is small a large sample is needed to detect it, and if it is not detected, the result will typically be called a failure. For example, if a sample of 1000 (randomly selected) people is used, the result is normally taken to be +/- 3%, which means relatively small benefits can be identified. However, if a sample size of 100 is used, the same assumptions imply +/- 10%, which means effects have to be much larger to be likely to be found. The sketch below shows where these figures come from.
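
A minimal sketch of the arithmetic, using the standard 95% confidence interval for a proportion at the worst case p = 0.5:

    import math

    # Standard 95% margin of error for a proportion, worst case p = 0.5.
    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        return z * math.sqrt(p * (1 - p) / n)

    for n in (1000, 100):
        print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
    # n = 1000 -> +/- 3.1% (the 'about 3%' quoted for large samples)
    # n =  100 -> +/- 9.8% (roughly the 'about 10%' quoted for small samples)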

The description does not adequately describe the product
If the product description is not good enough, the result of the test is going to be unreliable, which means a good idea can end up with a bad score.

The product or its use can’t be envisaged
Some products only become appealing once they are used; apps and software often fall into this category, but so do products as varied as balsamic vinegar, comfy socks, and travel cards (such as the London Oyster). Some products only become appealing when other people start to use them, as Mark Earls has shown in “I’ll Have What She’s Having”. Generally, copying behaviour is hard to predict from market research tests, producing a large number of false positives and false negatives. In these cases, the purchase intention scale (and alternatives such as prediction markets) can be a very poor indicator of likely success.

In many cases people may be able to identify that they like a product, but are unable to reliably forecast whether they will actually buy and use it, i.e. they can’t envisage how the product will fit in their lives. For example, I have lost count of the number of holiday locations and restaurants I have been convinced that I would re-visit, only to be proved wrong. This is another situation where the researcher’s trusty purchase intention scale can be a very poor indicator.

The wrong people were researched
If the people who might buy the product were not researched, then the result of the test is unlikely to forecast their behaviour. For example, in the UK, energy drinks are less about sportspeople than about office workers looking for a boost, and Range Rovers are less for country folk than for Londoners.

So, how should a bad result be dealt with?
This is where science becomes art, and the art will sometimes be wrong (but the science is also wrong some of the time). So, here are a few thoughts and suggestions.

  • If you expected the product/concept to fail, the test has probably told you what you already knew, so it is most likely safe to accept the negative finding.
  • If you have tested several similar products, and this is one of the weaker results, that is probably a good indication the product is weak.

In both of these cases, the role of the modern market researcher is not just to deliver bad news; it should also include recommendations for what might work, either modifications to the product/concept or alternative ideas.

If you can’t see why it failed
If you can’t see why a product failed, dig deeper to find out. Look at the open-ended comments to see if they provide clues. Try to assess whether the idea was communicated: did people understand the benefits and reject them, or did they not understand the benefits at all?

Is the product/concept one where people are likely to be able to envisage how it would fit in their life? If not, you might want to suggest qualitative testing, an in-home use test, or virtual reality testing.

Some additional questions
To help understand why products produce a failing score, I find it useful to include the following in screening studies:

  • What sort of people might use this product?
  • Why might they like it/use it?
  • What changes would improve it for these people?

Oct 14, 2013

Last week, at the MRMW Conference in London, the future of market research as we know it was challenged by Jan Hofmeyr. Although there were many informative and interesting presentations, Jan was the only person talking about a very different way of doing business.

In writing this post I am working from memory, so apologies if I misrepresent anything. The presentation in London appeared to be a continuation of a presentation Jan made last year in Amsterdam, at the ESOMAR 3D Conference: a continuation in the sense that he had moved his thinking on, and an extension in that he now appears to be offering a solution to some of the world’s largest agencies and clients.

The main points
Jan Hofmeyr’s main points were:

  1. The existing model of market research, in particular the large trackers, is broken. It is too slow, too expensive, and not sufficiently useful. Not many people would argue with this point of view.
  2. The best device for collecting tracking interviews is the mobile phone. Jan’s key point is that nearly everybody has a phone and they have it with them almost all the time. And, by mobile phone, he means both feature phones and smartphones.
  3. The core of his suggested data collection should be text-based and last about 120 seconds. It should be text-based because more than half the world is still using feature phones, and it can be just 120 seconds because it will focus on only three key products per person (the three most relevant to each person). And it needs to be 120 seconds to make it affordable and to reach enough people.
  4. Social media monitoring should be added to the tracking mix to make the information richer.
  5. Predictive analytics should be used to look at the data to predict what is likely to happen next – rather than reporting what did happen weeks or months ago.
  6. Artificial Intelligence or Expert Systems should be used to analyse the data, to produce reports that have not been written by people and which are automatically translated into the client’s preferred languages.

The key benefits
The research will be much cheaper, more insightful, predictive, and faster.

The implications for the MR industry
The MR industry would need far fewer employees. These employees would mostly be experts, salesmen, and accountants. Presumably, the sort of process Jan is talking about would not affect small ad hoc projects, especially qual projects, nearly so much. What he is mostly talking about are the big projects: the brand, ad, and customer satisfaction trackers, the places where clients spend most of their money, and where most market researchers are employed.

However, if the big stuff were to change to the extent that Jan is forecasting, then there would surely be implications for other types of research too?

The reaction in the room?
Most people in the room showed no reaction to Jan saying they may not have a job within a year or two. Did they not believe him, not understand him, or not really listen?

My initial reaction was to focus on the bits of his plan that, as described, do not seem possible with current technologies. However, I do accept his central points about a) the need for change, and b) the possibility for change.

My reservations
My key reservations were that, on the evidence presented, I do not feel the following items will work as well, or as accurately, as he intimated:

  • Collection of emotional data about brands, from short surveys or from social media.
  • Predicting brand movements. Jan seemed to be suggesting he could predict the score a brand would get in the future. I think that predicting a cloud of possible values is more realistic (a point covered by both Nate Silver and Nassim Nicholas Taleb in their books The Signal and the Noise and The Black Swan); see the sketch after this list.
  • The ability to create expert systems that could automatically produce non-trivial reports.
  • The ability to automate the tailoring of reports to different clients and languages.
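
To illustrate the cloud-of-values point, here is a minimal sketch; every number in it (current score, drift, volatility) is an invented assumption, not anything from Jan’s presentation:

    import random
    import statistics

    # Simulate many possible next-wave outcomes for a tracker score, then
    # report a range (a 'cloud' of values) rather than a single prediction.
    random.seed(7)
    CURRENT_SCORE = 42.0   # assumed current tracker score
    DRIFT = 0.5            # assumed average movement per wave
    NOISE_SD = 2.0         # assumed wave-to-wave volatility

    outcomes = sorted(CURRENT_SCORE + DRIFT + random.gauss(0, NOISE_SD)
                      for _ in range(10_000))
    low, high = outcomes[249], outcomes[9749]   # central 95% of outcomes
    print(f"Point forecast: {CURRENT_SCORE + DRIFT:.1f}")
    print(f"Median simulated outcome: {statistics.median(outcomes):.1f}")
    print(f"95% of simulated outcomes: {low:.1f} to {high:.1f}")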

What’s next?
From what I hear from the US and from Europe, Jan Hofmeyr has some very interested major clients/potential clients, so some people are taking it very seriously. Perhaps some of the wackier elements of his presentation are not part of what he currently plans to deliver? Perhaps the core of what he is suggesting is more limited and more practical? Although I have some major reservations about what Jan Hofmeyr presented, I suspect he is closer to the truth than the majority of the room, who seemed to believe that the MR industry in two to five years will be much like it is today.

Oct 8, 2013

Why doesn’t the UK have more successful start-ups? That was the question posed on Radio 4 to Wendy Hall (Professor of Computer Science at the University of Southampton) on the BBC’s The Life Scientific programme.

Her answer was that we don’t kill things quickly enough. She went on to talk about some of the start-ups she has been involved with. She described how a start-up might receive millions in initial funding; however, if it fails to take off, the company is then often given more money, because people do not want to see or accept that their money has gone. Her argument is that this tends to delay the end and gets in the way of new ideas being tried. Since most ideas fail, she would like the UK to move to a fail-fast style, a form of agile business. Not only does persisting with a doomed idea tie up money, it also ties up the entrepreneurs and idea makers, who spend their time trying to make it work rather than moving on to the next idea.

When I think back to my time as a trustee of the Nottinghamshire Pension Fund, where one of our duties was investing, I can recall making this sort of mistake several times. At the time we did not call it a mistake; we thought of it as ‘being brave’, ‘thinking long-term’, and showing ‘commitment’. But perhaps Wendy Hall is right? Additional funding, from the same source, for something that has not taken off is probably not only a waste of money but also a waste of the time and effort of people who should be trying something new.

Thoughts?

Oct 6, 2013

I quite often hear somebody say that X is the best research approach, where X might be eye-tracking, ethnography, behavioural economics, discrete choice models, nano surveys, or any one of twenty other contenders. However, any answer that starts with an approach is, in my opinion, wrong.

The best market research approach starts by looking at a specific research question and then trades off three elements – quality, speed, and cost – typically by trying to find something that is good enough, fast enough, and cheap enough. Assessing the speed and the cost of an approach is normally straightforward. In terms of cost, if everything else is equal, the lowest price is best. In terms of speed, there are speeds that are too slow, speeds that are OK, and sometimes a point beyond which faster adds no additional value.

Quality is based on supplying something which meets the needs of the client, and it is this element that guides the researcher to determine the best approach, i.e. to recommend the cheapest/fastest solution that provides what is needed.

The seven questions below suggest a possible hierarchy for assessing what is likely to be the best research approach in a given situation. If level 1 answers the research question, it is likely to be the best answer, i.e. the best trade-off of quality, speed, and cost.

1. Does the answer (data) already exist? All too often research is conducted when the answer is already sitting on a shelf (although these days the shelf is typically virtual).

2. Can we just ask people? For many research problems, a simple question is the best way of finding out an answer. What is your address? What type of car do you drive? Do you use Facebook? In most of these cases, subject to issues like social desirability bias, simple questions, asked via a survey or form, work well.

3. Do we need to quantify it? If we want to know what people think an ad means or whether they understand how to use a website, then a qualitative piece of research, for example focus groups, is quick, easy, and effective. If the research questions are relatively simple, an online focus group is likely to be sufficient.

4. Can we ask people questions and model the results? Asking people which party they are going to vote for, or whether they will buy this new type of breakfast cereal, leads to answers that do not directly relate to what people do (partly because of bias, partly because people don’t know what they will do), but in many cases the answers can be modelled, weighted, or compared with benchmarks to give guidance on the likely outcome, and on the probability of that outcome (a minimal sketch of this kind of weighting appears after this list).

5. Can we modify the questioning to get people to reveal their inner motivations? If people can’t tell us how they make decisions, and if their answers to simple questions are not useful for modelling, then the research can be extended. For example, in qualitative research projective techniques can be used, in quant we can use tools such as choice experiments or prediction markets. There are a growing number of ways of modifying the questioning, for example virtual environments, implicit association, gamification, and other techniques including neuroscience and facial coding.

6. Do we need real life observations? If we can’t find out the answers in a laboratory setting (such as a survey, a central location test, a focus group, or a depth interview), then observations need to be gathered from real life, for example by asking people to collect slices of their own life via a smartphone, or by recruiting people to visit homes, workplaces etc to gather information.

7. Do we need ethnographers, anthropologists, ethnomethodologists etc? If collecting data about people’s everyday lives is not enough, then the next level up (in terms of taking time and spending money) is sending trained researchers into real life situations, to seek out the clues, to interact with people, and to find the hard-to-reach answers. For example, to really understand kitchen hygiene practices, perhaps to find out about gaps in processes, questions and observation are unlikely to be enough; researchers will need to be there, and be there long enough for people’s behaviour to return to normal. Ethnomethodology, for example, might employ breaching to gain a deeper understanding, perhaps by wiping the bench with the hand towel and watching what happens.
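
As a minimal illustration of level 4 above, the sketch below weights stated purchase intent down to a predicted trial rate; all of the shares and weights are invented for illustration, not industry standards:

    # Stated purchase intent overstates behaviour, so each scale point is
    # weighted by an assumed conversion rate. All numbers are made up.
    stated_intent = {           # share of respondents giving each answer
        "definitely buy": 0.20,
        "probably buy": 0.35,
        "might or might not": 0.25,
        "probably not": 0.12,
        "definitely not": 0.08,
    }
    assumed_conversion = {      # assumed probability each group actually buys
        "definitely buy": 0.75,
        "probably buy": 0.25,
        "might or might not": 0.10,
        "probably not": 0.03,
        "definitely not": 0.01,
    }
    predicted_trial = sum(stated_intent[k] * assumed_conversion[k]
                          for k in stated_intent)
    top_two = stated_intent["definitely buy"] + stated_intent["probably buy"]
    print(f"Stated top-two-box intent: {top_two:.0%}")           # 55%
    print(f"Weighted prediction of trial: {predicted_trial:.1%}")  # about 27%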

These seven levels do not list every approach, but other approaches can be assessed against these seven to see if they provide a faster, better, or cheaper solution. For example, if a specific question can be answered by social media monitoring, it is likely to be at the faster/cheaper end of the spectrum. By contrast, even if semiotics can answer a particular research problem, it is unlikely to be very cheap or fast, so it tends to be used only when cheaper and/or faster methods can’t deliver the results needed.

It should be noted that the answer to a market research question does not have to be market research. If there is something else which provides a better solution to the three-way trade-off of quality, speed, and cost, then that is the best solution. For example, when websites first burst on the scene, the best way to find out the basics of who was visiting was via market research, often using pop-up surveys. However, the market developed and the ‘best’ solution for many situations was provided by analytics. Market research has no automatic right to exist; it should only be used when it is the best solution. For example, many online service providers are finding that A/B testing is a faster/cheaper/better way of testing service and offer variations, making research redundant in some cases (a minimal sketch of how an A/B test is read follows below).
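
For readers who have not met A/B testing, here is a minimal sketch of how such a test is read; the visitor and conversion numbers are invented, and the significance check is a standard two-proportion z-test:

    import math

    # Compare conversion rates for two versions of an offer using a
    # two-proportion z-test. All counts below are illustrative.
    def ab_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    z = ab_z_score(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
    print(f"z = {z:.2f}")   # about 2.55; |z| > 1.96 means significant at 95%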

If you want to know more about answers to contemporary market research questions, check out ESOMAR’s new book, edited by NewMR’s Sue York and Ray Poynter.

Oct 1, 2013

Market researchers are really bad at taking surveys: first, they mostly decline to click the link, and those who do often complain that the survey is awful. However, there are some surveys that are so important you really need to take them, and the GRIT Survey is one of them (click here to start it now).

The GreenBook Research Industry Trends (GRIT) report is not a scientifically valid measurement of the research industry; even the ESOMAR studies fall a bit short of that goal. However, it is the best indication of what the leading edge is doing. The back data (this will be the 13th report) and the new data allow intelligent estimates to be made about what is hot, what has peaked, and where people will be investing next in research.

However, to make this a valuable resource for you, we need you to take part in the survey. This year’s survey is much shorter and much less painful than any previous GRIT survey; indeed, it is probably the most pleasant market research survey of market researchers around.

So, please take part, please share your views, and then let’s see if we can share some guidance on where the world of market research is going next and where we should all be investing our time and money.

You can take part in the survey by clicking here.

You can access the previous report by clicking here.