Sep 30 2013
 

Market research tends to look inwards when it tries to assess its strengths and weaknesses, but perhaps interesting comparisons can be drawn from the world of stock market research?

A recent article in The Economist reviewed the world of stock market research, and it revealed some interesting comparisons with market research. In the past, the core of stock market research has been provided by organisations such as banks, in the hope that good advice will lead to investors spending more money, which in turn drives revenues from equity trading.

The first key comparison is the size of the market. Market research growth has been relatively flat over the last four years, but in America the stock market research industry has fallen from about $14 billion in 2009 to about $9 billion in 2013. In Europe the fall was from about €4 billion to €3 billion. The Economist describes the decline as being driven by the shift to passive investing and algorithmic trading – which might have implications for market research automation and the use of DIY solutions by clients.

The Economist highlights some interesting changes in the structure of the reduced stock market research business. For example, analysts have been moved from expensive locations to less expensive ones. Asset managers report that they are increasingly not reading the research reports they are presented with – a close analogy with many market research clients.

In the stock market research world there has been a growth in bespoke research, an analogy with the boutique agencies in the research industry. In addition, stock market research is seeing growth in non-traditional solutions, such as commissioning satellite pictures of new mining sites to see if company reports are accurate. Again, this provides an analogy with non-traditional market research providers. As in market research, although there has been strong growth in non-traditional solutions, they remain a small part of the total picture in stock market research.

One of the debates that keeps circulating in NewMR circles relates to the speed at which market research will change. Most pundits agree on what the industry will look like in 10 years (more on this in another post), but there tends to be a difference of opinion about the speed of the change, with some (such as Lenny Murphy) believing in fast change, and others (such as myself) thinking the change will be slower.

The stock market research picture is an example of relatively small shares for new options combined with a rapid decline in existing methods – a worst-case example that we should hope we do not see in market research! If this model were to appear in market research, it would be akin to companies cancelling their customer satisfaction and advertising tracking studies without commissioning new and exciting alternatives.

Sep 25 2013
 

In recent online discussions, for example on LinkedIn and on the GreenBookBlog, a growing number of research buyers have been talking about what they are looking for from research agencies, and the focus seems to be people. In particular, clients say they are seeking agencies whose people see the big picture, who can synthesize multiple types of data, and who can create an engaging story to convey the insight. At ESOMAR Congress this week there were three presentations (from DVL Smith, Ruby Cha Cha, and a Truth/Nokia combination) reporting the same thing, including two studies amongst clients which played down the importance of brute force and scale, and which extolled creative synergy.

But, as researchers, we rarely believe that consumers can tell us what the motivations for their behaviour are. It is generally agreed that people can’t describe their own decision hierarchies – as Mark Earls says, we are poor witnesses to our own motivations. And, since clients are people too, why do researchers so often take what clients say at face value?

When researchers can’t be sure about what people mean, we look at what they do. So, let’s look at the recently published ESOMAR Global Market Research study to see what clients are doing (noting that the 2013 report is based on 2012 data). The change in spending, both over the last few years and over the last 12 months, has been away from the boutique companies, away from the insight consultancies, and towards the biggest agencies. The largest six agencies now account for 41% of all spend (up from 39% the year before). Interestingly, some would say depressingly, quantitative research grew by 1% and qualitative research fell by 1% – indicating a shift, potentially, from insight to bean counting. Looking at the breakdown by category, 40% of spend goes to the classic auditing and counting categories of Market Measurement (18%), Media/Audience (8%), Customer/Stakeholder Feedback/Satisfaction (7%), and Ad/Brand Tracking (7%).

So, whilst some clients are clearly looking for the storytelling insight diviners, and whilst many more are looking to make some use of insight specialists, the trend is away from what clients say they want and towards large, scalable, data-heavy solutions.

Some people will make the point that the traditional definition of market research used by organisations like ESOMAR ignores the competition from big data, social media companies, non-MR uses of online communities, and a variety of innovative new ways to gain answers about consumers. However, almost all of these new channels are data heavy and people light – if they were included in the ESOMAR figures they would represent an even stronger divergence between what clients are saying and what clients are doing.

Why do clients say they want more big picture, storytelling, and synergy, when that is not what they tend to buy?
IMHO, clients do want the nice things they list in discussions and survey responses, such as the ability to see the big picture. However, they don’t prioritise these over the things they feel they need, like large-scale, structured, scalable data. Company insight managers are often mandated to audit usage, to measure satisfaction, and to track the performance of their brands and advertising. These core requirements can eat up well over 50% of research budgets, especially when system-approved methods of NPD, concept, and ad testing are added to the list.

I am sure clients would like their big research projects to come with extra insight, interpretation, and explication. However, these extra levels of service make an enormous difference to the price. As the automation of research improves, the data becomes cheaper, making the analysis and interpretation seem ever more expensive. These days, trimming the budget by reducing sample sizes makes only a modest impact on the cost, but reducing the analysis and reporting (i.e. the people costs) makes a big saving – so most clients, most of the time, protect the sample size and the survey length, and cut the analysis back.

The good news
The insight-driven, people-focused part of market research may be a small (and possibly declining) share of market research spend, but it is still large enough for small and even medium-sized agencies to specialise in it. Ethnography, behavioural economics, neuroscience, and crowd-sourcing may be niches, but for the people who buy from and work in those niches, they are rewarding.

Another bit of good news is in the area of communities, which the ESOMAR report says are growing at over 30% a year. This is a route that blends scalability with the ability to work closely with clients to synthesize a big picture – which is why I spend so much of my time working with and talking about communities.

Two Industries?
At the ESOMAR Congress I asked about the gap between what researchers reported clients said and what the data showed they did. The best answer, IMHO, came from Kristin Hickey of Ruby Cha Cha, who simply said there are two MR industries. There is the large, factory-like, continuous measurement and audits business, and a much smaller value-added, ad hoc, boutique business. I think this is a good way of thinking about the industry. Innovations like passive data collection, automated analysis, big data, etc. are likely to have profound effects on the factory type of research – increasing its reach and granularity, and reducing its costs. The other, more boutique, sector will in all probability focus on its people and its ability to work with multiple sources to create solutions. Prices in the second sector will continue to increase, and timelines probably won’t get much faster, because the limiting factor is people – by which I mean good people, well trained, and well supported.

Your thoughts? Do we live in an increasingly bifurcated industry, where the larger part becomes ever more automated, while a craft part prospers on its own terms but will always be smaller, more expensive, and less real-time?

Sep 16 2013
 

I have just been reading an article on Research-Live about social media as a potential replacement for traditional market research, and it made me want to scream!!! As the founder of NewMR I am a fan of new techniques: I was one of the first to use CAPI, one of the first to use simulated test markets, one of the first to use online research, and one of the first to use MROCs – and I wrote The Handbook of Online and Social Media Research.

But there are some basic rules we all need to stick to if we are to assess new tools. We need to be able to tell clients whether they are the same as, worse than, better than, or different from existing tools, and when to have confidence in them and when not to. To do that assessment we need to stick to some very basic rules – and the rules are different for qual and quant.

Here are a few of my key rules for quant research.

A big sample is not a population. In the Research-Live article Mark Westaby said, about the UK, “We track tweets from millions of unique supermarkets users, who in fact represent between 1 in 10 and 1 in 20 of all consumers who ever use a supermarket. With these numbers we’re not just tracking a sample but the population itself.” NO!!! 1 in 10 is a sample. When we use samples we are using the 1 in 10 to help us assess what the 9 in 10 do – sometimes this works and sometimes it doesn’t. If the sample is a true random probability sample it usually works, but this usually is not a random probability sample. The 1 in 10 left-handed people in the UK would not give you a good guide to how the 9 in 10 use their hands, but the 1 in 10 people in the UK with ginger/reddish hair would provide a pretty good assessment of beer, breakfast, and car preferences.
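To make the point concrete, here is a minimal sketch (Python/NumPy assumed, and every number invented for illustration) of how a 1-in-10 “sample” that is selected on a trait related to the behaviour of interest can be huge and still badly biased:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000                               # a toy "population" of supermarket users

tweets = rng.random(n) < 0.10               # 1 in 10 are active on Twitter
# invented assumption: people who tweet behave differently on the measure we care about
shops_online = rng.random(n) < np.where(tweets, 0.60, 0.30)

population_rate = shops_online.mean()       # what we actually want to know
sample_rate = shops_online[tweets].mean()   # what the 1-in-10 "population" tells us

print(f"population:       {population_rate:.1%}")   # roughly 33%
print(f"1-in-10 'sample': {sample_rate:.1%}")        # roughly 60% - large, but biased
```

No amount of extra volume in the 1-in-10 group corrects the gap; only knowing how the 1 in 10 differ from the 9 in 10 does.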

Causality matters. Chris Anderson, of Wired, Free, and The Long Tail, has said about big data that we won’t need to worry about the scientific method or causality once we have enough data. Nate Silver demolishes this in his book The Signal and the Noise: a model and an understanding of causality become more important, not less, as the amount of noise (and most big data is noise) increases.

Causality is more than a sequence. Every day I eat my breakfast and later the day becomes warmer; so, by that logic, eating breakfast causes the world to heat up. Causality requires a model, and in most cases it can only be tested via a controlled experiment.
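A small illustrative simulation (Python assumed, data entirely invented) shows how two series that merely share a trend can correlate strongly without either causing the other, and how even a crude model – here, looking at day-to-day changes rather than levels – makes the spurious association disappear:

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)

breakfasts_so_far = days + rng.normal(0, 3, 365)          # cumulative breakfasts eaten
temperature = 10 + 0.02 * days + rng.normal(0, 1, 365)    # a slow warming trend

r = np.corrcoef(breakfasts_so_far, temperature)[0, 1]
print(f"correlation of levels:  {r:.2f}")                  # high, yet no causation

# A crude "model": compare day-to-day changes instead of running totals.
r_changes = np.corrcoef(np.diff(breakfasts_so_far), np.diff(temperature))[0, 1]
print(f"correlation of changes: {r_changes:.2f}")          # close to zero
```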

Extrapolation is much less reliable than interpolation (inside the box is better than outside the box). This is true at the mathematical level: it is more reliable to fit a curve to a set of points and then work out the spaces in between than to estimate where the line will go next. But it is also true for consumers answering our surveys. How many times will I eat breakfast next week? Easy question – inside the box, i.e. interpolation. How many times will I eat a burger next month? Not as easy, but I can give an estimate that will be close to the average of what I have done in the past. How many times will you go to the gym over the next 12 months with the new gym membership you have just bought? You might be right in your estimate, but you probably won’t be – this is outside the box, i.e. extrapolation.
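The mathematical point can be sketched in a few lines (Python assumed; the curve, noise level, and polynomial degree are all invented for illustration): the same fitted model that does well between the observed points does badly beyond them.

```python
import numpy as np

rng = np.random.default_rng(1)
true_curve = lambda x: np.sin(x) + 0.1 * x        # the "real" behaviour, unknown to us

x_obs = np.linspace(0, 10, 20)                    # the points we have actually observed
y_obs = true_curve(x_obs) + rng.normal(0, 0.05, x_obs.size)

coeffs = np.polyfit(x_obs, y_obs, deg=6)          # fit a smooth curve to the data

x_inside = np.linspace(0, 10, 50)                 # interpolation: between observed points
x_outside = np.linspace(10, 14, 50)               # extrapolation: beyond the data

err_inside = np.abs(np.polyval(coeffs, x_inside) - true_curve(x_inside)).mean()
err_outside = np.abs(np.polyval(coeffs, x_outside) - true_curve(x_outside)).mean()

print(f"mean error inside the box:  {err_inside:.2f}")    # small
print(f"mean error outside the box: {err_outside:.2f}")   # typically far larger
```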

One test can disprove something, but it can’t prove something. If I test a new method (say social media or mobile) and it gives the same result as a method I feel is correct, then one of three things is true: a) the new method generally gives the same results as the old method, b) the new method sometimes gives the same result, or c) it was pure luck. More tests are needed, and the tests should be designed to show whether a), b), or c) is the most likely explanation.

All too often in MR, a study compares two approaches, finds few differences, and implies that the two are broadly comparable. No! The test shows that they are sometimes the same, but we can’t tell whether that is often, sometimes, or rarely true.

By contrast, if the two methods are tested and produce different results, this would tend to disprove the idea that the results are broadly comparable. However, it does not disprove the contention that the two methods are comparable under some circumstances.

If two results differ it does not mean one of them is right. Quite often when a new method is tested, say online versus CATI, and a difference is found, the implication is that the established method is correct and the new method wrong or, less commonly, that the new is right and the old wrong. However, there is also the possibility that both are wrong.

More data does not always help. Nate Silver highlights this issue in the context of predicting recessions. There have been 11 recessions in the US since the Second World War, and Silver quotes some 400 economic variables that are available to model the causes and predictors of recession. With 400 variables and only 11 cases there are millions of possible solutions. As researchers, we would prefer there to be 11 variables and 400 recessions. More cases usually help; more variables only help if they can be organised and structured into a model, and if, after the processing, the number of cases exceeds the number of variables.
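A hypothetical illustration of the 400-variables-versus-11-cases problem (Python assumed, pure random data): when the variables outnumber the cases, almost any subset of variables will “explain” the outcome perfectly, which is exactly why millions of competing solutions exist.

```python
import numpy as np

rng = np.random.default_rng(7)
n_cases, n_vars = 11, 400

X = rng.normal(size=(n_cases, n_vars))    # 400 unrelated "economic indicators"
y = rng.normal(size=n_cases)              # the outcome we want to explain

for trial in range(3):
    picked = rng.choice(n_vars, size=n_cases, replace=False)   # any 11 variables will do
    coefs, *_ = np.linalg.lstsq(X[:, picked], y, rcond=None)
    worst_residual = np.abs(X[:, picked] @ coefs - y).max()
    print(f"random subset {trial}: worst residual = {worst_residual:.1e}")  # ~0, a 'perfect' fit
```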

So?
We do need and want change. New ideas should be encouraged, but in assessing them there are a few basic rules we need to adhere to. It is fine to try something untested, provided one says that it is untested. It is fine to be encouraged if a trial shows few differences from a benchmark. But it is not fine to say the technique has been proved, nor that the scientific approach to proof does not matter.

Suggestions?
What are your suggestions for basic rules that should be adhered to?

Sep 11 2013
 

By common consent, research communities seem to have been the fastest growing new research approach over the last few years (a view that was supported by the latest GRIT report). Indeed, in some sectors, such as media, brands are beginning to worry about being the last to adopt the idea of having meaningful and on-going conversations with their customers.

However, given the speed at which the area is moving, there are a variety of definitions and concepts in use. For example, one hears talk of MROCs, consumer consulting boards, and community panels, to name just three. My preferred term is insight community, and that is the title I have used in my latest book “Insight Communities – Leveraging the Power of the Customer” [a PDF version of the book can be downloaded from here]. The book has been produced by Vision Critical University, and I’d like to record my thanks to them and to all of those who helped review the material and source the many case studies used in the book.

The book is a short read, but covers key elements such as: short-term versus long-term, large versus small, and branded versus blind. The book is packed full of examples and case studies from organisations such as NASCAR, Discovery Communications, CBS Outdoor, Diageo, Banana Republic, Avianca, and Cathay Pacific – covering Asia, North America, Latin America, Europe, and Australia.

As well as the online version, professionally produced bound copies are available from Vision Critical’s London and Sydney offices.

Given that communities are such a dynamic field, I’d love to hear your thoughts, suggestions, and ideas about research communities, where they are going next, and the ideas expressed in the book.

Sep 03 2013
 

Below is a list of the five posts, on NewMR.org, that in 2013 have been read by the largest number of unique readers, as measured by Google Analytics.

  1. Why do companies use market research? Posted 30 December 2012; 633 unique viewers in 2013.
  2. The ITU is 100% wrong on mobile phone penetration, IMHO. Posted 29 June 2013; 380 unique viewers.
  3. Is it a bad thing that 80% of new products fail? Posted 7 March 2013; 353 unique viewers.
  4. Notes for a non-researcher conducting qualitative research. Posted 26 August 2013, so it is probably still on its way up; 350 unique viewers.
  5. A Short History of Mobile Marketing Research. Posted 1 March 2013; 278 unique viewers.

I ran the analysis to see if I could spot any patterns in what made a successful NewMR post. However, so far, no clear pattern is emerging. Any thoughts or suggestions?