Aug 26, 2014
 
No More Surveys

Back in March 2010, I caused quite a stir at the UK’s MRS Conference when I predicted that in 20 years we would not be conducting market research surveys. I followed my conference contribution with a more nuanced description of the prediction on my blog.

At the time the fuss was mostly from people rejecting my prediction. More recently there have been people saying the MR industry is too fixated on surveys, and my predictions are thought by some to be too cautious. So, here is my updated view on why I think we won’t be conducting ‘surveys’ in 2034.

What did I say in 2010?
The first thing I did was clarify what I meant by market research surveys:

  • I was talking about questionnaires that lasted ten minutes or more.
  • I excluded large parts of social research; some parts of which I think will continue to use questionnaires.

Why no more surveys?
In essence, there are three key reasons why I think surveys will disappear:

  1. The decline in response rates means that most survey research is being conducted with an ever smaller proportion of the population, who are taking very large numbers of surveys (in many cases several per week). This raises a growing number of concerns that the research is going to become increasingly unrepresentative.
  2. There are a growing number of areas where researchers feel that survey responses are poor indicators of true feelings, beliefs, priorities, and intentions.
  3. There are a growing number of options that can, in some cases, provide information that is faster, better, cheaper – or some combination of all three. Examples of these options include: passive data, big data, neuro-stuff, biometrics, micro-surveys, text processing of open-ended questions and comments, communities, and social media monitoring.

Surveys are the most important thing in market research!
There is a paradox, in market research, about surveys, and this paradox is highlighted by the following statements both being true:

  1. The most important data collection method in market research is surveys (this is because over half of all research conducted, in terms of dollars spent, is conducted via surveys).
  2. The most important change in market research data collection is the move away from surveys.
Because surveys are currently so important to market research there is a vast amount of work going on to improve them, so that they can continue to deliver value, even whilst their share of MR declines. The steps being taken to improve the efficiency and efficacy of surveys include:
  • Mobile surveys
  • Device agnostic surveys
  • Chunking the survey into modules
  • Implicit association
  • Eye-tracking
  • Gamification
  • Behavioural economics
  • Biometrics
  • In the moment research
  • Plus a vast range of initiatives to merge other data, such as passive data, with surveys.

How quickly will surveys disappear?
When assessing how quickly something will disappear we need to assess where it is now and how quickly it could change.

It is hard to know exactly how many surveys are being conducted, especially with the growth of DIY options. So, as a proxy I have taken ESOMAR’s figures on market research spend.

The table below shows the proportion of global, total market research spend that is allocated to: Quant via surveys, Quant via other routes (e.g. people meters, traffic, passive data etc), Qual, and Other (including secondary data, consultancy and some proportion of communities).

The first three rows show the data reported in the ESOMAR Global Market Research reports. Each year reflects the previous year’s data. The data show that surveys grew as a proportion of research from 2007 to 2010. This was despite a reduction in the cost of surveys as F2F and CATI moved to online. From 2010 to 2013 there was indeed a drop in the proportion of all research spend that was devoted to surveys. However, given the falling cost of surveys and the continued growth of DIY, it is likely that the absolute number of surveys may have grown from 2010 to 2013.

Other quant, which covers many of the things that we think will replace surveys, fell from 2007 to 2010. In many cases this was because passive collection techniques became much cheaper, for example the shift from expensive services to Google Analytics.

The numbers in red are my guess as to what will happen over the next few years. My guess is based on 35 years in the industry, talking to the key players, and applying what I see around me.

I think surveys could lose 9 percentage points in 3 years – which is a massive change. Does anybody seriously think it will be much faster? If surveys lose 9 percentage points they will fall below 50% of all research, but still be the largest single method.

I am also forecasting that they will fall another 11 percentage points by 2019 – trends often accelerate – but again, does anybody really think it will be faster? If that forecast is true, by 2019 about one-third of paid for research will still be using surveys. Other quant will be bigger than surveys, but will not be a single approach; there will be many forms of non-survey research.

I also think that Other (which will increasingly mean communities and integrated approaches) and qual will both grow.

What do you think?
OK, I have nailed my flag to the mast, what do you think about this issue? Are my forecasts too high, about right, or too low? Do you agree that the single most important thing about existing data collection methods is the survey process? And, that the most important change is the movement away from surveys?


 

Aug 14, 2014
 
Godzilla

One of the questions I get asked most often is “What’s hot in market research?”. I will be broadcasting my update as a NewMR lecture next Wednesday, August 20 (you can register for it here).

But here is a sneak peek into what is hot, still hot, bubbling under the surface, and not so hot.

Still Hot
It is important when looking at the ‘new stuff’ not to ignore stuff that has been around for a while, but which is still growing in market share, importance, and usage:

  • Mobiles in traditional research. Mobile is a big and growing part of CATI, online surveys, and F2F – this trend has a long way to go yet.
  • Communities. Communities (including Insight Communities and MROCs) have been the fastest growing major new research approach for a few years now, and this is going to continue.
  • DIY. We hear less about DIY these days, probably because it has become normal. This sector is growing, partly as a key part of existing MR and partly because it is expanding the scope of market research.

Hot!
These are three of the items that I think are the hottest topics in MR, in terms of their growth and potential. All three of these are going to be game changers.

  • Beacons. For example iBeacons, which use geofencing and allow location-based services (including research) to be offered in much easier and more practical ways than methods such as GPS.
  • In the moment research. Using mobiles, and using participants to capture information as they go about their normal day (qual, quant, and passive), is making research more valid and more sensitive.
  • Micro surveys. The most high profile micro (or nano or very short) provider is Google Consumer Surveys, but there are a variety of other providers, such as RIWI. Also, Beacons, In the Moment, and Communities are all leveraging Micro Surveys.

Bubbling
These three are going to make a major impact soon, but not quite yet.

  • Text analytics. The technology is not quite here yet, but when it clears the last few hurdles it will hit market research like a freight train – for example shifting the balance from closed questions to open questions, and finally driving more value out of social media discourses.
  • Web messaging. Apps like WhatsApp, WeChat, and Line are growing faster than anything else globally. A few people are looking at how to leverage these for market research, and more will follow.
  • Research bots. One of the key factors limiting the use of social media, communities, and the use of video is the requirement to use people to do the moderation and analysis. Bots (software agents; the name is short for robots) are going to change this and open up a vast new range of options.

Not So Hot
These three are all interesting niches; some people are making a good living from them, but they are not scaling in a way that makes a difference to most brands or researchers.

  • Facial Coding. It answers some questions, but is limited in terms of its range of uses, delays, scalability, and cost.
  • Webcam qual. The benefits are usually too small and the resistance from potential participants is too high to make this a generally useful approach.
  • Social Media Research. Whilst social media research, especially monitoring, has become essential, it has not grown into what was expected.

What about?

  • Big Data
  • Behavioural Economics
  • Gamification
  • Smartphone ethnography
  • Neuroscience
  • Geotracking
  • Wearables
  • Quantified Self
  • Biometrics

Want to know where these items fit in this picture? Tune in to our webinar next Wednesday, 10am New York time, which is 3pm London time. Click here to register.


 

Jul 16, 2014
 

Sometimes when I run a workshop or training session people want detail, they want practical information about how to do stuff. However, there are times when what people want is a big picture, a method of orientating themselves in the context of the changing landscape around them. Tomorrow I am running a workshop for #JMRX in Tokyo and we are looking at emerging techniques, communities, and social media research – so a big picture is going to be really useful to help give an overview of the detail, and to help people see where things like gamification, big data, and communities all fit.

So, here is my Big Picture of NewMR (click on it to see it full size), and I’d love to hear your thoughts and suggestions.

Big Picture

The Big Picture has five elements

The heart of the message is that we have reached an understanding that surveys won’t/can’t give us the answers to many of the things we are interested in. People’s memories are not good enough, many decisions are automatic as opposed to thought through, and most decisions are more emotion than fact. Change is needed, and the case for this has been growing over the last few years.

The four shapes around the centre are different strands that seek to address the survey problem.

In the top left we have big data and social media data, moving away from working with respondents, collecting observations of what people say and do, and using that to build analyses and predictive models.

In the top right we have a battery of new ways of working with respondents to find out why they do things, going beyond asking them survey questions.

In the bottom left we have communities, which I take as a metaphor for working with customers, co-creating, crowdsourcing, treating customers as insiders, not just users.

The bottom right combines elements from the other three. ‘In the moment’ is perhaps, currently, the hottest thing in market research. It combines the ability to watch and record what people do with the ability to interact with them, exploring why they do it and what they would do if the options changed.

Thoughts?
So, that is my big picture. Does it work for you? What would you add, change, delete, or tweak?


 

Jun 17, 2014
 

Most samples used by market research are in some sense the ‘wrong’ sample. They are the wrong sample because of one or more of the following:

  • They miss people who don’t have access to the internet.
  • They miss people who don’t have a smartphone.
  • They miss the 80%, 90%, or 99% who decline to take part.
  • They miss busy people.
Samples that suffer these problems include:
  • Central location tests miss the people who don’t come into central locations.
  • Face-to-face, door-to-door interviewing struggles with people who tend not to be home or who do not open the door to unknown visitors.
  • RDD/telephone misses people who decline to be involved.
  • Online access panels miss the 95%+ who are not members of panels.
  • RIWI and Google Consumer Surveys – miss the people who decline to be involved, and under-represent people who use the internet less.
  • Mobile research – typically misses people who do not have a modern phone and who do not have a reliable internet package/connection.

But, it usually works!

If we look at what AAPOR call non-probability samples with an academic eye we might expect the research to usually be ‘wrong’. In this case ‘wrong’ means gives misleading or harmful advice. Similarly, ‘right’ means gives information that supports a better business decision.

The reason that market research is a $40 Billion industry is that its customers (e.g. marketers, brand managers, etc.) have found it is ‘right’ most of the time. Which begs the question: “How can market research usually work when the sample is usually ‘wrong’?”

There are two key reasons why the wrong sample gives the right answer and these are:

  1. Homogeneity
  2. Modelling

Homogeneity
If different groups of people believe the same thing, or do the same thing, it does not matter, very much, who is researched. As an experiment look at your last few projects and look at the data split by region, split by age, or split by gender. In most cases you will see there are differences between the groups, often differences big enough to measure, but in most cases the differences are not big enough to change the message.

The reason there are so often few important differences is that we are all more similar to each other than we like to think. This is homogeneity. The level of homogeneity increases if we filter by behaviour. For example, if we screen a sample so that they are all buyers of branded breakfast cereal, they are instantly more similar (in most cases) than the wider population. If we then ask this group to rank 5 pack designs, there will usually be no major differences by age, gender, location etc (I will come back to this use of the word usually later).
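To make the homogeneity check concrete, here is a minimal sketch, with simulated data and hypothetical column names rather than a real project, of splitting results by a demographic and asking whether the preference ranking (rather than the absolute scores) actually changes:

```python
# Minimal sketch of the homogeneity check described above, using simulated data.
# Each design is given the same underlying appeal for every group, so the
# subgroup rankings should normally match the overall ranking.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000
base_appeal = {"design_1": 7.0, "design_2": 6.2, "design_3": 5.5,
               "design_4": 5.0, "design_5": 4.3}   # hypothetical pack designs

df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], size=n),
    "age_band": rng.choice(["18-34", "35-54", "55+"], size=n),
    **{d: np.clip(rng.normal(mu, 2.0, n).round(), 1, 10)
       for d, mu in base_appeal.items()},
})

design_cols = list(base_appeal)

def preference_order(frame):
    """Rank designs by mean appeal score, best first."""
    return frame[design_cols].mean().sort_values(ascending=False).index.tolist()

overall = preference_order(df)
print("Overall ranking:", overall)
for col in ("gender", "age_band"):
    for group, sub in df.groupby(col):
        same = preference_order(sub) == overall
        print(f"{col}={group}: same ranking as total sample? {same}")
```

The subgroup means will differ, often by measurable amounts, but in a situation like this the message (which design wins) rarely changes.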

In commercial market research, our ‘wrong’ samples usually make some effort to reflect the target population: we match their demographics to the population, and we screen them by interest (for example heavy, medium, and light users of the target brand). The result of this is that, surprisingly often, an online access panel or a Google Consumer Surveys test will produce useful and informative answers.

The key issue is usually not whether the sample is representative in a statistical sense, because it usually isn’t, the question should be whether it is a good proxy.

Modelling
The second way that market researchers make their results useful is modelling. If a researcher finds that their data source (let’s assume it is an online access panel) over-predicts purchase, they can down-weight their predictions; if they find their election predictions understate a specific party, they can up-weight the results. This requires having lots of cases and assumes that something that worked in the past will work in the future.
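As a purely illustrative sketch of this kind of calibration (the numbers below are invented, not from any real study), the adjustment can be as simple as scaling new claimed behaviour by the historically observed ratio of actual to claimed behaviour:

```python
# Minimal sketch of the modelling idea above, with hypothetical numbers:
# if past panel studies over-predicted purchase, scale new survey claims down
# by the historically observed ratio of actual to claimed behaviour.
past_claimed_purchase = 0.40   # share of panellists who said they would buy
past_actual_purchase = 0.25    # share who actually bought (e.g. from sales data)
calibration = past_actual_purchase / past_claimed_purchase   # 0.625

new_claimed_purchase = 0.36    # claimed intent from the latest panel study
adjusted_forecast = new_claimed_purchase * calibration
print(f"Adjusted purchase forecast: {adjusted_forecast:.1%}")  # ~22.5%
```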

So, what’s the problem?

The problem for market research is that there is no established body of knowledge or science to work out when the ‘wrong’ sample will give the right answer, and when it will give the ‘wrong’ answer. Some of the cases where the wrong sample gave the wrong answer include:

  • 1936 US presidential election, when a sample of 2 million people failed to predict Roosevelt would beat Landon.
  • In 2012 Google Consumer Surveys massively over-estimated the number of people who edit Wikipedia – perhaps by as much as 100% – see Jeffrey Henning’s review of this case.

My belief is that market researchers need to get over the sampling issue, by recognising the problems and by seeking to identify when the wrong sample is safe, when it is not safe, and how to make it safer.

When and why does the right sample give the wrong answer?

However, there is probably a bigger problem than the wrong sample. This problem is when we use the right sample, but we get the wrong answer. There are a wide variety of reasons, but key ones include:

  • People generally don’t know why they do things, and they don’t know what they are going to do in the future, but they will usually answer our questions.
  • Behaviour is contextual, for example choices are influenced by what else is on offer – research is too often either context free, or applies the wrong context, or assumes the context is consistent.
  • Behaviour is often not linear, and quite often does not follow the normal distribution – but most market research is based on means, linear regression, correlation etc.

A great example of the right sample giving the wrong answer is New Coke. The research, evidently, did not make it clear to participants that this new flavour was going to replace the old flavour, i.e. they would lose what they saw as “their Coke”.

In almost every product test conducted there are people saying they would buy it who would certainly not buy it. In almost every tracker there are people saying they have seen, or even used, products they have not seen – check out this example.

The issue that researchers need to focus on is total error: not sampling error, not total survey error, but total error. We need to focus on producing useful, helpful advice.


Apr 13, 2014
 
Shibuya At Night

OK, let’s get one thing clear from the outset; I am not saying social media mining and monitoring (the collection and automated analysis of large quantities of naturally occurring text from social media) has met with no success. But, I am saying that in market research the success has been limited.

In this post I will highlight a couple of examples of success, but I will then illustrate why, IMHO, it has not had the scale of success in market research that many people had predicted, and finally share a few thoughts on where the quantitative use of social media mining and monitoring might go next.

Some successes
There have been some successes and a couple of examples are:

Assessing campaign or message breakthrough. Measuring social media can be a great way to see if anybody is talking about a campaign or not, and of checking whether they are talking about the salient elements. However, because of some of the measurement challenges (more on these below) the measurement often ends up producing a three-level result: a) very few mentions, b) plenty of mentions, c) masses of mentions. In terms of content the measures tend to be X mentions on target, or Y% of the relevant mentions were on target – which in most cases is informative, but does not produce a set of measures that have any absolute utility or that can be tightly aligned with ROI.

An example of this use came with the launch of the iPhone 4 in 2010. Listening to SM made it clear that people had detected that the phone did not work well for some people when held in their left hand, that Apple’s message (which came across as) ‘you should be right handed’ was not going down well, and that something needed to be done. The listening could not put a figure on how many users were unhappy, nor even if users were less or more angry than non-users, but it did make it clear that something had to be done.

Identifying language, ideas, topics. By adding humans to the interpretation, many organisations have been able to identify new product ideas (the Nivea story of how it used social media listening to help create Nivea Invisible for Black and White is a great example). Other researchers, such as Annie Pettit, have shown how they have combined social media research with conventional research, to help answer problems.

Outside of market research. Other users of social media listening, such as PR and reaction marketers, appear to have had great results with social media, including social media listening. One of the key reasons for that is that their focus/mission is different. PR, marketing, and sales do not need to map or understand the space, they need to find opportunities. They do not need to find all the opportunities, they do not even need to find the best opportunities, they just need to find a good supply of good opportunities. This is why the use of social media appears to be growing outside of market research, but also why its use appears to be in relative decline inside market research.

The limitations of social media monitoring and listening
The strength of social media monitoring and listening is that it can answer questions you had not asked, perhaps had not even thought of. Its weakness is that it can’t answer most of the questions that market researchers’ clients ask.

The key problems are:

  • Most people do not comment in social media, most of the comments in social media are not about our clients’ brands and services, and the comments do not typically cover the whole range of experiences (they tend to focus on the good and the bad). This leaves great holes in the information gathered.
  • It is very hard to attribute the comments to specific groups, for example to countries, regions, to users versus non-users – not to mention little things like age and gender.
  • The dynamic nature of social media means that it is very hard to compare two campaigns or activities, for example this year versus last year. The number of people using social media is changing, how they are using it is changing, and the phenomenal growth in the use of social media by marketers, PR, sales, etc is changing the balance of conversations. Without consistency, the accuracy of social media measurements is limited.
  • Most automated sentiment analysis is considered by insight clients and market researchers to either be poor or useless. This means good social media usage requires people, which tends to make it more expensive and slower, often prohibitively expensive and often too slow.
  • Social media deals with the world as it is, brands can’t use it to test ads, to test new products and services, or almost any future plan.

The future?
Social media monitoring and listening is not going to go away. Every brand should be listening to what its customers and in many cases the wider public are saying about its brands, services, and overall image. This is in addition to any conventional market research it needs to do; this aspect of social media is not a replacement for anything, it is a necessary extra.

Social media has spawned a range of new research techniques that are changing MR, such as insight communities, smartphone ethnography, social media bots, and netnography. One area of current growth is the creation of 360 degree views by linking panel and/or community members to their transactional data, passive data (e.g. from their PC and mobile device), and social media data. Combined with the ability of communities and panels to ask questions (qual and quant) this may create something much more useful than just observational data.

I expect more innovations in the future. In particular I expect to see more conversations in social media initiated by market researchers, probably utilising bots. For example, programming a bot to look out for people using words that indicate they have just bought a new smartphone and asking them to describe how they bought it, what else they considered etc – either in SM or via asking them to continue the chat privately. There are a growing number of rumours that some of the major clients are about to adopt a hybrid approach, combining nano-surveys, social media listening, integrated data, and predictive analytics, and this could be really interesting, especially in the area of tracking (e.g. brand, advertising, and customer satisfaction/experience).
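For illustration only, here is a minimal sketch of the kind of research bot described above. It uses no real social platform API, just a pattern match over a list of hypothetical posts, and the follow-up step is simply printed:

```python
# Minimal, purely illustrative sketch of a research bot: scan incoming posts for
# phrases that suggest a recent smartphone purchase and queue a follow-up question.
# Post data, patterns, and the follow-up wording are all hypothetical.
import re

PURCHASE_PATTERNS = [
    re.compile(r"\bjust (bought|got|picked up) (a|my) new (phone|iphone|smartphone)\b", re.I),
    re.compile(r"\bupgraded (to|my) .*(phone|iphone|galaxy)\b", re.I),
]

FOLLOW_UP = ("Congrats on the new phone! Would you mind telling us how you "
             "chose it and what else you considered?")

def find_purchase_mentions(posts):
    """Return (author, text) pairs whose text matches a purchase pattern."""
    return [(author, text) for author, text in posts
            if any(p.search(text) for p in PURCHASE_PATTERNS)]

sample_posts = [
    ("anna", "Just bought a new phone and the camera is amazing"),
    ("ben", "Stuck in traffic again, great"),
]
for author, _ in find_purchase_mentions(sample_posts):
    print(f"Queue follow-up to @{author}: {FOLLOW_UP}")
```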

I also expect two BIG technical changes that will really set the cat amongst the pigeons. I expect somebody to do a Google and introduce a really powerful, free or almost free alternative to the social media mining and monitoring platforms, and I expect one or more companies to come up with sentiment analysis solutions that are really useful. I think a really useful platform will include the ability to analyse images and videos, to follow links (many interesting tweets and shares are about the content of the link), to build a PeekYou type of database of people (to help attribute the comments), and will have a much better text analytics approach.

 

Feb 07, 2014
 
Influence

One of the growth areas over the last few years has been in the interest in influence marketing, with books such as “The Influentials: One American in Ten Tells the Other Nine How to Vote, Where to Eat, and What to Buy”, metrics such as Klout and Kred, and marketing services such as Klout’s Perks.

The appeal of the influencer model is mostly common sense and has been popularised by writers such as Malcolm Gladwell in his book Tipping Point. New ideas are picked up by key people, people with extensive networks and who tend to be trend leaders, they adopt something and influence the people around them. Looking at social data it is easy to find, for any given trend, people who were in at the start and how the trend flows into the rest of the network. The concept of influence dates back to Paul Lazarsfeld in the 1940s, who suggested that the media were intermediated by influencers.

Homophily
However, there are alternatives to the influencer model, and the key one you are going to be hearing about more and more is homophily. Homophily is the tendency of people with similar tastes and preferences to form networks, the phrase “birds of a feather flock together” describes the concept. Until recently only a few people, for example Mark Earls, have been mentioning homophily, but a new book from Lutz Finger and Soumitra Dutta, “Ask, Measure, Learn”, puts the concept of homophily front and centre. Finger and Dutta cast major doubt on the influencer model, and therefore many of the metrics and marketing methods based on ‘influencers’.

The key finding of Finger and Dutta, and others before them such as Duncan Watts (see “Is the Tipping Point Toast?”) is that the data about how trends, tastes, and activities spread can be explained without resorting to an influence model. The suggestion is that when we look at data we see patterns, but they are the result of chance and networks, not influence.

Does the difference matter?
Yes, consider two cases:

  1. Market diffusion for Acme is led by influencers.
  2. Market diffusion for Acme is led by homophily.
In market 1 the best way to market would be to identify influencers and market to them. In market 2 the best way to market is to target people connected to people who have already bought the product.

So, how will the discussion in 2014 between the adherents of influence and homophily develop? The influence fans tend to appeal to ‘common sense’ and show output examples. The homophily fans tend to appeal to comprehensive data analysis and computer simulations. So, it is likely to be common sense and anecdotes versus facts and models – it would be nice to think science will win, but I have my doubts.

 

Jan 04, 2014
 

The worlds of academic and commercial research are being riven at the moment by concerns and accusations about how poor much of the published research, and the conclusions drawn from it, really is. This particular problem is not specifically about market research; it covers health research, machine learning, bio-chemistry, neuroscience, and much more. The problem relates to the way that tests are being created and interpreted. One of the key people highlighting the concerns about this problem is John Ioannidis from Stanford University, and his work has been reported in both academic and popular forums (for example The Economist). The quote “most published research findings are probably false” comes from Ioannidis.

Key Quotes
Here are some of the quotes and worries floating about at the moment:

  • America’s National Institutes of Health (NIH) – researchers would find it hard to reproduce at least three-quarters of all published biomedical findings
  • Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology – three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”
  • John Bohannon, a biologist at Harvard, submitted an error-strewn paper on a cancer drug derived from lichen to 350 journals (as an experiment); 157 accepted it for publication

Key Problems
Key problems that Ioannidis has highlighted, and which relate to market research are:

1. Studies that show an unhelpful result are often not published, partly because they are seen as uninteresting. For example, if 100 teams look to see if they can find a way of improving a process and all test the same idea, we’d expect 5 of them to have results that are significant at the 95% level, just by chance. The 95 tests that did not show significant results are not interesting, so they are less likely to be published. The 5 ‘significant’ results are likely to be published, and the researchers on those teams are likely to be convinced that the results are valid and meaningful. However, these 5 results would not have been significant if all 100 had been considered together. This problem has been widely associated with problems in replicating results.
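A quick simulation makes the point. In this sketch (hypothetical numbers, using SciPy’s standard t-test) 100 teams each test a process change that genuinely does nothing, and roughly five of them will still see a ‘significant’ result:

```python
# Minimal simulation of the publication-bias example above: 100 teams test the
# same (truly useless) process change at the 95% level; roughly 5 will see a
# "significant" result purely by chance, and those are the ones that get published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
significant = 0
for _ in range(100):                        # 100 independent teams
    control = rng.normal(5.0, 1.0, 200)     # no real difference between groups
    treated = rng.normal(5.0, 1.0, 200)
    if stats.ttest_ind(control, treated).pvalue < 0.05:
        significant += 1

print(f"'Significant' findings from a non-existent effect: {significant} of 100")
```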

2. Another version of the multiple tests problem is when researchers gather a large amount of data then trawl it for differences. With a large enough data set (e.g. Big Data), you will always find things that look like patterns. Tests can only be run if the hypotheses are created BEFORE looking at the results.

3. Ioannidis has highlighted that researchers often base their study design on implicit knowledge, without necessarily intending to, and often without documenting it. This implicit process can push the results in one direction or another. For example, a researcher looking to show two methods produced the same results might be thinking about questions that are more likely to produce the same answers. Asking people to say if they are male or female is likely to produce the same result across a wide range of question types and contexts. By contrast, questions about products that participants are less attached to, for example 10-point scale questions about emotional associations, are likely to be more variable, and therefore less likely to be consistent across different treatments.

4. Tests have a property called their statistical power, which in general terms is the ability of the test to avoid Type II errors (false negatives). The tests in use in neuroscience, biology, and market research typically have a much lower statistical power than the optimum. This led John Ioannidis in 2005 to assert that “most published research findings are probably false”.
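The effect of low statistical power is easy to simulate. In the sketch below (illustrative numbers only), a real but modest effect of 0.3 standard deviations exists, yet small samples miss it most of the time:

```python
# Minimal sketch of the statistical power point above: with a genuinely real but
# modest effect, an under-powered test misses it (a Type II error) most of the
# time, which is part of why so many published findings fail to replicate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def power(n_per_group, effect=0.3, sims=2000, alpha=0.05):
    """Estimate power by simulation: share of runs where the true effect is detected."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)   # true effect of 0.3 SD
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims

for n in (20, 80, 350):
    print(f"n = {n:3d} per group -> power ≈ {power(n):.0%}")
```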

Market Research?
What should market researchers make of these tests and their limitations? Test data is a basic component of evidence for market research. Researchers should seek to add any new evidence they can acquire to that which they already know, and where necessary do their own checking. In general, researchers should seek to find theoretical reasons for the phenomena they observe in testing, rather than relying solely on test data.

However, let’s stop saying tests “prove” something works, and let’s stop quoting academic research as if it were “truth”. Things are more or less likely to be true, in market research and indeed most of science, there are few things that are definitely true.

The ‘science’ underpinning behavioural economics, neuroscience, and Big Data (to name just three) should be taken as work in progress, not ‘fact’.

Is Ioannidis Right?
If we are in the business of doubting academic research, then it behoves us to doubt the academic telling us to be more skeptical. There are people who are challenging the claims. For example this article from January 2013 claims that the real figure for bad biomedical research is ‘just’ 14%, rather than three-quarters.

Dec 23, 2013
 
The material below is an excerpt from a book I am writing with Navin Williams and Sue York on Mobile Market Research, but its implications are much wider and I would love to hear people’s thoughts and suggestions.

Most commercial fields have methods of gaining and assessing insight other than market research, for example testing products against standards or legal parameters, test launching, and crowd-funding. There are also a variety of approaches that although used by market researchers are not seen by the market place as exclusively (or even in some cases predominantly) the domain of market research, such as big data, usability testing, and A/B testing.

The mobile ecosystem (e.g. telcos, handset manufacturers, app providers, mobile services, mobile advertising and marketing, mobile shopping etc) employs a wide range of these non-market research techniques, and market researchers working in the field need to be aware of the strengths and weaknesses of these approaches. Market researchers need to understand how they can use the non-market research techniques and how to use market research to complement what they offer.

The list below covers techniques frequently used in the mobile ecosystem which are either not typically offered by market researchers or which are offered by a range of other providers as well as market researchers. Key items are:

  • Usage data, for example web logs from online services and telephone usage from the telcos.
  • A/B testing.
  • Agile development.
  • Crowdsourcing, including open-source development and crowdfunding.
  • Usability testing.
  • Technology or parameter driven development.

Usage data

The mobile and online worlds leave an extensive electronic wake behind users. Accessing a website tells the website owner a large amount about you, in terms of hardware, location, operating system, and the language the device is using (e.g. English, French, etc.), and it might allow an estimate of things like age and gender based on the sites you visit and the answers you pick. Use a mobile phone and you tell the telco who you contacted, where you were geographically, how long the contact lasted, and what sort of contact it was (e.g. voice or SMS). Use email, such as Gmail or Yahoo, and you tell the service provider who you contacted, which of your devices you used, and the content of your email. Use a service like RunKeeper or eBay or Facebook and you share a large amount of information about yourself and, in most cases, about other people too.

In many fields, market research is used to estimate usage and behaviour, but in the mobile ecosystem there is often at least one company who can see this information without using market research, and see it in much better detail. For example, a telco does not need to conduct a survey with a sample of its subscribers to find out how often they make calls or to work out how many texts they send, and how many of those texts are to international numbers. The telco has this information, for every user, without any errors.

Usage data tends to be better, cheaper, and often quicker than market research for recording what people did. It is much less powerful in working out why patterns are happening, and it is thought (by some people) to be weak in predicting what will happen if circumstances change. However, it should be noted that the advocates of big data and in particular ‘predictive analytics’ believe that it is possible to work out the answer to ‘what-if’ questions, just from usage/behaviour data.
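The telco example can be made concrete with a toy sketch; the record format and field names below are hypothetical stand-ins for real call detail records, but the point is that the answer is a simple count, not an estimate:

```python
# Minimal sketch (hypothetical field names) of the telco example above: with
# usage records there is nothing to estimate, the answer is a simple aggregation
# over every subscriber's call detail records.
from collections import Counter

# (subscriber_id, event_type, destination) - a toy stand-in for real CDRs
records = [
    ("sub1", "sms", "+44 7700 900123"),
    ("sub1", "call", "+44 20 7946 0000"),
    ("sub2", "sms", "+1 202 555 0175"),
    ("sub2", "sms", "+44 7700 900456"),
]

sms_per_subscriber = Counter(sub for sub, kind, _ in records if kind == "sms")
international_sms = sum(
    1 for _, kind, dest in records
    if kind == "sms" and not dest.startswith("+44")   # assuming a UK telco
)

print(dict(sms_per_subscriber))          # exact counts, no sampling error
print(f"International texts: {international_sms}")
```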

Unique access to usage data
One limitation to the power of usage data is that in most cases only one organisation has access to a specific section of usage data. In a country with two telcos, each will only have access to the usage data for their subscribers, plus some cross-network traffic information. The owner of a website is the only company who can track the people who visit that site (* with a couple of exceptions). A bank has access to the online, mobile and other data from its customers, but not data about the users of other banks.

This unique access feature of usage data is one of the reasons why organisations buy data from other organisations and conduct market research to get a whole market picture.

* There are two exceptions to the unique access paradigm.
The first is that if users can be persuaded to download a tracking device, such as the Alexa.com toolbar, then that service will build a large, but partial picture of users of other services. This is how Alexa.com is able to estimate the traffic for the leading websites globally.

The second exception is if the service provider buys or uses a tool or service from a third party then some information is shared with that provider.

A complex and comprehensive example of this type of access is Google who sign users up to their Google services (including Android), offer web analytics to websites, and serve ads to websites, which allows them to gain a large but partial picture of online and mobile behaviour.

Legal implications of usage data
Usage data, whether it is browsing, emailing, mobile, or financial, is controlled by law in most countries, although the laws tend to vary from one jurisdiction to another. Because the scale and depth of usage data is a new phenomenon, and because the tools to analyse it and the markets for selling/using it are still developing, the laws are tending to lag behind the practice.

A good example of the challenges that legislators and data owners face in determining what is permitted and what is not is the problems Google had in Spain and the Netherlands towards the end of 2013. The Dutch Government’s Data Protection Agency ruled in November 2013 that Google had broken Dutch law by combining data from its many services to create a holistic picture of users. Spain went one step further and fined Google 900,000 Euros (about $1.25 million) for the same offence. This is unlikely to be the end of the story: the laws might change, Google might change its practices (or the permissions it collects), or the findings might be appealed. However, these cases illustrate that data privacy and protection are likely to create a number of challenges for data users and legislators over the next few years.

A/B testing

The definition of A/B testing is a developing and evolving one; and it is likely to evolve and expand further over the next few years. At its heart A/B testing is based on a very old principle: create a test where two offers differ in only one detail, present these two choices to matched but separate groups of people to evaluate, and whichever is the more popular is the winner. What makes modern A/B testing different from traditional research is the tendency to evaluate the options in the real market, rather than with research participants. One high profile user of A/B testing is Google, who use it to optimise their online services. Google systematically, and in many cases automatically, select a variable, offer two options, and count the performance with real users. The winning option becomes part of the system.
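As a minimal sketch of how a winner might be judged (the conversion numbers are invented, and real platforms use more sophisticated machinery), a simple two-proportion z-test on the two groups is enough to illustrate the principle:

```python
# Minimal sketch of the A/B principle described above: two versions differ in one
# detail, each is shown to a separate group of real users, and the winner is
# judged on observed conversion (here with a simple two-proportion z-test).
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# hypothetical results: version A vs version B of a sign-up page
p_a, p_b, p = two_proportion_z(conv_a=480, n_a=10000, conv_b=545, n_b=10000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  p-value: {p:.3f}")
```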

Google’s A/B testing is now available to users of some of its systems, such as Google Analytics. There are also a growing range of companies offering A/B testing systems. Any service that can be readily tweaked and offered is potentially suitable for A/B testing – in particular virtual or online services.

The concept of A/B testing has moved well beyond simply testing two options and assessing the winner, for example:

  • Many online advertising tools allow the advertiser to submit several variations and the platform adjusts which execution is shown most often, and to whom it is shown, to maximise a dependent variable, for example to maximise click-through (see the sketch after this list).
  • Companies like Phillips have updated their direct mailing research/practice by developing multiple offers, e.g. 32 versions of a mailer, employing design principles to allow the differences to be assessed. The mailers are used in the market place, with a proportion of the full database, to assess their performance. The results are used in two ways. 1) The winning mailer is used for the rest of the database. 2) The performance of the different elements are assessed to create predictive analytics for future mailings.
  • Dynamic pricing models are becoming increasingly common in the virtual and online world. Prices in real markets, such as stock exchanges have been based for many years on dynamic pricing, but now services such as eBay, Betfair, and Amazon apply differing types of automated price matching.
  • Algorithmic bundling and offer development. With services that are offered virtually the components can be varied to iteratively seek combinations that work better than others.
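Here is a minimal sketch of the ad-rotation idea from the first bullet above, using a simple epsilon-greedy rule and invented click-through rates; real advertising platforms use more sophisticated allocation, so this is purely illustrative:

```python
# Minimal epsilon-greedy sketch of the ad-rotation idea above (hypothetical click
# data): mostly show the execution with the best observed click-through rate, but
# keep exploring the alternatives occasionally.
import random

executions = ["ad_A", "ad_B", "ad_C"]
shown = {ad: 0 for ad in executions}
clicks = {ad: 0 for ad in executions}
TRUE_CTR = {"ad_A": 0.020, "ad_B": 0.035, "ad_C": 0.025}   # unknown in real life
EPSILON = 0.1

random.seed(3)
for _ in range(50_000):                        # 50k simulated impressions
    if random.random() < EPSILON or not any(shown.values()):
        ad = random.choice(executions)         # explore
    else:
        ad = max(executions,                   # exploit the best observed CTR
                 key=lambda a: clicks[a] / shown[a] if shown[a] else 0)
    shown[ad] += 1
    clicks[ad] += random.random() < TRUE_CTR[ad]

for ad in executions:
    print(f"{ad}: shown {shown[ad]:6d}, observed CTR {clicks[ad] / shown[ad]:.3%}")
```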

The great strength of A/B testing is in the area of small, iterative changes, allowing organisations to optimise their products, services, and campaigns. Market research’s key strength, in this area, is the ability to research bigger changes and help suggest possible changes.

Agile development

Agile development refers to operating in ways where it is easy, quick, and cheap for the organisation to change direction and to modify products and services. One consequence of agile development is that organisations can try their product or service in the market place, rather than assessing it in advance.

Market research is of particular relevance when the costs of making a product are large, or where the consequences of launching an unsatisfactory product or service are large. But, if products and services can be created easily and the consequences of failure are low, then ‘try it and see’ can be a better option than classic forms of market research. Whilst the most obvious place for agile development is in the area of virtual products and services, it is also used in more tangible markets. The move to print on demand books has reduced the barriers to entry in the book market and facilitated agile approaches. Don Tapscott in his book Wikinomics talks about the motorcycle market in China, which adopted an open-source approach to its design and manufacture of motorcycles, something which combined agile development and crowdsourcing (the next topic in this section).

Crowdsourcing

Crowdsourcing is being used in a wide variety of ways by organisations, and several of these ways can be seen as an alternative to market research, or perhaps as routes that make market research less necessary. Key examples of crowdsourcing include:

  • Open source. Systems like Linux and Apache are developed collaboratively and then made freely available. The priorities for development are determined by the interaction of individuals and the community, and the success of changes is determined by a combination of peer review and market adoption.
  • Crowdfunding. One way of assessing whether an idea has a good chance of succeeding is to try and fund it through a crowdfunding platform, such as Kickstarter. The crowdfunding route can provide feedback, advocates, and money.
  • Crowdsourced product development. A great example of crowdsourcing is the T-shirt company Threadless.com. People who want to be T-shirt designers upload their designs to the website. Threadless displays these designs to the people who buy T-shirts and asks which ones people want to buy. The most popular designs are then manufactured and sold via the website. In this sort of crowdsourced model there is little need for market research as the audience get what the audience want, and the company is not paying for the designs, unless the designs prove to be successful.

Usability testing

Some market research companies offer usability testing, but there are a great many providers of this service who are not market researchers and who do not see themselves as market researchers. The field of usability testing brings together design professionals, HCI (human-computer interaction), and ergonomics, as well as market researchers.

Usability testing for a mobile phone, or a mobile app, can include:

  • Scoring it against legal criteria to make sure it conforms to statutory requirements.
  • Scoring it against design criteria, including criteria such as disability access guidelines.
  • User lab testing, where potential users are given access to the product or service and are closely observed as they use it.
  • User testing, where potential users are given the product or given access to the service and use it for a period of time, for example two weeks. The usage may be monitored, there is often a debrief at the end of the usage period (which can be qualitative, quantitative, or both), and usage data may have been collected and analysed.

Technology or parameter driven

In some markets there are issues other than consumer choice that guide design and innovation. In areas like mobile commerce and mobile connectivity, there are legal and regulatory limits and requirements as to what can be done, so the design process will often be focused on how to maximise performance, minimise cost, whilst complying with the rules. In these situations, the guidance comes from professionals (e.g. engineers or lawyers) rather than from consumers, which reduces the role for market research.

Future innovations

This section of the chapter has looked at a wide range of approaches to gaining insight that are not strengths of market research. It is likely that this list will grow over time as technologies develop and it is likely to grow as the importance of the mobile ecosystem continues to grow.

As well as new non-market research approaches being developed it is possible, perhaps likely, that areas which are currently seen as largely or entirely the domain of market research will be shared with other non-market research companies and organisations. The growth in DIY or self-serve options in surveys, online discussions, and even whole insight communities are an indication of this direction of travel.


So, that is where the text is at the moment. Plenty of polishing still to do. But here are my questions:
  1. Do you agree with the main points?
  2. Have I missed any major issues?
  3. Are there good examples of the points I’ve made that you could suggest highlighting/using?

Dec 11, 2013
 

We are all familiar with the phrase that correlation is not the same as causality, but we also know that in many cases correlation is a really good indicator that something is important – so how do we judge how much importance to give to correlation?

In the 1940s, British scientist Richard Doll conducted a study of 649 cases of lung cancer and noted that only two were non-smokers, causing him to a) stop smoking and b) start researching the link between smoking and cancer. The correlation certainly did not prove smoking caused lung cancer. As a point of interest, in the 1940s about 80% of adults smoked, so it would have been expected that most people with lung cancer smoked. A simplistic view of correlation would have said that no action should have been taken until a cause was identified. We now know that smoking tobacco releases more than 70 different cancer-causing substances.

Sometimes a correlation is useful, even when the phenomenon being measured is not a cause. Waist measurements are highly correlated with health problems, but the waist measurement does not directly cause health problems. Having too much fat tends to cause the problems and having too much fat makes the waist measurement go up. So, by measuring waist measurements we can assess likely health issues, even though the link is only correlation.
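The waist-measurement example can be illustrated with a small simulation (the numbers are invented): body fat drives both waist size and health risk, so the two are strongly correlated even though one does not cause the other:

```python
# Minimal simulation of the waist-measurement example above: body fat drives both
# waist size and health risk, so waist correlates strongly with risk and is a
# useful indicator, even though shrinking the tape measure would change nothing.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
body_fat = rng.normal(25, 6, n)                        # the underlying cause
waist = 60 + 1.2 * body_fat + rng.normal(0, 3, n)      # symptom of body fat
health_risk = 0.8 * body_fat + rng.normal(0, 4, n)     # also driven by body fat

corr = np.corrcoef(waist, health_risk)[0, 1]
print(f"Correlation between waist and health risk: {corr:.2f}")  # high, yet not causal
```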

So, if we find a correlation should we ignore it until we can find a cause or mechanism, or can we act just on the correlation? The answer is, as is often the case, ‘it depends’.

When thinking about brands and marketing, whether causality matters often depends on whether we are trying to tackle the underlying cause or the visible measurements. For example, it is likely, IMHO, that making your brand more relevant and having a more engaging presence will grow the number of Facebook likes; this is likely to be a good thing, and monitoring the likes is probably a good thing too. However, if the number of likes is set as a KPI, then the pressure on the managers is not to increase the engagement or salience of the brand, it is to increase the number of likes. There are many ways of increasing likes that have little impact on the brand, from running one-off promotions through to paying people to click like (typically from low-cost economies). By changing the statistic from a measure to a target we have fallen into the causality/correlation trap.

In many ways, this view of correlation and causality reflects the key point in the book Obliquity. Obliquity points out there are many things you can only achieve by not trying to achieve them, such as happiness. If a brand wants to increase its satisfaction, social engagement, or salience it can only measure it with statistics such as NPS, Likes, social media comments etc if it does not seek to directly change the numbers. Increasing your social media comments by being more newsworthy, having great products, or running wonderful campaigns is great. However, engaging a clever agency to boost your social media mentions is likely to be much less effective.

Nov 24, 2013
 

To help celebrate the Festival of NewMR we are posting a series of blogs from market research thinkers and leaders from around the globe. These posts will be from some of the most senior figures in the industry to some of the newest entrants into the research world.

A number of people have already agreed to post their thoughts, and the first will be posted later today. But, if you would like to share your thoughts, please feel free to submit a post. To submit a post, email a picture, bio, and 300 – 600 words on the theme of “Opportunities and Threats faced by Market Research” to admin@newmr.org.

Posts in this series
The following posts have been received and posted: