Mar 22 2013
 

This week’s MRS Conference in London was one of the best events I have been to in the last year, generating lots of material to think about. There was a great mix of thinkers from the industry, ideas from outside market research, discussion, and good networking. The conference was true to its theme of the ‘Shock of the New’. The only weakness that I think is worth mentioning, because it is a recurring problem, is that there was too little international content. If the UK is going to command a position as an innovator, it needs more input from outside the UK, IMHO.

Key elements, for me, included:

The limitations of Big Data
The panel discussion, including great contributions from Lucien Bowater from BSkyB and Mark Risley from Google, emphasised the current limitations of big data in terms of the sorts of problems that market research is asked to answer. Big data approaches work best when there is a clearly defined, narrow question and sufficient resources to find an answer. In many cases, market research is being called on to answer a more general, less well-defined problem. Lucien, more than once, made the plea for research to tell him where to dig, i.e. provide a broad answer to a broad problem, so that he can then apply more detailed techniques.

The panel also drew a marked distinction between real-time data collection (good) and real-time analysis (often not good).

What market research can learn from crowdsourcing
The photo, from the MRS website [http://www.mrs.org.uk/janefrost_archive/blog/386], shows a panel discussion of four practitioners of crowdsourcing, being moderated by me. Although market research has long used some aspects of crowdsourcing, it was fascinating and useful to hear how:

  • The People Who Share are creating a sharing economy, disintermediating traditional channels, and freeing up value by promoting sharing.
  • Transcribe Bentham are mobilising volunteers to contribute to an academic and literary project by helping transcribe the millions of words handwritten by Jeremy Bentham into a digital format, which has obvious implications for how market research might seek to tackle coding and tagging the mass of unstructured information it is gathering.
  • PeopleFund.it represented the world of crowdfunding. One interesting point made by MD Phil Geraghty was that putting an idea into crowdfunding, and letting the best ideas rise to the top, is sometimes a direct alternative to market research.
  • IdeaBounty showed how brands can access the creativity of the masses, and disintermediate agencies, by creating a platform where people can aim to win bounties by offering solutions to brands. Of particular relevance to market research was all the work IdeaBounty have done on IP, which applies directly to areas like insight communities.

What market research can learn from art
The closing speaker on the first day was UK artist David Shrigley [http://www.davidshrigley.com/]. For me the main message was ‘be braver’: if we have an idea we should present it, without seeking to build lots of safety nets or excuses – just present it. Shrigley shared a large number of his drawings and some of his videos with us. The one for Scottish knitwear brand Pringle was especially eye catching and memorable; you can see it here.

What market research can learn from science
The BBC broadcaster and professor of physics Jim Al-Khalili gave a great closing presentation to the conference. Amongst the themes he covered were the dangers of paradoxes, showing how we can trap ourselves with faulty logic. He also highlighted the degree to which scientists have to deal with uncertainty, and the limits to what can be known. In contrast to his modern view of science, most market researchers either seem to reject science or take a primitive 1920s approach based on proving ideas, as opposed to basing their approach on ‘falsifiability’. Check out Al-Khalili’s views on whether we have free will.

Scenario Planning is still less common than it ought to be!
My colleague Niamh Tallon and I ran a workshop on futuring, trendspotting, and cool hunting. Many of the slides I used were taken from a workshop I ran in 2002; however, the material seemed as fresh to market researchers now as it did then. I will come back to this on a future occasion.

Unintended benefits
I found some of the sessions useful, but not in the way that the people presenting intended. The sight, sound and emotion session, for instance, contained several reminders that a little learning can be a dangerous thing: more than one speaker in the session (IMHO) over-interpreted findings from other disciplines. Indeed this session created a bit of a buzz on Twitter as people highlighted errors, and created the desire to have a NewMR session focused on exploding MR myths. You can read more about the Explode-A-Myth session here.

Mar 17 2013
 

As I have mentioned before, Navin Williams, Reg Baker, and I are producing a module on mobile marketing research for the University of Georgia’s Principles of Marketing Research course. As the materials develop, I am posting some of the items here to get your feedback on whether we are on the right track and to acquire additional cases for the course.

I have just finished a section on laws, guidelines, regulations, and ethics, and I have included a checklist. I would appreciate hearing your views on the checklist below.

Ethics Checklist – Mobile Marketing Research

No list can be complete or fully up-to-date, but this checklist should be useful when scoping a mobile marketing research project, in terms of ethics and guidelines:
  1. Is it legal? Check whether, in the country where the research is happening, the approaches being used and the topic being researched are allowed under the relevant laws. In all cases the law should take precedence over industry or company guidelines. With a mobile study this includes when and how you can contact people (auto-dialers are not legal in some countries), what you can record (location and traffic data are restricted in some countries), and where the respondent can and cannot be (in most countries they can’t be driving and they can’t be in a sensitive area, for example customs and immigration at airports).
  2. Is it marketing research? Marketing researchers are increasingly being called on to use their skills on projects that are not marketing research. If a project is not marketing research the researcher should ensure that it is not called, or made to appear as, marketing research and should abide by the relevant laws and guidelines.
  3. Are you collecting personally identifiable data? Note, this includes photographs of people’s faces. If you are collecting personally identifiable data, you will need to have permission, you will need to follow data protection/privacy rules, and you should only collect such data as is needed by the project (don’t collect things just in case).
  4. Have you acquired informed consent? The main challenge with mobile marketing research is to ensure that the consent from the participants is adequately informed, especially when collecting passive data. If you collect video and images of people other than the research participant, have you acquired their permission?
  5. How will you avoid annoying people? For example, an SMS to their phone at 1am may well wake them. In a global study, or in a country with different time zones, extra steps are required to ensure that MMR does not become a source of annoyance. Also, can the participant re-start the study if they lose their connection?
  6. Try to make sure that the participant does not put themselves in danger. Check they are not driving or operating heavy machinery, and advise them not to take pictures or videos of unsuitable subjects, such as other people’s children.
  7. What confidence can the user of the research have in the findings? If different participants see versions of the survey, rendered in different ways, in different circumstances, how will that impact the results? How has the sample been obtained, what does it represent, and what can it be taken as a reasonable proxy for?
A good place to check that your research is heading in the right direction is ESOMAR, whose Guideline For Conducting Mobile Market Research is an important resource for researchers.

Mar 12 2013
 

This year’s MRS conference looks to be the most unsettling for years. The conference includes a range of new topics, each talking about how the non-research world will impact the cosy world of market research. If you can make it to London on the 19th and 20th March, I’d recommend it – if not, watch out for the digital outpouring.

I am lucky enough to be involved in two sessions and my colleagues at Vision Critical are the sponsors of the Research-live.com hub. The two sessions, both with the potential to be disruptive, are:

  1. A workshop on the application of scenario planning, futuring, trendspotting, and cool hunting, which I will be co-presenting with my Vision Critical colleague Niamh Tallon.
  2. A panel discussion on crowdsourcing, with four practitioners from outside the world of market research. The session is intended to highlight where crowdsourcing is at in 2013, and why market researchers should be learning from it.

Read About The Crowdsourcing Panel
Not everybody can attend the MRS Conference, and even for those who do, there is never time for long introductions. So the downloadable documents below provide an initial briefing on crowdsourcing, and the four organisations who will be taking part in the panel discussion.

How Crowdsourcing is Changing Your World – a note on crowdsourcing and the panel who will be appearing at the MRS Conference.

IdeaBounty, who will be represented by Conductor :: 42Engines, Heidi Schneigansz.

The People Who Share, who will be represented by Chief Sharer, Benita Matofska.

Transcribe Bentham, who will be represented by Dr Tim Causer.

PeopleFund.it, who will be represented by Managing Director, Phil Geraghty.

If you could ask a question to the panel, what would it be?

Mar 07 2013
 

In a recent LinkedIn discussion, one contributor suggested that 80% of new product launches fail. This sort of statistic occurs in marketing discussions on a regular basis, with varying definitions of failing and various values being quoted, sometimes as high as 95%, sometimes as low as 75%. But I feel that these discussions are often addressing the wrong issue when they ask ‘Why is market research concept testing so bad?’ Whenever I hear stats like ‘80% of product launches fail’, I ask myself: what percentage of new launches should fail?

I think that we need to move the question away from percentages and ask ‘How many product launches, in absolute terms, could be successful?’ As people like Mark Earls (author of Herd) have shown, we are creatures of habit and we mostly copy behaviour. As individuals, most of us are only going to fully adopt a handful of new products each year, and we are more likely to adopt a product if others do.

So, I would contend that within any specific market, there is a limit to the absolute number of new products that can be successful. In my 35 years in research the number of new product launches has gone up and up, and I feel it has gone up faster than the number of successful new products. I suspect that a ratio of 5:1 could well be close to the current ratio between the number of products that organisations feel they have to launch each year and the number that could be successful.

If I am a product manager at a company, I may be obliged (by the C-suite and the organisation’s strategy) to launch a range of products every year. If a fast moving consumer brand felt it needed to launch ten new lines or variants every year to keep getting space on the shelf, to get press coverage, to motivate sales, and to keep the brand looking fresh, it is not going to suddenly launch no products if it feels they might not succeed – not launching would, of course, guarantee failure.

If I am a brand manager, I might have 100 concepts to evaluate, I might be expected to launch 10, and I know that parity is 2 (i.e. a 20% success rate) – as long as market research helps me pick the right 10, i.e. the 10 that increase the chance of getting at least 2 successes, I will feel research has helped.
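
To make that arithmetic concrete, here is a minimal sketch of my own (not from the original post), assuming each launch succeeds independently and using hypothetical per-launch success probabilities of 20% (parity) and 30% (a better concept screen):

```python
from math import comb

def prob_at_least_two(p_success, n_launches=10):
    """P(at least 2 of n_launches succeed), assuming each launch succeeds
    independently with probability p_success (a simplifying assumption)."""
    p_fewer_than_two = sum(
        comb(n_launches, k) * p_success**k * (1 - p_success) ** (n_launches - k)
        for k in (0, 1)
    )
    return 1 - p_fewer_than_two

# Hypothetical numbers: parity of 2 successes from 10 launches implies roughly a
# 20% per-launch success rate; suppose better concept testing lifts that to 30%.
print(f"Parity screen (p=0.2): {prob_at_least_two(0.2):.0%} chance of 2+ successes")
print(f"Better screen (p=0.3): {prob_at_least_two(0.3):.0%} chance of 2+ successes")
```

Even this toy calculation makes the point: research does not need to be perfect, it just needs to raise the odds of the portfolio hitting its target.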

One of the key things we are seeing from modelling, and again from authors like Mark Earls, is that the role of luck is enormous. If 5 equally good and attractive products are launched into the same market sector, normally only one will succeed, i.e. one of them will do much better than the rest. The mechanism of success will be social copying, supported by marketing and luck.
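
The kind of modelling referred to here can be illustrated with a very simple social-copying simulation. This is a minimal sketch of my own (not a model taken from Mark Earls or the post), assuming five identical products, one early adopter each, and consumers who mostly copy earlier buyers:

```python
import random

def simulate_market(n_products=5, n_consumers=10_000, copy_prob=0.9, seed=None):
    """Toy social-copying model: each new consumer either copies an earlier
    buyer (weighted by current adoption) or picks one of the equally good
    products at random."""
    rng = random.Random(seed)
    adopters = [1] * n_products  # every product starts with one early adopter
    for _ in range(n_consumers):
        if rng.random() < copy_prob:
            # social copying: choose in proportion to existing adoption
            choice = rng.choices(range(n_products), weights=adopters)[0]
        else:
            # independent choice: all products are assumed equally attractive
            choice = rng.randrange(n_products)
        adopters[choice] += 1
    return adopters

if __name__ == "__main__":
    shares = simulate_market(seed=1)
    total = sum(shares)
    for rank, count in enumerate(sorted(shares, reverse=True), start=1):
        print(f"Rank {rank}: {count / total:.1%} of adopters")
```

Run it a few times with different seeds: typically one product ends up with a large share while the others trail well behind, even though nothing distinguishes them except early luck amplified by copying.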

So, I am all in favour of better concept testing, and I think that market research should keep pushing for better and better options. But, I think that even if we had perfect testing, the majority of products would fail – where I am defining perfect testing as something that would identify every product that had a chance to be massively successful. However, if every product we predicted to be successful had to be successful, nobody would ever launch a product with a 90% chance of success – which would put the elimination of Type I errors (false positives) above everything else, at the cost of a massive number of Type II errors (false negatives).

When real innovators and successful marketers criticise market research, it is not usually because we have allowed too many new ideas to reach the market. It is usually that we have stopped too many products that would have been successful from being launched. Market research should balance Type I errors (false positives) with Type II errors (false negatives). The history of market research is that we have probably been too keen on eliminating Type I errors, which means we have helped prevent millions of potentially great products from being launched.

In my humble opinion, the aim of any new concept testing or market prediction technique should be to be more predictive than existing tools, not to reach some arbitrary level of accuracy.

However, I will make one forecast. In ten years the failure rate of new product launches will be higher than it is now! This is because I think the number of new products launched will grow faster than the number of new products that can be successful – which will increase the failure rate. The number that will be launched is a function of falling barriers to market, increased retail complexity, and the nature of modern marketing. The number that can be successful is limited by how many things people, like us, will change in any one year – are you going to change your coffee, AND your favourite jam, AND your favourite beer, AND your toothpaste, AND so on – and then stick with your new choices long enough to make them successful?

Mar 01 2013
 

Navin Williams and I are creating an MMR (mobile marketing research) module for the MRII’s Principles of Marketing Research course at the University of Georgia (under the guiding eye of Reg Baker) – click here to read more.

As part of that course we need to provide a very short summary of the history of MMR. So, the text below is my starting point and I would love to hear any suggestions, corrections, etc that people out there might have.

In terms of context, the course takes the term MMR to include:

  • Self-completion surveys via mobile devices (i.e. not CATI)
  • Qualitative research utilising mobile devices
  • Passive data collection via mobile devices

Mobile devices are taken as mostly being mobile phones, although more recently the term has been expanded to include tablets. Other devices exist, such as PDAs, but they have never been central to the bulk of MMR. Phones tend to be divided into smartphones and feature phones. However, the definition of a smartphone keeps changing, and a feature phone is pretty much any phone that does not meet the current criteria for a smartphone – and yes we are assuming that smartphone is now a single word.

A Brief History of Mobile Marketing Research

The first serious attempts to use mobile phones for market research appeared in the 1990s. The projects that were run at this time tended to use SMS. Questions were sent to respondents via SMS, and the respondents answered via SMS, typically by entering a single digit, such as 1 for Agree strongly, 2 for Agree, etc. These surveys were very short. Few market research projects were transferred to this method, because of the requirement for surveys to be very short and because the interface was considered so limited. This method is still in use today, in cases where it meets specific research needs.

One early innovation with the SMS method was to utilise its ‘in the moment’ potential. For example, some locations put up signs inviting users/visitors to text their satisfaction score to a central location. As phones became ‘smarter’, for example acquiring larger screens and some form of internet access (e.g. WAP), people started to try to utilise the phones for longer and/or more complex surveys. For example, by 2000 several researchers were reporting success in Japan, utilising DoCoMo’s early lead in this area, sending longer surveys and incentivising respondents via telephone credits. However, MMR remained marginal as a percentage of all marketing research.

With the growth in the ownership of smartphones, the app/web dichotomy was created and started to deepen. Some researchers preferred to design software that could be downloaded on respondents’ mobile devices, whilst others sought to get the respondents to connect to the internet. This dichotomy is explored elsewhere in the course. Although both routes are still being used, the majority of quantitative MMR at the moment is via browsers, i.e. by connecting to the internet at the time of the survey, rather than via an app. The app approach, however, is very popular with some of the qualitative and passive data collection approaches.

By about 2005, with BlackBerry phones and internet-enabled PDAs becoming more common, researchers started reporting that a small percentage of respondents were completing online surveys, intended for PCs, on their mobile devices. At the time this ‘accidental’ MMR (accidental on the part of the researcher) accounted for a very small proportion of online surveys. Over the following 8 years this proportion of accidental MMR has grown, as phones have become smarter/larger, and with the arrival of tablets, and is now reported as being in the region of 10-20% of all online surveys.

From about 2005, the qualitative uses of mobile devices started to blossom, with a growing number of researchers utilising participants’ phones to collect data about the participants’ everyday lives – for example by collecting images and recordings. Researchers also started utilising participants’ mobile devices to connect the participants with blogs, bulletin boards, discussions and communities. From 2005, and until very recently, many (perhaps most) of the success stories in MMR came from the realm of qualitative research.

With the advent of the iPhone in 2007, MMR moved into a higher gear. Qualitative researchers sought to use the extra facilities, as did some quantitative researchers. In 2008, the appearance of Android phones from companies such as HTC helped ensure that the new generation of smartphones established a critical mass, enabling MMR to focus on the benefits of mobiles.

In 2010 the iPad was released, presaging a major growth in the penetration of tablets, and making definitions in MMR a bit more tricky. Some researchers have been using smartphones as CAPI devices for several years, but the arrival of tablets has made this more attractive and easier to implement.

By 2010 mobile phones had become common across both the developed and developing world. Consequently, researchers were putting them to ever greater use in developing markets, particularly in Africa and Asia. However, in developing markets MMR tends to focus on feature phones rather than smartphones, and online surveys tend not to be an alternative.

From about 2010 there has been a growth in the amount of passive data being collected as part of MMR (telcos have been collecting it for their own purposes from the outset, of course). This passive data is typically collected by downloading an app onto the phone, which records information such as usage, internet connections, phone calls, location, etc.

By 2012 the MMRA (Mobile Marketing Research Association) had been formed, conferences were being dedicated to the topic of MMR, ESOMAR had published guidelines on the proper and ethical use of MMR, and there were about 6 billion mobile phones in use globally.


I would love to hear your thoughts, corrections, suggestions, extensions, etc. In particular, I am keen to source data on the incidence of different forms of MMR from about 1990 onwards.