Jul 16, 2014

Sometimes when I run a workshop or training session people want detail; they want practical information about how to do stuff. However, there are times when what people want is a big picture, a method of orientating themselves in the context of the changing landscape around them. Tomorrow I am running a workshop for #JMRX in Tokyo and we are looking at emerging techniques, communities, and social media research – so a big picture is going to be really useful to help give an overview of the detail, and to help people see where things like gamification, big data, and communities all fit.

So, here is my Big Picture of NewMR (click on it to see it full size), and I’d love to hear your thoughts and suggestions.

Big Picture

The Big Picture has five elements:

The heart of the message is that we have reached an understanding that surveys won’t/can’t give us the answers to many of the things we are interested in. People’s memories are not good enough, many decisions are automatic rather than thought through, and most decisions are driven more by emotion than by fact. Change is needed, and the case for this has been growing over the last few years.

The four shapes around the centre are different strands that seek to address the survey problem.

In the top left we have big data and social media data, moving away from working with respondents towards collecting observations of what people say and do, and using those observations to build analyses and predictive models.

In the top right we have a battery of new ways of working with respondents to find out why they do things, going beyond asking them survey questions.

In the bottom left we have communities, which I take as a metaphor for working with customers, co-creating, crowdsourcing, and treating customers as insiders, not just users.

The bottom right combines elements from the other three. ‘In the moment’ is perhaps, currently, the hottest thing in market research. It combines the ability to watch and record what people do with the ability to interact with them, exploring why they did it and what they would do if the options changed.

Thoughts?
So, that is my big picture. Does it work for you? What would you add, change, delete, or tweak?


 

Mar 22, 2013

This week’s MRS Conference in London was one of the best events I have been to in the last year, generating lots of material to think about. There was a great mix of thinkers from the industry, ideas from outside market research, discussion, and good networking. The conference was true to its theme of the ‘Shock of the New’. The only weakness that I think is worth mentioning, because it is a recurring problem, is that there was too little international content. If the UK is going to command a position as an innovator, it needs more input from outside the UK, IMHO.

Key elements, for me, included:

The limitations of Big Data
The panel discussion, including great contributions from Lucien Bowater from BSkyB and Mark Risley from Google, emphasised the current limitations of big data in terms of the sorts of problems that market research is asked to answer. Big data approaches work best when there is a clearly defined, narrow question, and sufficient resources to find an answer. In many cases, market research is being called on to answer a more general, less well defined problem. Lucien, more than once, made the plea for research to tell him where to dig, i.e. provide a broad answer to a broad problem, so that he could then apply more detailed techniques.

The panel also drew a marked distinction between real-time data collection (good) and real-time analysis (often not good).

What market research can learn from crowdsourcing
The photo, from the MRS website [http://www.mrs.org.uk/janefrost_archive/blog/386], shows a panel discussion of four practitioners of crowdsourcing, moderated by me. Although market research has long used some aspects of crowdsourcing, it was fascinating and useful to hear how:

  • The People Who Share are creating a sharing economy, disintermediating traditional channels, and freeing up value by promoting sharing.
  • Transcribe Bentham are mobilising volunteers to contribute to an academic and literary project by helping transcribe the millions of words handwritten by Jeremy Bentham into a digital format, which has obvious implications for how market research might seek to tackle coding and tagging the mass of unstructured information it is gathering.
  • PeopleFund.it represented the world of crowdfunding. One interesting point made by MD Phil Geraghty was that putting an idea into crowdfunding, and letting the best ideas rise to the top, is sometimes a direct alternative to market research.
  • IdeaBounty showed how brands can access the creativity of the masses, and disintermediate agencies, by creating a platform where people can aim to win bounties by offering solutions to brands. Of particular relevance to market research was all the work IdeaBounty have done on IP, which applies directly to areas like insight communities.

What market research can learn from art
The closing speaker on the first day was UK artist David Shrigley [http://www.davidshrigley.com/]. For me the main message was ‘be braver’: if we have an idea we should present it, without seeking to build lots of safety nets or excuses, just present it. Shrigley shared a large number of his drawings and some of his videos with us. The one for Scottish knitwear brand Pringle was especially eye-catching and memorable; you can see it here.

What market research can learn from science
The BBC broadcaster and professor of physics Jim Al-Khalili gave a great closing presentation to the conference. Amongst the themes he covered were the dangers of paradoxes, showing that we can trap ourselves with faulty logic. He also highlighted the degree to which scientists have to deal with uncertainty, and the limits to what can be known. In contrast to his modern view of science, most market researchers either seem to reject science or have a primitive 1920s approach to science based on proving ideas, as opposed to basing their approach on ‘falsifiability’. Check out Al-Khalili’s views on whether we have free will.

Scenario Planning is still less common than it ought to be!
My colleague Niamh Tallon and I ran a workshop on futuring, trendspotting, and cool hunting. Many of the slides I used were taken from a workshop I ran in 2002; however, the material seemed as fresh to market researchers now as it was then. I will come back to this on a future occasion.

Unintended benefits
I found some of the sessions useful, but not in the way that the people presenting intended. For example, the sight, sound and emotion session contained several reminders that a little learning can be a dangerous thing. More than one speaker in that session (IMHO) over-interpreted findings from other disciplines. Indeed, the session created a bit of a buzz on Twitter as people highlighted errors, and created the desire to have a NewMR session focused on exploding MR myths. You can read more about the Explode-A-Myth session here.

Nov 22, 2012

Earlier this week I was in Singapore, attending the MRSS Asia Research Conference, which this year focused on the theme of Big Data. There was an interesting range of papers, including ones linking neuroscience, Behavioural Economics, and ethnography to Big Data.

One reference that was repeated by several of the speakers, including me, was IBM’s four Vs, i.e. Volume, Velocity, Variety, and Veracity. Volume is a given: big data is big. Velocity relates to the speed at which people want to access the information. Variety reminds us that Big Data includes a mass of unstructured information, including photos, videos, and open-ended comments. Veracity relates to whether the information is correct or reliable.

However, as I listened to the presentations, and whilst I heard at least three references to the French mathematician/philosopher René Descartes, my mind turned to another French mathematician, Pierre-Simon Laplace. In 1814, Laplace put forward the view that if someone were (theoretically) to know the precise position and movement of every atom, it would be possible to predict their future positions – a philosophical position known as determinism. Laplace was shown to be wrong, first by the laws of thermodynamics, and secondly, and more thoroughly, by quantum mechanics.

The assumption underlying much of Big Data seems to echo Laplace’s deterministic views, i.e. that if we have enough data we can predict what will happen next. A corollary to this proposition is a further assumption that if we have more data, then the predictions will be even better. However, neither of these is necessarily true.

There are several key factors that limit the potential usefulness of Big Data:

  1. Big Data only measures what has happened in a particular context. Mathematics can often use interpolation to produce a reliable view of the detail of what happened. However, extrapolation, i.e. predicting what will happen in a different context (e.g. the future), is often problematic.
  2. If you add random or irrelevant data to a meaningful signal, then the signal is less clear. The only way to process the signal is to remove the random or irrelevant data. If we try to measure shopping behaviour and we collect everything we can collect, then we can only make sense of it by removing elements irrelevant to the behaviour we are trying to measure – bigger isn’t always better.
  3. If the data we collect are correlated with each other (i.e. they exhibit multicollinearity) then most mathematical techniques will not attribute the contribution of each factor correctly – rendering predictions unstable (see the sketch after this list).
  4. Some patterns of behaviour are chaotic. Changes in the inputs cause changes in the outputs, but not in ways that are predictable.
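
To make point 3 a little more concrete, here is a minimal sketch (Python with NumPy; the data, sample sizes, and variable names are invented purely for illustration) of how two highly correlated inputs make a model’s individual coefficients swing from sample to sample, even while the overall fit looks fine.

    import numpy as np

    def fit_coefficients(seed, n=200):
        """Fit y = b1*x1 + b2*x2 by ordinary least squares on a fresh sample."""
        rng = np.random.default_rng(seed)
        x1 = rng.normal(size=n)
        x2 = x1 + rng.normal(scale=0.01, size=n)      # x2 is almost a copy of x1
        y = x1 + x2 + rng.normal(scale=0.5, size=n)   # true contributions: 1 and 1
        X = np.column_stack([x1, x2])
        coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coefs

    for seed in range(5):
        b1, b2 = fit_coefficients(seed)
        # The combined effect (b1 + b2) stays close to 2, but the split between
        # the two near-identical factors swings wildly from sample to sample.
        print(f"sample {seed}: b1 = {b1:+7.2f}, b2 = {b2:+7.2f}, b1 + b2 = {b1 + b2:.2f}")

The combined prediction is stable, but any claim about which of the two factors is driving behaviour is not, and collecting more of the same correlated data does not repair that; it is one of the senses in which more data is not automatically better data.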

One of the most successful organisations in using Big Data has been Tesco. For almost 20 years, the retailer has been giving competitors and suppliers a hard time by utilising the data from its Clubcard loyalty scheme. Scoring Points (the book about Tesco written by Clive Humby and Terry Hunt) shows that one key to Tesco’s success was that they took the four points above into account.

Tesco simplified the data, removed noise, and categorised the shoppers, the baskets, and the times of day. Their techniques are based on interpolation, not extrapolation, and they are able to extend the area of knowledge by trial and error.

Big Data is going to be increasingly important to marketers and market researchers. But its usefulness will be greater if people do not over-hype it. More data is not necessarily better. Knowing what people did will not necessarily tell you what they will do. And knowing what people did will often not tell you why they did it, nor what they might do if the choice is repeated or varied.

Marketers and market researchers seduced by the promise of Big Data should remember Laplace’s demon – and realise that the world is not deterministic.