May 14, 2014

Guest Post by Ilka Kuhagen, Co-founder of Think Global Qualitative and founder of IKM, see her LinkedIn profile by clicking here.

The QRCA Global Outreach Scholarship is a wonderful opportunity for qualitative researchers outside the US, UK and Canada to experience a QRCA annual conference. One Scholarship is awarded to a qualitative researcher in the early stages of their career, while the second is for a more senior practitioner who is well established in the industry.

This year’s recipients will have the opportunity to come to New Orleans from 15–17 October 2014.

QRCA is currently seeking candidates for two 2014 Global Outreach Scholarships:

  • The Foundation Scholarship is awarded to a qualitative researcher who is relatively new to qualitative research, but is already establishing a career path in this field. For instance, they should have developed some experience of moderating group discussions and IDIs and of analysing the results.
  • The Advanced Scholarship is intended for a qualitative researcher who is already well established in their career, but wants to expand and deepen their knowledge of methods and techniques, and to maximize the value of the projects that they plan and execute for their clients.

The Scholarships cover the cost of conference registration (valued at up to US$1,425) and offer up to US$1,000 to cover travel expenses to the conference. QRCA’s Annual Conference provides exposure to the latest in qualitative thinking and techniques, and is an invaluable opportunity for international qualitative researchers to extend their network of contacts around the world. In addition, the recipients are given free QRCA membership for the remainder of 2014 if they are not already members.

Full information about the Scholarships, including specific details about the qualifying criteria and application process, is available on QRCA’s website or can be obtained from Darrin Hubbard. The closing date for applications is Friday 30 May 2014.

Full information is available on QRCA’s website. The form to apply can be downloaded by clicking here.

On the website you can also watch the video (short version or in full length) with the two winners from 2013 by clicking here.


Feb 01, 2014

Neil Gains has been kind enough to send me a copy of his new book, Brand esSense, and I wanted to share my thoughts about this useful book.

In the title to my post I say it is two books in one. The first three-quarters of the book do a great job of taking the reader through a well-annotated and easy-to-read overview of the role of the senses in marketing and market research, and of the way these link to how people make sense of the world around them. This sense-making focuses especially on symbols, signs, storytelling, and archetypes.

Most market researchers and marketers have an incomplete understanding of the senses: somebody might be quite good on taste but less familiar with the body of learning about touch, or familiar with symbols and semiotics but less familiar with the use of brand archetypes. Neil’s book facilitates a levelling up of one’s learning, highlighting areas where the reader’s knowledge might be weaker, giving them an initial grounding, and signposting options for further reading.

The final quarter of the book shows how Neil has developed methods of utilising the approaches described in the earlier part of the book – which he terms the esSense of the brand. Neil illustrates how to find the esSense of a brand and how to apply his esSense framework.

The book is an easy read for anybody broadly familiar with brands, the senses, and qualitative research. Even for people deeply steeped in the area, there are nuggets in there that they will find illuminating or useful. So, I would warmly recommend it.

As I flick back through my annotated copy (I have become an inveterate scribbler in textbooks – ones I own), I can see plenty of things I highlighted for review, and only one or two where I put an exclamation mark (my sign for disagreement). My only double exclamation mark was the reference to Mehrabian and the extent to which language contributes to presentations – when you read the book, see if you agree with my concern; if you do, you might enjoy this short presentation from Russ Wilson.

The book is published by Kogan Page and is available from all good online bookstores. From the Kogan Page website you can download a sample chapter.


Dec 23, 2013
The material below is an excerpt from a book I am writing with Navin Williams and Sue York on Mobile Market Research, but its implications are much wider and I would love to hear people’s thoughts and suggestions.

Most commercial fields have methods of gaining and assessing insight other than market research, for example testing products against standards or legal parameters, test launching, and crowd-funding. There are also a variety of approaches that although used by market researchers are not seen by the market place as exclusively (or even in some cases predominantly) the domain of market research, such as big data, usability testing, and A/B testing.

The mobile ecosystem (e.g. telcos, handset manufacturers, app providers, mobile services, mobile advertising and marketing, mobile shopping etc) employs a wide range of these non-market research techniques, and market researchers working in the field need to be aware of the strengths and weaknesses of these approaches. Market researchers need to understand how they can use the non-market research techniques and how to use market research to complement what they offer.

The list below covers techniques frequently used in the mobile ecosystem which are either not typically offered by market researchers or which are offered by a range of other providers as well as market researchers. Key items are:

  • Usage data, for example web logs from online services and telephone usage from the telcos.
  • A/B testing.
  • Agile development.
  • Crowdsourcing, including open-source development and crowdfunding.
  • Usability testing.
  • Technology or parameter driven development.

Usage data

The mobile and online worlds leave an extensive electronic wake behind users. Accessing a website tells the website owner a large amount about the user: the hardware, location, operating system, and the language the device is using (e.g. English, French, etc.), and the owner might estimate things like age and gender based on the sites visited and the choices made. Use a mobile phone and you tell the telco who you contacted, where you were geographically, how long the contact lasted, and what sort of contact it was (e.g. voice or SMS). Use email, such as Gmail or Yahoo, and you tell the service provider who you contacted, which of your devices you used, and the content of your email. Use a service like RunKeeper or eBay or Facebook and you share a large amount of information about yourself and, in most cases, about other people too.

In many fields, market research is used to estimate usage and behaviour, but in the mobile ecosystem there is often at least one company who can see this information without using market research, and see it in much better detail. For example, a telco does not need to conduct a survey with a sample of its subscribers to find out how often they make calls, or to work out how many texts they send and how many of those texts are to international numbers. The telco has this information, for every user, with no sampling error.

Usage data tends to be better, cheaper, and often quicker than market research for recording what people did. It is much less powerful in working out why patterns are happening, and it is thought (by some people) to be weak in predicting what will happen if circumstances change. However, it should be noted that the advocates of big data and in particular ‘predictive analytics’ believe that it is possible to work out the answer to ‘what-if’ questions, just from usage/behaviour data.
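As a toy sketch of the point above, here is how a telco might tally usage directly from its own call-detail records, with no survey or sample involved. The records, field names, and values are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical call-detail records: (subscriber, contact type, destination)
records = [
    ("u1", "voice", "domestic"), ("u1", "sms", "international"),
    ("u2", "sms", "domestic"),   ("u1", "sms", "domestic"),
    ("u2", "voice", "domestic"), ("u2", "sms", "international"),
]

# Tally every event for every subscriber - a census, not a sample
usage = defaultdict(lambda: {"voice": 0, "sms": 0, "intl_sms": 0})
for user, kind, dest in records:
    usage[user][kind] += 1
    if kind == "sms" and dest == "international":
        usage[user]["intl_sms"] += 1

for user, counts in sorted(usage.items()):
    print(user, counts)
```

The aggregation answers the "how often" and "how many" questions exactly; what it cannot answer on its own is the "why" behind the patterns.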

Unique access to usage data
One limitation to the power of usage data is that in most cases only one organisation has access to a specific section of usage data. In a country with two telcos, each will only have access to the usage data for their subscribers, plus some cross-network traffic information. The owner of a website is the only company who can track the people who visit that site (* with a couple of exceptions). A bank has access to the online, mobile and other data from its customers, but not data about the users of other banks.

This unique access feature of usage data is one of the reasons why organisations buy data from other organisations and conduct market research to get a whole market picture.

* There are two exceptions to the unique access paradigm.
The first is that if users can be persuaded to download a tracking device, such as the Alexa toolbar, then that service can build a large, but partial, picture of users of other services. This is how Alexa is able to estimate the traffic for the leading websites globally.

The second exception is if the service provider buys or uses a tool or service from a third party then some information is shared with that provider.

A complex and comprehensive example of this type of access is Google, which signs users up to its Google services (including Android), offers web analytics to websites, and serves ads to websites, allowing it to build a large but partial picture of online and mobile behaviour.

Legal implications of usage data
Usage data, whether it is browsing, emailing, mobile, or financial, is controlled by law in most countries, although the laws tend to vary from one jurisdiction to another. Because the scale and depth of usage data is a new phenomenon, and because the tools to analyse it and the markets for selling and using it are still developing, the laws tend to lag behind practice.

A good example of the challenges that legislators and data owners face in determining what is permitted and what is not is the trouble Google had in Spain and the Netherlands towards the end of 2013. The Dutch Government’s Data Protection Agency ruled in November 2013 that Google had broken Dutch law by combining data from its many services to create a holistic picture of users. Spain went one step further and fined Google 900,000 Euros (about $1.25 million) for the same offence. This is unlikely to be the end of the story: the laws might change, Google might change its practices (or the permissions it collects), or the findings might be appealed. However, the cases illustrate that data privacy and protection are likely to create a number of challenges for data users and legislators over the next few years.

A/B testing

The definition of A/B testing is still developing, and it is likely to evolve and expand further over the next few years. At its heart, A/B testing is based on a very old principle: create a test where two offers differ in only one detail, present the two choices to matched but separate groups of people, and whichever is the more popular is the winner. What makes modern A/B testing different from traditional research is the tendency to evaluate the options in the real market, rather than with research participants. One high-profile user of A/B testing is Google, who use it to optimise their online services. Google systematically, and in many cases automatically, select a variable, offer two options, and count the performance with real users. The winning option becomes part of the system.

Google’s A/B testing is now available to users of some of its systems, such as Google Analytics. There are also a growing range of companies offering A/B testing systems. Any service that can be readily tweaked and offered is potentially suitable for A/B testing – in particular virtual or online services.
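A minimal sketch of the arithmetic behind an A/B comparison, using a standard two-proportion z-test to decide whether the winner really won. The traffic numbers are invented, and real platforms automate this (and handle issues like repeated peeking at the results) rather than computing it by hand:

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert at a
    significantly different rate from variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical example: variant A shown to 5,000 users (200 clicks),
# variant B shown to 5,000 users (260 clicks).
p_a, p_b, z = ab_test(200, 5000, 260, 5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 would indicate significance at the 5% level
```

With these made-up numbers the uplift from 4.0% to 5.2% is statistically significant, so variant B would be rolled out to all users.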

The concept of A/B testing has moved well beyond simply testing two options and assessing the winner, for example:

  • Many online advertising tools allow the advertiser to submit several variations; the platform then adjusts which execution is shown most often, and to whom, to maximise a dependent variable such as click-through rate.
  • Companies like Phillips have updated their direct-mail research/practice by developing multiple offers, e.g. 32 versions of a mailer, employing design principles that allow the differences to be assessed. The mailers are used in the marketplace, with a proportion of the full database, to assess their performance. The results are used in two ways: 1) the winning mailer is used for the rest of the database; 2) the performance of the different elements is assessed to create predictive analytics for future mailings.
  • Dynamic pricing models are becoming increasingly common in the virtual and online world. Prices in real markets, such as stock exchanges, have been set dynamically for many years, but now services such as eBay, Betfair, and Amazon apply differing types of automated price matching.
  • Algorithmic bundling and offer development. With services that are offered virtually the components can be varied to iteratively seek combinations that work better than others.

The great strength of A/B testing is in the area of small, iterative changes, allowing organisations to optimise their products, services, and campaigns. Market research’s key strength, in this area, is the ability to research bigger changes and help suggest possible changes.

Agile development

Agile development refers to operating in ways where it is easy, quick, and cheap for the organisation to change direction and to modify products and services. One consequence of agile development is that organisations can try their product or service in the marketplace, rather than assessing it in advance.

Market research is of particular relevance when the costs of making a product are large, or where the consequences of launching an unsatisfactory product or service are large. But, if products and services can be created easily and the consequences of failure are low, then ‘try it and see’ can be a better option than classic forms of market research. Whilst the most obvious place for agile development is in the area of virtual products and services, it is also used in more tangible markets. The move to print on demand books has reduced the barriers to entry in the book market and facilitated agile approaches. Don Tapscott in his book Wikinomics talks about the motorcycle market in China, which adopted an open-source approach to its design and manufacture of motorcycles, something which combined agile development and crowdsourcing (the next topic in this section).


Crowdsourcing

Crowdsourcing is being used in a wide variety of ways by organisations, and several of these ways can be seen as an alternative to market research, or perhaps as routes that make market research less necessary. Key examples of crowdsourcing include:

  • Open source. Systems like Linux and Apache are developed collaboratively and then made freely available. The priorities for development are determined by the interaction of individuals and the community, and the success of changes is determined by a combination of peer review and market adoption.
  • Crowdfunding. One way of assessing whether an idea has a good chance of succeeding is to try and fund it through a crowdfunding platform, such as Kickstarter. The crowdfunding route can provide feedback, advocates, and money.
  • Crowdsourced product development. A great example of crowdsourcing is the T-shirt company Threadless. People who want to be T-shirt designers upload their designs to the website. Threadless displays these designs to the people who buy T-shirts and asks which ones they want to buy. The most popular designs are then manufactured and sold via the website. In this sort of crowdsourced model there is little need for market research, as the audience gets what the audience wants, and the company does not pay for the designs unless they prove to be successful.

Usability testing

Some market research companies offer usability testing, but there are a great many providers of this service who are not market researchers and who do not see themselves as market researchers. The field of usability testing brings together design professionals, HCI (human–computer interaction) specialists, and ergonomics experts, as well as market researchers.

Usability testing for a mobile phone, or a mobile app, can include:

  • Scoring it against legal criteria to make sure it conforms to statutory requirements.
  • Scoring it against design criteria, including criteria such as disability access guidelines.
  • User lab testing, where potential users are given access to the product or service and are closely observed as they use it.
  • User testing, where potential users are given the product, or access to the service, and use it for a period of time, for example two weeks. The usage may be monitored, there is often a debrief at the end of the usage period (which can be qualitative, quantitative, or both), and usage data may be collected and analysed.

Technology or parameter driven development

In some markets there are issues other than consumer choice that guide design and innovation. In areas like mobile commerce and mobile connectivity, there are legal and regulatory limits and requirements as to what can be done, so the design process will often focus on how to maximise performance and minimise cost whilst complying with the rules. In these situations, the guidance comes from professionals (e.g. engineers or lawyers) rather than from consumers, which reduces the role for market research.

Future innovations

This section of the chapter has looked at a wide range of approaches to gaining insight that are not strengths of market research. This list is likely to grow over time, both as technologies develop and as the importance of the mobile ecosystem continues to grow.

As well as new non-market research approaches being developed it is possible, perhaps likely, that areas which are currently seen as largely or entirely the domain of market research will be shared with other non-market research companies and organisations. The growth in DIY or self-serve options in surveys, online discussions, and even whole insight communities are an indication of this direction of travel.

So, that is where the text is at the moment. Plenty of polishing still to do. But here are my questions:
  1. Do you agree with the main points?
  2. Have I missed any major issues?
  3. Are there good examples of the points I’ve made that you could suggest highlighting/using?

Nov 27, 2013

Post by Neil Gains, the founder of TapestryWorks, based in Singapore.

What is the single biggest threat to market research?


Data represents the existing business model of large multinational research companies who make money by selling as many interviews/surveys/data points as possible for as high a CPI (cost per interview) as possible.

Data also represents a huge part of the future of market research. While “big data” is an opportunity to extract value from the ever-increasing flow of data from businesses and from the multiple devices that we all use at home and as we go about the world, this is also a huge threat. It’s a threat for market research companies and businesses, most of whom do not have the skill set to manage and analyze large data sets. More importantly, it’s a threat to the talent in the industry. As the pool of data grows every year, it becomes more and more important to have the thinking skills to ask the right questions, and connect different information sources together. This is not a skill that can be automated.

And finally data represents the focus of much of market research on “measuring to manage”. By definition, most large-scale data collection exercises can only look back at the past and attempt to explain what has already happened. However, the increasing need of business is to look forward and either predict the future or create it.

So the traditional data driven model of market research is not only being broken as a business model, by the provision of cheap or (virtually) free sample, but also by the increasing importance of using business insight to drive innovation. In a world where the brand and product cycle becomes faster and faster, it is less and less important to look back and more and more important to move forward.

The opportunity for market research is clear. Whatever happens to the business of sample and data collection, there will (or should) always be a need to understand customers. The business of market research is the business of helping other businesses to change the behavior of customers.

And this is where perhaps the biggest disruptions are happening for market research. Market research innovation has focused on technology and new ‘toys’, with less emphasis on changing the fundamentals of asking questions. The debate has been about how you translate a 30-minute survey to mobile phones, rather than asking whether it makes sense to ask questions at all. And we should be asking such questions of the industry, because the evidence of the behavioral sciences on the reliability of direct question approaches is very clear. The old models don’t work.

Looking from the client perspective, businesses increasingly need to use customer understanding to drive a constant stream of ideas and innovations to keep ahead of their competitors. This need can’t wait for data to be collected, analyzed and presented, but has to be a constantly evolving interaction between businesses and their customers to co-create the future.

More and more, businesses need to synthesize data across multiple sources, use customer understanding to drive change and remain agile enough to respond to a rapidly changing marketplace.

The future of market research is not in data. Or insights. The opportunity is to understand behavior and use that understanding to help clients create their future.

Click here to read other posts in this series.

Nov 24, 2013

To help celebrate the Festival of NewMR we are posting a series of blogs from market research thinkers and leaders from around the globe. These posts will be from some of the most senior figures in the industry to some of the newest entrants into the research world.

A number of people have already agreed to post their thoughts, and the first will be posted later today. But, if you would like to share your thoughts, please feel free to submit a post. To submit a post, email a picture, a bio, and 300–600 words on the theme of “Opportunities and Threats faced by Market Research”.

Posts in this series
The following posts have been received and posted:

Nov 15, 2013

On 14 November 2013 in London, the ICG (the Independent Consultants Group) held its fourth Question Time event, where five leading lights of the MR industry were invited to answer questions posed by ICG members and the audience. I had the honour of chairing the session and putting the questions to the five luminaries.

The five panel members were (quoting their description on the ICG site):

  • Ken Parker, AQR Chairman; founder – Discovery Research; sports research expert and football fanatic
  • Becky Rowe, MD of ESRO; an award-winning researcher for NHS ethnography work
  • Paul Edwards, Chief Strategy Officer, Hall & Partners; vastly experienced industry leader and ad planner
  • Janet Kiddle, Founder: Steel Magnolia and long-time ICG member; ex MD of TRBI
  • Mike Barnes, Consultant; ex Head of Research, RBS

As ever the session was a social success with lots of networking and discussion, including a chance for me to hear about Dinko Svetopetic’s success in promoting Insight Communities in Poland via his company MRevolution.

But, what I wanted to post here were my key takeaways from the session.

  1. Big Data is the topic of the moment. However, the general view is that Big Data is making relatively slow progress and will initially have a much bigger impact on the large agencies than on the independents and consultants. Indeed, Big Data may even be an opportunity for independents in that they can provide help on understanding the “Why?” and helping shape the “So what?”
  2. DIY is a threat to independents and consultants, but it is also an opportunity. When clients find they have bitten off more than they can chew, or when they get out of their depth, the independents and consultants are a great resource to help resolve issues.
  3. One challenge for independents (and clients) is how to stay up-to-date with the latest approaches, tools, and technology. The view of the panel was that nobody can stay fully informed about everything. The key for independents is to develop strengths, not an ever-wider offering, and to support this with networks.
  4. Another threat to independents and consultants is competition from people supplying poor research, particularly in the context of faster/cheaper research. The general response of the room and the panel was that independents should continue to stress the need for good research, that analysis requires experience and time, and to focus on the clients who are looking for something more than ‘value’ or bulk research. Ken Parker was also able to report back (with his AQR hat on) on the moves being made to create suitable accreditation schemes for qual research and for recruiters. The ICG is involved in this initiative, so keep your eyes on their website.
  5. The hunt is clearly still on for a better way of presenting information. Becky Rowe made the case for hiring professional communicators/designers to improve the way we communicate in MR. Mike Barnes gave the client’s perspective that presenters need to have done their homework and identified what a particular audience expects and needs – one size does not fit all.
  6. In terms of key trends independents need to be aware of, the panel identified:
    • Online qual
    • DIY
    • Big Data
    • The need to combine asking questions (in qual and quant) with observational research

For me, one of the interesting nuggets was that over two-thirds of the room had delivered at least one ‘old fashioned’ written report (i.e. more than 10 pages of words) in the last year. To me this suggests that clients who are working with independents are looking for something different from the sort of ‘fast food’ they typically buy from the agencies.

Sep 03, 2013

Below is a list of the five posts on this blog that in 2013 have been read by the largest number of unique readers, as measured by Google Analytics.

  1. Why do companies use market research? This was posted December 30, 2012, and has had 633 unique viewers in 2013.
  2. The ITU is 100% wrong on mobile phone penetration, IMHO. Posted 29 June, 2013, viewed by 380 unique people.
  3. Is it a bad thing that 80% of new products fail? Posted 7 March, 2013, 353 unique viewers.
  4. Notes for a non-researcher conducting qualitative research. This was only posted on 26 August, 2013, so it is probably still on its way up. It has 350 unique viewers.
  5. A Short History of Mobile Marketing Research. Posted 1 March, 2013, with 278 unique views.

I ran the analysis to see if I could spot any patterns in what made a successful NewMR post. However, so far, no clear pattern is emerging. Any thoughts or suggestions?

Aug 26, 2013

In November I am presenting a paper to the ESOMAR Conference on Qualitative Research, in Valencia in Spain. My paper suggests that one threat to qualitative research is the potential for damage caused by people with no training in qualitative research using one of the many DIY tools that are appearing – especially those for online discussions and instant chats.

My suggestion is to create a simple set of notes that will help put newcomers to our world on the right path. Below is the initial draft of my notes, and I would really appreciate your feedback.

The Playbook

The playbook needs to be short, relevant, and easy to use if it is going to be of value to people looking to conduct their own research. Therefore, this initial draft covers the following topics:

  • Evidence, not answers
  • Creating a narrative
  • Analysis begins at the start not the end of the project
  • Creating a discussion guide
  • Not everything that matters can be counted
  • Data does not mean numbers
  • Consider actors and agendas
  • We are poor witnesses to our own motivations
  • Memoing
  • Enabling the participants whenever possible
  • Grounding the story in the data
  • Examples that inform, not ones that entertain
  • The “But, I already knew that!” test

Evidence, not answers
Qualitative research, for example, online discussions, real-time chat, smartphone ethnography, or discussions gathered from social media, does not provide categorical, definitive answers. Qualitative research provides evidence, and the researcher has to interpret this evidence to produce the product of the research.

A quantitative study might discover that 10% of the population buy a product from ACME Corporation. This 10% is an answer, something discovered and provided by the research. A qualitative discussion might suggest that people seemed willing to use words like respect, admire, and trust about ACME, but were less willing to say love, like, or associate with ACME. The researcher has to determine what that might mean and what the implications for ACME might be.

At the end of a qualitative project we can’t say things like “50% of the participants said they would try the product”, implying that 50% of the target group will buy the product. The qualitative participants are not numerous enough to forecast population-wide behaviour and the way the questions were asked will have affected the thinking and responses of the participants. A qualitative finding is more likely to describe what the people who said they would try it liked about the product, how they came to their decision that they might try it, and what was inhibiting those who did not want to try it.

A quantitative ad test might try to forecast how many people would recall it, how many would recommend the product, and how many would buy the product. A qualitative ad test tries to find out how the ad worked and to suggest how it might be improved.

Creating a narrative
The purpose of a qualitative market research project is to create a story that illuminates the topic under investigation. Qualitative researchers do not ‘discover’ the story, they create the story from what they find, potentially co-creating it with the participants and/or the client. The evidence they gather, the knowledge they have, the knowledge the client has, need to be woven together to produce the final narrative.

The narrative that is created needs to explain the evidence in a way that throws light on the subject so that it facilitates better business decisions. Qualitative researchers are aware that there is no one ‘correct’ story, there are usually many ways to tell a good/effective/useful story (and of course even more ways to tell it in ineffective or misleading ways).

Analysis begins at the start not the end of the project
Before conducting the research the researcher needs to think about what is already known, what needs to be known, and the sorts of evidence that will help create a narrative. During the data gathering phase, the text (e.g. the chat, the posts, the comments) should be reviewed to challenge the hypotheses the researcher already has and to help create new hypotheses. The researcher should seek to test hypotheses by posting questions, by assigning tasks, and by probing existing answers, in ways that will make or break the hypotheses.

For example, if the researcher feels that the participants do not trust a specific brand, the participants might be asked to write a list of all the things they like about that brand. The words that are not on the list are a clue to what people feel. The words not on the list can then be used to elicit which brands do have those characteristics.

Create a discussion guide
A discussion guide is a plan of what is going to be discussed during the research. Researchers vary in how detailed their guides are. Some researchers spell out every question they plan to ask in their online chat, focus group, or discussion. Others simply map out the topics they plan to cover and the sequence in which they initially expect to ask about them.

Without a discussion guide the research runs the risk of running out of time, of failing to cover all the necessary topics, or of bringing up the topics in an order that is likely to inappropriately bias the results. A discussion guide can also be a useful way of checking with other stakeholders that the research is likely to cover what is needed.

Not everything that matters can be counted
In most cases, the exact number of times a particular word is used is not directly relevant to the outcome of a qualitative research project. Simple tools, particularly word clouds, give a picture of qualitative data based simply on how often certain words occur. Whilst a word cloud can be a useful starting point, it is never enough. Qualitative research is conducted by reading and considering all the material. In a modern qualitative project, that might include words, pictures, videos, audio contributions and more.

The sequence in which things are said can often matter more than the frequency of words. In an online discussion, for example, it is not unusual for several participants to comment on why they like something, until one person raises a major drawback. When this happens the conversation on that point may simply stop, because the drawback is so clear. But a word count of that conversation would treat the drawback as a single comment and the many earlier praises as more significant. The order in which things are said can matter as much as the content of what is said.
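The frequency-versus-sequence point can be sketched in a few lines of Python. The comments below are invented for illustration, not taken from a real project:

```python
from collections import Counter

# A toy transcript: several early comments praise a product,
# then one participant raises a drawback and the thread stops.
comments = [
    "I love the taste",
    "Great taste and nice packaging",
    "The taste is lovely",
    "Nice packaging too",
    "But it is far too expensive for everyday use",
]

# A simple frequency count treats the single 'expensive' comment as minor.
words = Counter(w for c in comments for w in c.lower().split())
print(words["taste"], words["expensive"])  # 3 1

# But the position of the comment shows the drawback ended the conversation.
last_mention = max(i for i, c in enumerate(comments) if "expensive" in c)
print(last_mention == len(comments) - 1)  # True
```

A word cloud built from this thread would shout "taste" and whisper "expensive", which is exactly the wrong emphasis for the narrative.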

Data does not mean numbers
When a qualitative researcher says ‘data’ they mean the words, pictures, videos, notes, audio recordings, and objects that have been collected. They do not mean a list of numbers in some tabular format.

There are other words that qualitative researchers use, such as text, corpus, discourse, artefacts, objects, exhibits etc. However, all of these can be subsumed in the term data. Sometimes, to reduce confusion these materials are described as qualitative data.

Consider actors and agendas
When looking at a post, an upload, or a comment, the researcher should consider who said it and why. People play roles in discussions, some are trying to be experts, while some are trying to conceal their true feelings. The researcher needs to assess who the actors in the discussion are and what they are trying to achieve, in order to place their contributions in the narrative.

In a discussion about coffee we may identify baristas, amateur experts, people with a green agenda, traditionalists, and innovators. The words cannot be separated from who said them, and ideally who said them to whom. Linking a series of contributions to the same person can increase the insight generated about the narrative that is being sought.

We are poor witnesses to our own motivations
Many of the questions that researchers would like to ask are impossible for participants to answer accurately. People tend not to know why they do things. They mostly do not know the drivers of their behaviour. And, they are fairly poor at forecasting what they will do in the future. So, questions that ask “Why are you overweight?”, “Why did you buy that gym membership, knowing you’d hardly use it?” and “What is it about the ACME brand that makes you feel safe and warm?” are likely to fail.

Questions that tend to work are:

  • Reporting questions – e.g. “Which cupboard do you store your cleaning products in?” and “How often do you eat in a restaurant?”
  • Choice-based questions. Show three items and ask “Which is the odd one out?”, which can then lead into a discussion of why.
  • Asking about other people. For example, “Tell me all the reasons why some people who are on a diet drink milkshakes.”
  • Asking what sorts of people do things. For example, “Tell me who might bake their own bread?”
  • Lists – in online research the creation of lists can be a natural way to get participants to be active and to reveal some of their feelings and beliefs. For example, a researcher might ask “Thinking about the brand Coca-Cola, list all the non-drink things you think they would be good at making?” – again leading on to why, and asking who agrees, and who has alternative suggestions.

Obvious questions, for example “What do you like about this advert?”, are always going to be part of the qualitative research process. They are often an easy way to start a discussion, and we want to know the answers. However, we should not place too much motivational and narrative importance on the answers to these sorts of questions. The answers should certainly not be reported as being the participants’ actual motivations and feelings.

Memoing and tagging
When analysing non-trivial amounts of qualitative information, it is really useful to annotate the material. This can be called tagging, memoing, commenting, annotating, highlighting, marking-up, and probably a variety of other things. The material is read through and key themes, ideas, quotes, examples, hypotheses, etc. are noted.

Traditionally, this memoing process was done with scissors, copies of the transcripts, and coloured highlighter pens. Now there are a variety of software tools to help, often referred to as CAQDAS (Computer Assisted Qualitative Data Analysis Software). Some people use specific software, whilst others find they can use Word and/or Excel to achieve what they need.

The narrative is then, typically, constructed from the memos. The source documents are often only referred back to when the story emerging from the memos needs additional evidence or appears inconsistent.
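As a minimal sketch of the grouping step that sits behind most of these tools, here is some plain Python. The memos and theme labels are invented for illustration; no specific CAQDAS package is assumed:

```python
from collections import defaultdict

# Hypothetical memos: (theme tag, quote) pairs a researcher might note
# while reading through transcripts.
memos = [
    ("trust", "I only buy brands my mother used"),
    ("price", "It is cheaper at the discount store"),
    ("trust", "I would not give it to my children"),
    ("habit", "I grab the same pack without thinking"),
]

# Group the memos by theme, so the narrative can be built theme by theme
# and each claim can be traced back to its supporting quotes.
themes = defaultdict(list)
for tag, quote in memos:
    themes[tag].append(quote)

for tag, quotes in sorted(themes.items()):
    print(f"{tag}: {len(quotes)} memo(s)")
```

The same structure works whether the tags come from the researcher, from Word/Excel mark-up, or from participants tagging their own contributions.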

In a collaborative project, participants or ‘the crowd’ are enlisted to add their comments, tags, annotations.

Enable the participants whenever possible
The researcher will be developing hypotheses before the research, during the research, and after the research. One great way of challenging, supporting, or enriching these hypotheses is to actively involve the participants in the process.

Participants can be enabled and encouraged to tag their own comments and uploads, to tag and/or reply to other people’s contributions, and to give feedback on ideas presented by the researcher. In ‘researcher talk’, the researcher provides an outsider’s view (the etic view) whilst the participants can provide an insider’s view (the emic view). A narrative that combines the insider and outsider views is often more powerful than a single perspective.

Grounding the story in the data
When the narrative is being created, the researcher should check that everything they are claiming can be supported by something evidenced in the data. Whilst not everything in the data should be in the narrative, everything in the narrative should be supported by something in the data.

If the researcher believes something to be true and important, but they cannot support it from the data, they should seek to introduce it to the research, in order to elicit evidence. This could be via posts in an online discussion, or through the discussion guide for later online focus groups.

Examples that inform, not ones that entertain
There can be a temptation, when creating the research story, to include a video clip, photo, or quote that is particularly powerful, even though it is not truly relevant to the message, or perhaps is even at odds with the main element of the story. This is a bad practice and researchers need to be on their guard against it.

The researcher has to be seen as a ‘truth teller’. The role of the researcher is to tell the customers’ story. This means having the discipline to only use materials that are true to the narrative that has been created.

The “But I already knew that!” test
One test of a powerful narrative from qualitative research is that the client, when presented with the story says “But, I already knew that!” The client did already know it, but they did not know they knew it till they heard the research narrative.

This test, the “I already knew it” test, goes to the heart of what qualitative research is all about. The research gathers evidence and synthesises it into a narrative that illuminates the topic under investigation. The illumination, typically, comes from revealing things we already knew, but did not realise or could not access without the research.


So, what are your thoughts and suggestions? What should be added, removed, or amended? Indeed, is the project just pure folly?

Jul 26, 2013

From neuroscience to behavioural economics, from advanced and adaptive choice models to participative ethnography, from facial coding to big data, there are masses of analysis approaches threatening to be the next big thing (yes, I know they are not all new, but they are contending to be the next big thing), and I’d love to hear your thoughts.

However, text analytics (using the term in its widest sense, but focusing on computer-assisted and automated approaches) is my pick for the biggest hit of the next few years. There are several reasons for this, including:

  • The software is beginning to work: from tools that help manual analysts at one end of the spectrum, through better coding, to concept construction software, the tools are beginning to mature and deliver.
  • Text analytics, as a category, is not linked to a niche. Text occurs in qual and quant, in free text, in the answers to survey questions, and in discussions.
  • Text analytics will help us run shorter surveys, one of the key needs over the next few years. Instead of trying to pre-guess everything that might be important, researchers can massively reduce the number of closed questions and ask “Why?”, “For example?”, and “Which?” as open-ended questions.
  • Text analytics will work well with the current leading growth area in research, namely communities. Many communities are kept artificially small to make it practical to moderate and communicate with members. With text analytics it will be possible to have far more members in discursive communities.
  • Text analytics will be essential to help understand the ‘why’ created by big data’s ‘what’.
  • Text analytics is the key to most forms of social media research, turning millions of real conversations into actionable insight.
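At its simplest, the kind of automated analysis described above is just tokenising open-ended answers and counting the remaining terms. A minimal sketch in Python, using invented survey answers and a toy stopword list (real text analytics tools go far beyond this, into coding, sentiment, and concept extraction):

```python
import re
from collections import Counter

# Hypothetical open-ended answers to a single "Why?" survey question.
answers = [
    "Too expensive for what you get",
    "The price went up and the quality went down",
    "Cheaper alternatives taste just as good",
    "I like the taste but not the price",
]

# A deliberately tiny stopword list, for illustration only.
STOPWORDS = {"the", "for", "and", "but", "not", "you", "what", "just", "as", "i"}

# Tokenise, drop stopwords, and count terms across all answers.
tokens = [w for a in answers for w in re.findall(r"[a-z']+", a.lower())
          if w not in STOPWORDS]
top_terms = Counter(tokens).most_common(3)
print(top_terms)
```

Even this crude count surfaces “price” and “taste” as candidate themes, which a researcher can then probe with follow-up questions.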

I am clearly not alone in my view on text analytics; at this year’s AMSRS conference in Sydney there are at least three papers looking at different applications of text analytics, and I am going to be running a number of workshops on the topic in the second half of this year.

What are your thoughts on text analytics?

If not text analytics, what would you pick as the analysis approach which is likely to have the biggest impact over the next five years?

Jul 05, 2013

As I have mentioned before, I am involved in writing a book on mobile market research with Navin Williams and Sue York. As part of that process we will be posting elements of our thinking and snippets of the book to NewMR in order to crowd-source improvements. Here is one such snippet: the first page of a chapter on mobile qualitative research. We would love to hear your thoughts.

Mobile Specific Qualitative Research
This chapter looks at qualitative market research techniques that have been created by, or heavily impacted by, the arrival and utilisation of mobile devices. A separate chapter looks at how mobile devices are being incorporated into other, more traditional, forms of qualitative research (for example, in online focus groups and discussions, or in connection with face-to-face qualitative approaches).

Topics covered in this chapter include:

  • Mobile ethnography: where participants capture slices of their lives, or the lives of people around them, as an input to an ethnographic analysis.
  • Mobile diaries: where participants record their activity in relation to a specific topic, for example during the purchase of a mortgage, or whilst on a journey.
  • Triggered recording: where participants record their interactions with some external factor, for example, every time they see an advert for a particular category.
  • Qualitative tracking: This approach uses passive tracking, i.e. the phone uses its features and sensors to record where the participants go, what they do, etc., without any moment-to-moment intervention from the participants. These traces are then reviewed by the researcher as an input to their qualitative analysis.

These approaches overlap to some extent. For example, in mobile ethnography, mobile diaries, and triggered recording, a participant might be asked to create a message when a specific event happens, to take a photo or record a video, or to record how they feel. The difference tends to lie in the balance between the activities, the reason for the research, and how the research will be analysed. For example, in a mobile diary project the participants’ descriptions may be the key deliverable, whereas in an ethnography the analysis and write-up is the key element of the project.

Several of these mobile qualitative approaches use data collection methods that are similar to mobile quantitative techniques. For example, qualitative mobile diaries might be used to follow 20 participants, capturing their thoughts and experiences in relation to some activity, such as every time they have a drink during the day, collecting open-ended comments and images. A quantitative mobile diary study might involve 400 participants and be based on the answers to closed questions, captured every time the participants drink something. Similarly, qualitative tracking might follow twelve people for several days, and the analysis might include sitting with the participants and reviewing the trace information to build a rich picture of what has happened. A quantitative project might be based on 600 people, with the analysis using software to find patterns in the data, e.g. sequences of actions or typical routes.

This chapter reviews each of these approaches, providing practical advice, case studies, and methodological notes.


  1. I would love to hear from people with case studies they would like to share, either in the book or on our mobile resources page.
  2. Is mobile specific qualitative research a suitable term for this collection of approaches?
  3. Would you add any techniques to this list?
  4. Would you change the names of any of these four approaches?
  5. Do the first three really constitute three different approaches, or would they be better rolled into a single item?