Jul 26, 2013

From neuroscience to behavioural economics, from advanced and adaptive choice models to participative ethnography, from facial coding to big data, there are masses of analysis approaches threatening to be the next big thing (yes, I know they are not all new, but they are contending to be the next big thing), and I’d love to hear your thoughts.

However, text analytics (using the term in its widest sense, but focusing on computer-assisted and automated approaches) is my pick for the biggest hit of the next few years. There are several reasons for this, including:

  • The software is beginning to work: from tools that help manual analysts at one end of the spectrum, through better coding, to concept-construction software, the tools are beginning to mature and deliver.
  • Text analytics, as a category, is not linked to a niche. Text occurs in qual and quant, in free text, in the answers to survey questions, and in discussions.
  • Text analytics will help us run shorter surveys, one of the key needs of the next few years. Instead of trying to pre-guess everything that might be important, researchers can massively reduce the number of closed questions and ask ‘Why?’, ‘For example?’, and ‘Which?’ as open-ended questions.
  • Text analytics will work well with the current leading growth area in research, namely communities. Many communities are kept artificially small to make it practical to moderate and communicate with members. With text analytics it will be possible to have far more members in discursive communities.
  • Text analytics will be essential to help understand the ‘why’ behind big data’s ‘what’.
  • Text analytics is the key to most forms of social media research, turning millions of real conversations into actionable insight.
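To make the ‘better coding’ point concrete, here is a toy sketch of the simplest form of automated coding: matching open-ended answers against a keyword-based code frame and tallying the themes. The code frame and responses below are invented purely for illustration; real text analytics tools go far beyond simple keyword matching.

```python
from collections import Counter

# Hypothetical code frame: theme -> trigger words (illustrative only)
CODE_FRAME = {
    "price": {"price", "cost", "expensive", "cheap"},
    "service": {"staff", "service", "helpful", "rude"},
    "quality": {"quality", "broken", "reliable"},
}

def code_response(text):
    """Assign zero or more themes to one open-ended answer."""
    words = set(text.lower().replace(".", " ").replace(",", " ").split())
    return [theme for theme, triggers in CODE_FRAME.items() if words & triggers]

responses = [
    "The staff were really helpful",
    "Too expensive, and the quality felt cheap",
    "No complaints at all",
]

# Tally how often each theme occurs across all answers
theme_counts = Counter(t for r in responses for t in code_response(r))
print(theme_counts.most_common())
```

Even this crude approach shows the shape of the workflow: an explicit code frame, automatic assignment, and counts that can feed straight into quantitative analysis.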

I am clearly not alone in my view of text analytics: at this year’s AMSRS conference in Sydney there are at least three papers looking at different applications of text analytics, and I am going to be running a number of workshops on the topic in the second half of this year.

What are your thoughts on text analytics?

If not text analytics, what would you pick as the analysis approach which is likely to have the biggest impact over the next five years?

Jul 17, 2013

Following the discussion on tablets in mobile market research, this post addresses the wider issue of why somebody would want to conduct a study that is mobile only.

Having spoken to a wide cross-section of clients and researchers, the typical reasons for a mobile-only study seem to include:

  1. Because the data needs to be collected, or is better if collected, ‘in the moment’. Where ‘in the moment’ typically means as people are making a decision, whilst using something, or immediately after using something.
  2. To collect passive data, as people go about their everyday lives.
  3. Because mobile gives a more appropriate sample than other similar methods. For example, in a country where 80% of economically active adults have a phone and 50% have internet access, mobile can provide the better sample.
  4. In order to change CAPI to mCAPI, re-energising CAPI.
  5. To add items like photos and videos to traditional survey responses.
  6. Where the mobile device can assist or improve data logging and collection.
  7. Research on the mobile ecosystem, for example of mobile advertising and campaigns.
  8. To research mobile data collection, part of what researchers call RoR, research-on-research.

Another example of point 3, a more appropriate sample, is provided by French company ELIPSS, who have created a panel of people, selected via random probability sampling, and given each of them an internet-connected tablet, creating a sample source that is both internet-enabled and broadly representative of the group it seeks to represent.

Two items which are currently not on the list are a) to be cheaper, and b) to be faster. This may change in the future, and faster, and perhaps cheaper, could become drivers of mobile usage. But here is why we don’t see them as drivers at the moment.

It is doubtful that mobile will be cheaper than online for like-for-like surveys in the foreseeable future. The cost of programming a study for online and mobile is, at best, the same. Testing for online and mobile is, at best, the same. Incentives are, at best, the same. And the processing costs are typically the same. In fact, at the moment, mobile studies typically cost more to program and test, since there are more contingencies to consider and deal with.

However, if the desire to use mobile drives researchers to use shorter surveys, the net effect could be cheaper studies – as well as better and faster.

Faster is a plausible benefit of mobile, although it is a matter of degree. When online research, coupled with online access panels, burst onto the scene, one of its key benefits was speed: days instead of weeks. For mobile, the speed difference in data collection is likely to be hours instead of days. However, this does not mean that most project turnarounds will shrink by the same factor. A project includes design, scripting (the writing and testing of the survey), fieldwork, and analysis. Reducing the fieldwork from, say, 48 hours to 4 hours might reduce a project from five days to four days – good, but only crucial sometimes.

However, mobile data collection may come into its own when researchers start requesting near-instant results. Consider the launch of a campaign, the assessment of an open-air event, or the impact of a product disaster such as a recall: a mobile survey could be sent out within minutes, and a broad cross-section of people could reply within minutes, potentially allowing real-time management of the campaign, event, or news.

In many cases there are methodological reasons to want the fieldwork to last at least 24 hours, and potentially longer. Different times of day attract different sorts of respondents. Researchers have reported that responses collected in the morning can differ from responses collected in the evening, and quite often that the first third of responses differs from the last third – although this may be due to more than just speed, as the last third is often the part of the sample where there is a struggle to fill quotas, i.e. the last third is often demographically different from the first third.

Big shout of thanks to Frankie Johnson for highlighting mCAPI in relation to the previous post, and to Gerry Nicolaas for bringing ELIPSS to my attention.

So, what are your thoughts? Would you make any major additions, deletions, or amendments to our list? Are you aware of interesting examples of people doing some of the less common alternatives?

Jul 12, 2013

As mentioned before, I am in the midst of co-writing a book on mobile research, and today I have been working my way through the contrasting roles of phones, PCs, and tablets in quantitative research, specifically with respect to surveys.

The discussion about phones was relatively straightforward, covering both studies designed specifically for phones, and studies where phones might be used by some of the respondents, whilst others used, for example, a PC.

However, when I came to write the section on tablets, I came to the realisation that not only are surveys not normally written for tablets at the moment, they are unlikely to be written specifically and solely for tablets in the future.

Tablets can be used for surveys written for phones, and tablets can be used for surveys written for PCs, but why would a survey be written for a tablet in such a way that it was not suitable for a smartphone AND not suitable for a PC?

Possible reasons for writing a tablet-only study might focus on having a large touch screen, but that seems like a niche. A tablet-only survey could be written for a specific game, but again, that would be a niche. Of course, a survey about tablets might be designed to be tablet only, but once again, a niche.

It is easy to see mCAPI (mobile CAPI) and qualitative uses that are specifically tablet-based. But unless somebody can see something I am missing, there seems to be little need for a genre of tablet-only surveys.

Unless you know better?

ps The title “The tablet that didn’t bite” is a loose reference to a phrase used in a Sherlock Holmes story (“Silver Blaze”), where the non-appearance of something revealed a vital clue.

Jul 5, 2013

As I have mentioned before, I am involved in writing a book on mobile market research, with Navin Williams and Sue York. As part of that process we will be posting elements of our thinking and snippets of the book to NewMR in order to crowd-source improvements. Here is one such snippet: the first page of a chapter on mobile qualitative research. We would love to hear your thoughts.

Mobile Specific Qualitative Research
This chapter looks at qualitative market research techniques that have been created by, or heavily impacted by, the arrival and utilisation of mobile devices. A separate chapter looks at how mobile devices are being incorporated into other, more traditional, forms of qualitative research (for example, in online focus groups and discussions, or in connection with face-to-face qualitative approaches).

Topics covered in this chapter include:

  • Mobile ethnography: where participants capture slices of their lives, or the lives of people around them, as an input to an ethnographic analysis.
  • Mobile diaries: where participants record their activity in relation to a specific topic, for example during the purchase of a mortgage, or whilst on a journey.
  • Triggered recording: where participants record their interactions with some external factor, for example, every time they see an advert for a particular category.
  • Qualitative tracking: This approach uses passive tracking, i.e. the phone uses its features and sensors to record where the participants go, what they do, etc., without any moment-to-moment intervention from the participants. These traces are then reviewed by the researcher as an input to their qualitative analysis.

These approaches overlap to some degree. For example, in mobile ethnography, mobile diaries, and triggered recording, a participant might be asked to create a message when a specific event happens: to take a photo, record a video, or note how they feel. The difference tends to lie in the balance between the activities, the reason for the research, and how the research will be analysed. For example, in a mobile diary project the participants’ descriptions may be the key deliverable, whereas in an ethnography the analysis and write-up are the key element of the project.

Several of these mobile qualitative approaches use data collection methods that are similar to mobile quantitative techniques. For example, a qualitative mobile diary might follow 20 participants, capturing open-ended comments and images about some activity, such as every time they have a drink during the day. A quantitative mobile diary study might involve 400 participants and be based on the answers to closed questions, captured every time the participants drink something. Similarly, qualitative tracking might look at twelve people for several days, with the analysis including sitting with the participants and reviewing the trace information to build a rich picture of what has happened. A quantitative project might be based on 600 people, with the analysis using software to find patterns in the data, e.g. sequences of actions, or typical routes.
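As a toy illustration of the quantitative side, software-based pattern-finding in trace data can be as simple as counting which consecutive actions occur together most often. The traces and action names below are invented purely for illustration; real tracking data would be far richer (timestamps, locations, sensor readings).

```python
from collections import Counter

# Hypothetical trace data: one list of logged actions per participant
traces = [
    ["home", "commute", "cafe", "office"],
    ["home", "cafe", "office", "cafe"],
    ["home", "commute", "office"],
]

def pairwise(seq):
    """Yield consecutive (action, next_action) pairs from one trace."""
    return zip(seq, seq[1:])

# Count every consecutive action pair across all participants
pair_counts = Counter(p for trace in traces for p in pairwise(trace))
print(pair_counts.most_common(3))
```

Scaled up to hundreds of participants, the same counting idea surfaces the typical sequences and routes mentioned above, without anyone reading the raw traces line by line.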

This chapter reviews each of these approaches, providing practical advice, case studies, and methodological notes.


  1. I would love to hear from people with case studies they would like to share, either in the book or on our mobile resources page.
  2. Is mobile specific qualitative research a suitable term for this collection of approaches?
  3. Would you add any techniques to this list?
  4. Would you change the names of any of these four approaches?
  5. Do the first three really constitute three different approaches, or would they be better rolled into a single item?