Dec 18, 2013
 

Most market researchers (IMO) who use Twitter do so with the #MRX tag, with the #NewMR, #ESOMAR and #AMSRS tags a little way behind. Indeed Vaughan Mordecai has recently posted an interesting analysis of #MRX contributors and content – and Jeffrey Henning tweets a weekly list of top #MRX links and posts a biweekly blog on GreenBook about the top ten.

But, is all of this just creating a cozy world where a few thousand market researchers tweet to each other, and nobody else really contributes, reads, or even cares? The quickest way to get recognition amongst market researchers is to use the #MRX tag, so it becomes the default, and in doing so, perhaps, it becomes a fence or boundary of our own making?

Time to add new links to the wider world?
Other leading #MRX figures, such as Tom Ewing and Reg Baker, have written about what happens if you ignore the #MRX audience: your figures quickly decline. But perhaps the key is to be adding more dimensions to what we do, and for those dimensions to have an external focus?

By external focus, I mean using cues and clues that other people are likely to be looking for. Who outside the market research Twitterati would be looking for #MRX or #NewMR – even if they were looking for market research-related material?

Options we might want to consider, when talking about the right subjects, include:

  • #ROI
  • #Insights
  • #Retail
  • #B2B
  • #Mobile (we do sometimes use #MMR – mobile market research, but that does not really ‘reach out’ to the non-cognoscenti)
  • #BigData
  • #Surveys

What do you think? Is there any potential in widening the hashtags we in the #MRX chatterati use? Or, would we still be talking to the same few people?

What tags would you suggest?

Sep 03, 2013
 

Below is a list of the five posts, on NewMR.org, that in 2013 have been read by the largest number of unique readers, as measured by Google Analytics.

  1. Why do companies use market research? This was posted December 30, 2012, and has had 633 unique viewers in 2013.
  2. The ITU is 100% wrong on mobile phone penetration, IMHO. Posted June 29, 2013, with 380 unique viewers.
  3. Is it a bad thing that 80% of new products fail? Posted March 7, 2013, 353 unique viewers.
  4. Notes for a non-researcher conducting qualitative research. This was only posted on August 26, 2013, so it is probably still on its way up. It has 350 unique viewers.
  5. A Short History of Mobile Marketing Research. Posted March 1, 2013, with 278 unique viewers.

I ran the analysis to see if I could spot any patterns in what made a successful NewMR post. However, so far, no clear pattern is emerging. Any thoughts or suggestions?

Aug 02, 2013
 

This post has been written in response to a query I receive fairly often about sampling. The phenomenon it looks at relates to the very weird effects that can occur when a researcher uses non-interlocking quotas, for example when using an online access panel – effects that I am calling unintentional interlocking quotas.

In many studies, quota controls are used to try to achieve a sample to match a) the population and/or b) the target groups needed for analysis. Quota controls fall into two categories, interlocking and non-interlocking.

The difference between the two types can be shown with a simple example, using gender (Male and Female) and colour preference (Red or Blue). If we know that 80% of Females prefer Red, that 80% of Males prefer Blue, and that there are an equal number of Males and Females in our target population, then we can create interlocking quotas. In our example we will assume that the total sample size wanted is 200.

  • Males who prefer Red = 50% * 20% * 200 = 20
  • Males who prefer Blue = 50% * 80% * 200 = 80
  • Females who prefer Red = 50% * 80% * 200 = 80
  • Females who prefer Blue = 50% * 20% * 200 = 20

These quotas deliver the 200 people required, in the correct proportions.
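
For readers who prefer to see the arithmetic spelled out, here is a minimal sketch (my own illustration, reusing only the numbers from the example above) of how interlocking cell targets can be computed from the population proportions:

```python
# Minimal sketch: derive interlocking quota cell targets from known
# population proportions (numbers taken from the example above).

def interlocking_cells(total, gender_split, red_share_by_gender):
    """Return the target number of completes for each gender x colour cell."""
    cells = {}
    for gender, g_share in gender_split.items():
        red_share = red_share_by_gender[gender]
        cells[(gender, "Red")] = round(total * g_share * red_share)
        cells[(gender, "Blue")] = round(total * g_share * (1 - red_share))
    return cells

targets = interlocking_cells(
    total=200,
    gender_split={"Male": 0.5, "Female": 0.5},
    red_share_by_gender={"Male": 0.2, "Female": 0.8},
)
print(targets)
# {('Male', 'Red'): 20, ('Male', 'Blue'): 80,
#  ('Female', 'Red'): 80, ('Female', 'Blue'): 20}
```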

The Problems with Interlocking Quotas
The problem with the interlocking quotas above is that they require the researcher to know the colour preferences of Males and Females before doing the research. In everyday market research the quotas are often more complex, for example: 4 regions, 4 age breaks, 2 gender breaks, 3 income breaks. This pattern (of region, age, gender, and income) would generate 96 interlocking cells, and the researcher would need to know the population data for each of these cells. If these characteristics were then to be combined with a quota related to some topic (such as coffee drinking, car driving, TV viewing etc) then the number of cells becomes very large, and it is very unlikely the researcher would know the proportions for each cell.
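
As a quick sanity check of that cell count (my own arithmetic, not part of the original post):

```python
# 4 regions x 4 age breaks x 2 genders x 3 income breaks
from math import prod
print(prod([4, 4, 2, 3]))  # 96 interlocking cells
```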

Non-Interlocking Quotas
When interlocking cells become too tricky, the answer tends to be non-interlocking cells.

In our example above, we would have quotas of:

  • Male 100
  • Female 100
  • Prefer Red 100
  • Prefer Blue 100

The first strength of this route is that it does not require the researcher to know the underlying interlocking structure of the characteristics in the population. The second strength is that it makes it simple for the sample to be designed around the researcher’s needs. For example, if we know that Red is preferred by 80% of the population, a researcher might still collect 100 Red and 100 Blue, to ensure the Blue sample was large enough to analyse, and the total sample could then be created by weighting the results (down-weighting Blue and up-weighting Red).
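
To make the weighting step concrete, here is a minimal sketch (my own illustration, using the assumptions above) of how a 100/100 sample can be weighted back to the 80/20 population split:

```python
# Minimal sketch: weight an oversampled group back to population proportions.
sample_counts = {"Red": 100, "Blue": 100}       # completes collected
population_shares = {"Red": 0.8, "Blue": 0.2}   # known population split

total_sample = sum(sample_counts.values())
weights = {
    group: population_shares[group] / (count / total_sample)
    for group, count in sample_counts.items()
}
print(weights)  # {'Red': 1.6, 'Blue': 0.4}

# Each Red respondent counts as 1.6 people and each Blue as 0.4, so the
# weighted totals (160 Red, 40 Blue) reproduce the 80/20 population split.
```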

Unintentional Interlocking Quotas
However, non-interlocking quotas can have some very weird and unpleasant effects if there are differences in response rates in the sample. This is best shown by an example.

Let’s make the following assumptions about the population for this example:

  • Prefer Red 80%
  • Prefer Blue 20%
  • No gender differences in colour preference, i.e. 80% of both males and females prefer Red
  • Female response rate 20%
  • Male response rate 10%

The researcher knows that overall 80% of people prefer Red, but does not know what the figures are for males and females; indeed the researcher hopes this project will throw some light on any differences.

The specification of the study is to collect 200 interviews, using the following non-interlocking quotas.

  • Male 100
  • Female 100
  • Prefer Red 100
  • Prefer Blue 100

A largish initial sample of respondents is invited, let’s assume 1,000 males and 1,000 females, noting that 1,000 males at a 10% response rate should deliver 100 completes.

However!!!
After 125 completes have been achieved, the pattern of completed interviews looks like this:

  • Female Red 67
  • Female Blue 17
  • Male Red 33
  • Male Blue 8

This is because the probability of each of the 125 interviews can be estimated by combining the chance that it is male or female (a 10% male response rate and a 20% female response rate mean that each complete is one-third likely to be male and two-thirds likely to be female) with the preference for Red (80%) or Blue (20%). To the nearest whole percentage, this gives us the following odds: Female Red 53%, Female Blue 13%, Male Red 27%, Male Blue 7%.

The significance of 125 completes is that the Red quota is now full (67 + 33 = 100). No more Reds can be collected. This, in turn, means:

  • The remaining 75 completes will all be people who prefer Blue
  • 16 of the remaining interviews will be Female (we already have 84 Females, so the Female quota will close when we have another 16)
  • 59 of the remaining interviews will be Male – Male Blues will be the only cell left to fill
  • The rapid filling of the Red quota, especially with Females, has resulted in interlocking quotas being created for the Blue cells.

The final result from this study will be:

  • Female Red 67
  • Female Blue 33
  • Male Red 33
  • Male Blue 67

Although there is no gender bias to colour preference in the population, in our study we have created a situation where two-thirds of Males prefer Blue, and two-thirds of the Females prefer Red.

In this example we are going to have to invite a lot more Males. We started by inviting 1,000 Males, and with a response rate of 10% we might expect to collect our 100 completes. But, we have ended up needing to collect 67 Male Blues, because of the unintentional interlocking quotas. We can work out the number of invites it takes to collect 67 Male Blues by dividing 67 by the product of the response rate (10%) and the incidence of preferring Blue (20%), which gives us 67 / (10% * 20%) = 3,350. The 1,000 male invites need to be boosted, by another 2,350, to 3,350 to fill the cells. Most researchers will have noticed that the last few cells in a project are hard to fill; that is often because they have created unintentional interlocking quotas, locking the hardest cells together and making them even harder to fill.
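
For anyone who wants to see the mechanism in action, the following is a rough simulation (my own sketch, using only the assumptions stated above) of how the four non-interlocking quotas fill when response rates differ by gender. It is an illustration, not part of the original example:

```python
# Rough simulation of non-interlocking quotas with differing response rates.
# Assumptions from the post: 80% prefer Red, response rates of 10% (Male)
# and 20% (Female), quotas of 100 per gender and 100 per colour.
import random

random.seed(1)

QUOTAS = {"Male": 100, "Female": 100, "Red": 100, "Blue": 100}
counts = {("Male", "Red"): 0, ("Male", "Blue"): 0,
          ("Female", "Red"): 0, ("Female", "Blue"): 0}
filled = {key: 0 for key in QUOTAS}
invites = 0

while sum(counts.values()) < 200:
    invites += 1
    gender = random.choice(["Male", "Female"])
    response_rate = 0.10 if gender == "Male" else 0.20
    if random.random() >= response_rate:
        continue  # the invite is ignored
    colour = "Red" if random.random() < 0.80 else "Blue"
    if filled[gender] >= QUOTAS[gender] or filled[colour] >= QUOTAS[colour]:
        continue  # screened out because one of the quotas is already full
    counts[(gender, colour)] += 1
    filled[gender] += 1
    filled[colour] += 1

print(counts)   # typically ends near Female Red 67, Female Blue 33,
                # Male Red 33, Male Blue 67 - the unintentional interlock
print(invites)  # far more invites than a naive calculation would suggest
```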

This, of course, is a very simple example. We only have two variables, each with two levels, and the only varying factor is the response rate between Males and Females. In an everyday project we would have more variables, and response rates will often vary by age, gender, and region. So, the scale of the problem in typical projects using non-interlocking quotas is likely to be larger than in this example, at least for the cells that are harder to complete.

Improving the Sampling/Quota Controlling Process
Once we realise we have a problem, and with the right information, there is plenty we can do to remove or ameliorate the problem.

  • Match the invites to the response rates. If, in the example above, we had invited twice as many Males as Females, the cells would have completed perfectly (see the sketch after this list).
  • Use interlocking cells. To do this you might run an omnibus before the main survey to determine what the cells targets should be.
  • Use the first part of the data collection to inform the process. So, in the example above we could have set the quotas to 50 for each of the four cells. As soon as one cell fills, we look at the distribution of the data and amend the structure of the quotas, making some of them interlocking, perhaps relaxing (i.e. making bigger) some of the others, and inviting more of the sorts of people we are missing. This does not fix the problem, but it can greatly reduce it, especially if you bite the bullet and increase the sample size at your own expense.
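
To illustrate the first option above, here is a quick sketch (my own, hypothetical beyond the numbers already used in this post) of matching the number of invites to the expected response rates:

```python
# Scale invites so that groups with lower response rates get more invites.
targets = {"Male": 100, "Female": 100}           # completes needed per group
response_rates = {"Male": 0.10, "Female": 0.20}  # expected response rates

invites = {group: round(targets[group] / response_rates[group])
           for group in targets}
print(invites)  # {'Male': 1000, 'Female': 500} - twice as many Male invites
```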

Working with panel companies: tell the panel company that you want them to phase their invites to match likely response rates. They will know which demographics respond better. For the demographic cells, watch to see that they are advancing in step, for example, that Young Males, Young Females, Older Males, and Older Females are all filling at the same rate – and shout if this is not happening.

It is a good idea to make sure that the fieldwork is not going to happen so fast that you won’t have time to review it and make adjustments. As a rule of thumb, you want to review the data when one of the cells is about 50% full. At that stage you can do something about it. This means you do not want the survey to start after you leave the office, if there is a risk of 50% of the data being collected before the start of the next day.


Questions? Is this a problem you have come across? Do you have other suggestions for dealing with it?

Jul 26, 2013
 

From neuroscience to behavioural economics, from advanced and adaptive choice models to participative ethnography, from facial coding to big data, there are masses of analysis approaches that are threatening to be the next big thing (yes, I know they are not all new, but they are contending to be the next big thing), and I’d love to hear your thoughts.

However, text analytics (using the term in its widest sense, but focusing on computer-assisted and automated approaches) is my pick for the biggest hit of the next few years. There are several reasons for this, including:

  • The software is beginning to work: from tools that help manual analysts at one end of the spectrum, through better coding, to concept-construction software, the tools are beginning to mature and deliver.
  • Text analytics, as a category, is not linked to a niche. Text occurs in qual and quant, in free text, in the answers to survey questions, and in discussions.
  • Text analytics will help us run shorter surveys, one of the key needs over the next few years. Instead of trying to pre-guess everything that might be important, researchers can massively reduce the number of closed questions, and ask ‘Why?’, ‘For example?’ and ‘Which?’ as open-ended questions.
  • Text analytics will work well with the current leading growth area in research, namely communities. Many communities are kept artificially small to make it practical to moderate and communicate with members. With text analytics it will be possible to have far more members in discursive communities.
  • Text analytics will be essential to help understand the ‘why’ created by big data’s ‘what’.
  • Text analytics is the key to most forms of social media research, turning millions of real conversations into actionable insight.

I am clearly not alone in my view on text analytics: at this year’s AMSRS conference in Sydney there are at least three papers looking at different applications of text analytics, and I am going to be running a number of workshops on text analytics in the second half of this year.

What are your thoughts on text analytics?

If not text analytics, what would you pick as the analysis approach which is likely to have the biggest impact over the next five years?

Jun 10, 2013
 

As I mentioned in earlier posts, NewMR is involved in the creation of a new book, provisionally called the Handbook of Mobile Market Research. We will be publishing a lot of our work online, as the book progresses, to share our learning, to invite comments, and hopefully to elicit extra material. Much of the material we are gathering is available via our Mobile Market Research Resources page.

This post is a piece of ‘work in progress’ from one of the chapters in the new book. The chapter will look at key debates in mobile market research, and this post addresses the question “How do clients move 20 to 30 minute tracking studies onto smartphones?” We have access to some raw data and studies to back up the points in this post, but we’d love to have more, and I have flagged in the post where we are particularly looking for more material. So, if you’d like to contribute: comment here, comment in the NewMR LinkedIn group, or email us via admin@newmr.org.

Note: this work remains our copyright, at least until it is transferred to the publishers. If you use it, or quote from it, please cite the source.


How do clients move 20 to 30 minute tracking studies onto smartphones?

There is a general view that one of the things that has slowed down the development of mobile marketing research has been the problem of how to move a 20 to 30 minute survey onto a phone. This problem seemed insurmountable in the days of feature phones, but even now, with smartphones becoming ever more common, there are no clear answers to the problem.

The problem appears to have two key elements:

  1. The belief that people will not take part in 20 to 30 minute surveys on their mobile phone.
  2. The belief that many research projects require long surveys, for example brand trackers, ad tracking, some U&A studies, and many customer satisfaction studies.

If both of these elements are true, then a substantial part of market research will not transfer to smartphones (it may, of course, still transfer to mobile via tablets). So, it is worth examining both of these elements in more depth.

People won’t do long interviews on their smartphones

Some of the people who put forward this proposition base it on common sense and personal experience. If we interrupt people during their busy day, to do a survey on their ever-present smartphone, they won’t have the time or the inclination to complete a long survey. Also, since the smartphone screen is small and the interface fiddly, it will be too onerous to do a long survey. However, common sense and personal experience are often a bad guide to what people actually do. Long surveys would not need to be synonymous with people completing them whilst busy, and plenty of people seem very happy with using their smartphone for extended periods of time – as any journey on a train will confirm.

One point that researchers should bear in mind is that when CATI appeared on the scene the consensus was that interviews needed to be short, but over time they became longer. When online research appeared (in the mid-1990s), early movers such as Pete Comley [REF:] said that interviews needed to be short, about five minutes, and certainly not longer than eight. However, both CATI and online went on to be used for longer and longer studies, and 40 minute studies are not rare these days.

If the experience of CATI and online are reviewed, the picture seems to be:

  1. There are a large number of people who are not willing to take part in market research surveys at all.
  2. There are a large number of people who will sometimes take part in market research surveys, typically because the survey is short, not too boring, and they have been asked at the right time (asking nicely helps too).
  3. There are a group of people who will do a large number of surveys, and some of them will do quite long surveys, in return for incentives. For example, these are the people who sign up to online access panels.

The evidence?
The evidence, so far, falls, broadly, into two groups. The first relates specifically to projects conducted to test mobile market research. The second relates to respondents who have used their mobile devices to take part in surveys that were solely, or mainly, intended for people using PCs – the type of mobile market research that is often referred to as unintentional mobile.

Mobile specific studies into drop-off rates
Many studies have reported that there are few problems in finding respondents willing to do ten minute surveys on their smartphone, with several studies indicating a sharp increase in drop-off somewhere between ten and fifteen minutes.
MORE DATA AND STUDIES BEING SOUGHT

Unintentional mobile market research
It would appear that depending on the panel, some 5% to 15% of surveys intended for online via PC are being completed on smartphones, including surveys over 20 minutes in length.
MORE DATA AND STUDIES BEING SOUGHT

Summary of whether people will do long surveys on their smartphones
Although most research pundits and opinion leaders believe that mobile surveys should be short (indeed they tend to believe that online, CATI, and face-to-face should be relatively short too), the evidence suggests that researchers are faced with a choice.

Long mobile surveys are possible, if researchers are willing to reduce the population who are prepared to take part in their surveys. This is the decision they have made when dealing with CATI and online, so it is perfectly possible that some, perhaps the majority, of researchers will be willing (over a period of time) to trade off the breadth of the population they are surveying in order to run the sorts of surveys that they or their clients think are necessary. It is likely that the growth in access panels with large numbers of mobile users will facilitate this choice.

Many projects require long surveys

Many types of market research surveys have become longer over the years. There seem to be several forces driving this process:

  • There are more brands and brands have more variants than in the past.
  • Different parts of the business want to add their questions to existing studies, especially as budgets become tighter.
  • Techniques such as driver analysis tend to require a wide range of topics to be measured – often resulting in grids in the surveys.
  • Legacy issues mean it is easier to add a new measurement than to remove an old one.
  • KPIs are often linked to specific questions in surveys, meaning they take on a life of their own.

But perhaps the main reason that surveys have become longer is that market researchers have found ways of persuading some people to do longer surveys. By creating convenient samples, e.g. online access panels, of people who will take part in long surveys for incentives, market research has created the opportunity for long surveys.

The alternatives to long surveys

Several alternatives to long surveys have been put forward, but most of them have not become widely popular, and none are yet the norm.

Review: this is the simplest and probably most common way of shortening a survey. All of the stakeholders are interviewed to find out what their current priorities are. The survey is subject to analysis, for example to identify correlations between the measures and to identify which measures are not measuring anything useful. The intended result is a shorter survey. However, even when this method is employed successfully, it tends to be a temporary solution, as the survey often starts to grow again.

Partial Data: Partial data refers to asking different participants different questions, in order to build a total picture. One method of doing this is to split answer lists, and even whole questions, across different respondents. Normally, in these cases, the researcher preserves a core that is the same for everybody, to allow the data set to be analysed meaningfully. One implication of this approach is that the sample size needs to be increased, if the sample size per question is to remain at the pre-partialising level.
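
As a small illustration of the partial-data idea (my own sketch; the module names and the choice of two modules per respondent are hypothetical), each respondent could be given the core questions plus a random subset of the remaining modules:

```python
# Minimal sketch: assign each respondent the core plus a random subset of modules.
import random

CORE = ["demographics", "brand_awareness", "nps"]
MODULES = ["advertising", "packaging", "pricing", "usage", "attitudes"]
MODULES_PER_RESPONDENT = 2  # hypothetical design choice

def build_questionnaire(rng=random):
    """Return the list of sections one respondent will answer."""
    return CORE + rng.sample(MODULES, MODULES_PER_RESPONDENT)

print(build_questionnaire())
# Each module is seen by roughly 2 in 5 respondents, so the per-module sample
# size is about 40% of the total - one reason the total sample may need boosting.
```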

Another technique for working with partial data is to use Hierarchical Bayes (HB). For example, HB is often used in Discrete Choice Modelling (DCM) studies. Each respondent sees a subset of the tasks, and HB is used to calculate the utilities for each respondent. Note: in these cases HB is not used to estimate stated values; it is used to estimate implied or revealed values.

Splitting the survey over time: In this approach, the respondent completes the survey over a period of time, as a set of smaller surveys. This method underlies some of the popularity of insight communities, where the longitudinal nature of the relationship means that questions can be asked in short surveys, without the need to re-ask things like demographics.

One issue for the researcher to keep in mind, if using this approach, is that the key sample size issue is how many participants complete all the steps. If a survey is broken into, say, three units, the base for some of the analysis will be people who completed the first, second, and third elements, and there is normally some drop-off between the stages.

Re-thinking: For some researchers the most attractive route to shorter surveys is to re-think the whole process. The question, in this case, becomes not how can we make this shorter, but what do we really need to do to answer the client’s business needs?

Different researchers and thinkers have come up with different ideas. Probably the idea that has had the greatest impact on the research industry was Fred Reichheld’s proposal that the NPS (Net Promoter Score) was the one number that companies needed to measure; however, and perhaps perversely, market research has tended to incorporate NPS measurements into long surveys, rather than replacing them. Some researchers have looked at replacing grids and attribute batteries with open-ended questions, seeking to analyse them with automated text analytics. Other researchers have looked to reduce their tracking studies by collecting more information from social media, and restricting the survey to just those elements that their social media monitoring does not capture. However, none of these routes has systematically and widely reduced the length of surveys to date.

One interesting contribution to the thinking about shorter surveys was presented at the 2012 ESOMAR 3D Conference, when Alice Louw and Jan Hofmeyr showed how flawed much of the thinking behind long surveys was, and proposed focusing on just those elements where respondents could provide meaningful information. Although this paper has not turned into specific research approaches, at least not ones that the industry has adopted, it perhaps shows the degree of radical thinking that is required to create real change.

Summary of whether longer surveys are needed
This question about whether longer surveys are needed almost misses the point. Whilst long surveys remain possible, it is likely that clients will continue to use them. This may change if providers can come up with something which is either cheaper or dramatically better.

Overall Summary

The common view is that a large part of clients’ research spend is on long surveys, that these surveys can’t readily be made shorter, and that smartphone-based mobile surveys need to be short. People who believe this to be true will tend to keep their surveys as PC-based online for the foreseeable future. For these people, mobile will be a method of tackling other problems, but not the problems which they currently associate with long surveys.

However, it is likely that some respondents will, for a fee, be willing to do long surveys on their mobiles, so this will probably happen. Researchers, and the users of research, should note that it is likely that these populations will be even more dissimilar from the total population than the population of people willing to be an active member of an online access panel.

The hunt is still on for a method of shortening long surveys that is cheaper than long surveys, as fast as long surveys, and good enough. Time will tell whether such a solution is found.


We’d love to hear your thoughts. Is this a useful review of a key question? Do you have a different view? Do you have data or studies you’d like to share?

p.s. I like to include a relevant image for each post, but given that there are likely to be 50+ posts on mobile research over the next few months, different pictures of mobile phones are likely to become tedious. So, in this series of posts, the images will be ones I have taken with my mobile phone.