Jun 29 2013

The ITU (the International Telecommunication Union, the UN agency that looks after ICT – information and communication technologies) has produced a useful update on ICT facts and figures.

The report is well worth reading and shows, amongst other things:

  • Although more and more mobile phones are being bought, growth is slowing. In 2005/6 the global growth rate in cellular subscriptions was just under 25%; in 2012/13 it was down to just over 5%. In the developing world growth has fallen from over 30% in 2005/6 to just over 6% now. None of which is surprising, but it is nice to know the numbers.
  • The internet continues to grow in all regions and globally, with 77% of people in the developed world having internet access, compared with 31% in the developing world.
  • Globally just under 3 billion people are using the internet, almost 40% of the population.
  • About 50% of the households with access to the internet are in the developing world, although the penetration rate there is much lower: 28% in the developing world versus 78% in the developed world.
  • Fixed broadband is much cheaper in the developed world than in the developing world, although the price has been falling in the developing world. Costs in the report are measured as a percentage of GNI (Gross National Income – roughly, the amount the whole country earns) per person. In the developed world fixed broadband costs under 2% of average monthly income; in the developing world it costs over 30% of average monthly income.
  • Fixed broadband in the developing world is growing, but is still only at 6%, compared with 27% in the developed markets. However, over 50% of the households with fixed broadband are in the developing world, because its population is larger.
  • The four countries with the highest percentage of their fixed-broadband being high-speed are: South Korea, Hong Kong, Japan, and Bulgaria.
  • Mobile broadband subscriptions have grown from under 300 million in 2007, to 2 billion in 2013.
  • In the developing countries mobile broadband is more expensive than it is in the developed markets, but cheaper than fixed broadband in the developing markets.
  • In Africa mobile broadband subscriptions cost about 50% of average income, compared with less than 2% in Europe.
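The affordability figures above (broadband cost as a percentage of average monthly income) follow a simple calculation worth making explicit. A minimal sketch, where the GNI and price numbers are purely illustrative assumptions, not figures from the ITU report:

```python
# How an affordability figure like "broadband costs X% of average monthly
# income" is derived from GNI per capita. All numbers are hypothetical.

gni_per_capita_per_year = 1500.0  # illustrative developing-market GNI per person, USD
monthly_broadband_price = 40.0    # illustrative subscription price, USD per month

monthly_income = gni_per_capita_per_year / 12
cost_share = monthly_broadband_price / monthly_income

print(f"Broadband costs {cost_share:.0%} of average monthly income")
```

The same two inputs (GNI per capita and the local subscription price) drive all of the cost comparisons in the report.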

The ITU is 100% wrong on penetration
So, it is a pity that the ITU refer to a highly misleading statistic in their report, one which calls into question the way that data from the ITU will be regarded. And it is a pity that some people in and around the market research world have picked up on this misleading number.

What is this misleading statistic? I am referring to the part of the report where the ITU says that the penetration of mobile-cellular is 96% globally and approaching 100%. It then compounds its dodgy use of language when it describes the penetration in the developed world as 128%, and describes mobile-cellular penetration as 170% in the CIS (a subset of the countries that used to be in Soviet Union, including Russia).

Let’s just think about 100% for one moment. In the way we normally use the phrase (for products, diseases, education, services) 100% would mean every baby, every prisoner, every homeless person would have one. For example, when we estimate the penetration of a TV show we interview a representative sample and gross up to the population. Clearly, it would be a nonsense to claim that 100% of people have a mobile phone. By the time we get to 170%, we can see that the ‘normal’, or useful definition of penetration is not the one they are using.

So, what do the reports of 100% penetration mean? Read the non-nonsense bits of the ITU report and you will notice that the team who have produced the charts (as opposed to the copy) refer to mobile-cellular subscriptions, and mobile-cellular subscriptions per 100 people. It is a pity that the copywriters did not follow the lead of the ITU people who worked on the charts.

What are mobile-cellular subscriptions? Very roughly, the number of subscriptions is the number of sims in use. If somebody has two phones, that is two sims, two subscriptions. If somebody has a dual-sim phone, that is two sims, and is often two subscriptions. If somebody has two phones, a tablet, and a mobile modem, they have four sims.

Am I just being pedantic, or does it matter? Yes, in my opinion it matters. Because people are quoting these super high ‘penetration’ rates there is an assumption that catering for mobile phone users, in and of itself, avoids excluding people. We can use the UK as a good example. The ITU figures for the UK, in 2011, say there were 131 subscriptions per 100 people – a figure the ITU copywriters and careless MR tweeters would call 131% penetration. However, the UK’s General Lifestyle Survey found that in 2011 one-in-seven households had zero mobile phones (i.e. 86% of households contained at least one mobile phone). Data collected in the UK by the communication regulator (Ofcom) estimate that at the end of 2012 92% of adults owned or had the use of a mobile phone.

In the developed markets, such as the UK, the difference between a penetration rate of 131-132% of the total population (babies and all) and a real rate of 86-92% of adults is not particularly important. But if the ratio in the UK is typical, the ITU figure of 100% global could mean about two-thirds of adults have the use of a mobile phone, and that does matter. For example, it means research projects requiring a good representation of people, in some countries, cannot assume that mobile is currently a safe option.
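The two-thirds estimate is simple arithmetic, and can be sketched as follows. The UK subscription figure and the 86-92% range come from the ITU and Ofcom numbers quoted above; treating the UK ratio as globally typical is, of course, the big assumption:

```python
# Back-of-envelope: translate subscriptions-per-100-people into an
# approximate share of people who actually use a mobile phone.

uk_subs_per_100 = 131       # ITU figure for the UK, 2011
uk_real_penetration = 0.89  # midpoint of the 86-92% range quoted above

# Ratio of real users to subscriptions, if the UK pattern is typical.
users_per_subscription = uk_real_penetration / (uk_subs_per_100 / 100)

global_subs_per_100 = 96    # the ITU's 'global penetration' figure
implied_global_penetration = (global_subs_per_100 / 100) * users_per_subscription

print(f"Users per subscription: {users_per_subscription:.2f}")
print(f"Implied share of people with a mobile: {implied_global_penetration:.0%}")
```

The result, roughly 65%, is where the "about two-thirds" comes from: the headline 96 subscriptions per 100 people shrinks substantially once multi-sim ownership is stripped out.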

Jun 12 2013

Earlier this month, NewMR held its first Explode-A-Myth session (find the recordings by clicking here) and my contribution was a discussion of why there is no method that is a melange of qual and quant, because the underlying paradigms are different.

Through the Q&A session at that event, and in particular a question from Betsy Leichliter, I gained a clearer understanding of the core difference between qual and quant. Betsy asked “So should the ‘qual’ or ‘quant’ labels be driven by the method of analysis, not necessarily the method of ‘data collection’?”. I think this question from Betsy is the best answer I have seen to the question of what the difference is between qual and quant.

Within reason, any data can be assessed quantitatively or qualitatively. Of course, there are some limits to both approaches. A very small amount of data is likely to produce findings that are hard to generalise. We can count the sales of brand X, in one store, on one day, but it is hard to draw any inferences about the world from that. Similarly, ten-thousand open-ended responses could only be assessed qualitatively with a large team, or a large amount of time.

The quantitative approach is based on an assumption that there is a ‘real’ world, which we can measure objectively (or, at least, that we can get fairly close to that ideal). The underlying beliefs are a) it is the method that provides the results (different researchers should provide the same answer if they use the same method on the same data), and b) that the researcher is discovering and reporting something that exists.

The qualitative approach, as it has developed over the past thirty years, is based (for most researchers) on a constructionist paradigm (there are several different models, but they all tend to be constructionist). The researcher does not discover truths, the researcher creates a narrative that provides useful insight into what is happening. The researcher is part of the analysis, different researchers will provide different narratives, and the value of the narrative depends on the ability of the researcher to observe what is happening, to synthesise an analysis, and to create a narrative that conveys something useful to the end client.

The key difference between quant and qual is the difference between discovering and creating, overlaid with the ritual of using numbers for quant and words for qual.

Jun 10 2013

As I mentioned in earlier posts, NewMR is involved in the creation of a new book, provisionally called the Handbook of Mobile Market Research. We will be publishing a lot of our work online, as the book progresses, to share our learning, to invite comments, and hopefully to elicit extra material. Much of the material we are gathering is available via our Mobile Market Research Resources page.

The post published on this page is a piece of ‘work in progress’ from one of the chapters in the new book. The chapter will look at key debates in mobile market research, and this post addresses the question “How do clients move 20 to 30 minute tracking studies onto smartphones?“. We have access to some raw data and studies to back up the points in this post, but we’d love to have more, and I have flagged up in the post where we are particularly looking for more material. So, if you’d like to contribute: comment here, comment in the NewMR LinkedIn group or email us via admin@newmr.org.

Note, this work remains our copyright, at least until it is transferred to the publishers. If you use it, or quote from it, please cite the source.

How do clients move 20 to 30 minute tracking studies onto smartphones?

There is a general view that one of the things that has slowed down the development of mobile marketing research has been the problem of how to move a 20 to 30 minute survey onto a phone. This problem seemed insurmountable in the days of feature phones, but even now, with smartphones becoming ever more common, there are no clear answers to the problem.

The problem appears to have two key elements:

  1. The belief that people will not take part in 20 to 30 minute surveys on their mobile phone.
  2. The belief that many research projects require long surveys, for example brand trackers, ad tracking, some U&A studies, and many customer satisfaction studies.

If both of these elements are true, then a substantial part of market research will not transfer to smartphones (it may, of course, still transfer to mobile via tablets). So, it is worth examining both of these elements in more depth.

People won’t do long interviews on their smartphones

Some of the people who put forward this proposition base it on common sense and personal experience. If we interrupt people during their busy day, to do a survey on their ever-present smartphone, they won’t have the time or the inclination to complete a long survey. Also, since the smartphone screen is small and the interface fiddly, it will be too onerous to do a long survey. However, common sense and personal experience are often a bad guide to what people actually do. Long surveys need not be synonymous with people completing them whilst busy, and plenty of people seem very happy to use their smartphone for extended periods of time – as any journey on a train will confirm.

One point that researchers should bear in mind is that when CATI appeared on the scene the consensus was that interviews needed to be short, but over time they became longer. When online research appeared (in the mid-1990s), early movers such as Pete Comley [REF:] said that interviews needed to be short, about five minutes, and certainly not longer than eight. However, both CATI and online went on to be used for longer and longer studies, and 40 minute studies are not rare these days.

If the experience of CATI and online are reviewed, the picture seems to be:

  1. There are a large number of people who are not willing to take part in market research surveys at all
  2. There are a large number of people who will sometimes take part in market research surveys, typically because the survey is short, not too boring, and they have been asked at the right time (asking nicely helps too).
  3. There are a group of people who will do a large number of surveys, and some of them will do quite long surveys, in return for incentives. For example, these are the people who sign-up to online access panels.

The evidence?
The evidence, so far, falls, broadly, into two groups. The first relates specifically to projects conducted to test mobile market research. The second relates to respondents who have used their mobile devices to take part in surveys that were solely, or mainly, intended for people using PCs – the type of mobile market research that is often referred to as unintentional mobile.

Mobile specific studies into drop-off rates
Many studies have reported few problems in finding respondents willing to do ten minute surveys on a smartphone, with several studies indicating a sharp increase in dropout somewhere between ten and fifteen minutes.

Unintentional mobile market research
It would appear that depending on the panel, some 5% to 15% of surveys intended for online via PC are being completed on smartphones, including surveys over 20 minutes in length.

Summary of whether people will do long surveys on their smartphones
Although most research pundits and opinion leaders believe that mobile surveys should be short (indeed they tend to believe that online, CATI, and face-to-face should be relatively short too), the evidence suggests that researchers are faced with a choice.

Long mobile surveys are possible, if researchers are willing to reduce the population who are prepared to take part in their surveys. This is the decision they have made when dealing with CATI and online, so it is perfectly possible that some, perhaps the majority, of researchers will be willing (over a period of time) to trade off the breadth of the population they are surveying in order to run the sorts of surveys that they or their clients think are necessary. It is likely that the growth in access panels with large numbers of mobile users will facilitate this choice.

Many projects require long surveys

Many types of market research surveys have become longer over the years. There seem to be several forces driving this process:

  • There are more brands and brands have more variants than in the past.
  • Different parts of the business want to add their questions to existing studies, especially as budgets become tighter.
  • Techniques such as driver analysis tend to require a wide range of topics to be measured – often resulting in grids in the surveys.
  • Legacy issues mean it is easier to add a new measurement than to remove an old one.
  • KPIs are often linked to specific questions in surveys, meaning they take on a life of their own.
But perhaps the main reason that surveys have become longer is that market researchers have found ways of persuading some people to do longer surveys. By creating convenience samples, e.g. online access panels, of people who will take part in long surveys for incentives, market research has created the opportunity for long surveys.

The alternatives to long surveys

Several alternatives to long surveys have been put forward, but most of them have not become widely popular, and none are yet the norm.

Review: this is the simplest and probably most common way of shortening a survey. All of the stakeholders are interviewed to find out what their current priorities are. The survey is subject to analysis, for example to identify correlations between the measures and to identify which measures are not measuring anything useful. The intended result is a shorter survey. However, even when this method is employed successfully, it tends to be a temporary solution, as the survey often starts to grow again.

Partial Data: Partial data refers to asking different participants different questions, in order to build a total picture. One method of doing this is to split answer lists, and even whole questions, across different respondents. Normally, in these cases, the researcher preserves a core that is the same for everyone, to allow the data set to be analysed meaningfully. One implication of this approach is that the sample size needs to be increased if the sample size per question is to remain at the pre-partialising level.
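A minimal sketch of this kind of design follows. Everything here is illustrative – the module names, the number of modules per respondent, and the sample size are all assumptions, not from any real study:

```python
import random

# Sketch of a 'partial data' design: every respondent answers a common
# core, plus a random subset of the remaining question modules.

CORE = ["demographics", "brand_awareness"]       # asked of everyone
MODULES = ["usage", "attitudes", "media", "satisfaction"]
MODULES_PER_RESPONDENT = 2                       # each person sees 2 of the 4

def assign_questionnaire(rng):
    """Return the list of question modules one respondent will answer."""
    return CORE + rng.sample(MODULES, MODULES_PER_RESPONDENT)

rng = random.Random(42)  # fixed seed so the assignment is reproducible
respondents = [assign_questionnaire(rng) for _ in range(1000)]

# Each rotating module is seen by roughly half the sample (2 of 4),
# so per-module sample sizes must be planned accordingly.
coverage = {m: sum(m in r for r in respondents) for m in MODULES}
print(coverage)
```

The design choice to watch is the effective base: with 2 of 4 modules per person, each rotating module is answered by about half the sample, so a study that needs 1,000 answers per question would need roughly 2,000 respondents.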

Another technique for working with partial data is to use Hierarchical Bayes (HB). For example, HB is often used in Discrete Choice Modelling (DCM) studies. Each respondent sees a subset of the tasks, and HB is used to calculate the utilities for each respondent. Note, in these cases HB is not used to estimate the stated values, it is used to estimate the implied or revealed values.

Splitting the survey over time: In this approach, the respondent completes the survey over a period of time, as a set of smaller surveys. This method underlies some of the popularity of insight communities, where the longitudinal nature of the relationship means that questions can be asked in short surveys, without the need to re-ask things like demographics.

One issue for the researcher to keep in mind, if using this approach, is that the key sample size issue is how many participants complete all the steps. If a survey is broken into, say, three units, the base for some of the analysis will be people who completed the first, second, and third elements, and there is normally some drop-off between the stages.

Re-thinking: For some researchers the most attractive route to shorter surveys is to re-think the whole process. The question, in this case, becomes not how can we make this shorter, but what do we really need to do to answer the client’s business needs?

Different researchers and thinkers have come up with different ideas. Probably the idea that has had the greatest impact on the research industry was Fred Reichheld’s proposal that the NPS (Net Promoter Score) was the one number that companies needed to measure. However, and perhaps perversely, market research has tended to incorporate NPS measurements into long surveys, rather than replacing them. Some researchers have looked at replacing grids and attribute batteries with open-ended questions, seeking to analyse them with automated text analytics. Other researchers have looked to reduce their tracking studies by collecting more information from social media, and restricting the survey to just those elements that their social media monitoring does not capture. However, none of these routes has systematically and widely reduced the length of surveys to date.

One interesting contribution to the thinking about shorter surveys was presented at the 2012 ESOMAR 3D Conference, when Alice Louw and Jan Hofmeyr showed how flawed much of the thinking behind long surveys was, and proposed focusing on just those elements where respondents could provide meaningful information. Although this paper has not turned into specific research approaches, at least not ones that the industry has adopted, it perhaps shows the degree of radical thinking that is required to create real change.

Summary of whether longer surveys are needed
This question about whether longer surveys are needed almost misses the point. Whilst longer surveys are possible, it is likely that clients will continue to use them. This may change if providers can come up with something which is either cheaper or dramatically better.

Overall Summary

The common view is that a large part of clients’ research spend is on long surveys, that these surveys can’t readily be made shorter, and that smartphone based mobile surveys need to be short. People who believe this to be true will tend to keep their surveys as PC based online for the foreseeable future. For these people, mobile will be a method of tackling other problems, but not the problems which they currently associate with long surveys.

However, it is likely that some respondents will, for a fee, be willing to do long surveys on their mobiles, so this will probably happen. Researchers, and the users of research, should note that it is likely that these populations will be even more dissimilar from the total population than the population of people willing to be an active member of an online access panel.

The hunt is still on for a method of shortening long surveys that is cheaper than long surveys, as fast as long surveys, and good enough. Time will tell whether such a solution is found.

We’d love to hear your thoughts. Is this a useful review of a key question? Do you have a different view? Do you have data or studies you’d like to share?

p.s. I like to include a relevant image for each post, but given that there are likely to be 50+ posts on mobile research over the next few months, post after post of pictures of mobile phones would become tedious. So, in this series of posts, the images will be ones I have taken with my mobile phone.

Jun 06 2013

I am involved in a new book, which we hope will be published early in 2014. As with The Handbook of Online and Social Media Research, I will be sharing the project with the #NewMR community and would hope to receive as much help and support as I received last time (all those who contributed are listed in the book).

We should be able to publicise the publisher and the team shortly (final negotiations are taking place at the moment).

The book will be informed by the work I have done with Navin Williams and Reg Baker to create a mobile marketing research course for the University of Georgia’s Principles of Marketing Research course – which will be available shortly.

The first question
So, here is our first question to the market research community. What are the key debates about mobile market research?

My feeling is that the key debates in mobile market research are:

  1. How do clients move 20 to 30 minute tracking studies onto mobile devices?
  2. Closely followed by, what is the maximum length of a mobile interview?
  3. What sorts of techniques can’t be completed on a phone?
  4. Closely followed by, how do we adapt techniques that don’t work on a phone?
  5. Does the rise of smartphones mean we can ignore feature phones?
  6. Will the rise of tablets mean we don’t need to worry about phones?
  7. How does data from smartphone surveys compare with surveys conducted on PC, or F2F, or telephone?
  8. Can researchers deal with the differences in phones and operating systems?
  9. What is the right balance of web versus app?
  10. Where will the samples come from?
  11. CATI replaced much of F2F, online replaced much of CATI and F2F, what will mobile research replace?
  12. How will mobile change the world of qualitative research?
  13. How will mobile change the world of quantitative research?
  14. Will the legislators prevent mobile market research, and what aspects are most at risk?
  15. What are the ethical challenges?
  16. How do clients assess one option against another?
  17. Will mobile ever be cheaper than online for mainstream surveys?

I’d love to hear your views and thoughts. Are these the key debates? What would you drop from this list? What would you add to the list?