Apr 07, 2014
 

Last week Jeffrey Henning gave a great #NewMR lecture on how to improve the representativeness of online surveys (click here to access the slides and recordings). During the lecture he touched lightly on the topic of calculating sampling error from non-probability samples, pointing out that it did not really do what it was supposed to. In this blog I want to highlight why I recommend using this statistic as a measure of reliability, but not validity.

If we calculate the sampling error for a non-probability sample, for example from an online access panel, we are not representing the wider population. The population for this calculation is just those people who might have taken the survey: for example, just those members of the online access panel who met the screening criteria and who were willing (during the survey period) to take the survey. The sampling error tells us how good our estimates of this population are (i.e. those members of the panel who met the criteria and who were willing to take a survey at that particular time).

If we take a sample of 1000 people from an online access panel and we calculate that the confidence interval is +/-3% at the 95% level, what we are saying is that if we had done another test, on the same day, with the same panel, with a different group of people, we are 95% sure that the answer we would have got would have been within 3% of the first test. That is a measure of reliability. But we are not saying that if we had measured the wider population the answer would have been within 3%, or 10% or any other number we could quote.
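
For readers who want to see where a figure like +/-3% comes from, here is a minimal Python sketch of the standard calculation. It assumes simple random sampling from the panel and a proportion near 50% (which gives the widest interval); the figures are illustrative, not from any particular study.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion,
    using the normal approximation (z = 1.96 for the 95% level)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1000 drawn from the panel, proportion around 50%
print(round(margin_of_error(1000) * 100, 1))  # ~3.1, i.e. roughly +/-3%
```

In line with the argument above, that +/-3.1% describes how much another sample drawn from the same panel in the same period would be expected to vary, not how far the panel is from the wider population.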

The sampling error statistic from a panel is not about validity, since we can’t estimate how representative the panel is of the wider population. But, it does give us a statistical measure of how likely we are to get the same answer again if we repeat the study on the same panel, with the same sample specification, during the same period of time – which is a pretty good statement of reliability.

Note: to researchers, reliability is about whether something measures the same way each time. Validity relates to whether what is measured is correct. A metal metre ruler that is 10cm short is reliable (it is always 10cm short), but it is not as valid as we would like.

My recommendation is to calculate the sampling error and use it to indicate which values from the non-probability sample are at least big enough to be reliable. But let’s not claim it represents the sampling error of the wider population, nor that it directly links to validity.

I would recommend adding text something like: “The sampling reliability of this estimate at the 95% level is +/- X%, which means that if we used the same sampling source 20 times, with the same specification, we would expect the answers to be within X% 19 times.”

Total Survey Error
Another reason to be careful with sampling error is that it is only one source of error in a survey. Asking leading questions, asking questions that people can’t answer (for example because we are poor witnesses to our own plans and motivations), or asking questions that people don’t want to answer (for example because of social desirability bias), can all result in much bigger problems than sampling error.

Researchers can sometimes be too worried about sampling error, leading them to ignore much bigger sources of error in their work.

 

Feb 15, 2014
 

As part of writing our new book on Mobile Market Research (which should be available in September) I have been reading a lot of research-on-research (RoR) related to mobile studies.

RoR can provide insights into whether a research technique works or not, or the extent to which it works, or how it works. However, RoR is often over-interpreted. Running a single test does not ‘prove’ a technique works, nor does it very often prove a technique is without merit.

The following observations should be kept in mind when reading the results of RoR. For the purpose of this illustration, consider the possible outcomes of an experiment with two cells. Each cell examines the same phenomenon (e.g. survey questions) via a different method (e.g. online versus mobile); call the methods A and B.

  1. A and B produce results that are statistically significantly different. This does not mean that A and B will always produce different results. The difference could be due to chance, or there could have been a flaw in the test. But, even if the test was fair and well-constructed, the difference only indicates that the methods A and B will sometimes produce different results; it does not say they will always produce different results. If the tests are repeated, for different products, with different surveys, and with different sorts of customers, and if the differences between A and B keep appearing, then researchers will start to assume that there probably is a general difference between A and B.
  2. A and B produce results where the differences are not big enough to be statistically significant. This type of result is often misinterpreted, leading to a false conclusion that the cells are the same. A test that fails to prove that A and B are different does not mean that A and B are the same, or even similar. If a test shows that the probability that A and B are different is 89%, convention dictates that the difference is not big enough for the researcher to be 95% confident there is a difference, so the cells are dismissed as not being different. But it does not mean A and B are the same; indeed, in this example the researcher is 89% sure they are different. When a difference is not statistically significant, it often means that the sample size was not large enough to confirm the difference as significant. With a large enough sample size, almost any difference is significant; with a small enough sample size, many important differences will be judged not to be significant (a short code sketch after this list illustrates the point).
  3. A and B produce results that show the difference between A and B is small. Instead of testing for differences, researchers can test whether a difference is smaller than a specific value, which in practice means testing whether two cells are, for practical purposes, the same. Consider a case where a test has been conducted and the results show that A and B produce the same result, within a specified boundary of what is meant by ‘the same’. This test would not have shown that A and B will always produce the same result as each other, but simply that they can sometimes produce the same result. If several tests are run, in different contexts, and A and B keep producing similar results, then researchers will start to form a view that A and B will generally produce similar results.
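
To illustrate point 2, here is a minimal sketch using a two-proportion z-test with invented figures (a 45% versus 50% result). It shows how the same observed difference can fail to reach significance with small cells and comfortably pass with large ones; it is an illustration, not a recommendation of any particular test.

```python
import math

def two_prop_p_value(x_a, n_a, x_b, n_b):
    """Two-sided p-value for a two-proportion z-test
    (pooled variance, normal approximation)."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 1 - math.erf(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# The same 5-point gap between methods A and B, at two different cell sizes
print(round(two_prop_p_value(45, 100, 50, 100), 2))       # ~0.48: not significant at 95%
print(round(two_prop_p_value(900, 2000, 1000, 2000), 4))  # ~0.0015: highly significant
```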

 

Jan 04, 2014
 

The worlds of academic and commercial research are currently riven with concerns and accusations about how poor much of the published research, and the conclusions drawn from it, really is. This problem is not specific to market research; it covers health research, machine learning, biochemistry, neuroscience, and much more. The problem relates to the way that tests are being created and interpreted. One of the key people highlighting the concerns about this problem is John Ioannidis of Stanford University, and his work has been reported in both academic and popular forums (for example The Economist). The quote “most published research findings are probably false” comes from Ioannidis.

Key Quotes
Here are some of the quotes and worries floating about at the moment:

  • America’s National Institutes of Health (NIH) – researchers would find it hard to reproduce at least three-quarters of all published biomedical findings
  • Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology – three-quarters of published scientific papers in the field of machine learning are bunk because of “overfitting”
  • John Bohannon, a biologist at Harvard, submitted an error-strewn paper on a cancer drug derived from lichen to 350 journals (as an experiment); 157 accepted it for publication

Key Problems
Key problems that Ioannidis has highlighted, and which relate to market research, are:

1. Studies that show an unhelpful result are often not published, partly because they are seen as uninteresting. For example, if 100 teams look to see if they can find a way of improving a process and all test the same idea, we’d expect 5 of them to have results that are significant at the 95% level, just by chance. The 95 tests that did not show significant results are not interesting, so they are less likely to be published. The 5 ‘significant’ results are likely to be published, and the researchers on those teams are likely to be convinced that the results are valid and meaningful. However, these 5 results would not have been significant if all 100 had been considered together; the short simulation below illustrates the point. This problem is widely associated with the difficulties in replicating published results.
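
Here is that short simulation. The set-up is invented for illustration: 100 teams each test the same genuinely useless ‘improvement’ with two cells of 500 respondents, and we count how many of them see a difference that is significant at the 95% level.

```python
import random, math

def p_value(x_a, n_a, x_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return 1 - math.erf(abs(x_a / n_a - x_b / n_b) / se / math.sqrt(2))

n, true_rate = 500, 0.30      # both cells share the same true rate: no real effect
significant = 0
for team in range(100):       # 100 teams test the same useless idea
    control = sum(random.random() < true_rate for _ in range(n))
    test = sum(random.random() < true_rate for _ in range(n))
    if p_value(test, n, control, n) < 0.05:
        significant += 1
print(significant)            # typically around 5 'significant' findings, all by chance
```

If only those five or so teams publish, the literature ends up looking as if the idea works.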

2. Another version of the multiple tests problem is when researchers gather a large amount of data and then trawl it for differences. With a large enough data set (e.g. Big Data), you will always find things that look like patterns. Significance tests are only meaningful if the hypotheses are created BEFORE looking at the results.

3. Ioannidis has highlighted that researchers often base their study design on implicit knowledge, without necessarily intending to, and often without documenting it. This implicit process can push the results in one direction or another. For example, a researcher looking to show that two methods produce the same results might be drawn towards questions that are more likely to produce the same answers. Asking people to say if they are male or female is likely to produce the same result across a wide range of question types and contexts. By contrast, answers to questions about products that participants are less attached to, asked, for example, on a 10-point scale of emotional associations, are likely to be more variable, and therefore less likely to be consistent across different treatments.

4. Tests have a property called their statistical power, which in general terms is the ability of the test to avoid Type II errors (false negatives). The tests in use in neuroscience, biology, and market research typically have a much lower statistical power than the optimum. This led John Ioannidis in 2005 to assert that “most published research findings are probably false”.
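
To make the power point concrete, here is a rough sketch that approximates the power of a standard two-proportion test at the 95% level. The 50% versus 55% difference and the cell sizes are invented for illustration; the point is simply that, at typical market research sample sizes, a real but modest difference is missed far more often than it is found.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p_a, p_b, n, z_crit=1.96):
    """Approximate power of a two-sided two-proportion z-test at the 95% level,
    with n respondents per cell (normal approximation, ignoring the far tail)."""
    se = math.sqrt(p_a * (1 - p_a) / n + p_b * (1 - p_b) / n)
    return 1 - phi(z_crit - abs(p_a - p_b) / se)

# A real 5-point difference (50% vs 55%), at two cell sizes
print(round(power_two_proportions(0.50, 0.55, 300), 2))   # ~0.23: a Type II error rate of ~77%
print(round(power_two_proportions(0.50, 0.55, 2000), 2))  # ~0.89: much larger cells needed for decent power
```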

Market Research?
What should market researchers make of these tests and their limitations? Test data is a basic component of evidence for market research. Researchers should seek to add any new evidence they can acquire to that which they already know, and where necessary do their own checking. In general, researchers should seek to find theoretical reasons for the phenomena they observe in testing, rather than relying solely on test data.

However, let’s stop saying tests “prove” something works, and let’s stop quoting academic research as if it were “truth”. Things are more or less likely to be true; in market research, and indeed in most of science, there are few things that are definitely true.

The ‘science’ underpinning behavioural economics, neuroscience, and Big Data (to name just three) should be taken as work in progress, not ‘fact’.

Is Ioannidis Right?
If we are in the business of doubting academic research, then it behoves us to doubt the academic telling us to be more skeptical. There are people who are challenging the claims. For example this article from January 2013 claims that the real figure for bad biomedical research is ‘just’ 14%, rather than three-quarters.

Dec 11, 2013
 

We are all familiar with the phrase that correlation is not the same as causality, but we also know that in many cases correlation is a really good indicator that something is important. So how do we judge how much importance to give to a correlation?

In the 1940s, British scientist Richard Doll conducted a study of 649 cases of lung cancer and noted that only two were non-smokers, causing him to a) stop smoking and b) start researching the link between smoking and cancer. The correlation certainly did not prove smoking caused lung cancer. As a point of interest, in the 1940s about 80% of adults smoked, so it would have been expected that most people with lung cancer smoked. A simplistic view of correlation would have said that no action should have been taken until a cause was identified. We now know that smoking tobacco releases more than 70 different cancer-causing substances.

Sometimes a correlation is useful, even when the phenomenon being measured is not a cause. Waist measurements are highly correlated with health problems, but the waist measurement does not directly cause health problems. Having too much fat tends to cause the problems and having too much fat makes the waist measurement go up. So, by measuring waist measurements we can assess likely health issues, even though the link is only correlation.

So, if we find a correlation should we ignore it until we can find a cause or mechanism, or can we act just on the correlation? The answer is, as is often the case, ‘it depends’.

When thinking about brands and marketing, whether causality matters often depends on whether we are trying to tackle the underlying cause or the visible measurement. For example, it is likely, IMHO, that making your brand more relevant and having a more engaging presence will grow the number of Facebook likes; this is likely to be a good thing, and monitoring the likes is probably going to be a good thing. However, if the number of likes is set as a KPI, then the pressure on the managers is not to increase the engagement or salience of the brand, it is to increase the number of likes. There are many ways of increasing likes that have little impact on the brand, from running one-off promotions through to paying people to click like (typically from low-cost economies). By changing the statistic from a measure to a target we have fallen into the causality/correlation trap.

In many ways, this view of correlation and causality reflects the key point in the book Obliquity. Obliquity points out there are many things you can only achieve by not trying to achieve them, such as happiness. If a brand wants to increase its satisfaction, social engagement, or salience, it can only usefully measure these with statistics such as NPS, likes, and social media comments if it does not seek to change the numbers directly. Increasing your social media comments by being more newsworthy, having great products, or running wonderful campaigns is great. However, engaging a clever agency to boost your social media mentions is likely to be much less effective.

Nov 24, 2013
 

To help celebrate the Festival of NewMR we are posting a series of blogs from market research thinkers and leaders from around the globe. These posts will come from a range of people, from some of the most senior figures in the industry to some of the newest entrants into the research world.

A number of people have already agreed to post their thoughts, and the first will be posted later today. But, if you would like to share your thoughts, please feel free to submit a post. To submit a post, email a picture, bio, and 300 – 600 words on the theme of “Opportunities and Threats faced by Market Research” to admin@newmr.org.

Posts in this series
The following posts have been received and posted:

Oct 20, 2013
 

Tens of thousands of new products are tested each year, as part of concept screening, NPD, and volumetric testing. Some products produce a positive result, and everybody is pretty happy, but many produce a negative result. A negative result might be that a product has a low stated intention to purchase or it might be that it fails to create the attitude or belief scores that were being sought.

Assuming that the research was conducted with a relatively accepted technique, what might the negative result mean?

A bad product/idea
One possibility is that the product is simply not good enough. This means that if the product is launched, as currently envisaged, it is very likely to fail. In statistical terms this is the true negative.

The false negative
The second possibility is that the result is a Type II error, i.e. a false negative. The product is good, but the test has not shown this. Designers and creatives seem to think this is the case in a large proportion of the cases, and there are many ways that this false negative result can occur.

The test was unlucky
If a test is based on a procedure that correctly identifies a good product 80% of the time (i.e. a statistical power of 80%), then one-in-five times a success will be recorded as a failure. A recent article in the New Scientist (19 October, 2013) pointed out that since most tests focus on minimising Type I errors (false positives), the typical power is often much less than 80%, meaning unlucky results will be even more common.

The sample size was too small
If a stimulus produces a large effect, it will be obvious even with a small sample, but if the effect is small a large sample is needed to detect it, and if it is not detected, it will typically be called a failure. For example, if a sample of 1000 (randomly selected) people is used, the margin of error is normally taken to be +/-3%, which means relatively small benefits can be identified. However, if a sample size of 100 is used, the same assumptions imply +/-10%, which means effects have to be much larger to be likely to be found.

The description does not adequately describe the product
If the product description is not good enough, then the result of the test is going to be unreliable, which could result in a good idea getting a bad result.

The product or its use can’t be envisaged
Some products only become appealing once they are used, apps and software often fall into this category, but so do products as varied as balsamic vinegar, comfy socks, and travel cards (such as the London Oyster). Some products only become appealing when other people start to use them, as Mark Earls has shown in “I’ll have what she’s having”. Generally, copying behaviour is hard to predict from market research tests, producing a large number of false positives and false negatives. In these cases, the purchase intention scale (and alternatives such as predictive markets) can be very poor indicators of likely success.

In many cases people may be able to identify that they like a product, but are unable to reliably forecast whether they will actually buy and use it, i.e. they can’t envisage how the product will fit in their lives. For example, I have lost count of the number of holiday locations and restaurants I have been convinced that I would re-visit, only to be proved wrong. This is another situation where the researcher’s trusty purchase intention scale can be a very poor indicator.

The wrong people were researched
If the people who might buy the product were not researched, then the result of the test is unlikely to forecast their behaviour. For example, in the UK, energy drinks are less about sports people than office workers looking for a boost. Range Rovers are less for country folk than they are for Londoners.

So, how should a bad result be dealt with?
This is where science becomes art, and sometimes it will be wrong (but the science is also wrong some of the time). So, here are a few thoughts/suggestions.

  • If you expected the product/concept to fail, the test has probably told you what you already knew, so it is most likely safe to accept the negative finding.
  • If you have tested several similar products, and this is one of the weaker results, it is probably a good indication the product is weak.

In both of these cases, the role of the modern market researcher is not just to give bad news, it should also include suggesting recommendations for what might work, either modifications to the product/concept, or alternative ideas.

If you can’t see why it failed
If you can’t see why a product failed, try to find ways of understanding why. Look at the open-ended comments to see if they provide clues. Try to assess whether the idea was communicated. For example, did people understand the benefits, and reject them, or not understand the benefits?

Is the product/concept one where people are likely to be able to envisage how the product would fit in their life? If not, you might want to suggest qualitative testing, in-home use test, or virtual reality testing.

Some additional questions
To help understand why products produce a failing score, I find it useful to include the following in screening studies:

  • What sort of people might use this product?
  • Why might they like it/use it?
  • What changes would improve it for these people?

Aug 02, 2013
 

This post has been written in response to a query I receive fairly often about sampling. The phenomenon it looks at relates to the very weird effects that can occur when a researcher uses non-interlocking quotas, for example when using an online access panel, effects that I am calling unintentional interlocking quotas.

In many studies, quota controls are used to try to achieve a sample to match a) the population and/or b) the target groups needed for analysis. Quota controls fall into two categories, interlocking and non-interlocking.

The difference between the two types can be shown with a simple example, using gender (Male and Female) and colour preference (Red or Blue). If we know that 80% of Females prefer Red, if we know that 80% of Men prefer Blue, and if there are an equal number of Males and Females in our target population, then we can create interlocking quotas. In our example we will assume that the total sample size wanted is 200.

  • Males who prefer Red = 50% * 20% * 200 = 20
  • Males who prefer Blue = 50% * 80% * 200 = 80
  • Females who prefer Red = 50% * 80% * 200 = 80
  • Females who prefer Blue = 50% * 20% * 200 = 20

These quotas deliver the 200 people required, in the correct proportions.

The Problems with Interlocking Quotas
The problem with the interlocking quotas above is that it requires the researcher to know what the colour preference of Males versus Females is, before doing the research. In everyday market research the quotas are often more complex, for example: 4 regions, 4 age breaks, 2 gender breaks, 3 income breaks. This pattern (of region, age, gender, and income) would generate 96 interlocking cells, and the researcher would need to know the population data for each of these cells. If these characteristics were then to be combined with a quota related to some topic (such as coffee drinking, car driving, TV viewing etc) then the number of cells becomes very large, and it is very unlikely the researcher would know the proportions for each cell.

Non-Interlocking Quotas
When interlocking cells become too tricky, the answer tends to be non-interlocking cells.

In our example above, we would have quotas of:

  • Male 100
  • Female 100
  • Prefer Red 100
  • Prefer Blue 100

The first strength of this route is that it does not require the researcher to know the underlying interlocking structure of the characteristics in the population. The second strength is that it makes it simple for the sample to be designed for the researcher’s needs. For example, if we know that Red is preferred by 80% of the population, a researcher might still collect 100 Red and 100 Blue, to ensure the Blue sample was large enough to analyse, and the total sample could then be created by weighting the results (to down-weight Blue, and up-weight Red).
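
As a minimal sketch of that weighting step, using the 80/20 population split and the 100/100 sample from the example above, the weight for each group is simply its population share divided by its sample share:

```python
# Population shares versus the shares achieved in the sample of 200
population_share = {"Red": 0.80, "Blue": 0.20}
sample_counts = {"Red": 100, "Blue": 100}

total = sum(sample_counts.values())
weights = {group: population_share[group] / (count / total)
           for group, count in sample_counts.items()}
print(weights)  # {'Red': 1.6, 'Blue': 0.4}: Red is up-weighted, Blue is down-weighted
```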

Unintentional Interlocking Quotas
However, non-interlocking quotas can have some very weird and unpleasant effects if there are differences in response rates in the sample. This is best shown by an example.

Let’s make the following assumptions about the population for this example:

  • Prefer Red 80%
  • Prefer Blue 20%
  • No gender difference in colour preference, i.e. 80% of both males and females prefer Red
  • Female response rate 20%
  • Male response rate 10%

The researcher knows that overall 80% of people prefer Red, but does not know what the figures are for males and females; indeed, the researcher hopes this project will throw some light on any differences.

The specification of the study is to collect 200 interviews, using the following non-interlocking quotas.

  • Male 100
  • Female 100
  • Prefer Red 100
  • Prefer Blue 100

A largish initial sample of respondents is invited, let’s assume 1000 males and 1000 females, noting that 1000 males at a 10% response rate should deliver the 100 male completes.

However!!!
After 125 completes have been achieved, the pattern of completed interviews looks like this:

  • Female Red 67
  • Female Blue 17
  • Male Red 33
  • Male Blue 8

This is because the probability of each of the 125 interviews falling in a particular cell can be estimated by combining the chance it is male or female (a 10% male response rate and a 20% female response rate mean that each complete is one-third likely to be male and two-thirds likely to be female) with the preference for Red (80%) or Blue (20%). To the nearest whole percentages, this gives the following odds: Female Red 53%, Female Blue 13%, Male Red 27%, Male Blue 7%.
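
Those percentages, and the cell counts in the list above, can be checked with a few lines of arithmetic (the response rates and colour preferences are the ones assumed for this example):

```python
# Chance that a complete is female versus male, given equal invites
# and response rates of 20% (female) and 10% (male)
p_female = 0.20 / (0.20 + 0.10)   # two-thirds
p_male = 1 - p_female             # one-third

for gender, p_gender in [("Female", p_female), ("Male", p_male)]:
    for colour, p_colour in [("Red", 0.80), ("Blue", 0.20)]:
        share = p_gender * p_colour
        print(gender, colour, f"{share:.0%}", round(share * 125))
# Female Red 53% 67, Female Blue 13% 17, Male Red 27% 33, Male Blue 7% 8
```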

The significance of 125 completes is that the Red Quota is complete. No more Reds can be collected. This, in turn, means:

  • The remaining 75 completes will all be people who prefer Blue
  • 16 of the remaining interviews will be Female (we already have 84 Females, so the Female quota will close when we have another 16)
  • 59 of the remaining interviews will be Male; Male Blue will be the only cell left to fill
  • The rapid filling of the Red quota, especially with Females, has resulted in interlocking quotas being created for the Blue cells.

The final result from this study will be:

  • Female Red 67
  • Female Blue 33
  • Male Red 33
  • Male Blue 67

Although there is no gender bias to colour preference in the population, in our study we have created a situation where two-thirds of Males prefer Blue, and two-thirds of the Females prefer Red.

In this example we are going to have to invite a lot more Males. We started by inviting 1000 Males, and with a response rate of 10% we might expect to collect our 100 completes. But, we have ended up needing to collect 67 Male Blues, because of the unintentional interlocking quotas. We can work out the number of invites it takes to collect 67 Male Blues by dividing 67 by the product of the response rate (10%) and the incidence of preferring Blue (20%), which gives us 67 / (10% * 20%) = 3,350. The 1000 male invites need to be boosted, by another 2,350, to 3,350 to fill the cells. Most researchers will have noticed that the last few cells in a project are hard to fill; that is often because they have created unintentional interlocking quotas, locking the hardest cells together, which makes them even harder to fill.
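
For anyone who wants to watch the mechanism happen, here is a rough simulation of the example above. The quotas, response rates, and preferences are the ones assumed in this post; everything else (even invites by gender, simple random screening) is an assumption made to keep the sketch short. It typically reproduces the distorted 67/33 pattern and male invite numbers in the region of the 3,350 calculated above.

```python
import random

quotas = {"Male": 100, "Female": 100, "Red": 100, "Blue": 100}
counts = {"Male": 0, "Female": 0, "Red": 0, "Blue": 0}
cells = {("Female", "Red"): 0, ("Female", "Blue"): 0,
         ("Male", "Red"): 0, ("Male", "Blue"): 0}
response = {"Male": 0.10, "Female": 0.20}
invites = {"Male": 0, "Female": 0}

while sum(cells.values()) < 200:
    gender = random.choice(["Male", "Female"])            # invites go out evenly by gender
    invites[gender] += 1
    if random.random() > response[gender]:
        continue                                          # invite not answered
    colour = "Red" if random.random() < 0.80 else "Blue"  # no real gender difference
    if counts[gender] >= quotas[gender] or counts[colour] >= quotas[colour]:
        continue                                          # screened out: a quota is already full
    counts[gender] += 1
    counts[colour] += 1
    cells[(gender, colour)] += 1

print(cells)    # roughly: Female Red 67, Female Blue 33, Male Red 33, Male Blue 67
print(invites)  # male invites typically number a few thousand, broadly in line with the 3,350 above
```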

This, of course, is a very simple example. We only have two variables, each with two levels, and the only varying factor is the response rate for Males versus Females. In an everyday project we would have more variables, and response rates will often vary by age, gender, and region. So, the scale of the problem in a typical project using non-interlocking quotas is likely to be larger than in this example, at least for the harder-to-complete cells.

Improving the Sampling/Quota Controlling Process
Once we realise we have a problem, and with the right information, there is plenty we can do to remove or ameliorate the problem.

  • Match the invites to the response rates. If, in the example above, we had invited twice as many Males as Females the cells would have completed perfectly.
  • Use interlocking cells. To do this you might run an omnibus before the main survey to determine what the cells targets should be.
  • Use the first part of the data collection to inform the process. So, in the example above we could have set the quotas to 50 for each of the four cells. As soon as one cell fills, we look at the distribution of the data and amend the structure of the quotas, making some of them interlocking, perhaps relaxing (i.e. making bigger) some of the others, and inviting more of the sorts of people we are missing. This does not fix the problem, but it can greatly reduce it, especially if you bite the bullet and increase the sample size at your own expense.

Working with panel companies: tell the panel company that you want them to phase their invites to match likely response rates. They will know which demographics respond better. For the demographic cells, watch to see that they are advancing in step. For example, watch to see that Young Males, Young Females, Older Males, and Older Females are all filling at the same rate and shout if this is not happening.

It is a good idea to make sure that the fieldwork is not going to happen so fast that you won’t have time to review it and make adjustments. As a rule of thumb, you want to review the data when one of the cells is about 50% full. At that stage you can do something about it. This means you do not want the survey to start after you leave the office, if there is a risk of 50% of the data being collected before the start of the next day.


Questions? Is this a problem you have come across? Do you have other suggestions for dealing with it?

Jun 29, 2013
 

The ITU (the International Telecommunication Union, the UN agency that looks after ICT – information and communication technologies) has produced a useful update on ICT facts and figures.

The report is well worth reading and shows, amongst other things:

  • As more and more mobile phones are bought, the growth is slowing. In 2005/6 the global growth rate in cellular subscriptions was just under 25%. In 2012/13 it was down to just over 5%. In the developing world the growth has fallen from over 30% in 2005/6 to just over 6% now. None of which is surprising, but it is nice to know the numbers.
  • The internet continues to grow in all regions and globally, with 77% of people in the developed world having internet access, compared with 31% in the developing world.
  • Globally just under 3 billion people are using the internet, almost 40% of the population.
  • About 50% of the households with access to the internet are in the developing world (although that is a much lower penetration rate than in the developed world, 28% in the developing world and 78% in the developed world).
  • Fixed-broadband is much cheaper in the developed world than the developing world, although the price has been falling in the developing world. Costs in the report are measured as a percentage of GNI (Gross National Income – roughly, the amount the whole country earns) per person. In the developed world fixed broadband costs under 2% of average monthly income, in the developing world it costs over 30% of average monthly income.
  • Fixed broadband in the developing world is growing, but penetration is still only 6%, compared with 27% in the developed markets. However, over 50% of the households with fixed-broadband are in the developing world, because its population is much larger.
  • The four countries with the highest percentage of their fixed-broadband being high-speed are: South Korea, Hong Kong, Japan, and Bulgaria.
  • Mobile broadband subscriptions have grown from under 300 million in 2007, to 2 billion in 2013.
  • In the developing countries mobile broadband is more expensive than it is in the developed markets, but cheaper than fixed broadband in the developing markets.
  • In Africa mobile broadband subscriptions cost about 50% of average income, compared with less than 2% in Europe.

The ITU is 100% wrong on penetration
So, it is a pity that the ITU refers to a highly misleading statistic in its report, one which undermines the value of the way that data from the ITU will be read and quoted. And it is a pity that some people in and around the market research world have picked up on this misleading number.

What is this misleading statistic? I am referring to the part of the report where the ITU says that the penetration of mobile-cellular is 96% globally and approaching 100%. It then compounds its dodgy use of language when it describes the penetration in the developed world as 128%, and describes mobile-cellular penetration as 170% in the CIS (a subset of the countries that used to be in the Soviet Union, including Russia).

Let’s just think about 100% for one moment. In the way we normally use the phrase (for products, diseases, education, services) 100% would mean every baby, every prisoner, every homeless person would have one. For example, when we estimate the penetration of a TV show we interview a representative sample and gross up to the population. Clearly, it would be a nonsense to claim that 100% of people have a mobile phone. By the time we get to 170%, we can see that the ‘normal’, or useful definition of penetration is not the one they are using.

So, what do the reports of 100% penetration mean? Read the non-nonsense bits of the ITU report and you will notice that the team who have produced the charts (as opposed to the copy) refer to mobile-cellular subscriptions, and mobile-cellular subscriptions per 100 people. It is a pity that the copywriters did not follow the lead of the ITU people who worked on the charts.

What are mobile-cellular subscriptions? Very roughly, the number of subscriptions is the number of sims in use. If somebody has two phones, that is two sims, two subscriptions. If somebody has a dual-sim phone, that is two sims, and is often two subscriptions. If somebody has two phones, a tablet, and a mobile modem, they have four sims.

Am I just being pedantic, or does it matter? Yes, in my opinion it matters. Because people are quoting these super high ‘penetration’ rates, there is an assumption that catering for mobile phone users, in and of itself, avoids excluding people. We can use the UK as a good example. The ITU figures for the UK, in 2011, say there were 131 subscriptions per 100 people – a figure the ITU copywriters and careless MR tweeters would call 131% penetration. However, the UK’s General Lifestyle Survey found that in 2011 one-in-seven households had zero mobile phones (i.e. 86% of households contained at least one person who had at least one mobile phone). Data collected in the UK by the communications regulator (Ofcom) estimate that at the end of 2012, 92% of adults owned or had the use of a mobile phone.

In the developed markets, such as the UK, the difference between a penetration rate of 131-132% of the total population (babies and all) and a real rate of 86-92% of adults is not particularly important. But if the ratio in the UK is typical, the ITU figure of 100% global could mean about two-thirds of adults have the use of a mobile phone, and that does matter. For example, it means research projects requiring a good representation of people, in some countries, cannot assume that mobile is currently a safe option.
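
Here is the back-of-the-envelope version of that ratio argument, using the UK figures quoted above. It is a rough extrapolation built on the assumption that the UK ratio of subscriptions to adult ownership is typical, not an ITU calculation:

```python
# UK: ~131 subscriptions per 100 people, but roughly 86-92% of adults with a phone
uk_subscriptions_per_100 = 131
uk_adults_with_phone = 0.89            # midpoint of the 86-92% range quoted above

ratio = uk_adults_with_phone / (uk_subscriptions_per_100 / 100)
global_subscriptions_per_100 = 96      # the ITU's global figure
print(round(ratio * global_subscriptions_per_100 / 100, 2))
# ~0.65, i.e. roughly two-thirds of adults, if the UK ratio were typical of the world
```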

May 19, 2013
 

1 It’s not your classic textbook
This book focusses on the questions that are part of the everyday practicalities of market research, the advice you don’t typically get from a textbook – the type of advice researchers would ideally ask a mentor or more experienced colleague about. Unfortunately, not everyone has these support networks.

2 The contributors are practitioners
The content has been prepared by a team of experienced researchers, so the advice is relevant for researchers who are talking to clients, writing proposals, managing projects, developing questionnaires, analysing data, reporting results, etc.

3 A great resource for the generalist or research all-rounder
(Thanks to Sue Bell for emphasising this point.)
Many conferences and events, social media forums, and journals focus on specialist areas. This book doesn’t cover everything, but it aims to give a solid grounding in the basics, written and reviewed by experienced market and social research industry heavyweights who know what you need to know.

4 A balance between traditional and new techniques
The book covers the traditional areas – questionnaire design, qualitative, pricing research, B2B – as well as the emerging techniques, for example, communities and social media research.

5 A variety of views is expressed
In some areas of our profession there is not a consensus view – particularly in new and rapidly developing areas. This book highlights areas where consensus does not exist and presents the differing viewpoints.

6 The Client perspective is explored
Special attention is paid to one of the key relationships in market research, that of client and research provider, with an emphasis on the points of tension.

7 A Global Perspective
Unlike some textbooks, which focus on specific markets or regions, this book recognises that many researchers are operating in international markets, and it also addresses the issues and challenges faced by those working in markets with different levels of economic and technological development.

8 Ethics, Laws, Codes and Guidelines
As could be expected of a book put together by ESOMAR, the book explains in simple and clear terms why we have these and how to fit them into everyday research.

9 Advice for both new researchers and more experienced researchers who are new to a topic
Thanks to Phyllis Macfarlane for emphasising this point.

10 It’s great value, at 20 Euros (including postage and packaging)
And, if you like it so much you want to bulk order for colleagues, clients, or students – better prices are available via ESOMAR!

Join us at the book launch
On Wednesday, 22 May, ESOMAR and NewMR are holding a virtual book launch, where contributors to the book will explain the book’s mission, its content, and more about how you can be involved. Click here to find out more details and to register to attend.

So what do you think?

Declaration of interest, I am one of the Editors and Curators of the project (as was NewMR’s Ray Poynter) – Sue York

Dec 18, 2012
 

I am in the process of writing an introductory statistics book for market researchers. This post and some of the following posts are taken from that book, in an attempt to field test the style, approach, and depth I am employing. All comments welcome.

My recommendation is that most numbers in presentations and reports should be presented as 2 or 3 significant digits. I feel that the issue of significant digits is more important than the more frequently discussed issue of decimal places.

In a number, the significant digits are those that carry the key details. If a bank robber steals $56 million, the 5 and the 6 are the significant digits – and the million gives the scale of the number. If we say that Pi is 3.1416 then we are showing it to four decimal places and five significant digits.

Table 1 shows the number of internet users in five key, original members of the EU, showing both the raw numbers and the same numbers rounded to two significant digits.

Column B shows the estimates in the format they were downloaded from the InternetWorldStats website. These raw numbers contain 7 or 8 digits, and commas are used to help make the numbers more readable. These values, presumably, represent the best estimates for each country, but they require an active effort to read and interpret. By contrast, Column C shows the numbers using just two significant digits.

The use of two significant digits in Column C has two advantages, when compared with Column B.

  1. It is much easier to see the relationships in Column C, compared with Column B. For example, in Column C, it is easy to see that Italy has just over twice as many internet users as the Netherlands, and about half as many as Germany. This information is harder to see at a glance in Column B.
  2. Almost all numbers have errors in them, and they tend to relate to a specific moment in time. Statisticians talk of spurious accuracy when too many digits are displayed, for example when saying 37.67% plus or minus 10%. If we use all of the digits, as in Column B, then we are implying (to most readers) that all the digits are equally accurate. By using just the two most significant digits, Column C gives a message to the reader that these are approximations.

Methods of utilising 2 or 3 significant digits
Here are some tips for different situations (a small rounding sketch follows the list):
  1. Percentages. Only use whole numbers, e.g. 36% rather than 35.67%.
  2. Salaries. Round them to the nearest thousands, for example $136K, rather than $135,670.
  3. 7-point rating scales. One decimal place, for example 4.6 rather than 4.634.
  4. Sales. Round the numbers to the nearest thousands, millions, or billions. For example, numbers like 36,785 and 76,230 could be expressed as 37K and 76K (two significant digits). However, 36,785, 76,230, and 148,102 would need to be shown as 37K, 76K, and 148K (three significant digits).
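
As mentioned above, here is a small sketch of a helper that rounds to a chosen number of significant digits; the sample figures are the ones used in the tips, and the function is illustrative rather than production code:

```python
import math

def round_sig(x, digits=2):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0
    scale = digits - 1 - math.floor(math.log10(abs(x)))
    return round(x, scale)

for value in [35.67, 135670, 4.634, 36785, 76230, 148102]:
    print(value, "->", round_sig(value, 2), "/", round_sig(value, 3))
# e.g. 35.67 -> 36.0 / 35.7, 135670 -> 140000 / 136000, 36785 -> 37000 / 36800
```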

Exceptions
Ralph Waldo Emerson said “A foolish consistency is the hobgoblin of little minds”, and it would be foolish to think that every set of numbers can be shown to two or three significant digits. Background documents, notes, and tables are often better with more digits.

However, in most cases, and in most presentations and reports, two or three significant digits are going to help the audience/reader understand the message better than showering them with digits.