Apr 30 2014
 

I have just finished listening to a great presentation by Suz Allen (R&D Director Sensory & Consumer Science Asia Pacific & International, Campbell Arnott’s) talking about how suppliers and clients can work together better (you can access the recording and slides here).

Whilst I found the presentation useful, informative, and entertaining, I was amazed at how low the bar seems to be. I think it is distressing that agencies are making such basic mistakes.

Here are some of the recommendations that Suz made:

  • “No Surprises! Never!”
  • For presentations: arrive early, ask (in advance) if you can have early access to the room to set up, and have spare cables, connectors, a clicker, etc. (we should not need to be reminded of this!)
  • Match your staff to the client; some people work better together than others – this is a people business.
  • Call your client one, three, or even more months after a project to ask how it is going.
  • The agency should seek to make the client look good; their “butt is on the line” when they hire us.
  • Value and reward good clients, for example by sharing ideas, papers, leads, and recommendations with them.

Suz Allen’s presentation has lots more tips on best practice, advice on how to get the client’s attention, and advice on some things not to do. You can access the recording and slides by clicking here.

What are your thoughts? If you are client-side (or have been client-side in the past), how do these points compare with your experience?

Apr 21 2014
 
Path into salt

“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” is one of the most commonly quoted comments about advertising, being variously attributed to John Wanamaker and William Lever. Perhaps as a consequence, one of the key uses of market research is to test, monitor, and track advertising. However, it might well be that half of the money spent on testing and tracking advertising is also wasted.

How does advertising work?
In the distant past we used to think advertising worked along the lines of the AIDA model: it helped create Awareness/Attention, Interest, Desire, and Action. However, more recent research, including behavioural science, econometrics, and media mix modelling, has shown that the picture is much more complex.

One of the best studies of how advertising works is one carried out for the IPA by Les Binet and Peter Field, which produced the report “The Long and the Short of It”.

Short Term and Long Term?
One of the key findings in the work by Binet and Field is that short-term success is not a good or reliable indicator of long-term success. Rational measures, such as standout and attention, are quite good at predicting short-term effects (such as whether people will try something, click on it, etc.). However, these measures are not good predictors of long-term success.

What is long-term success? Perhaps the best way of encapsulating long-term success is to say that it reduces price elasticity. If we become more attached to a product or service we keep buying it even if the price goes up, i.e. our demand is less price elastic. That makes long-term success directly related to the ability to make more net profit, not just to the ability to move sales volume.
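To make the elasticity point concrete, here is a minimal sketch (in Python, with made-up numbers purely for illustration – they are not from the IPA study) of how lower price elasticity means volume holds up when the price rises:

```python
# Illustrative only: hypothetical figures, not taken from Binet and Field.
def price_elasticity(pct_change_in_volume, pct_change_in_price):
    """Own-price elasticity of demand: % change in volume / % change in price."""
    return pct_change_in_volume / pct_change_in_price

price_rise = 10.0  # a 10% price increase

# A brand people are weakly attached to: volume falls faster than the price rises.
weak_brand = price_elasticity(pct_change_in_volume=-15.0, pct_change_in_price=price_rise)

# A brand people are strongly attached to: volume barely moves.
strong_brand = price_elasticity(pct_change_in_volume=-4.0, pct_change_in_price=price_rise)

print(weak_brand)    # -1.5 -> elastic: the price rise loses more volume than it is worth
print(strong_brand)  # -0.4 -> inelastic: most of the price rise flows through to net profit
```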

If the short term is so bad at predicting the long term, why do we focus on measuring short-term predictors?
In my opinion, the key reason for focusing on short-term predictors is that we can measure them, and they tend to correlate well with short-term results – and in today’s short-term world an immediate correlation is quite reassuring.

The problem with long-term measures is that there is no clearly established research technique that predicts the long-term effects of advertising, although many agencies are working hard to find solutions. There is a feeling that focusing on emotional messaging might prove predictive of long-term results, but the jury is still out.

Watch the video
You can hear Les Binet and Peter Field talk about their findings in the video below.


IPA THOUGHT LEADERSHIP – ADVERTISING THE LONG AND… by advertisingweek

 

Apr 13 2014
 
Shibuya At Night

OK, let’s get one thing clear from the outset; I am not saying social media mining and monitoring (the collection and automated analysis of large quantities of naturally occurring text from social media) has met with no success. But I am saying that in market research the success has been limited.

In this post I will highlight a couple of examples of success, but I will then illustrate why, IMHO, it has not had the scale of success in market research that many people had predicted, and finally share a few thoughts on where the quantitative use of social media mining and monitoring might go next.

Some successes
There have been some successes and a couple of examples are:

Assessing campaign or message breakthrough. Measuring social media can be a great way to see whether anybody is talking about a campaign, and to check whether they are talking about the salient elements. However, because of some of the measurement challenges (more on these below), the measurement often ends up producing a three-level result: a) very few mentions, b) plenty of mentions, c) masses of mentions. In terms of content, the measures tend to be X mentions on target, or Y% of the relevant mentions were on target – in most cases this is informative, but such measures have no absolute utility and usually cannot be tightly aligned with ROI.

An example of this use came with the launch of the iPhone 4 in 2010. Listening to social media made it clear that people had noticed the phone did not work well for some users when held in the left hand, that Apple’s message (which came across as ‘you should be right handed’) was not going down well, and that something needed to be done. The listening could not put a figure on how many users were unhappy, nor even say whether users were more or less angry than non-users, but it did make it clear that something had to be done.

Identifying language, ideas, and topics. By adding humans to the interpretation, many organisations have been able to identify new product ideas (the story of how Nivea used social media listening to help create Nivea Invisible for Black and White is a great example). Other researchers, such as Annie Pettit, have shown how they have combined social media research with conventional research to help answer problems.

Outside of market research. Other users of social media listening, such as PR and reaction marketers appear to have had great results with social media, including social media listening. One of the key reasons for that is that their focus/mission is different. PR, marketing, and sales do not need to map or understand the space, they need to find opportunities. They do not need to find all the opportunities, they do not even need to find the best opportunities, they just need to find a good supply of good opportunities. This is why the use of social media appears to be growing outside of market research, but also why its use appears to be in relative decline inside market research.

The limitations of social media monitoring and listening
The strength of social media monitoring and listening is that it can answer questions you had not asked, perhaps had not even thought of. Its weakness is that it can’t answer most of the questions that market researchers’ clients ask.

The key problems are:

  • Most people do not comment in social media, most of the comments in social media are not about our clients’ brands and services, and the comments do not typically cover the whole range of experiences (they tend to focus on the good and the bad). This leaves great holes in the information gathered.
  • It is very hard to attribute the comments to specific groups, for example to countries or regions, or to users versus non-users – not to mention little things like age and gender.
  • The dynamic nature of social media means that it is very hard to compare two campaigns or activities, for example this year versus last year. The number of people using social media is changing, how they are using it is changing, and the phenomenal growth in the use of social media by marketers, PR, sales, etc is changing the balance of conversations. Without consistency, the accuracy of social media measurements is limited.
  • Most automated sentiment analysis is considered by insight clients and market researchers to be either poor or useless. This means good social media usage requires people, which tends to make it more expensive and slower – often prohibitively expensive and often too slow.
  • Social media deals with the world as it is; brands can’t use it to test ads, new products and services, or almost any other future plan.

The future?
Social media monitoring and listening is not going to go away. Every brand should be listening to what its customers, and in many cases the wider public, are saying about its brands, services, and overall image. This is in addition to any conventional market research it needs to do; this aspect of social media is not a replacement for anything, it is a necessary extra.

Social media has spawned a range of new research techniques that are changing MR, such as insight communities, smartphone ethnography, social media bots, and netnography. One area of current growth is the creation of 360-degree views by linking panel and/or community members to their transactional data, passive data (e.g. from their PC and mobile device), and social media data. Combined with the ability of communities and panels to ask questions (qual and quant), this may create something much more useful than just observational data.
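As a purely hypothetical sketch of what such a 360-degree view means in data terms (the member ID, field names, and values below are invented for illustration), the linking simply amounts to joining the different sources on a common identifier:

```python
# Hypothetical example: merging survey, transactional, and social data on an assumed member ID.
survey = {"m001": {"brand_preference": "Brand A"}}
transactions = {"m001": {"spend_last_month": 42.50}}
social = {"m001": {"brand_mentions_last_month": 3}}

def member_profile(member_id):
    """Combine whatever each source holds on one member into a single record."""
    combined = {"member_id": member_id}
    for source in (survey, transactions, social):
        combined.update(source.get(member_id, {}))
    return combined

print(member_profile("m001"))
# {'member_id': 'm001', 'brand_preference': 'Brand A', 'spend_last_month': 42.5, 'brand_mentions_last_month': 3}
```

The analytical value, of course, comes from being able to put observed behaviour and stated answers side by side for the same person.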

I expect more innovations in the future. In particular I expect to see more conversations in social media initiated by market researchers, probably utilising bots. For example, a bot could be programmed to look out for people using words that indicate they have just bought a new smartphone and to ask them to describe how they bought it, what else they considered, etc. – either in social media or by asking them to continue the chat privately. There are a growing number of rumours that some major clients are about to adopt a hybrid approach, combining nano-surveys, social media listening, integrated data, and predictive analytics, and this could be really interesting, especially in the area of tracking (e.g. brand, advertising, and customer satisfaction/experience).
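To show roughly what that kind of bot involves – and no more than that – here is a minimal, hypothetical sketch; the trigger phrases, post format, and follow-up wording are all invented for illustration, and a real deployment would also need the platform’s API, consent, and privacy handling:

```python
# Hypothetical trigger-word bot sketch; patterns and reply text are illustrative assumptions.
import re

TRIGGER_PATTERNS = [
    r"just (bought|got|picked up) (a|my) new (phone|smartphone)",
    r"upgraded my phone",
]

FOLLOW_UP = ("Congratulations on the new phone! Would you mind telling us how you chose it "
             "and what else you considered? Happy to continue the chat privately if you prefer.")

def looks_like_new_phone_buyer(post_text: str) -> bool:
    """Return True if the post suggests the author has just bought a new smartphone."""
    text = post_text.lower()
    return any(re.search(pattern, text) for pattern in TRIGGER_PATTERNS)

# Scanning a (made-up) stream of public posts.
posts = [
    "Just picked up a new phone, the camera is amazing!",
    "Thinking about lunch.",
]
for post in posts:
    if looks_like_new_phone_buyer(post):
        print(f"Candidate post: {post!r}")
        print(f"Suggested reply: {FOLLOW_UP}")
```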

I also expect two BIG technical changes that will really set the cat amongst the pigeons. I expect somebody to do a Google and introduce a really powerful, free or almost free alternative to the social media mining and monitoring platforms, and I expect one or more companies to come up with sentiment analysis solutions that are really useful. I think a really useful platform will include the ability to analyse images and videos, to follow links (many interesting tweets and shares are about the content of the link), to build a PeekYou type of database of people (to help attribute the comments), and will have a much better text analytics approach.

 

Apr 07 2014
 

Last week Jeffrey Henning gave a great #NewMR lecture on how to improve the representativeness of online surveys (click here to access the slides and recordings). During the lecture he touched lightly on the topic of calculating sampling error from non-probability samples, pointing out that it did not really do what it was supposed to. In this blog I want to highlight why I recommend using this statistic as a measure of reliability, but not validity.

If we calculate the sampling error for a non-probability sample, for example from an online access panel, we are not representing the wider population. The population for this calculation is just those people who might have taken the survey. For example, just those members of the online access panel who met the screening criteria and who were willing (during the survey period) to take the study. The sampling error tells us how good our estimates of this population are (i.e. those members of the panel who met the criteria and who were willing to take a survey at that particular time).

If we take a sample of 1000 people from an online access panel and we calculate that the confidence interval is +/-3% at the 95% level, what we are saying is that if we had done another test, on the same day, with the same panel, with a different group of people, we are 95% sure that the answer we would have got would have been within 3% of the first test. That is a measure of reliability. But we are not saying that if we had measured the wider population the answer would have been within 3%, or 10% or any other number we could quote.
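For anyone who wants to check the arithmetic, a minimal sketch of the standard margin-of-error calculation behind the familiar ±3% for a sample of 1,000 (assuming a proportion near 50%, where the interval is at its widest) looks like this:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion (z = 1.96 at the 95% level)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(n=1000) * 100, 1))  # ~3.1 -> the familiar "+/-3%"
print(round(margin_of_error(n=250) * 100, 1))   # ~6.2 -> smaller samples give wider intervals
```

As argued above, with a non-probability sample this number describes repeatability on the same source, not accuracy relative to the wider population.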

The sampling error statistic from a panel is not about validity, since we can’t estimate how representative the panel is of the wider population. But, it does give us a statistical measure of how likely we are to get the same answer again if we repeat the study on the same panel, with the same sample specification, during the same period of time – which is a pretty good statement of reliability.

Note: to researchers, reliability is about whether something measures the same way each time, while validity relates to whether what is measured is correct. A metal metre ruler that is 10cm short is reliable (it is always 10cm short), but it is not as valid as we would like.

My recommendation is to calculate the sampling error and use it to indicate which values from the non-probability sample are at least big enough to be reliable. But let’s not claim it represents the sampling error of the wider population, nor that it directly links to validity.

I would recommend adding text something like: “The sampling reliability of this estimate at the 95% level is +/- X%, which means that if we used the same sampling source 20 times, with the same specification, we would expect the answers to be within X% 19 times.”

Total Survey Error
Another reason to be careful with sampling error is that it is only one source of error in a survey. Asking leading questions, asking questions that people can’t answer (for example because we are poor witnesses to our own plans and motivations), or asking questions that people don’t want to answer (for example because of social desirability bias), can all result in much bigger problems than sampling error.

Researchers can sometimes be too worried about sampling error, leading them to ignore much bigger sources of error in their work.

 

Apr 01 2014
 
Larnaca April 2014

I am currently at an academic conference on mobile research in Cyprus, a WebDataNet event. I am a keynote speaker and my role is to share with the delegates the commercial market research picture.

I really enjoy mixing with the academic world, and I am intrigued and fascinated by the differences between the academic and commercial worlds. This post looks at some of the key differences that I have noticed.

Timelines
In the academic world, timelines are usually longer than in market research. For example, an ethnographic project might be planned for 8 months, in the field for 4 months, and spend 12 months being analysed and written up. A commercial ‘ethnography’ might spend 4 weeks in design and set-up, the fieldwork might be wrapped up in 2 weeks, and the analysis and ‘write up’ conducted in 2 weeks.

In many ways the differences in the timelines result from differences in the motivation for doing a research project. Commercial market research is often conducted to answer a specific business question, which means the research has to be conducted within the timeline required by the business question – which is typically rapid. Academic research is typically conducted to advance the body of knowledge, which means there is often not a specific time constraint. However, there is a need to establish what is already known (the literature review) and a need to spend time creating a write up that embeds the new learning in the wider canon of knowledge.

The balance between preparation, action, analysis, and writing up

In the commercial world the answer is the point of the study; the method, providing it is acceptable, is less relevant.

In an academic study, the value of the specific answer is sometimes almost the least important feature of the project. For example, a commercial project looking at five possible ads for a new soft drink would seek to find the winner. An academic project would normally find that sort of result too specific (i.e. not an addition to the canon of knowledge). An academic project might be more interested in questions such as what the relationship is between different formats of ad and the way they are evaluated, or the extent to which short-term and long-term effects can be identified. Indeed, in an academic project the brands and the specific ads tested will often be obscured, because the study is about the method and the generalisable findings, not (usually) about which ad did best.

The definition of quality
Academic and market researchers both have a hierarchy of types of validity, but the hierarchies are not the same. Market researchers tend to value criterion validity (does the measure correlate with or predict something of interest?) as their ‘best’ measure.

By contrast, the academic world tends to prioritise construct validity, which relates to how well new findings fit with an accepted theory of how things work. This again probably relates to the specificity of the objectives. Market researchers need something that works well enough to solve a particular business problem; the academic is seeking to build knowledge and to connect that research to a wider framework.

The difference in samples
Most market research is conducted with a sample drawn from the target population and usually the sample is constructed to be similar to the target population in terms of simple variables such as age and gender – although it usually falls well short of being a random probability sample. By contrast, a large proportion of academic research appears to be conducted with convenience samples, often students.

The most common reason for using convenience samples is a lack of resources. In some cases there is also a belief that the phenomenon being researched is equally distributed across the population, such as a preference for using the left or right hand.

Access to the results
In commercial research the results are normally private to the client, unless they are for PR purposes. Traditionally, the results of academic research have been made available to the wider academic world. The future of access to academic research is subject to two contradictory trends. Firstly, commercially sponsored research is tending to become more secretive, because of the commercial interests involved. Secondly, governments (who are often major funders) are pushing the Open Data agenda, making research less secretive.

Which is better?
Academic research and market research differ in several ways, but that is mostly because they have different objectives. If you wanted to use a market research project for academic purposes you would need to add a literature review, add a comprehensive write-up, and be prepared to mount a robust defence of your method. If you wanted to use an academic project for a commercial purpose you would need to check the ethical clearance, check that the timelines were relevant, and check whether the study was likely to give an actionable result.