Jun 22 2014

We like to think of ourselves as rational creatures and we like to think we can trust our ears. However, watch the video below and be ready to change your mind.

The McGurk effect, the understanding of which dates back to 1976, shows how hearing and vision interact with each other. One of the interesting things about this effect is that even once you are aware of it, you still experience it.

From a marketing and market research point of view key messages are:

  1. Changing the sound can change the perception, which means that the real sound should be tested as part of the research.
  2. More generally, the behavioural sciences, such as behavioural economics and neuromarketing, are changing our understanding of how marketing works and how it should be evaluated.
  3. Perception is not reality, which in terms of persuasion means that reality is not always relevant.
  4. People exposed to this sort of effect may be tricked, but if they are, they are likely to be angry once they become aware of it – so include checking for post-purchase remorse as part of the research.

Can you suggest other similar effects that help remind marketers and market researchers that they can’t trust their model of the rational consumer?


Jun 11 2014

Guest post from Gaelle Bertrand, Client Director, Brand Insight, Precise, UK.

This post is based on material Gaelle contributed to the #IPASocialWorks ‘Measuring Not Counting’ project – and is slightly different to most of the other posts in this series (click here to see a list of the posts in the series) but it provides a good overview of using social media to evaluate media campaigns.

Using social media to measure traditional media campaigns

Measuring the effectiveness of communication campaigns through traditional media such as TV advertising has long been the remit of quantitative researchers across the globe. Representative sample surveys aimed at measuring the public’s awareness of a campaign, recall of its messages and more importantly whether it has shifted the needle in terms of brand awareness and perceptions are the norm. However, the advent of social media and the unprompted brand mentions it yields means that researchers now have a unique opportunity to get a read on most campaigns’ effectiveness. So what does social media analysis bring to the equation?

Strengths and weaknesses
One of the key strengths of social media is its immediacy, so it is an excellent way to get an early read on what people think of your campaign within the first hours of its launch.

The fact that posts are self-generated and can be mined retrospectively is also a key asset. It means that researchers do not have to rely on respondents’ recall, as with more traditional methods, and can potentially measure true unprompted awareness from the level of mentions the campaign receives in social media. It also means that benchmarks of awareness and perceptions prior to the campaign can be easily derived after the campaign has ended as there are no time constraints. This is a key advantage that traditional research does not have.
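The retrospective benchmarking described above can be sketched in a few lines. The brand, dates, and mention records here are invented purely for illustration; in practice the records would come from a social listening tool’s export rather than a hand-built list.

```python
from datetime import date

# Invented mention records: each is (date_posted, text).
mentions = [
    (date(2014, 5, 20), "BrandX is everywhere lately"),
    (date(2014, 6, 5), "Just saw the new BrandX ad, brilliant"),
    (date(2014, 6, 7), "That BrandX ad made me smile"),
]

CAMPAIGN_START = date(2014, 6, 1)

def daily_rate(mentions, start, end):
    """Average mentions per day in the half-open window [start, end)."""
    count = sum(1 for d, _ in mentions if start <= d < end)
    return count / (end - start).days

# Because the data can be mined retrospectively, the pre-campaign
# benchmark can be derived after the campaign has ended.
pre = daily_rate(mentions, date(2014, 3, 1), CAMPAIGN_START)
post = daily_rate(mentions, CAMPAIGN_START, date(2014, 7, 1))
uplift = post / pre if pre else float("inf")
```

The same pre/post comparison applies to any metric you can code from the mentions, not just volume.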

Social media can also reveal the most salient aspects of the campaign without respondents being prompted, which you could argue is a purer reflection of consumer perceptions and attitudes towards the campaign, and ultimately of how they affect brand image, than those derived through traditional research techniques.

Social media does not just enable measurement, though; it also provides an unprompted, in-depth understanding of initial reactions to a campaign which could otherwise only be replicated through qualitative research techniques.

While it all sounds very positive so far, there is a key aspect which must not be forgotten: social media’s representativeness (or some would say lack thereof) of the public’s opinion.

Despite the fact that the reach of social media is expanding daily, and that Facebook has a reported active UK user base of over 31m and Twitter 10m, the demographic representativeness of this audience is still open to question.

Many would argue that as long as this fact is clearly used to contextualise and interpret the content of conversations, it becomes a secondary issue. This also strongly reinforces the need for social media not to be used in isolation from other data collection techniques to provide context. The bigger question, it seems, is whether the attitudes and perceptions expressed in social media conversations reflect those of a wider audience. There is strong evidence that they do, but piloting the approach before measuring any campaign is a must, both to create pre-campaign benchmarks and to validate the approach.

Best Practices

  1. Run a benchmark analysis prior to the campaign. This will be key to measuring not only shifts in levels of conversation about the brand, but also shifts in existing attitudes and perceptions. It is also a useful exercise for determining which metrics the campaign will be measured on. Using a 3-month time frame before the campaign is likely to smooth out any spikes driven by other events or campaigns.
  2. Build an intelligent search query. Using the campaign strapline or title will not be enough to gather relevant content. Use key words which relate to key elements of the campaign, e.g. the central character and premise, but also key words associated with the themes or topics broached. This will ensure that the range of content gathered is in consumers’ own words.
  3. Apply sampling principles. The social media data set is vast and generally cannot be analysed in its entirety without significant resource investment. Intelligent sampling is therefore essential. Sampling can be done across the whole body of mentions, i.e. across all social media channels, using random sampling principles, or be restricted to one or all of the main consumer channels (Facebook, Twitter, YouTube).
  4. Remember that volumes and share of voice hide rich insights. While volumetrics are sometimes useful, they are not the be-all and end-all of social media analysis. The exercise is about measuring and not counting. This is why human analysis is important in this context.
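To make points 2 and 3 concrete, here is a minimal sketch of filtering mentions with a query that goes beyond the strapline, then drawing a random sample for human coding. The brand, query terms, and mentions are all invented for illustration.

```python
import random
import re

# Toy corpus standing in for a listening tool's export.
mentions = [
    {"channel": "twitter", "text": "The #BrandX polar bear ad is lovely"},
    {"channel": "facebook", "text": "BrandX really nailed that campaign"},
    {"channel": "youtube", "text": "polar bear advert gave me chills"},
    {"channel": "twitter", "text": "Anyone else sick of BrandX?"},
]

# Query terms cover campaign elements (the polar bear) as well as the
# brand name, so mentions in consumers' own words are not missed.
QUERY = re.compile(r"brandx|polar bear", re.IGNORECASE)

relevant = [m for m in mentions if QUERY.search(m["text"])]

def sample_for_coding(mentions, n, seed=1):
    """Simple random sample across all channels for human analysis."""
    rng = random.Random(seed)
    return rng.sample(mentions, min(n, len(mentions)))

coded_batch = sample_for_coding(relevant, 2)
```

The sampled batch is what the human analysts would then code for themes, sentiment, and brand perceptions, in line with point 4.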

Key considerations
The increasing use by brands of hashtags, which serve as prompts to the campaign, somewhat removes the candid nature of social media conversations about these activities, and effectively tags ‘prompted’ mentions. This should be considered when analysing results, and such mentions analysed separately if appropriate. You also have to be prepared for the fact that your campaign may not be talked about by social media users at all. It does happen!

May 02 2014

One of my favourite social media/listening books is Stephen Rappaport’s Listen First!, so I was delighted when his new book ‘The Digital Metrics Field Guide’ was announced, and even more delighted to get a copy to review.

The book has been produced and published by the ARF and you can download an interactive PDF from this link on the ARF site. The Field Guide is free for ARF members and $29.95 for non-members.

To produce the book Stephen reduced a list of about 350 metrics to 197 and backed these up by referring to almost 150 studies, which illustrates the claim that online is the most measurable medium. The book covers four digital channels – email, mobile, social, and the web – and is a really easy-to-use reference for anybody interested in the area.

To make things easier Stephen has organised the information in three ways – alphabetically, by category, and by marketing stage – to suit different tastes and preferences.

12 Fields per Metric
The book is organised in terms of 12 fields per metric, including: where it fits in Paid/Owned/Earned, its category, a definition, and the sorts of questions it answers. The use of a standardised format makes it much quicker for the user to locate a specific piece of information.

Examples of metrics covered include:

  • Average time spent on page – including issues such as tabbed browsing and download time.
  • Brand Lift – did exposure to the advertising impact brand lift measures?
  • Conversation – how many conversations are people having about the brand?
  • Direct Traffic Visitors – how many people came to the site directly?

Who should buy this book?
I think anybody who, over the next year or so, needs to check on the meaning, use, or definition of more than three or four of the digital metrics should buy a copy of the book. If you only need to refer to one or two, you could simply Google them, find some links, read some articles and come to a view. But, if you want a handy, well-researched, well laid out reference – this is the book for you.

Note, this is not necessarily a book you will want to sit down and read cover to cover; it is much more of a reference than a good read (but see the next note on the essays).

The book finishes with a series of 12 essays and viewpoints, from people such as Gunnard Johnson from Google and David Rabjohns from MotiveQuest. Unlike the rest of the book, these should be read rather than referred to. Whilst I don’t agree with all the points made in the essays, they are valid and interesting, and ones that anybody engaged in the medium should be familiar with.

Timely publication
For me the publication of Stephen’s book is very timely as I am working on part of the IPASocialWorks project, looking at a guide to ‘measuring not counting’ in social media. The focus of our work is much more about the strategy and best practices of measuring social phenomena, but Stephen’s book provides a great reference to the variety of metrics available.

Apr 21 2014
[Image: Path into salt]

“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” is one of the most commonly quoted comments about advertising, being variously attributed to John Wanamaker and William Lever. Perhaps as a consequence, one of the key uses of market research is to test, monitor, and track advertising. However, it might well be that half of the money spent on testing and tracking advertising is also wasted.

How does advertising work?
In the distant past we used to think advertising worked along the lines of the AIDA model: it helped create Awareness/Attention, Interest, Desire, and Action. However, more recent research, including behavioural science, econometrics, and media mix modelling, has shown that the picture is much more complex.

One of the best studies of how advertising works is one carried out for the IPA by Les Binet and Peter Field, which produced the report “The Long and the Short of It”.

Short Term and Long Term?
One of the key findings in the work by Binet and Field is that short-term success is not a good or reliable indicator of long-term success. Rational measures, such as standout and attention, are quite good at predicting short-term effects (such as whether people will try something, click on it, etc.). However, these measures are not good predictors of long-term success.

What is long-term success? Perhaps the best way of encapsulating long-term success is to say that it reduces price elasticity. If we become more attached to a product or service we keep buying it even if the price goes up, i.e. our demand is less price elastic. This is directly related to the ability to make more net profit, not just to the ability to move the volume of sales.
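As a rough worked illustration of that point (all numbers invented), price elasticity can be computed as the percentage change in quantity demanded divided by the percentage change in price:

```python
def price_elasticity(q0, q1, p0, p1):
    """Arc (midpoint) price elasticity: % change in quantity / % change in price."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# A 10% price rise (100 -> 110) that loses only 4% of volume
# (1000 -> 960 units) means demand is inelastic (|e| < 1) ...
e = price_elasticity(1000, 960, 100, 110)  # about -0.43

# ... so revenue rises despite the lost volume.
revenue_before = 100 * 1000  # 100,000
revenue_after = 110 * 960    # 105,600
```

This is the sense in which an attached, less price-elastic customer base feeds net profit rather than just sales volume.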

If the short term is so bad at predicting the long term, why do we focus on measuring short-term predictors?
In my opinion, the key reason for focusing on short-term predictors is that we can measure them, and they tend to correlate well with short-term results – and in today’s short-term world an immediate correlation is quite reassuring.

The problem with long-term measures is that there is no clearly established research technique that predicts the long-term effects of advertising, although many agencies are working hard to find solutions. There is a feeling that focusing on emotional messaging might prove predictive of long-term results, but the jury is still out.

Watch the video
You can hear Les Binet and Peter Field talk about their findings in the video below.