Apr 01, 2014
 
Larnaca April 2014

I am currently at an academic conference on mobile research in Cyprus, a WebDataNet event. I am a keynote speaker and my role is to share with the delegates the commercial market research picture.

I really enjoy mixing with the academic world, and I am intrigued and fascinated by the differences between the academic and commercial worlds. This post looks at some of the key differences that I have noticed.

Timelines
In the academic world, timelines are usually longer than in market research. For example, an ethnographic project might be planned for 8 months, in the field for 4 months, and spend 12 months being analysed and written up. A commercial ‘ethnography’ might spend 4 weeks in design and set-up, the fieldwork might be wrapped up in 2 weeks, and the analysis and ‘write up’ conducted in 2 weeks.

In many ways the differences in the timelines result from differences in the motivation for doing a research project. Commercial market research is often conducted to answer a specific business question, which means the research has to be conducted within the timeline required by the business question – which is typically rapid. Academic research is typically conducted to advance the body of knowledge, which means there is often not a specific time constraint. However, there is a need to establish what is already known (the literature review) and a need to spend time creating a write up that embeds the new learning in the wider canon of knowledge.

The balance between preparation, action, analysis, and writing up

In the commercial world the answer is the point of the study; the method, providing it is acceptable, is less relevant.

In an academic study, the value of the specific answer is sometimes almost the least important feature of the project. For example, a commercial project looking at five possible ads for a new soft drink would seek to find the winner. An academic project would normally find that sort of result too specific (i.e. not an addition to the canon of knowledge). An academic project might be more interested in questions such as: what is the relationship between different formats of ad and the way they are evaluated, or to what extent can short-term and long-term effects be identified? Indeed, in an academic project the brands and the specific ads tested will often be obscured, because the study is about the method and the generalisable findings, not (usually) about which ad did best.

The definition of quality
Academic and market researchers both have a hierarchy of types of validity, but the hierarchies are not the same. Market researchers tend to value criterion validity (does the measure correlate with or predict something of interest?) as their ‘best’ measure.

By contrast, the academic world tends to prioritise construct validity, which concerns how well new findings fit an accepted theory of how things work. This again probably relates to the specificity of the objectives. Market researchers need something that works well enough to solve a particular business problem. The academic is seeking to build knowledge and to connect that research to a wider framework.
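
To make the contrast concrete, here is a minimal sketch (hypothetical numbers, my own illustration rather than anything from the original post) of criterion validity as the commercial researcher tends to use it: the survey measure is judged by how well it correlates with, or predicts, an outcome the business cares about.

    # Hypothetical example: stated purchase intent (1-5 scale) for ten product
    # concepts, and the indexed sales each concept later achieved in a test market.
    import numpy as np

    purchase_intent   = np.array([3.1, 4.2, 2.5, 3.8, 4.6, 2.9, 3.4, 4.0, 2.2, 3.6])
    test_market_sales = np.array([ 98, 131,  80, 115, 140,  90, 104, 122,  71, 109])

    # Criterion validity, in this framing, is simply the correlation between
    # the measure and the criterion (here, later sales).
    r = np.corrcoef(purchase_intent, test_market_sales)[0, 1]
    print(f"criterion validity (r with test-market sales) = {r:.2f}")

Construct validity, by contrast, cannot be reduced to a single correlation; it is a judgement about how well the measure and the findings fit an accepted theory of what is being measured.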

The difference in samples
Most market research is conducted with a sample drawn from the target population and usually the sample is constructed to be similar to the target population in terms of simple variables such as age and gender – although it usually falls well short of being a random probability sample. By contrast, a large proportion of academic research appears to be conducted with convenience samples, often students.

The most common reason for using convenience samples is a lack of resources. In some cases there is also a belief that the phenomenon being researched is equally distributed across the population, such as a preference for using the left or right hand.

Access to the results
In commercial research the results are normally private to the client, unless they are for PR purposes. Traditionally, the results of academic research have been made available to the wider academic world. The future of access to academic research is subject to two contradictory trends. Firstly, commercially sponsored research is tending to be more secretive, because of the commercial interests. Secondly, Governments (who are often a major funder) are pushing the Open Data agenda, making research less secretive.

Which is better?
Academic research and market research differ in several ways, but that is mostly because they have different objectives. If you wanted to use a market research project for academic purposes, you would need to add a literature review, add a comprehensive write-up, and be prepared to mount a robust defence of your method. If you wanted to use an academic project for a commercial purpose, you would need to check the ethical clearance, check that the timelines were going to be relevant, and check whether the study was likely to give an actionable result.

BRC Customer Insight Conference – London

Feb 13, 2014
 
BRC Customer Insight

Today I attended the BRC Customer Insight Conference in London and was very pleasantly surprised by the quality of the event and the speakers. Here are a few notes I jotted down during the day.

Peter Williams, former CEO of Selfridges and board member of ASOS.COM – highlighted some of the fundamental changes in retail, including a long-term move to a smaller retail footprint with lots of consequences for malls, high streets, and especially secondary locations.

Rory Sutherland and John Kearon presented overlapping presentations that highlighted the shift in marketing away from the rational to the emotional. At one level this was refreshing, with its emphasis on behavioural economics and psychology; on the other hand, it flies in the face of the trend towards the metrics of clicks, likes, and shares. Rory and John also had a few unkind words for economics and market research – a topic I will come back to in another post.

Ruth Spencer from Boots, Mike Coshott from B&Q, Caroline Pollard from Debenhams, Alex Chruszcz from ASDA, and Robin Phillips from Waitrose all spoke to different elements of using technology and systems to understand the customer. At the heart of the conversation was a core tension between:

  1. Wanting to build a single view of the customer, for example tying different data streams together.
  2. Recognising that each shopper is not a single, homogeneous person: the shopper looking to buy a pair of tights in Boots after laddering theirs is different from the shopper spending time exploring new cosmetics options.

One theme that emerged from the day was that power is shifting from retailers to shoppers, with mobile, shopping apps, and comparison sites all assisting that change.

One interesting part of the day was hearing about some of the mobile campaigns and activities delivered via WEVE. WEVE is jointly owned by EE, O2, and Vodafone and therefore has unparalleled access to UK mobile phone owners. David Sear, for example, talked about a WEVE campaign for Heineken which triggered a marketing message when the temperature reached 27C – which one morning happened at 9:45am – it is all a learning process. Another WEVE target group was people who were at Heathrow at one moment in time and in Scotland about an hour later.

Reuben Arnold of Virgin Atlantic and Paul de Last from John Lewis finished the day with case studies looking at how they were using insight to drive customer experience.

This was billed as the first annual BRC customer insight conference, which I thought sounded a little twee. But actually, this was one of the better events I have attended over the last 12 months. It brought together a good selection of client-side people talking about what they were doing, what had worked, and a bit about what had not worked. The invited outsiders did a good job of creating a broader canvas. So, I certainly hope there will be another event next year.

Key points made during the day included:

  • Don’t pick a technology because it is interesting, pick it if it answers an identifiable and worthwhile business need.
  • New suppliers need to establish proof of concept.
  • Organisations that want to use a single view of the customer have to get rid of silos, and merge online with offline.
  • The mobile revolution has only just started; it is going to shape loyalty, tracking, offers, marketing, and shopping – and iBeacon looks like an interesting part of that picture.
  • Insight is when you hear what the customer says and use it as a clue to what the customer wants; insight is not simply reporting what the customer says.

 

Jan 04, 2014
 

The worlds of academic and commercial research are currently being riven with concerns and accusations about how poor much of the published research, and the conclusions drawn from it, are. This problem is not specific to market research; it covers health research, machine learning, biochemistry, neuroscience, and much more. The problem relates to the way that tests are being created and interpreted. One of the key people highlighting these concerns is John Ioannidis from Stanford University, whose work has been reported in both academic and popular forums (for example, The Economist). The quote “most published research findings are probably false” comes from Ioannidis.

Key Quotes
Here are some of the quotes and worries floating about at the moment:

  • America’s National Institutes of Health (NIH) – researchers would find it hard to reproduce at least three-quarters of all published biomedical findings
  • Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology – three-quarters of published scientific papers in the field of machine learning are bunk because of “overfitting”
  • John Bohannon, a biologist at Harvard, submitted an error-strewn paper on a cancer drug derived from lichen to 350 journals (as an experiment); 157 accepted it for publication

Key Problems
Key problems that Ioannidis has highlighted, and which relate to market research, are:

1. Studies that show an unhelpful result are often not published, partly because they are seen as uninteresting. For example, if 100 teams look to see if they can find a way of improving a process and all test the same idea, we’d expect 5 of them to have results that are significant at the 95% level, just by chance. The 95 tests that did not show significant results are not interesting, so they are less likely to be published. The 5 ‘significant’ results are likely to be published, and the researchers on those teams are likely to be convinced that the results are valid and meaningful. However, these 5 results would not have been significant if all 100 had been considered together. This problem has been widely associated with failures to replicate results (the first sketch at the end of this list works through the arithmetic).

2. Another version of the multiple tests problem arises when researchers gather a large amount of data and then trawl it for differences. With a large enough data set (e.g. Big Data), you will always find things that look like patterns. Tests are only valid if the hypotheses are created BEFORE looking at the results.

3. Ioannidis has highlighted that researchers often base their study design on implicit knowledge, without necessarily intending to, and often without documenting it. This implicit process can push the results in one direction or another. For example, a researcher looking to show that two methods produce the same results might be drawn towards questions that are more likely to produce the same answers. Asking people to say whether they are male or female is likely to produce the same result across a wide range of question types and contexts. By contrast, questions about products that participants are less attached to, asked via a 10-point scale of emotional associations, are likely to produce more variable answers, and therefore to be less consistent across different treatments.

4. Tests have a property called their statistical power, which in general terms is the ability of the test to avoid Type II errors (false negatives). The tests in use in neuroscience, biology, and market research typically have much lower statistical power than the optimum (the second sketch below gives a feel for what that means). This led John Ioannidis, in 2005, to assert that “most published research findings are probably false”.
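
To give a feel for problems 1 and 2, here is a minimal sketch (my own illustration, with invented numbers, not anything from Ioannidis) of the ‘100 teams’ arithmetic: every team tests the same idea, the idea genuinely has no effect, and yet roughly 5 of the 100 tests come out ‘significant’ at the 95% level. The same logic applies when a single team trawls a large data set and runs hundreds of tests.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)

    n_teams = 100        # independent teams all testing the same (useless) idea
    n_per_group = 200    # hypothetical sample size per cell
    false_positives = 0

    for _ in range(n_teams):
        # Control and 'improved' process are drawn from the SAME distribution,
        # i.e. the improvement genuinely does nothing.
        control = rng.normal(loc=5.0, scale=1.5, size=n_per_group)
        treated = rng.normal(loc=5.0, scale=1.5, size=n_per_group)
        _, p_value = stats.ttest_ind(control, treated)
        if p_value < 0.05:
            false_positives += 1

    print(f"{false_positives} of {n_teams} teams found a 'significant' improvement")
    # Typically around 5 - and those 5 are the studies most likely to be
    # written up, published, and believed.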

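And for problem 4, a second minimal sketch (again my own illustration, with invented effect sizes rather than Ioannidis’ figures) of what low statistical power looks like in practice: with a modest real effect and a small sample, most studies fail to detect the effect, i.e. they commit a Type II error.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=3)

    true_effect = 0.3    # a real difference of 0.3 standard deviations
    n_per_group = 40     # a typically small cell size
    n_studies = 2000     # simulated studies of the same real effect

    detected = 0
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(control, treated)
        if p_value < 0.05:
            detected += 1

    print(f"statistical power ≈ {detected / n_studies:.0%}")
    # With these numbers the power is usually well under 50%, so most of the
    # simulated studies miss an effect that is really there.
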
Market Research?
What should market researchers make of these tests and their limitations? Test data is a basic component of evidence for market research. Researchers should seek to add any new evidence they can acquire to what they already know, and where necessary do their own checking. In general, researchers should seek theoretical reasons for the phenomena they observe in testing, rather than relying solely on test data.

However, let’s stop saying tests “prove” something works, and let’s stop quoting academic research as if it were “truth”. Things are more or less likely to be true; in market research, and indeed in most of science, there are few things that are definitely true.

The ‘science’ underpinning behavioural economics, neuroscience, and Big Data (to name just three) should be taken as work in progress, not ‘fact’.

Is Ioannidis Right?
If we are in the business of doubting academic research, then it behoves us to doubt the academic telling us to be more sceptical. There are people who are challenging the claims. For example, this article from January 2013 claims that the real figure for bad biomedical research is ‘just’ 14%, rather than three-quarters.

Dec 18, 2013
 

Most market researchers (IMO) who use Twitter do so with the #MRX tag, with the #NewMR, #ESOMAR and #AMSRS tags a little way behind. Indeed, Vaughan Mordecai has recently posted an interesting analysis of #MRX contributors and content – and Jeffrey Henning tweets a weekly list of top #MRX links and posts a biweekly blog on GreenBook about the top ten.

But, is all of this just creating a cozy world where a few thousand market researchers tweet to each other, and nobody else really contributes, reads, or even cares? The quickest way to get recognition amongst market researchers is to use the #MRX tag, so it becomes the default, and in doing so, perhaps, it becomes a fence or boundary of our own making?

Time to add new links to the wider world?
Other leading #MRX figures, such as Tom Ewing and Reg Baker, have written about what happens if you ignore the #MRX audience: your figures quickly decline. But perhaps the key is to be adding more dimensions to what we do, and for those dimensions to have an external focus?

By external focus, I mean using cues and clues that other people are likely to be looking for. Who outside the market researcher Twitterati would be looking for #MRX or #NewMR – even if they were looking for market research related material?

Options we might want to consider, when talking about the right subjects, are:

  • #ROI
  • #Insights
  • #Retail
  • #B2B
  • #Mobile (we do sometimes use #MMR – mobile market research, but that does not really ‘reach out’ to the non-cognoscenti)
  • #BigData
  • #Surveys

What do you think? Is there any potential in widening the hashtags we in the #MRX chatterati use? Or, would we still be talking to the same few people?

What tags would you suggest?

Dec 13, 2013
 

Yesterday, at the BAQMaR Conference, the Fringe Factory launched its study into what young graduates are looking for in an industry and what their perception of market research is.

The Fringe Factory surveyed over 1800 graduates across nine countries. The report produced five “eye-catching insights and recommendations”. But for me one of the key points was that only 13% of the young people surveyed said they would consider a job in market research, and only 3% listed it as the best sector.

To find out more about the study, the Fringe Factory, and the other insights and recommendations, look at the presentation below. The presentation is hosted via SlideShare – this means you can advance the slides and, by clicking on the four arrows in the bottom right of the presentation window, turn it into a full-screen presentation.

The Fringe Factory is supported by ESOMAR. To find out more about the Fringe Factory, visit their website.

Dec 06, 2013
 

Posted by Nikki Lavoie, Chief Commercial Officer, Sky Consulting, France.

We know that research participants sometimes cannot or will not be honest in their responses. We know about behavioral economics. We know all the things to say to encourage open and honest discussions and survey responses. But what about our online and social media-based conversations?

I’m a Second Generation Facebook user. By this I mean that I’ve been around on Facebook since almost immediately after it was released to universities in the Greater Boston Area (I’ll refrain from listing the year so you can’t do the math). What started out as a site intended to allow students to evaluate one another’s attractiveness has become a global commodity used for connecting, promoting, expressing, sharing, and now for market research.

One of the interesting trends that has come up in relation to social media outlets, and Facebook in particular, is something I’m going to call “mediawashing” (you heard it here first, write that word down). Similar to greenwashing, mediawashing is the dissemination of disinformation that a person chooses to put forth, typically about themselves or their lives, using social media. In layman’s terms: people paint pretty pictures of their lives, but it’s often not the whole picture.

Numerous studies have been published demonstrating that not only is there a link between social media use and things like depression or lack of self-esteem, but that in some cases there is a causal effect, due in large part to comparisons made between one’s real life and the bits and pieces of someone else’s life that have been shared.

A recent article in Newsweek points out how mothers on Facebook are battling increased stress and pressure thanks to constant boasting and bragging about achievements, development, and even things like a baby’s sleep habits, while conveniently failing to mention any hardships or struggles. Although most recognize logically that this is a “presentation of perfection” and is not a true reflection of reality, the effects on both the reader and the poster are dramatic. The interactions that would, at one time, impact the perceptions and behaviors of parents on a weekly or monthly basis are now happening several times a day for those who are using social networks.

This is, of course, one category of examples, but the point is clear: no one’s life is perfect. No one’s days are only filled with sunshine and rainbows and happy, helpful people who do everything they can to help a person succeed. Life is hard, and many people edit those parts out of what they choose to share.

For market researchers, this poses a particular problem when using social media and similar platforms (like communities, bulletin boards or online focus groups). While it’s true that anonymity, in these cases, offers respondents the chance to express themselves with a hopefully refreshing degree of honesty, one has to wonder: how much has mediawashing trained us to edit how we present ourselves to the world?

A measuring stick of the mediawashing effect could be the following: have you ever started to write something (a comment, a status, a tweet, a caption) on a social media platform, only to erase and begin again, in order to adjust your wording or tone? Have you ever started to write something and, in the end, decided not to post at all? Have you considered uploading a photo only to hesitate or stop because it doesn’t portray you or someone else in the best light?

As people and professionals who are dedicated to unearthing human truths from the respondents we connect with, we need to not only be aware that mediawashing exists, but to actively fight against it in our data collection processes. Whether we are scraping Facebook or Twitter or running a community, the risk that we, as researchers, only see the edited and refined version of someone’s life, their preferences, their behavior or their opinions is real. But so is the opportunity for us to allow our respondents the freedom of candid, open conversations where honesty is more valued than socially desirable responses.

So now it’s time we have an honest conversation among ourselves: how do we combat the effects of mediawashing in our research practices?

Click here to read other posts in this series.

Dec 02, 2013
 

Posted by Karen Schofield, Innovations Director at Join the Dots, UK.

If you ask a consumer walking out of a supermarket why they’ve got so much junk food in their trolley (admittedly, you might want to rephrase that rather than sounding like you’re accusing them, or they’ll never stop to do an interview), they’ll probably give you a rational, and likely very plausible, reason. Probably something like ‘ready meals are convenient’, or ‘the kids like them’, or ‘I was in a hurry’. And to some extent, this might be true, but what they probably won’t be able to tell you is that if they were hungry when they got to the supermarket, the chances are that their hunger, a ‘hot’ – or emotional – state, took over and they probably would have stuck more closely to their list if they’d gone shopping on a full stomach.

We’ve all been there – we’ve skipped lunch and someone passes by our desk with a cake which is so much more tempting when we’re hungry, even if we’re trying to watch what we eat. Giving in to temptation is part of what makes us human, as we’re often (and more than we might think) governed by our emotions.

By the same token, a consumer ordering a meal in a restaurant probably won’t be aware of the anchors within the menu design which affect their choice – like the really expensive steak which makes the standard (lower priced) steak look significantly better value in comparison, or the way they’ve gone for the second least expensive wine, because they’re using price as a heuristic for quality (but don’t want to spend too much or look too flashy with the dearest wine). And it’s not just us mere mortals who aren’t as rational as we might expect; even those paid big bucks to make fair, rational and sensible decisions have the same foibles – including (scarily) judges.

We’re at an interesting turning point for the industry, where our understanding of the multiple layers of influence on consumers continues to advance at a rate of knots. Meanwhile, the technology available, especially mobile devices and other tools like wearable cameras, gives us more opportunities than ever to get closer to decisions as they’re being made. Joining the dots of this knowledge and tech enables a much better appreciation of the way our minds are influenced, from the people we’re with, to the physical (or digital) environment, to the way we’re feeling at the time events are unfolding, and leads us to a place where decision making starts to make a lot more sense, and so becomes more insightful and actionable.

So the opportunities are all about understanding what ingredients we need to use, and then tweaking the recipe to get it right. The basic recipe contains some combination of a big dollop of context, a cup of observation, a handful of self-reflection and a pinch of seasoning, but needs to be adapted according to taste. Then it’s about giving it a big stir and baking the mixture until it’s cooked right through. We shouldn’t be afraid to try new ingredients, or experiment with different flavour combinations to see what works best, as the best chefs will tell you. (Well probably. I don’t actually know any chefs. But I know a lot of other experimenters who would agree, although they don’t work in cake-related industries and the metaphor falls down, so I’ve made an assumption.)

If the bake is the analysis, then we need to make sure the decoration is top notch too. We need to present a beautiful cake, a beautiful story for our clients. A cake so tempting it makes our audience want to devour it, without having to worry about being in a hot state or watching their weight – it’s not a real cake after all. Or maybe it is, I’ve never tried making an actual debrief cake, the closest I’ve been is taking mince pies to a presentation, maybe I’ll put that on my list of things to try… And we’d do well to remember that we don’t stop being emotional decision makers when we walk into work in the morning, which is why creating stories with our research findings is so much more impactful than boring the pants off everyone with detail and 200 charts. (I’ve already ranted about that recently, so I won’t go over it again here.) Otherwise it’d be like baking a fantastic cake, jam packed with flavour and punch, that no-one will want to go near. And what a waste that would be.

Unfortunately, there are lots of yukky-looking cakes out there. The challenge is to create one our clients want to eat. Let’s bake!

At the risk of mixing my metaphors too much, that’s pretty much our philosophy at Join the Dots – we mix up different ingredients – or dots, if you will – to understand the bigger picture. Hence our name. So whatever your metaphor – whether it’s baking, dots, or something else – whatever you do, join them up, and when you’ve done that, don’t forget the icing on the cake.

Click here to read other posts in this series.

Dec 01, 2013
 
Posted by Jane Frost, CEO of MRS, The Market Research Society, UK.
 

Market research has a great future if it is brave enough to change. This was the challenge laid down by MRS Patron and dunnhumby co-founder Clive Humby to a packed committee room at the House of Commons last week. The occasion was the MRS-sponsored debate which asked whether big data was the death knell for market research.

The challenge is valid. Last year MRS commissioned PwC to produce a report on the size of the market for research in this country. It was deliberately called “the Business of Evidence”, because I believe that only collectively, and only by defining ourselves by our client value, can we build on what has been historically, and remains as we speak, a world-leading sector.

We need to adopt the language of the people who pay us. We should not be asking our clients to do our work for us in promoting the value of what we do. You rarely hear the finance director defining himself by his accountancy qualification. I have rarely heard a marketing director do so either. So how come we as a sector manage to promote so many labels which are of relevance only internally?

We are a service industry which develops the intellectual capital that businesses and policymakers need to take decisions. The customer understanding supplied by us can transform businesses, increase revenues and cut costs. That customer understanding can come from any source. So-called big data, for example, is just one supply stream. As a sector we should embrace it, use it and shape it by our standards. To our clients “big data” is a shiny new toy: one they know will be fun, but they don’t quite know how to use. By running scared, or by ignoring its glitter and promise, we do start to render ourselves out of touch. Big data is not even new; I can certainly make a case for it going back to the Magna Carta, and many of you may argue that the census was its real genesis.

As I write this, the label “big data” is showing some signs of going out of fashion. However, data analytics teams are growing, and if we can’t prove the value of professionalism and of creating an integrated customer view using all the knowledge streams at our disposal, current research and insight teams may be renamed customer data teams.

We have some key messages to deliver, and we are more than capable of doing so. We can own the use of data as a research methodology rather than an independent idea. My own experience in speaking to clients shows that they welcome an authoritative contribution to the data debate. Collectively we need to speak with one voice on four key messages:

1) Quality: reinforcing the value of having trained and qualified professionals working for you. MRS’s recent successes in, for example, gaining government recognition for the importance of accreditation in research procurement show that this can be done.

2) Managing data risk: the use of personal data, and of big data in general, is potentially a significant risk to clients. There are the legal and ethical risks, the increasing threat of legislation, and the increasing potential cost of large datasets without a defined value or use. Many people do not recognise how much effort needs to go into creating reliable data. Misuse of personal data, and a general decline in trust, are starting to create new “hard to reach” consumers: increasingly high-value groups who work at avoiding identification. I believe that personal data is potentially a material risk that should be on the radar of every company’s audit committee.

3) Corporate social responsibility: the management of personal data needs to have the same value as the management of ingredient sourcing and environmental impact. Unilever’s Polman believed that procurement would be an important part of brand in the future; let’s ensure this includes the procurement of data.

4) Controlling the question and the costs: helping clients understand the questions that their data should be answering will clearly have a cost benefit and will improve the utility of their data investments. This is a key role for qualitative research, for example, but it needs explaining.

We have first-mover advantage in the Fair Data mark. Use it. We know that it is a good way into clients when used to address these issues. In the UK we are world leaders in research training and accreditation, and MRS will shortly launch a CPD scheme. If we collectively support this, it will become more important to clients.

We believe the UK market for evidence is worth £3 billion. Statistics show it is back on the road to growth. To meet the opportunity and the challenge we need collectively to adopt data sciences as our own, addressing misconceptions about the status of data and the best way to exploit it.

Click here to read other posts in this series.

Nov 29, 2013
 
By Peter Harris, Managing Director, Vision Critical Asia Pacific.

I’ve had the opportunity to attend a few MR and marketing industry conferences in Australia, North America and Asia over the past 12 months. As always, these conferences are designed to scare the living daylights out of marketing and research professionals. They highlight how much things are changing: consumers are more empowered than ever, technology is the driving force, clients are demanding more, faster, for less, and there is a fast-flowing giant river of information (big data). In short, they are driving home the fact that the Revolution is on, i.e. “If you don’t like change, you will like relevance less”. In general I think this is right. But each of us has a chance to make a difference.

As a global profession, our biggest opportunity and biggest threat will be defined and determined by how much we ourselves are willing to be flexible in a digitally driven world. We need to find ways to keep up with change and feel comfortable in a land where we don’t know what is around the corner. It’s hard for many MR professionals to do this (as we love to be in control and to understand), but we need to try.

It’s a cliché now to say the world is changing quickly, but it is. MR is driven by speed, agility, ROI, obtaining answers using multiple data sources, and real-time reporting. The biggest threats I see for MR in this world include:

  • Ignoring new players/experts, or not letting them into our tent where we could learn from and collaborate with them. We also need to co-create the new privacy world, convince governments of the benefits and ensure all players follow the rules; otherwise we all risk being shut out in a world where customers do want a say in how things are.
  • If we continue to be obsessed with representative samples in a world where these are virtually impossible to achieve, and do not take advantage of, and find ways of using, new sample sources that are well profiled.
  • If companies continually try to make all of their money on fieldwork: with B2B sample sources like LinkedIn, improving customer databases and the growth of insight communities, the days of high-margin fieldwork are surely short-lived.
  • If we don’t change our approaches to contacting people so that we fit into their lives rather than interrupting them. Our contact with customers, consumers and citizens needs to be shorter and more engaging, and we need to give back once they share with us.
  • If we fail to highlight and monetise our real expertise, which is organising and analysing customer or consumer responses (however they are collected) and uncovering real answers to business problems. This does not mean simply reporting what was stated; we know it is about understanding what was meant.
  • If we don’t take advantage of the benefits that technology solutions can bring to MR.

There are however many exciting opportunities to balance out the threats including:

  • Making the most of mobile and new forms of sample to understand people in the moment and how they live.
  • Leveraging technology to understand the unconscious, reduce timelines, deliver more for less and more frequently, and develop a longitudinal view of customers that helps us put the pieces together as to why things happen.
  • Finding ways to tell more stories that highlight the ROI of MR investment and the impact of getting the customer voice into the organisation.
  • Working more cooperatively and developing trust between clients and agencies, and between agencies that can complement each other.

I’m extremely positive about our profession’s future and most global studies say that MR professionals want to change. Consumer empowerment and putting the customer at the centre of decision making is a shift, not a fad, so in simple terms the market is heading towards us, and we need to be flexible as we continue to evolve.

Click here to read other posts in this series.

Nov 29, 2013
 

Posted by Tiina Raikko, Director, Fuel, Australia.

These are some of the opportunities. And in many ways they have been what clients have wanted since way back when. The only difference is that technology and the digital age have made achieving this more realistic. It is fair to say that the research industry has been responding. Compared to 20 years ago we can get quality research faster and cheaper – the holy trinity which seemed so impossible back then. Simply moving many traditional tools and methodologies online has achieved this. The opportunity remains to look at our methodologies and approaches and ask “how do we do this even faster and more cost-effectively?”

Speaking from an FMCG perspective, the pressure on the research space continues to increase. There is less and less budget, and less time to turn work around. We can’t create time, so sometimes a quick and, if not dirty, then slightly grubby method is better than nothing. Doing it right is best, but if we don’t have the time then it’s academic.

With less money to spend we need to pick the most important projects, and we need to be clever about how we use the budget for greatest effect. Faster, cheaper, less perfect solutions are sometimes the answer. We are an industry that has always prided itself on trying to do high-quality work. We should never lose sight of that, because the skill and rigour we bring will always be important. Increasingly, though, we need research that is ‘fit for purpose’: of an acceptable quality rather than necessarily the highest quality, and more collaborative with consumers than rigorous in design. We know that when we can’t have ‘perfect’ in design, we can get ‘good enough’ faster and cheaper, which are often the overriding drivers. Hence the emergence of the SurveyMonkeys, robo-callers, Field Agents…

Obviously not everything can be done in the blink of an eye. Some research will continue to require greater rigour and thinking time which is only right. I don’t want my U&A or my strategic shopper qualitative work done overnight because I recognise it can’t be done well enough.

The good news, of course, is that some of this represents incremental work. These new methods have allowed us to test and check where money, but particularly time, simply didn’t allow in the past.

In a world with less budget there is also more focus on return. How did we use the last piece we did? What decisions did we make as a result? Did it predict our success/share accurately? When we get to the bigger, more complex research, where we commit the budget and the time we didn’t really feel we had, there is more demand for ‘quality’ in terms of our ability to predict success/behaviour and ensure ‘stickiness’ in the business. ‘We spent quite a bit on some research but we didn’t really use it’, or ‘it didn’t really perform in the market like they said’, doesn’t bode well for the next project of its type.

In many ways the opportunities for the industry are the same as they have always been… be faster, cheaper, better predictors, but recognise that not all needs require the same ‘standard of finish’. Research doesn’t always need to be perfect; it needs to be fit for purpose.

What has changed more significantly are the threats. The online world, technology and social media have offered up new opportunities for us to connect more directly with our consumers without necessarily working through researchers. New competitors have emerged outside of traditional research to offer ‘fit for purpose’ solutions that don’t necessarily conform to our traditional ideas of good research. The opportunity for the industry is to continue to embrace all the new technology, be open to less perfect approaches, bring what rigour we can to it (which is more than some currently offering these services can do), and manage our expectations and understanding of what we are and are not getting… and keep doing the ‘proper’ stuff well ☺

Click here to read other posts in this series.