Jun 28 2014
 

This week’s Economist has an interesting article about the founders of Napster (Shawn Fanning and Sean Parker) and the difficulty they have had in coming up with a successful second presence in the market. Towards the end of the article the Economist refers to one of my favourite terms in the area of new business, “First-mover disadvantage”.

First-mover advantage?
Whenever I meet start-ups, or people back from the latest hi-tech innovation fest, the talk is often about first-mover advantage. The idea is that a company gets in first and secures a long-term advantage. However, although there are examples of first-mover advantage (e.g. when a first mover can tie up the market for scarce materials), it is much more common to see first-mover disadvantages.

First-mover disadvantage
Examples of first-mover disadvantage go back at least as far as the printing press (noting that Gutenberg died bankrupt in the 15th century). The Economist article quotes Motorola and the mobile phone, along with Netscape and the browser. To this list we could add:

  • AltaVista had first-mover status in search engines, but was overtaken by Yahoo! and then Google.
  • When personal computers first appeared, the early advantage was with companies like Commodore, then Apple, then IBM. Now the PC is largely a commodity item, with a range of manufacturers, and most of the early leaders are no longer in the market (Apple is still making personal computers, but has a relatively small market share).
  • Henry Ford appeared to have secured a first-mover advantage in 1908 with the Model T, but was overtaken in the 1920s by Chevrolet.

The awareness of first-mover disadvantage dates back a long way; for example, here is a Forbes article on it from 2007. In 2001, the Harvard Business Review reported a study that found that first movers in consumer goods and industrial goods tended to have a 4% lower ROI than later entrants to the market.

There are numerous causes of first-mover disadvantage, most of which relate to second-mover advantage. The second mover can see what is working, and can aim to meet the needs the incumbent leaves unmet (which often means cost, but can also mean efficacy, range, style, etc.).

Another source of first-mover disadvantage is that a first mover making money from its current model often neglects the need to change, to disrupt itself, leaving it open to being disrupted by others.

So the next time somebody is pitching a product, investment, or job opportunity, watch out for that use of first-mover advantage.

In-and-Out
When somebody is talking about first-mover advantages, it can be a good idea to check whether it represents an in-and-out opportunity. An in-and-out opportunity is one where there is a short-term first-mover advantage, and the people running it understand that the optimum strategy is to ramp up quickly, generate revenues, and then get out.

Other Examples?
What are your examples of first-mover disadvantage?


 

Jun 25 2014
 

Lots of people seem to have a big down on jargon, but is that fair or useful? In Research-Live, Lucy Hoang of Northstar asks whether jargon is a necessary evil. In her post, Lucy pointed out some of the downsides, highlighted the uses of jargon, and shared some good points about helping newcomers get to grips with key terms, for example the use of COB for close of business.

Some of the comments on her post reiterated the view that all jargon was bad. However, I think some jargon is both necessary and indeed helpful.

In terms of definitions, my starting point is the Oxford English Dictionary (the OED to its fans), which defines jargon as “words or expressions used by a particular profession or group that are difficult for others to understand.” Jargon is also of relevance to sociolinguistics, where it is one of the markers of a speech community: a group whose use of language differs from the wider community’s, but facilitates communication within the group. The group can be as broad as a profession, or as narrow as a family or group of friends.

Facilitating communication
The starting point for any aspect of communication is whether it helps the recipient receive and understand the message the speaker/writer is attempting to convey. This means the speaker/writer must choose words that ‘work’ for the purpose of the message and that ‘work’ for the audience. Any use of jargon needs to be based on a reliable assumption that the recipient is going to understand it.

Bad Jargon
I do not think all jargon is good; indeed, it may be that the majority of jargon is not. When I talk about bad jargon I mean anything that confuses or distracts the audience, or reduces the ability of the message to be communicated.

Examples of bad jargon include:

  1. Language which is now out of date. For example, I sometimes see notes that ask the recipient to revert, or where the recipient of my message promises to revert. Revert used to be a common way of saying ‘reply’ or ‘respond’; however, in the UK/USA/Australia few people under 50 are familiar with this usage. Many Latin terms fall into this out-of-date category, at least in the worlds of marketing and insight; for example, inter alia (amongst other things) is no longer readily and widely understood.
  2. Jargon from other professions. If you are talking to market researchers and marketers, then jargon from other fields should be avoided, or explained. For example, the Big Data acronym ETL is likely to confuse, even when spelled out as Extract-Transform-Load. In these cases, if ETL is important (and in these days of big data it is of growing relevance) then it needs to be explained.
  3. Jargon that is used to make the speaker appear smart or trendy, as opposed to helping communicate the message. For me, two recent examples are “swim lane” (a specific area of responsibility within the business) and “tiger team” (a team of specialists, often technical IT specialists).
  4. Jargon that reinforces discrimination or bad taste. Many people feel that sporting analogies tend to be discriminatory, in the sense that they are much more familiar to men than women, and represent an ‘in group’ (i.e. men) and an ‘out group’ (i.e. women). Drinking the Kool-Aid (i.e. unquestioningly following the company line) is a fairly tasteless reference to the Jonestown Massacre. And “opening the kimono” seems to me both sexist (otherwise why not open the yukata – worn by men and women) and somewhat creepy!

Good Jargon
Good jargon includes things that:

  1. Make the meaning clearer
  2. Reduce the effort required of the reader
  3. Place the emphasis of the message in the right place
  4. Make the communication more engaging

Making the meaning clearer
When talking about survey questions to people who know about survey questions, the jargon term “grids” is much clearer than a plain-English description. The jargon term random probability sample is usually much more precise than a paragraph explaining what it is. The term verbatims is rejected by Microsoft Word, because it is a truncated form of something like “verbatim responses” or “verbatim comments”, but amongst users of research it is very clear and very precise – it means the comments from research participants, collected as part of a research process.

Reducing the cognitive load
Hey, cognitive load is certainly jargon. I could have changed my headline to say “Reducing the effort required by the reader to understand the message by recognising that recent developments in neuroscience have indicated that analytical thinking processes require considerable effort and that the brain typically tries to reduce work, which can result in the brain ignoring elements of the message or of jumping to conclusions.” But, I would argue my heading is better for the people I am expecting to read this post.

Referring to CATI instead of spelling it out (or instead of a plain English description) reduces the reader’s cognitive load. RDD (Random Digit Dial) is an even better example, when you are sure the reader is familiar with the term. So, I would write RDD in a paper for sampling geeks, I would write random digit dialling if writing for a wider research audience, and outside of research I would probably not refer to this degree of detail.

Putting the emphasis in the right place
The fairly new term C-suite refers collectively to people like CEOs, COOs, and CFOs (Chief Executive Officer – i.e. the boss; Chief Operations Officer – often the person running the parts of the company that make it work; Chief Financial Officer – the person running the financial side of the business, in particular the accounts). If the reader can reliably be assumed to know the term C-suite, then a sentence can be written about the need to communicate with the C-suite without having to spell out the detail of who, which matters when the focus should be on the message.

One of the key elements in communication is where the emphasis lies. When the jargon is understood by the recipient, it allows the writer/speaker to focus on the message, rather than giving equal weight to every part of it.

Making the communication more engaging
Plain English definitely has its role. But nobody would have watched Shakespeare’s plays if he had used plain English, nobody would have read Hemingway if he had used plain English, and nobody would listen to the messages of people like Seth Godin, Tom Peters, and Daniel Kahneman if they had used plain English. Busy people, such as marketing and insight professionals, expect messages to be engaging; they want storytelling, they want creative use of images and language. These things tend to require jargon, metaphors, analogies, and idiom. Plain English runs the risk of not being listened to.

Some disparaged examples
Forbes is running a poll about jargon, seeking to identify the most annoying examples. So, here is my defence of some of their contenders:

  • Take offline: This means to stop discussing a topic in a group situation so that it can be covered later, perhaps in a one-to-one discussion. Why do I like it? Because it allows a problem to be defused politely and easily. It is great when two people in a meeting disagree and the rest of the room do not wish to be involved until the two people in dispute have come to a single view.
  • Full service: In marketing and market research this term is becoming increasingly useful, as there are a growing number of companies who are not full service, choosing to focus on one service, one medium, or one method.
  • Ecosystem: This is a relatively new term in the marketing and insight world. It means looking at how things work in total, rather than looking at just one aspect. The mobile ecosystem, for example, refers to telephone service providers, handset manufacturers, users of phones, mobile advertisers, app producers etc. The reason to use the term ecosystem is to encourage the listener/reader to embrace the whole picture, not just the bit they have historically looked at.
  • Scalable: A scalable business or method is one that can be increased in size without major problems. A good example of a business that is not scalable is face-to-face qualitative research: to double the size of the business requires twice as many people, and the people are hard to find. A good example of a scalable business is one that depends on technology, where doubling sales might require very few extra people. When assessing new businesses or opportunities, one of the key issues is whether the option is scalable.
  • Bleeding edge: The bleeding edge is where something is so new that it creates stresses and problems precisely because of its newness. For example, people who use Google Glass at the moment are going to meet challenges from legislators, shop owners, the general public, and the technology itself. The benefit of the term bleeding edge is not only that it describes a phenomenon succinctly, but also that it embodies a warning. For most people the bleeding edge is not a good place.

Remember
The key issue, as Lucy makes clear in her post, is making sure that the writer takes the reader into account. One of the great bits of advice in Lucy’s post is that we need to help newcomers to our industry learn the key terms and we should avoid stigmatising people who have not heard of: COB, CFO, IPO, CTR, or TLA.

PS: the acronym TLA is a joke – it stands for Three-Letter Acronym.


 

Jun 22 2014
 

We like to think of ourselves as rational creatures and we like to think we can trust our ears. However, watch the video below and be ready to change your mind.

The McGurk effect, the understanding of which dates back to 1976, shows how hearing and vision interact with each other. One of the interesting things about this effect is that even once you are aware of it, you still experience it.

From a marketing and market research point of view, the key messages are:

  1. Changing the sound can change the perception, which means that the real sound should be tested as part of the research.
  2. More generally, the behavioural sciences, such as behavioural economics and neuromarketing, are changing our understanding of how marketing works and how it should be evaluated.
  3. Perception is not reality, which in terms of persuasion means that reality is not always relevant.
  4. People exposed to this sort of effect may be tricked, but if they are, they are likely to be angry once they become aware – so include checking for post-purchase remorse as part of the research.

Can you suggest other similar effects that help remind marketers and market researchers that they can’t trust their model of the rational consumer?


 

Jun 21 2014
 
Nissan Small Car

A very large part of market research is based on asking people questions, for example in surveys, focus groups, depth interviews, and online discussions. In general, people are very willing to answer our questions, but the problem is that they will do it even when they can’t give us the right answer.

At IIeX last week, Jan Hofmeyr shared the results of some research where respondents had been asked which brand they buy most often, and he compared the answers with their last 3 and last 6 purchases from audit data. He found that 68% of people had not bought their claimed ‘most often’ brand within their last 3 purchases, and 58% had not bought it within their last 6.
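To make this kind of check concrete, here is a minimal Python sketch – not Hofmeyr’s actual method – that compares each respondent’s claimed ‘most often’ brand with an audited purchase history. The respondent IDs, brands, and figures are all hypothetical, and ‘had not bought’ is interpreted here as the claimed brand being absent from the last N audited purchases.

```python
# Hypothetical data: claimed "most often" brand per respondent, and
# their audited purchases (most recent first).
claims = {"resp_1": "BrandA", "resp_2": "BrandB", "resp_3": "BrandA"}
purchases = {
    "resp_1": ["BrandA", "BrandC", "BrandA", "BrandA", "BrandB", "BrandA"],
    "resp_2": ["BrandC", "BrandC", "BrandA", "BrandC", "BrandC", "BrandC"],
    "resp_3": ["BrandB", "BrandB", "BrandA", "BrandB", "BrandB", "BrandB"],
}

def share_not_buying_claim(window: int) -> float:
    """Share of respondents whose claimed brand does not appear in
    their last `window` audited purchases."""
    misses = sum(claims[r] not in purchases[r][:window] for r in claims)
    return misses / len(claims)

print(f"Claimed brand absent from last 3 purchases: {share_not_buying_claim(3):.0%}")
print(f"Claimed brand absent from last 6 purchases: {share_not_buying_claim(6):.0%}")
```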

The video below is designed for entertainment, but it illustrates the bogus answer problem really well:

There are two key reasons why asking questions can produce bogus answers:

  1. Social desirability bias. People are inclined to try to show themselves in the best possible light. Ask them how often they clean their teeth and they are going to want to give an answer that makes them look good, or at least does not imply they are lazy or dirty. In the video, many of the people know that music fans are supposed to know about music, so they don’t want to appear dumb.
     
  2. Being a poor witness to our own motivations and actions. Writers like Daniel Kahneman, Dan Ariely, and Mark Earls have written about how people tend to be unaware of how they make decisions. Some of the people in the video, primed by the question to assume they know about the band, may be deceived by their own thought processes, with what they do know being used as a pattern generator to produce plausible thoughts.

Of course, in addition to these two reasons, some people simply lie – but in my experience that is a tiny proportion (when seeking the views of customers and the general public) compared with the two reasons listed above. However, the problem of conscious lies increases if incentives are offered.

One way to reduce the number of false answers is to make it much easier for people not to answer a question – ideally without their having to say “I don’t know” – and to let people guide you to the strength of their answer. Watch the video again and you will see that many of the people being interviewed are trying to signal that they don’t really know the bands, for example “I don’t know any of their music but I’ve heard from my friends that ….”. For the sake of the interview and the comedy situation, the interviewer presses them into appearing to know more. In an information-gathering process we should take that as a cue to back off and make it safe, or even ‘wise’, to avoid going any further.

Another important step is to avoid asking questions that most people won’t ‘know’ the answer to, such as “What is the most important factor to you when selecting a grocery store?”, “How many cups of coffee will you drink next week?”, “How many units of alcohol do you drink in an average week?”.

If you’d like to know more about asking questions, check out this presentation from Pete Cape.

The problems with direct questions are one of the major reasons that market researchers are looking towards techniques that use one or more of the following:

  • Implicit or ‘neuro’ techniques, such as facial coding, implicit association, and voice analytics.
  • Passive observations, i.e. recording what people actually do.
  • In-the-moment research, where people give their feedback at the time of an event, not at a later date via recall.


Jun 17 2014
 

Most samples used by market research are in some sense the ‘wrong’ sample. They are the wrong sample because of one or more of the following:

  • They miss people who don’t have access to the internet.
  • They miss people who don’t have a smartphone.
  • They miss the 80%, 90%, or 99% who decline to take part.
  • They miss busy people.

Samples that suffer these problems include:

  • Central location tests miss the people who don’t come into central locations.
  • Face-to-face, door-to-door interviewing struggles with people who tend not to be at home, or who do not open the door to unknown visitors.
  • RDD/telephone misses people who decline to be involved.
  • Online access panels miss the 95%+ who are not members of panels.
  • RIWI and Google Consumer Surveys miss the people who decline to be involved, and under-represent people who use the internet less.
  • Mobile research typically misses people who do not have a modern phone or a reliable internet package/connection.

But, it usually works!

If we look with an academic eye at what AAPOR calls non-probability samples, we might expect the research to usually be ‘wrong’. In this case ‘wrong’ means giving misleading or harmful advice; similarly, ‘right’ means giving information that supports a better business decision.

The reason that market research is a $40 billion industry is that its customers (e.g. marketers, brand managers, etc.) have found it is ‘right’ most of the time. Which raises the question: “How can market research usually work when the sample is usually ‘wrong’?”

There are two key reasons why the wrong sample gives the right answer and these are:

  1. Homogeneity
  2. Modelling

Homogeneity
If different groups of people believe the same thing, or do the same thing, it does not matter very much who is researched. As an experiment, look at your last few projects and split the data by region, by age, or by gender. In most cases you will see there are differences between the groups, often differences big enough to measure, but rarely big enough to change the message.
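As a minimal sketch of that experiment, here is one way to run the splits with pandas; the data, column names, and the threshold for ‘big enough to change the message’ are all hypothetical.

```python
import pandas as pd

# Hypothetical project data: one row per respondent.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M"],
    "region": ["North", "South", "North", "South", "South", "North"],
    "rating": [7, 8, 7, 6, 8, 7],  # e.g. liking of a pack design, 0-10
})

for split in ["gender", "region"]:
    means = df.groupby(split)["rating"].mean()
    spread = means.max() - means.min()
    # A measurable difference is not necessarily an actionable one;
    # here we arbitrarily treat a gap under 1 scale point as noise.
    verdict = "message unchanged" if spread < 1.0 else "message may differ"
    print(f"{split}: {means.round(2).to_dict()} -> {verdict}")
```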

The reason there are so often few important differences is that we are all more similar to each other than we like to think. This is homogeneity. The level of homogeneity increases if we filter by behaviour. For example, if we screen a sample so that they are all buyers of branded breakfast cereal, they are instantly more similar (in most cases) than the wider population. If we then ask this group to rank 5 pack designs, there will usually be no major differences by age, gender, location, etc. (I will come back to this use of the word usually later.)

In commercial market research, our ‘wrong’ samples usually make some effort to reflect the target population: we match their demographics to the population, and we screen them by interest (for example heavy, medium, and light users of the target brand). The result is that, surprisingly often, an online access panel or a Google Consumer Surveys test will produce useful and informative answers.

The key issue is usually not whether the sample is representative in a statistical sense, because it usually isn’t; the question should be whether it is a good proxy.

Modelling
The second way that market researchers make their results useful is modelling. If a researcher finds that their data source (let’s assume it is an online access panel) over-predicts purchase, they can down-weight their predictions; if they find their election predictions understate a specific party, they can up-weight the results. This requires having lots of cases, and it assumes that something that worked in the past will work in the future.
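As a minimal sketch of that kind of correction – assuming, as noted above, that a bias observed in the past still holds in the future – consider a panel that has historically over-predicted purchase. All figures and names here are hypothetical.

```python
# Historical calibration: the panel predicted 20% trial for past
# launches, but the market delivered 15%, giving a ratio of 0.75.
historical_predicted = 0.20
historical_actual = 0.15
correction = historical_actual / historical_predicted

def calibrated(prediction: float) -> float:
    """Down-weight a raw panel prediction by the historical ratio."""
    return prediction * correction

# A raw claimed-purchase figure of 24% becomes an adjusted 18%.
print(f"{calibrated(0.24):.0%}")
```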

So, what’s the problem?

The problem for market research is that there is no established body of knowledge or science to work out when the ‘wrong’ sample will give the right answer, and when it will give the ‘wrong’ answer. Some of the cases where the wrong sample gave the wrong answer include:

  • The 1936 US presidential election, when a sample of 2 million people failed to predict that Roosevelt would beat Landon.
  • In 2012, Google Consumer Surveys massively over-estimated the number of people who edit Wikipedia – perhaps by as much as 100% – see Jeffrey Henning’s review of this case.

My belief is that market researchers need to get over the sampling issue, by recognising the problems and by seeking to identify when the wrong sample is safe, when it is not safe, and how to make it safer.

When and why does the right sample give the wrong answer?

However, there is probably a bigger problem than the wrong sample. This problem is when we use the right sample, but we get the wrong answer. There are a wide variety of reasons, but key ones include:

  • People generally don’t know why they do things, and they don’t know what they will do in the future, but they will usually answer our questions anyway.
  • Behaviour is contextual; for example, choices are influenced by what else is on offer – research is too often context-free, applies the wrong context, or assumes the context is consistent.
  • Behaviour is often not linear, and quite often does not follow the normal distribution – but most market research is based on means, linear regression, correlation, etc.

A great example of the right sample giving the wrong answer is New Coke. The research, evidently, did not make it clear to participants that the new flavour was going to replace the old one, i.e. that they would lose what they saw as “their Coke”.

In almost every product test conducted there are people saying they would buy it who would certainly not buy it. In almost every tracker there are people saying they have seen, or even used, products they have not seen – check out this example.

The issue that researchers need to focus on is total error – not sampling error, not even total survey error, but total error. We need to focus on producing useful, helpful advice.


Jun 13 2014
 
Steve Wills

On July 16, Steve Wills will be giving a NewMR lecture on Insight Management and the various initiatives to make it a recognised profession – click here to find out more about the lecture.

One of those initiatives is the creation of an MSc Insight Management degree by the University of Winchester in the UK (to the south-west of London). Below you can read more about the course.

 

MSc Insight Management

The University of Winchester’s MSc Insight Management is designed for working managers and is delivered on a part-time, weekend basis. It will appeal to managers from diverse business-support functions such as market research, data analytics, and competitor and market intelligence. The degree gives students an understanding of the Insight Management function and equips them with key skills in insight generation and delivery for business decision-making.

The degree develops students’ ability to critically evaluate the information needs of an organisation and the potential value that information can generate. It explores the ways in which organisations make sense of the information they generate, examining both consumer and business decision processes. It examines the barriers to getting information used when decision makers are swamped with diverse and often conflicting data, and it reviews approaches to generating insight through creative-thinking techniques, within both divergent and convergent processes.

Being able to convey and articulate the meaning of insight at all levels, from the Board through to those working at the sharp end of organisations, forms a major part of the degree.

You can find out more about the course from the Winchester University website.

Winchester Business School is a signatory to the United Nations Principles for Responsible Management Education, and the programme has been designed to fit within this framework.


Jun 11 2014
 

Guest post from Gaelle Bertrand, Client Director, Brand Insight, Precise, UK.

This post is based on material Gaelle contributed to the #IPASocialWorks ‘Measuring Not Counting’ project – and is slightly different to most of the other posts in this series (click here to see a list of the posts in the series) but it provides a good overview of using social media to evaluate media campaigns.


Using social media to measure traditional media campaigns

Introduction
Measuring the effectiveness of communication campaigns through traditional media such as TV advertising has long been the remit of quantitative researchers across the globe. Representative sample surveys aimed at measuring the public’s awareness of a campaign, recall of its messages and, more importantly, whether it has shifted the needle in terms of brand awareness and perceptions are the norm. However, the advent of social media, and the unprompted brand mentions it yields, means that researchers now have a unique opportunity to get a read on most campaigns’ effectiveness. So what does social media analysis bring to the equation?

Strengths and weaknesses
One of the key strengths of social media is its immediacy, so it is an excellent way to get an early read on what people think of your campaign within the first hours of its launch.

The fact that posts are self-generated and can be mined retrospectively is also a key asset. It means that researchers do not have to rely on respondents’ recall, as with more traditional methods, and can potentially measure true unprompted awareness from the level of mentions the campaign receives in social media. It also means that benchmarks of awareness and perceptions prior to the campaign can be easily derived after the campaign has ended as there are no time constraints. This is a key advantage that traditional research does not have.

Social media can also reveal the most salient aspects of the campaign without respondents being prompted, which you could argue is a purer reflection of consumer perceptions and attitudes towards the campaign – and ultimately of how they affect brand image – than those derived through traditional research techniques.

Social media does not just enable measurement, though; it also provides an unprompted, in-depth understanding of initial reactions to a campaign that could otherwise only be replicated through qualitative research techniques.

While it all sounds very positive so far, there is a key aspect which must not be forgotten: social media’s representativeness (or some would say lack thereof) of the public’s opinion.

Despite the fact that the reach of social media is expanding daily, and that Facebook has a reported active UK user base of over 31m and Twitter 10m, the demographic representativeness of this audience is likely to be called into question.

Many would argue that as long as this fact is clearly used to contextualise and interpret the content of conversations, it becomes a secondary issue. It also strongly reinforces the need for social media not to be used in isolation from other data collection techniques that provide context. The bigger question, it seems, is whether the attitudes and perceptions expressed in social media conversations reflect those of a wider audience. There is strong evidence that they do, but piloting the approach before measuring any campaign is a must, both to create pre-campaign benchmarks and to validate the approach.

Best Practices

  1. Run a benchmark analysis prior to the campaign. This will be key to measuring any shifts in levels of conversation about the brand, but also existing attitudes and perceptions. This will also be a useful exercise to determine which metrics the campaign will be measured on. Using a 3-month time frame before the campaign is likely to smooth out any spikes driven by other events or campaigns.
     
  2. Build an intelligent search query. Using the campaign strapline or title will not be enough to gather relevant content. Use keywords which relate to key elements of the campaign (e.g. the central character and premise) but also keywords associated with the themes or topics broached. This will ensure that the range of content gathered reflects consumers’ own words.
     
  3. Apply sampling principles. The social media data set is vast and generally cannot be analysed in its entirety without significant resource investment. Intelligent sampling is therefore essential. Sampling can be done across the whole body of mentions, i.e. across all social media channels, using random sampling principles, or be restricted to one or all of the main consumer channels (Facebook, Twitter, YouTube) – see the sketch after this list.
     
  4. Remember that volumes and share of voice hide rich insights. While volumetrics are sometimes useful, they are not the be-all and end-all of social media analysis. The exercise is about measuring and not counting. This is why human analysis is important in this context.
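As a minimal sketch of the sampling step in point 3 above, here is a simple random sample drawn from a body of mentions; the mention list, seed, and sample size are all hypothetical.

```python
import random

# Hypothetical: identifiers for every mention matched by the search query.
mentions = [f"mention_{i}" for i in range(250_000)]

random.seed(42)  # fix the seed so the draw is reproducible
sample = random.sample(mentions, k=1_000)  # simple random sample, no replacement

print(len(sample), sample[:3])
```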

Key considerations
The increasing use by brands of hashtags, which serve as prompts to the campaign, somewhat removes the candid nature of social media conversations about these activities, and effectively tags ‘prompted’ mentions. This should be considered when analysing results, and such mentions analysed separately if appropriate. You also have to be prepared for the fact that your campaign may not be talked about by social media users at all. It does happen!


Jun 09 2014
 

Today I had the pleasure of taking part in a panel discussion with Lenny Murphy and Simon Chadwick, ably chaired by P&G’s Greg Rogers, as part of the Canadian MRIA’s annual Conference in Saskatoon. Only one of us, Greg, was actually in the room, or indeed in the country. Simon, Lenny and I all joined via webcams (Simon and Lenny from the USA, and me from the UK).

I look forward to hearing from the audience what they thought, but I really enjoyed it (find out more about the Conference via Annie Pettit’s blog). Virtual events are not unusual; indeed, NewMR was perhaps the first to pioneer them in the market research space. And I have dialled into events in the past, typically when something has gone wrong with the plans (for example, clouds of ash grounding planes). But this was the first time I joined a panel where all the guests intentionally joined by webcam.

I hope that face-to-face conferences will remain a core part of how the MR industry goes about its sharing, learning, and networking – but I think remote speakers and panels could be a growing part of the picture, especially as the technology gets better, and as we get better at using it.

In terms of content, my 4 key points were:

  • The MR industry has changed much more than most people recognise; the rise of mobile, the move to insights, and changes in the client context are both signs and causes of change.
  • Most big data investments are going to fail over the next 18 months. CMOs whose projects work will perhaps be rewarded, but the majority, whose projects fail, will surely suffer.
  • Marketing and market research are going to increasingly merge, and this is going to cause tensions.
  • My advice for somebody entering our industry? Find something to be really good at. If you are great at ethnography or data, at reporting or semiotics, at methodology or facial coding, you will find a good place in our industry.

And, of course, the panel all agreed that Canadians are very nice people, but also the home of some great researchers and some great innovations.

Jun 09 2014
 

Guest post from Kristin Luck, President and CMO at Decipher, USA.

Click here to see a list of the other posts in this series.


After spending my childhood on a farm in rural Oregon without television or even a touch-tone phone, I was determined to spend my early adult/post-University years as an ‘early adopter’. I spent much of the late 90s proudly sporting a Palm Pilot (then a Blackberry, then an iPhone) and becoming the go-to person in my circle of friends and colleagues for information about all things tech-related. I mastered LinkedIn. I thought I had this whole social media thing nailed. And then there was Facebook. And Twitter. And Instagram. And Pinterest. If you’ve ever tried to use all six (and these are just the six I’m active on) for personal use…or business use…or (even more challenging) both, what I’m going to say next may resonate with you – I absolutely flailed. My social media presence was a disorganized time suck and I backed away from the whole mess of it. When colleagues asked why I wasn’t active on Twitter and Facebook I said I didn’t have the time. Or that I just wasn’t interested. Or that I didn’t think social media worked for my business. The truth is that I did, I was, and it could. I just needed the right strategy.

Today I’m a social media junkie. I use LinkedIn daily to connect with prospective clients and colleagues. With over 3,400 Twitter followers I was recently named one of the site’s top 100 branding experts. I launched a Market Research group on Facebook that today is the largest in the industry with over 4,500 members. I’ve mastered using both Instagram and Pinterest.

At Decipher, we’re looking at social media as our primary marketing tool moving into 2015. We’re engaging with our clients and partners on social media more than ever before. We’ve learned that you CAN effectively market B2B on Twitter, Facebook, Instagram and Pinterest – it’s all about creating a strategy. It’s simply not enough to have a personal social media presence or a few social media sites up for your business. Without a clear social media strategy, you’ll struggle to increase customer engagement and, ultimately, sales.

To get the most out of your social media efforts, your strategy should include:

  • Determining which sites are most beneficial to post to and when to post to them
  • The types of content you can reasonably create and effectively promote (what’s sticky about you or your brand?)
  • Creation of a native storytelling experience
  • How to engage with current and potential clients online
  • Identifying the right metrics to use to measure your progress toward social media goals

Still stumped? We’re researchers – storytellers by trade. Think of social media as a storytelling platform for you and/or your brand. Talk to your audience. Don’t interrupt. Leverage pop culture. Cultivate your brand personality. Have a sense of humor. And above all else, be consistent and self-aware.

And follow me. @kristinluck @deciphertweets


Jun 03 2014
 

We at NewMR are keen to hear the different ways that market researchers approach social media. We are interested in the private use, the brand building use, and the research use. We have invited a variety of people to share their thoughts and you can read them by accessing the links below.

‘What social media means to me’
Click on the names below to visit other posts in the series.


Would you like to share your take on social media via a blog post on NewMR? If so, please email your suggested contribution (perhaps 300 to 800 words) to admin@newmr.org, and include your name, a photo, and a short description of yourself.