Nov 29, 2012
 

Most market researchers are familiar with the Rogers Adoption Curve, which divides the adopters of a successful new technology into Innovators, Early Adopters, Early Majority, Late Majority, and Laggards.

In a typical version of the curve, the proportions tend to be:

  • Innovators 2.5%
  • Early Adopters 13.5%
  • Early Majority 34%
  • And the slower two categories (Late Majority and Laggards) make up the remaining 50%.
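These proportions are not arbitrary: Rogers derived them from standard-deviation cut-offs on a normal distribution of adoption times. As a quick illustration (my own sketch, not part of the original article, using only the Python standard library), the rounded figures can be reproduced from the normal CDF:

    import math

    def phi(z):
        # Cumulative distribution function of the standard normal
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Rogers defines the categories by how many standard deviations before
    # or after the mean adoption time a person adopts.
    segments = {
        "Innovators":     phi(-2.0),              # more than 2 sd ahead of the mean
        "Early Adopters": phi(-1.0) - phi(-2.0),  # between 1 and 2 sd ahead
        "Early Majority": phi(0.0) - phi(-1.0),   # within 1 sd ahead of the mean
        "Late Majority":  phi(1.0) - phi(0.0),    # within 1 sd behind the mean
        "Laggards":       1.0 - phi(1.0),         # more than 1 sd behind
    }

    for name, share in segments.items():
        print(f"{name:15} {share:6.1%}")
    # Prints roughly 2.3%, 13.6%, 34.1%, 34.1% and 15.9%, i.e. the familiar
    # rounded 2.5 / 13.5 / 34 / 34 / 16 split.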

However, in Japan, in market research and perhaps beyond, I think the proportions in the Rogers Adoption Curve need revisiting. Data presented by Mr Hagihara (author of ‘Next Generation Market Research’) at a meeting of JMRX in Tokyo this week, showing the adoption of CATI in the 80s and 90s, suggest that Japan was slow to innovate in market research. More recent data presented by Mr Hagihara show that Japan was also very slow to start adopting online surveys. However, by 2011 Japan had the highest percentage of online research in the world: 40% of Japanese research in 2011, by value, was conducted online, according to JMRA and ESOMAR.

Talking with leading opinion formers in Tokyo this week, I came to the view that the Adoption Curve has a different shape in Japan. Innovators are quite rare everywhere, and this is particularly true in Japan. A key difference appears to be that there are fewer Early Adopters in Japan – far fewer than the 13.5% in the classic curve.

However, and in contrast, Japan seems to have more people in the Early Majority. The picture appears to be that initially Japanese market research suppliers and buyers are more conservative than their counterparts in the USA and Europe. But, once a technique reaches a tipping point, Japanese companies seem to move faster, enabling them to catch up and overtake more traditional countries, as they have done with online surveys. For me the interesting question will be whether the same picture holds true for research communities. These have been slower to take off in Japan, but there are signs that a tipping point is being reached, which might partly explain why almost 300 people turned up at three events in Tokyo this week to hear me speak about the future of research and the role of communities.


A Japanese translation of this article was provided by Mr Ryota Sano, Chief Executive Officer of TALKEYE INC and ESOMAR Representative for Japan.


Nov 27, 2012
 

Yesterday in Tokyo I attended two events (one run by the JMA and one by JMRX – sponsored by GMO Research) and a client meeting, and one specific question arose at all three. The background to the question lies in Japan’s experience with MROCs (in particular, short-term, qualitative research communities). Although some companies have been very successful, several others have not, and some clients are beginning to be worried about MROCs.

So, the question I was asked three times was “How do you create a good MROC in Japan?” By the time I had spoken to three audiences I had refined my answer down to three clear points:

  1. Good recruitment. A short-term, qualitative MROC (e.g. one month, 60 people) needs to be based on the right people. These people need to be informed about what they will be expected to do, they need to understand how to access the MROC, they need to be engaged with the topic (they might love the topic, hate the topic, be curious about the topic, have recently started using it, or perhaps have given it as a gift – but they need to be engaged).
  2. Good moderation is essential. Conversations do not just happen; they are the result of good introductions, good questions, good probing, and interesting tasks. Too many clients want to get onto the serious questions too quickly. But, just like in a focus group, trust and understanding have to be built first. The moderator should agree a clear community plan with the client, showing how the research needs will be met during the project.
  3. Good analysis. Some research agencies simply tell the client what the people in the MROC said – this is not helpful, because the client can read that themselves. Listing out and counting what was said in an MROC is not analysis. Analysis looks at a) what respondents meant, and b) what the client should do.

I was very pleased to see copies of my book (The Handbook of Online and Social Media Research) in Japanese (translated by GMO Research) at the meetings. Hopefully, the meetings we are having here and the book will help make all of the Japanese MROCs as good as the best ones.

It was very helpful to be accompanied to all of my meetings by Shigeru Kishikawa, the head of the newly opened Vision Critical Japan office and a great expert in MROCs in Japan.

Of course, the answer to the question of how to run a good MROC in Japan is equally true in London, New York, Singapore, Helsinki, and everywhere else.

For more information on new research techniques, people can also check out the online conference happening next week, The Festival of NewMR.


A Japanese translation of this article was provided by Mr Ryota Sano, Chief Executive Officer of TALKEYE INC and ESOMAR Representative for Japan.


Nov 25, 2012
 

I am about halfway through my current tour of Asia: I have been in Singapore and Hong Kong, and I fly to Tokyo tonight. I have split my time here between events and one-on-one meetings with clients and agencies. I am convinced that the next leaps forward for NewMR will come from Asia.

There are several reasons why I think that Asia will be the next accelerator of change:

  1. Hunger for change. One theme I am hearing from agencies and clients in China, Korea, Singapore, and elsewhere is a real desire to move forward, to get away from the static idea of surveys and focus groups and to embrace better alternatives. The interest in ethnography, semiotics, Big Data, neuroscience, social media, and communities is immense. Most of the sessions I have been booked for have been sold out.
  2. The complexity of the market. In Asia the languages are more complex (more complex for computers, that is), and the variations within markets are greater (for example, in China the gulf between what is possible in a Tier 1 city like Guangzhou and in a Tier 3 city like Weihai, or a Tier 4 city like Chaozhou, is immense – but even Tier 4 cities have more than one million people). In tackling these problems – with greater software flexibility, with more emphasis on mobile, and by reaching into areas where the internet is more likely to be accessed via an internet café – the platforms are going to take the next leap forward.
  3. The structure of markets. In Asia many markets are relatively small by European and North American standards. In the West the typical way to start a new project is to start in USA, or UK, or Germany, and then roll it out to more countries as it proves its worth. In Asia many of the requests we get are to start with, say, Singapore, Malaysia, and Hong Kong – because between them they have a big enough budget. But this means being innovative in terms of costing, project management, and project deliverables. This innovation is going to help market research globally.

I am convinced that Asia will be the fulcrum for the next advance in MR and I intend to be here to be involved and to learn from it. I will be back in January for Merlien’s Insights Valley conference in Malaysia and for ESOMAR’s APAC conference in Vietnam in April, and I suspect pretty regularly for the foreseeable future.

At the Festival of NewMR we have some great papers from Asia, most of which have global relevance – it is time for the world to, once again, start learning from Asia.

Nov 22, 2012
 

Earlier this week I was in Singapore, attending the MRSS Asia Research Conference, which this year focused on the theme of Big Data. There was an interesting range of papers, including ones linking neuroscience, Behavioural Economics, and ethnography to Big Data.

One reference that was repeated by several of the speakers, including me, was IBM’s four Vs, i.e. Volume, Velocity, Variety, and Veracity. Volume is a given – Big Data is big. Velocity relates to the speed at which people want to access the information. Variety reminds us that Big Data includes a mass of unstructured information, including photos, videos, and open-ended comments. Veracity relates to whether the information is correct or reliable.

However, as I listened to the presentations, and whilst I heard at least three references to the French mathematician/philosopher René Descartes, my mind turned to another French mathematician, Pierre-Simon Laplace. In 1814, Laplace put forward the view that if someone were (theoretically) to know the precise position and movement of every atom, it would be possible to calculate their future positions – a philosophical position known as determinism. Laplace was shown to be wrong, first by the laws of thermodynamics, and secondly and more thoroughly by quantum mechanics.

The assumption underlying much of Big Data seems to echo Laplace’s deterministic views, i.e. that if we have enough data we can predict what will happen next. A corollary to this proposition is a further assumption that if we have more data, then the predictions will be even better. However, neither of these is necessarily true.

There are several key factors that limit the potential usefulness of big data:

  1. Big Data only measures what has happened in a particular context. Mathematics can often use interpolation to produce a reliable view of the detail of what happened. However, extrapolation, i.e. predicting what will happen in a different context (e.g. the future), is often problematic.
  2. If you add random or irrelevant data to a meaningful signal, the signal becomes less clear. The only way to recover the signal is to remove the random or irrelevant data. If we try to measure shopping behaviour and we collect everything we can collect, then we can only make sense of it by removing the elements irrelevant to the behaviour we are trying to measure – bigger isn’t always better.
  3. If the data we collect are correlated with each other (i.e. they exhibit multicollinearity), then most mathematical techniques will not correctly attribute the contribution of each factor – rendering predictions unstable (see the sketch after this list).
  4. Some patterns of behaviour are chaotic. Changes in the inputs cause changes in the outputs, but not in ways that are predictable.
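To make point 3 concrete, here is a minimal simulation (my own sketch, not from the original post; it assumes NumPy is available, and the names and parameters are purely illustrative). Two predictors that are almost copies of each other are fitted with ordinary least squares: the individual coefficients swing substantially from sample to sample, often with opposite signs, even though their combined effect, and the underlying behaviour, never changes.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_once(n=200):
        # Two almost perfectly collinear predictors
        x1 = rng.normal(size=n)
        x2 = x1 + rng.normal(scale=0.01, size=n)
        # The true behaviour only depends on x1
        y = 2.0 * x1 + rng.normal(scale=0.5, size=n)
        X = np.column_stack([x1, x2])
        coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coefs

    for _ in range(5):
        b1, b2 = fit_once()
        print(f"b1 = {b1:6.2f}, b2 = {b2:6.2f}, b1 + b2 = {b1 + b2:5.2f}")
    # The individual coefficients are unstable from run to run, while their
    # sum stays close to 2: the model cannot tell the two factors apart,
    # so any prediction that relies on their separate contributions is fragile.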

One of the most successful organisations in using Big Data has been Tesco. For almost 20 years, the retailer has been giving competitors and suppliers a hard time by utilising the data from its Clubcard loyalty scheme. Scoring Points (the book about Tesco written by Clive Humby and Terry Hunt) shows that one key to Tesco’s success was that they took the four points above into account.

Tesco simplified the data, removed noise, and categorised the shoppers, the baskets, and the times of day. Their techniques are based on interpolation, not extrapolation, and they are able to extend the area of knowledge by trial and error. Big Data is going to be increasingly important to marketers and market researchers. But its usefulness will be greater if people do not over-hype it. More data is not necessarily better. Knowing what people did will not necessarily tell you what they will do. And knowing what people did will often not tell you why they did it, or what they might do if the choice is repeated or varied.

Marketers and market researchers seduced by the promise of Big Data should remember Laplace’s demon – and realise that the world is not deterministic.

Nov 18, 2012
 

Helen Thomson has a great article in New Scientist (you’ll need to register to read it) about how we already have the technology to attend an event via a robot. Thomson starts her article by talking about a seven-year-old child who can’t attend school because of allergies and who attends via a robot, linking from his PC and video at home to the robot’s audio tools in the classroom – a phenomenon known as telepresence.

Thomson talks about two leading brands of robots that are currently available on the market. The two brands are VGO and Anybots, which currently cost about $6,000 and $10,000, respectively. Pricey, but not as expensive as flying somebody from London to Sydney, to Hong Kong, to Tokyo, and back to London, which is what happens to me sometimes.

However, Thomson reports that this technology is about to get a lot cheaper. Double Robotics have announced a product for 2013 which will use an iPad for its head and cost about $2,000.

Thomson talks about a wide range of telepresence examples and issues; including: drones being used in the battlefield, surgeons operating on patients thousands of miles away, and even robots manipulated via signals detected through an fMRI scanner.

My interest is in how these sorts of robots might impact the world of market research. Here are a few ideas, and I would love to hear your thoughts on how else they might be applied:

  1. Replacing video conferencing, especially when only one person is not present. In the future the missing person could attend via telepresence, and even be taken along to the social event after the meeting.
  2. At conferences, key speakers might attend virtually, taking it in turns to inhabit the robot on the stage.
  3. Perhaps face-to-face interviews at airports, conventions, supermarkets, etc. could be outsourced to other, lower-cost countries, with robots at the location and interviewers based somewhere else in the world driving the process.
  4. At trade shows in the future the stand could be supported by tech support, product heroes, and sales engineers, appearing on the stand via a shared robot.

Unlike AI, text analytics, and speech recognition, telepresence does not need a massive leap in technology: it uses existing technologies which are getting cheaper, and couples these with a route already explored by telesales, tele-support, and CATI.

Note: these types of robot are not intelligent robots, and they do not represent the cutting edge of robotic technology. They are drones operated by a human, they do not look human (an iPad for a head is seen as an improvement), and they have limited mobility (e.g. they can’t go up and down stairs).

Nov 16, 2012
 

In 2011, at events and conferences around the world, market research seemed to be on the edge of a new world – one where automated coding, and in particular automated sentiment analysis, would allow researchers to tackle megabytes of open-ended text. A great example of that confidence was the ESOMAR 3D Conference in Miami.

What a difference a year makes. Last week in Amsterdam the news was all about researchers manually coding vast amounts of open-ended comments, because the machines would not deliver what the researchers had hoped. The prize, undoubtedly, went to Porsche and SKOPOS, who reported on a social media study in which they captured 36,000 comments, mostly from forums, and ended up coding the comments manually.

I remain convinced that automated techniques will continue to develop and will soon open the door to large data sets. But for the time being, much of the material that market researchers handle will need to be, at least partly, coded by hand.

My suspicion is that Twitter will prove to be less useful than blogs, open-ended comments in surveys, and conversations in MROCs. When I work with Twitter, my feeling is that the grammar is too unstructured, the prevalence of irony too high, and the error rate in people’s tweets too high to render even manual coding useful. I think the swing back in 2012 was probably a response to the over-claims made by many of the providers in 2011. I suspect that 2013 will be characterised by very specific examples where text analytics is applied successfully to a market research problem.

Nov 14, 2012
 

When Brad Bortner, of Forrester, coined the term MROC (Market Research Online Community) in 2009, he defined it as a qualitative tool. This definition has been used widely to define qualitative communities, in contrast to online access panels (large, quant, and minimal community), and in contrast to community panels (large, qual and quant, with community) (Visit VCU to read more about Community Panels).

However, the distinction between access panels, community panels, and MROCs may soon be redundant. One of the reasons that things are changing is that the platforms for conducting research via communities are changing. In the early days of using communities for research, there were essentially two types of platform: discussion-based systems and panel-based systems.

Discussion-based systems, such as forums and bulletin boards, tended to have good discussion tools, but they had few survey options and few community management tools. This made them ideal for small communities, e.g. 30 to 300 members, where the overheads of looking after queries, incentives, sampling, etc. were minimal. Companies that opted for this type of platform tended to stay with a qualitative type of research, making the term MROC synonymous with both the type of research and the type of software used.

Panel-based systems, such as community panels, started by adapting the panel management and survey software from panels and added a community element. In the early days of community panels, the discussion element was much less developed, partly because of where the software came from and partly because the big money was coming from companies wanting to do surveys.

However, the distinction between an MROC and a community panel has never been entirely clear, with some people using the term MROC for any community that is branded, private, and used solely for market research. This lack of clarity may be about to be resolved by the term MROC being used by most people in this more generalised way.

As with the original separation between the term MROC and community panel, the reason for the change relates to the underlying software. The new software that is beginning to emerge is capable of running a vast range of communities. It can support qual research communities, it can support qual & quant research communities, it can support consultative communities, crowdsourcing communities, and probably much more. At the moment, community panels use community panel software and MROCs tend to use MROC software. In the future, I believe, the clients and the researchers will not need to know what type of software is powering their research community.

I suspect that the term MROC (because it is short, punchy, and ambiguous) will be used to refer to any community that is private, branded, and used solely or largely for market research. Of course, each company is likely to develop its own in-house term for their preferred type of MROC, for example insight community, creation community, etc. This is good branding for the company, but they will find it useful to locate their version within the wider range of research communities, i.e. MROCs.

What are your thoughts?

Nov 10, 2012
 

Yes, it is a truism that we want better, cheaper, faster research, and that we have always wanted better, faster research. However, right now research is becoming redundant to many decision makers because it is not fast enough. This theme was covered in a recent article on Research-Live.

A couple of weeks ago I was at the annual conference of the AMI (Australian Marketing Institute), and speaker after speaker highlighted the pace of change and the need to respond quickly. Mark Lollback from McDonald’s showed how his company went from tasting a product at a regular review session to launching it, with TV advertising, in fourteen days. Joanna McCarthy of Kimberly-Clark showed how they used scenario planning and ‘war gaming’ to respond to social media stories in real time (in sensitive areas like product malfunctions in toilet paper and nappies).

I was at the AMI Conference as a keynote speaker and to launch my latest (free) book, The Quick and the Dead, which looks specifically at the need for speed and suggests a new framework for how organisations can adopt a Built for Speed approach. You can download a copy via my Built for Speed post.

I will be presenting my views on the need for speed at the Festival of NewMR on December 5, so, if you haven’t already registered, visit our Festival page and sign up.