Archive for the ‘Research Design’ Category
Daniel Attivissimo this week looks at the defining steps that will make or break a market research project.
What do a doctor, an automotive mechanic, and a market research professional all have in common? Their ability to reach sound conclusions and recommend solutions relies on mastering one fundamental step – defining the nature of the problem.
Peter Drucker, influential management consultant, once stated, “The truly serious mistakes are made not as a result of wrong answers but because of asking the wrong questions.” This simple and almost elementary observation outlines one of the leading causes behind many marketing research project failures.
To put this theory into everyday terms, I’ll use a relatable personal experience as an example. Recently, I had the pleasure of breaking down on the highway, which led to my car being worked on for the better part of a week. The symptoms were obvious – the car wasn’t running at all – but the underlying problem was not as apparent. Working from the symptoms alone would not have given the mechanic enough information to properly address and fix the problem. It was necessary to dig a little deeper into the underlying causes before deciding on the best method of fixing the car.
Sounds pretty understandable, right?
Well, the marketing research process is very similar to the example I just gave, in that the process is integrated and iterative – meaning that no step is independent and the results of the previous steps affect the design and outcome of the following steps. That is why the most important step of all is the first step: defining the problem at hand, not just describing the symptoms (i.e. declining market share, decreasing profit margin, etc.). It sets the course for everything else in the research design, from the objectives and methodology to the questionnaire design, all the way through to the presentation of the results to the client.
By definition, our responsibility as marketing researchers is to provide our client (typically some form of management) with information that aids in decision making. One of the most disastrous outcomes of a marketing research project is reaching the end only to find that the information obtained holds little to no relevance in addressing the true nature of the business issue. The waste of time and money would be akin to the mechanic returning your car only for you to break down again a few miles down the road – but at least you have that new oil filter and full supply of windshield wiper fluid.
To effectively accomplish the task of identifying the problem (or opportunity), we must first gather all pertinent information to fully understand the background of the business issue. Three useful steps would be:
Below is a comparison of the mechanic example and a general business issue that many companies face:
Figure 1: Identifying The Problem To Design The Research
In the end, because the mechanic was able to effectively define the problem that stopped my car from running, he could provide me with a cost-effective solution.
In the same way, the research team at B2B International is expert at providing its clients with actionable insights because of an acute attention to fully defining the background to the business issue, translating it into a research problem, and designing an appropriate approach that will effectively provide our clients with information that addresses the problem.
In her latest TNI, Simi Dhawan dares to dream about getting her hands on a million pounds, but just how will she carry it?
What’s the largest amount of cash you’ve seen at any one time? For the majority of us, everyday, over-the-counter transactions may involve the exchange of a few £20 notes, possibly a few more £10 notes, and we’ll all certainly be familiar with the £5 note – a handy wallet-warmer for the masses.
With this in mind, it was only a couple of weeks ago that I playfully posed a brain-teaser to my colleagues, asking “How many briefcases would be needed to transport £1 million in cash?” The answers were vague and somewhat lukewarm, given that no-one had seen this amount of cash first-hand. The responses offered were as follows:
So, there was my answer – one suitcase. Or was it?
My experience as a researcher has led to some reluctance to mindlessly accept an answer without thinking it through. Some might call this over-analytical; I’m more inclined to call it cautious. My dilemma here was that whilst I’d asked the question, I’d failed to give enough surrounding detail. For one, I hadn’t stated the cash-type (was this £50 notes, £1 coins or a mixture?), and for another, I hadn’t specified the exact size of the briefcase in question.
Based on the above statistics, whilst a variation of several inches might not appear to be much on a superficial level, it could be quite significant when comparing the space available to carry the cash in question, where those extra cubic inches could make all the difference!
To demonstrate, I wanted to see how the size of briefcase affected the number of cases needed to accommodate £1 million in £1 coins. To begin, a quick calculation based on the dimensions above told me that the medium-sized briefcase had a volume of 1,404 cubic inches. Looking then at the dimensions of a £1 coin (22.5mm in diameter and 3.15mm thick), this had a volume of approx 1.25 cubic centimetres, or 0.076 cubic inches. Therefore, the number of £1 coins that a medium-sized briefcase could hold was around 18,473… which told me that at least 55 briefcases would be needed in total to accommodate the £1 million discussed earlier. But how did this compare with the small and large briefcases?
Using the same method, interestingly, the number of briefcases needed varied quite significantly: for the small briefcase, at least 82 would be required to transport the £1 million in £1 coins, versus only 35 (approx) for the large briefcase. The size of briefcase, in this example, changes the outcome dramatically.
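For anyone who wants to check the arithmetic, here is a minimal sketch of the medium-briefcase calculation. It assumes a cylindrical £1 coin (22.5mm diameter, 3.15mm thick) and perfect packing – real coins pack less efficiently, so treat the result as an optimistic lower bound, and expect the coins-per-case figure to differ slightly from the rounded numbers quoted above:

```python
import math

CM3_PER_CUBIC_INCH = 2.54 ** 3  # ≈ 16.387 cm³ per cubic inch


def coin_volume_cm3(diameter_mm: float, thickness_mm: float) -> float:
    """Volume of a cylindrical coin in cubic centimetres."""
    radius_cm = (diameter_mm / 10) / 2
    return math.pi * radius_cm ** 2 * (thickness_mm / 10)


def briefcases_needed(case_volume_in3: float, total_coins: int = 1_000_000) -> int:
    """Briefcases required to hold total_coins £1 coins, assuming perfect packing."""
    coin_cm3 = coin_volume_cm3(22.5, 3.15)  # ≈ 1.25 cm³ per coin
    coins_per_case = int(case_volume_in3 * CM3_PER_CUBIC_INCH / coin_cm3)
    return math.ceil(total_coins / coins_per_case)


print(briefcases_needed(1404))  # medium briefcase: 55 under these assumptions
```

The population-of-coins arithmetic is the easy part; the real-world answer depends on packing efficiency and the exact case dimensions, which is precisely the scoping point made below.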
Whilst, in truth, the purpose of this exercise was not to advocate the transportation of £1 million in such an impractical manner, it is a comparable demonstration of the considerations and difficulties often encountered when it comes to market sizing. We, as researchers, have to combine common sense, careful thought and research practices to effectively draw out a realistic estimate of the size of any potential opportunity. It is seldom the product of a series of clear-cut figures and distinctions that can be used in a straightforward mathematical equation. The process often involves a thorough and intelligent market assessment that demonstrates plausibility – backed up, of course, by supporting facts and figures, but which also takes account of all threats (e.g. competition, market fluctuations, etc.) as well as opportunities (e.g. exponential growth, supporting regulations, etc.). Therefore, extra care should always be taken when scoping out such a project from the outset, so that the research has a clear focus and direction, void of ambiguity.
Therefore, to the same colleagues to whom I directed this question, I now re-phrase and ask: “How many briefcases of average UK size (to be researched) would be needed to transport £1 million in cash made up of £5 notes?” Answers are welcome, but more questions about this research are what could, in theory, add more value.
For more information on how B2B International can help, visit the Market Assessment page of our website.
This week, Oliver Truman looks at some of the honestly-held misunderstandings we make in everyday life, and at why misconceptions about the market research industry should make us sit up and take notice.
Sometimes we all make mistakes. As human beings, we’re loath to admit our failings, particularly when we think we might have got something wrong.
Only the other day, a friend was bemoaning the arrival of students back into Manchester for the start of the new academic year. Longer queues at cash machines, processions of drunken youths in fancy dress and buses packed to the rafters were just a few of his misgivings. “Bloody retrobates”, he muttered.
At first I hadn’t realised, but after a few seconds it sunk in. “Did you just say retrobates a second ago?”, I enquired. “Yeah, retrobates. You know, delinquents”, he said. After several more verbal exchanges, it became apparent that my chum had been using the word retrobate instead of reprobate for quite some time, possibly even his entire life.
As it was perhaps a little too painful to admit it, he gamely attempted (for several minutes) to argue that he was right and I was wrong. However, in the age of instant access to knowledge, a quick mobile web search revealed the error of his ways. The score was settled.
Throughout that evening, as several more pints of English ale were imbibed, my friends and I at the local pub were now alert to the slightest error – whether linguistic, factual or otherwise. Other highlights in the inaccuracy stakes that evening included:
Errors of the retrobate sort are referred to by linguists as “eggcorns” – the term itself being an idiosyncratic substitution of the similar-sounding “eggcorn” for “acorn”. There’s a tremendous website documenting these everyday anomalies at the Eggcorn Database. Some personal favourites include:
In the world of market research we are, to an extent, also on the receiving end of popular misconceptions about our industry and the work we do.
There was an interesting article on the BBC website this week about those who respond “don’t know” in opinion polling. Aside from the thought-provoking discussion about how such responses should be treated when reporting survey findings, it was the comments section at the end of the page that really grabbed my attention.
Here are a couple of comments that made me realise just how misunderstood the market research process might be amongst the public at large:
Recently on the radio, there was a phone-in on the subject of a newly published opinion poll, and listeners were asked for their comments. One caller refused to believe the result, citing the fact that it was a survey of “only” 2,000 adults. “I’m sure the other 60 million people in this country don’t think that way – it doesn’t capture what they think” was their claim.
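The caller’s intuition is a common one, but the standard sampling arithmetic shows why a well-drawn sample of 2,000 is plenty. A minimal sketch, assuming simple random sampling and the usual 95% confidence level (for large populations, the margin of error depends on the sample size, not the population size):

```python
import math


def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents.

    p = 0.5 is the worst case (widest interval); z = 1.96 is the
    two-sided 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)


# A sample of 2,000 pins any poll percentage down to within about
# +/- 2.2 percentage points, whether the population is 60 million or 6 billion.
print(round(margin_of_error(2000) * 100, 1))  # ≈ 2.2
```

Of course, this only holds if the sample is genuinely representative – which is where the weighting and quotas mentioned below come in.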
On the flipside, we survey wonks should also accept that some of the blame rests with the research industry. Market researchers don’t help themselves when we talk to non-research audiences about sample sizes, weighting and quotas. Moreover, research needs to be conducted in a way that is likely to engage and learn from, rather than alienate, the audience. Unless the most appropriate techniques and methods are deployed, the credibility of the research process can be put at risk.
The comment below came from the comments in the same BBC article I mentioned earlier. I think it neatly captures an instance in which market research really doesn’t help itself:
Nick Hague this week takes us on a world tour, explaining why you should never be surprised to get such varied responses to your global customer satisfaction questions.
“We tend to have a human instinct that ‘deep inside’ all people are the same – but they are not. Therefore, if we go into another country and make decisions based on how we operate in our own home country, the chances are we’ll make some very bad decisions.” – Geert Hofstede
After spending what seems like the last few months living out of a suitcase, delivering research findings to a myriad of companies in countries ranging from Germany, Belgium, Spain and Ireland to the USA and China, it has hit home even more how important it is to understand individual country differences. These differences might be cultural, behavioural or attitudinal, but a researcher needs to know what lies behind a given score before making informed recommendations for action. Carrying out international research is all in a day’s work at B2B International!
Enquiries for customer satisfaction and loyalty research have risen in recent months as the global recession bites harder and companies turn their attention towards retaining their existing customer base. We are often tasked with carrying out customer satisfaction studies that cover multiple geographies. Implementing and evaluating such research requires an understanding of the different cultures and infrastructures within each geography; for example, will a Chinese respondent answer an unsolicited telephone call, or will an e-survey alienate half your target market in Spain? Another complexity that comes up in multi-country studies is making sure a translated questionnaire has the same meaning across multiple geographies. However, one of the most important aspects of carrying out international research is understanding why individuals from different countries give such different ratings – especially customer satisfaction ratings – when receiving a similar, if not identical, service from the same global organisation.
So my Thursday Night Insight rant this week is about response styles and I pose the question: Why do customer satisfaction response styles differ between countries?
Typically, in any customer satisfaction survey the norm is to use a 10-point scale where 1 means totally dissatisfied and 10 means totally satisfied. Asking this question of customers across different countries, I can definitely make the following general observations:
However, one point should be made clear: these observations are generalizations. What we do see is that respondents from North America typically give higher satisfaction scores than their UK or Western European counterparts. One reason, I personally believe, comes down to cultural differences. For example, I have an American colleague who works within our European HQ, and on his first day at B2B International he greeted me with the question ‘how are you today?’, to which I replied ‘OK’. He looked aghast and said ‘why, what’s the matter?’ There was no problem or issue, but my typical English response led my colleague to think that something was wrong, based on our different cultural backgrounds. Because of these differences, Americans would typically rate a product or service as a 9 or 10 (totally satisfied or excellent) while Europeans would rate a similar experience as a 7 or 8 (an okay, acceptable, satisfactory score). Another reason for higher satisfaction scores in the US could be that Americans are more likely to respond to a survey even when service levels are good and expectations are being met, whilst Europeans only respond if the service is poor or they have a gripe to air. However, this is a personal point of view, and so, like any good researcher, I wanted to know whether any external research had been carried out looking at geographical scoring differences.
Supporting the internal B2B viewpoint is a piece of research I came across, carried out with 116,000 employees of IBM Corporation operating in more than 40 countries. Using these findings, Geert Hofstede of Maastricht University developed a framework identifying four typologies, based on national culture, that impact on response styles. These typologies were:
Power distance: The degree to which people in a country accept a hierarchical or unequal distribution of power in organizations. Respondents in these cultures would typically give mid-scale ratings, and countries showing this type of response style include Malaysia, Taiwan, Singapore, India, the Philippines, China, Brazil, Chile and Mexico.
Uncertainty avoidance: The degree to which people prefer structured vs. unstructured situations. Cultures high in uncertainty avoidance prefer unambiguous situations and are therefore more likely to use the endpoints of the scale as opposed to the middle, thus exhibiting an extreme response style. Countries showing this type of response include Belgium, Poland, France, Spain, Portugal, Turkey, Korea and Japan.
Individualism: The degree to which people in a country focus on working as individuals vs. working together. Cultures high in individualism are less likely to give a middle satisfaction score because they emphasize their individual opinion rather than their perception of the group opinion, and are therefore among the most likely to exhibit extreme response styles. Countries include the US, Canada, Australia, the UK, Denmark, Sweden, Norway, Belgium, Italy, Hungary and France.
Assertiveness: The degree to which people in a country emphasize traits such as assertiveness and insensitivity to feelings. One could hypothesize that individuals in these cultures would favour more extreme response styles, and that “softer”, more “sensitive” cultures would exhibit more modest or middle response styles. Countries that have been categorized as assertive are the UK, Germany, Italy, Hungary and Japan. However, it should be pointed out that Hofstede’s research is inconclusive with regard to the impact of this dimension on response scores.
In conclusion, the key takeaways are these. Every business needs a feedback loop to assess its performance and provide an ongoing measurement and benchmark for future progress. Customer satisfaction surveys are excellent at delivering this feedback, but different country cultures do impact on responses and response rates, and so, when analyzing international research findings, a researcher needs to use their knowledge and judgement to decide whether a response reflects different levels of performance or is simply a result of cultural difference.
In the future, when comparing international customer satisfaction research findings, it might be useful to take the following three steps:
Finally, to wrap up this week’s ramblings, I should point out that when it comes to customer service and customer satisfaction, one issue transcends all geographies: it is imperative that the customer is listened to, and feels valued and cared for. Relationships are key in any business-to-business market throughout the world, so invest in your people – they are the face of your business and typically the driving force behind excellent satisfaction scores, whether you are based in Torquay, Tokyo or Timbuktu.
In his latest Thursday Night Insight, Alaric Fairbanks gives us a glimpse into his life conducting market research in China.
Our permanent Beijing office has been up and running for about two and a half years now, and over this period I have been asked by both colleagues and clients outside the country how working in market research here in China differs from working in the West, and how it is similar. At a high level, there are obvious similarities: we have to win and design projects, identify respondents and sources of information, and collect and analyse data… the same as my colleagues elsewhere. This is pretty much as you might expect anywhere.
Things differ slightly, however, in the types of project: for the first two years, the majority of projects were market analysis and market development, with less interest in more quantitative research projects like customer satisfaction. There is, though, more and more customer satisfaction work happening as clients become more established, and of course as we become more established with existing companies. In our experience, demand for market analysis tends to focus on both the factual (i.e. size, structure and trends) and the analytical (i.e. what this means for developing sales). These projects tend to feature more qualitative investigative techniques and, in some ways, are more akin to a jigsaw where you first have to find the pieces.
Clients have, in the main, been larger foreign companies and multinationals who already have a presence in China, but the projects are often commissioned abroad. Often this is because the market research function or strategic decision making unit is located in corporate headquarters, although we are seeing a lot more work commissioned from within China itself. Another reason for foreign-based commissions is the need for third party verification (or otherwise) of information coming from their China-based operations. On a practical level, having a large proportion of clients based in Europe and North America means that face-to-face meetings for commissioning and presentations are not so common, and telephone conferences and web presentations form a larger part of communications. This also means that interesting hours are often worked at commissioning and presentation meetings!
As I already mentioned, an increasing number of clients are from within China and other parts of Asia. Again, the overarching characteristics of working with them remain the same, i.e. understanding their needs, proposing a suitable methodology, negotiating timescales and price, etc. Where differences do occur is in lead times (which tend to be longer) and, very often, in the brief itself. We have seen an increase in the number of specific written briefs, but these are still very much in the minority. Another interesting characteristic is how these clients prefer to communicate. After initial contact, many prefer to rely on instant messaging over the internet – mainly QQ or similar services – for day-to-day communication, rather than telephone or e-mail. This is also having an impact on research methodologies.
Methodologies for data collection here include all the usual suspects and, language aside, would be largely familiar to clients and colleagues in other countries. There are, though, some differences in application; for example, focus groups tend to work better in smaller numbers, 6 to 8 being optimum. It is often argued that, especially in business, face-to-face interviews are necessary here. Very often this isn’t the case. We recently had a project looking at the ‘biosolids’ market, which meant talking to respondents in Chinese sewerage works. Initially this seemed quite daunting, until it became clear that these people were extremely receptive: no pushy salespeople call on them (for perhaps obvious reasons), and they are seldom asked about the intricacies of their work. Recruitment was aided by the incredible take-up of social networking and bulletin boards among Chinese professionals. As a country undergoing rapid change, it is perhaps no surprise that methodologies, and attitudes to them, are changing too. From a ‘consensus’ just a few years ago that face-to-face was the only acceptable technique, telephone and indeed online research have grown in importance incredibly quickly, with instant messaging even being used for in-depth qualitative work. Whilst respondents are often very keen to work with different approaches, clients outside China – and occasionally inside – sometimes cling to outdated truisms.
In the short space available, I hope it is clear that although many principles and approaches are of course similar, there are nuances affecting all aspects of the process. Where this may be more complicated, or even contentious, is in how it manifests in everyday work – for example the amount of time required for quality checks, and HR issues and administration. I’ll try to cover these in my next post.
To learn more about our work and our team in China, visit www.b2binternational.com/China