Archive for the ‘Data Quality’ Category
Arguably the most important tool for any B2B company is an up-to-date database of its customers. Most B2B companies have just a few thousand (and sometimes just a few hundred) current and potential customers. And yet, ask to see a list of those customers with all the required contact information and you would be surprised how many companies fail this acid test.
This list is the most important vehicle for talking to customers. It drives communications through newsletters, product launches, company announcements, price increases, and more. It is the starting point for a more sophisticated customer relationship management (CRM) system.
Why is it so difficult to prepare this simple tool? There are a number of reasons:
1. Salespeople are extremely protective of their accounts. They believe that knowledge is power and they keep their contacts to themselves.
2. Customer contacts are a moving target: people join, leave and change roles, and this constant churn is hard to track.
3. The task of keeping the database up to date is often considered trivial and boring; beneath anyone in a position of seniority. Indeed, it is sometimes delegated to an intern as something to keep them occupied.
Keeping an up-to-date database of customers and potential customers is an ongoing effort. Yet, it is one of the most critical tasks in growing any business.
Japan’s disappearing citizens only serve to show Caroline Harrison the importance of keeping your information up to date in this week’s Thursday Night Insight.
Six months ago, a story doing the rounds in the international press caught my attention: Tokyo’s oldest person goes missing! Japan launches nationwide search for centenarians! Almost 200 of Japan’s centenarians missing!
How bizarre, I thought. How can you misplace so many elderly people?
Of course, when you delve deeper into the story, you begin to understand – sort of – what has happened:
In early August last year, the 113-year-old woman who was listed as Tokyo’s oldest person was found to be missing; whereabouts unknown even to her own family. In fact, officials discovered she had not lived at the address where she was registered for some 20 years!
But what made the story particularly toe-curling was that it came to light just days after the city’s oldest man, who would have been 111, was found dead and mummified. He had actually passed away some 32 years earlier!
Now, according to Japan’s latest audit of those aged over 100, nearly 200 centenarians are ‘missing’. Twenty-one of these would be older than the nation’s current official oldest person of 113 years of age – and one is a 125-year-old woman whose registered address was turned into a park back in 1981…
Much as these revelations amused me at the time, I admit it’s a slightly macabre topic which isn’t actually all that funny. What’s more, there is another important point to make which, for market researchers, may be just as distressing.
It’s all very well for Japan to lay claim to many of the world’s oldest citizens, but if they’ve been dead for 30 years then, in my humble opinion, it doesn’t really count! Of course, as with anything, any figures you have at your fingertips today will already be going out of date by tomorrow.
In exactly the same way, there’s absolutely no point in thinking you understand how, when, what and why your customers buy from you if the information is four years old. Chances are these customers will have moved on – or at the very least altered their buying patterns or changed their requirements.
Equally, what’s the use of having information on a market’s size, structure and potential if you then wait 18 months before deciding to enter that market? A great deal can change in that time, and what once looked like a fantastic opportunity could well have turned into a much more challenging – if not completely impossible – prospect.
So my message today is simple but critical: the data you rely on to make important decisions MUST be up to date. And, for all of you who argue that this Thursday Night Insight would have had far more relevance and impact when these stories first broke back in the summer of last year…well, my point exactly!
It is true that the buzzword in industry is analytics. This seems surprising to us in the market research industry: data and analytics have been our baby for the last 50 years. When you drive your car, of course you need to look out of the window, but you would be a fool to set off without checking your fuel gauge or occasionally glancing at your speedometer. A map may come in useful or, more likely today, a Sat Nav (GPS). Our industry has long provided much of the good data on the company dashboard and the Sat Nav to guide your journey.
The problem is that data is fast becoming a commodity. There is so much data handed out for nothing. It is in front of you in the newspaper. It hits you from the television. It sits under your nose in your company and, of course, it abounds on the net. In fact, most of us are paralysed by too much data.
However, there is some data that is invaluable. Just think of the things you would like to know about your market. Which customers are likely to buy the products or services you sell in the next few weeks or months? And when they do buy, what will drive their decision? Where do you sit in their consideration set? What are the unmet needs in your market and how could you satisfy them? What will your market look like in five years’ time? Who will be the competitors to wrestle with then? The list could go on and on.
What do you think? Will data be the new plastics?
By Stefan Stern
Last week a very wise man – OK, it was my chief executive – said a smart thing. “Data is the new plastics,” he declared. This was a sly reference to a famous scene in the film The Graduate. What he meant, I think, was that the unlikely subject of data has suddenly become fashionable. It is now the sort of discipline you might encourage your son or daughter to pursue.
Clever people talk knowingly about “analytics” – managing better with the use of data – as if they have discovered the secret of business success. Perhaps they have. Software companies are certainly pushing the concept hard.
Last month the consultants Accenture announced a partnership with the IT company SAS. They are forming an analytics group which will offer what they call “predictive solutions”. This means getting hold of useful data fast and interpreting them intelligently, to try to anticipate sudden changes in your market, or to spot gaps others have not yet seen. IBM is touting its analytics capabilities aggressively, while SAP is also talking a good analytics game.
I was recently given a briefing by Vivek Ranadivé, the chief executive of Tibco, a Nasdaq-listed software company, on the emerging possibilities of our data-rich world. Mr Ranadivé is something of a visionary in this field. His first book, The Power of Now, was published 11 years ago. This was followed in 2006 by The Power to Predict. His latest book, The Two Second Advantage, will be out this year.
Mr Ranadivé is dismissive of what he considers outdated approaches to the handling of data. “We have 20th-century infrastructure trying to solve 21st-century problems,” he says.
During the past two decades, companies have become good at storing large amounts of data. Databases contain historical information about transactions that have been carried out. But what about all those near-misses, when customers visit your website, stay a while but leave without buying anything? A passive database will not record any of that activity. It will not even know that such things have happened.
Mr Ranadivé says we should think of business in terms of events, not transactions. Near-misses are customer events, too. The latest approach to data tries to spot these events in real time, so businesses can make use of that information quickly. In the jargon, this is called “in-memory analytics”, so called because memory has become a cheap and almost infinite commodity, and all that customer activity can be monitored live, as it happens.
Faster transmission of information makes a lot of things possible: marketing campaigns that react quickly to what customers want, smoother-functioning supply chains, even the introduction of the “smart grid”, which can spot possible power outages much sooner.
Last month Thomas Davenport, professor at Babson College, and Jeanne Harris, director of research at Accenture’s high performance institute, published Analytics at Work, a primer for managers who want to introduce a more rigorous approach to the use of data. It is a challenging read, in part because it makes plain how much work has to be done to capture and use data effectively.
But even academic experts agree that, however sophisticated your approach to data, you still need judgment to make good decisions. When Prof Davenport met a pilot at a party and started discussing analytics, he received this reply: “Oh yes, we’ve got lots of that in modern airliners – avionics, lots of computers, ‘fly by wire’, and all that. But I still occasionally find it useful to look out the window.”
Others are even more sceptical. Paco Underhill, a retail guru and chief executive of the consultancy Envirosell, says that today it is almost too easy to accumulate data. Instead of going to witness things first-hand, managers do a lot of their thinking sitting down, staring at spreadsheets. He is a great advocate of rubber-soled shoes. Get away from your desk, he says, and go and see for yourself. Wear rubber soles at your Envirosell interview if you want to get hired, Mr Underhill advises.
Not everyone will be fired up by the idea of plunging deep into a world of data. In the 1960s, bright young graduates, like the Dustin Hoffman character in the movie, did not all choose to pursue a career in plastics. But one young chap at General Electric did. Welch, I think his name was. Things seemed to work out pretty well for him.
When I used to work at a well-known bank, bouncing cheques and cancelling payments for those unlucky people who frequently went beyond their credit allowance, the line manager of the team I worked in had what I suppose you could call a catchphrase. He would walk up behind an unsuspecting member of the team and utter the following words…
Faster, faster, faster… must go faster!
I never really understood whether this was a bizarre motivational technique gleaned from reading a book on people management or whether he really was just irritating. Who knows? Maybe it was both. Either way, his catchphrase sums up well a trend that I have noticed in my 10+ years in data processing and market research.
Market research has always, as far as I can tell, been a fast-paced business. That is especially true for those working in data processing, as computing power and data-processing techniques improve. The time from questionnaire sign-off to CATI or e-survey setup seems to shrink, as does the time between the end of fieldwork and the delivery of cross tabs. Everything has to be as thoroughly checked and as accurate as before; it just needs to be delivered that little bit quicker. Over the past 10 years this trend has been undeniable.
Ever since I joined B2B International, I’ve looked at ways to improve the turnaround times in data processing that allow us to keep meeting our clients’ expectations. There are many ways of doing this. For example, we can invest in new software, as we have recently with Confirmit for our e-surveys and QPS Insight to allow us to carry out data entry more efficiently. We can also use the software we currently have more wisely, such as writing VBA scripts in Excel to automate certain tasks. Whatever the method we use to meet this challenge, it is a challenge I enjoy.
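To give a flavour of the kind of task automation described above – sketched here in Python rather than VBA, with invented folder names and a purely illustrative clean-up rule – a short script can apply the same routine tidy-up to every raw export in a folder:

```python
# A minimal sketch of routine clean-up automation (Python rather than VBA).
# Folder names and the clean-up rule are illustrative, not our actual setup.
import csv
from pathlib import Path

def clean_value(value: str) -> str:
    """Trim whitespace and label blank cells consistently."""
    value = value.strip()
    return value if value else "NOT ANSWERED"

def clean_export(source: Path, target: Path) -> None:
    """Read a raw survey export and write a tidied copy."""
    with source.open(newline="") as infile, target.open("w", newline="") as outfile:
        writer = csv.writer(outfile)
        for row in csv.reader(infile):
            writer.writerow([clean_value(cell) for cell in row])

# Apply the same routine to every export in the (hypothetical) raw_exports folder.
Path("cleaned").mkdir(exist_ok=True)
for raw_file in Path("raw_exports").glob("*.csv"):
    clean_export(raw_file, Path("cleaned") / raw_file.name)
```

The point is not the particular rule, but that a repetitive job done once in code is done the same way every time, which helps speed and consistency in equal measure.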
Having said that, I often ask myself whether there will come a point when people working in our industry have to start saying no more often. Will the time come when the speed of delivery starts to have a detrimental effect on the all-round quality of the research carried out, and at what point does the trade-off between speed and quality stop being acceptable? At B2B International we’re well placed to deal with the challenges that market research throws at us. A combination of a strong team, smart thinking, and a constant drive to improve the way we work means we haven’t yet reached the point where “no” is something we have to say very often – and long may that continue!
David Ward this week uses his data processing expertise to show us how we can spot and weed out ‘rogue respondents’ to get the most reliable and valuable data from our online surveys.
The internet is a very useful tool in market research. We can reach a much wider and larger audience than with traditional pen-and-paper or CATI interviewing (spam filters allowing). We can make the interviewing experience more visual, route respondents to the questions relevant to them, view results in real time before the tabulation process has begun, and program logic checks into the survey to catch errors before the analysis stage…the list of positives goes on. Of course, as with any approach, there are pros and cons to choosing a particular method of interviewing.
On our travels on the internet we’re never too far away from someone wanting to collect our opinions about this and that. For example, when I was looking online for a new car recently, nearly every website I went to had some sort of pop-up window asking me to take part in a survey and, more often than not, some incentive was provided to entice me to spend my time completing the survey. Personally I’ve never been tempted to complete one of these surveys appearing in pop-up windows, but that’s just me. However, what is to stop someone seeing the incentive, thinking ‘I’d quite like a new iPod’, and just randomly clicking through the survey in double-quick time with no thought to their responses? For the respondent, the incentive is there to get the survey completed and have a chance of winning the prize, but not necessarily to give each question the thought required. There isn’t a lot we can do to stop this happening; however, there are things we can do after the data is collected to spot these rogue respondents.
The scope for logic checks on online surveys would be vast if you took it to the nth degree. We might lose a large number of respondents if we did, though – through frustration at constantly having their responses questioned – so there is a balance to be found. As Head of Data Processing at B2B International, what steps can I take to ensure the quality of our data? There are no guarantees, but we can take steps when setting up online surveys and reviewing the data to highlight suspect records. As I have said, we can program logic checks into our online surveys to make sure the survey is filled in correctly. We can make sure numbers add up to 100% where needed, or that respondents select the correct number of items from a list.
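As a rough illustration – the question names, limits and values below are invented – those two checks might look something like this:

```python
# Illustrative logic checks; the question names and limits are made up.
def sums_to_target(percentages, target=100):
    """A constant-sum question should add up to the target (e.g. 100%)."""
    return sum(percentages) == target

def selection_count_ok(selected, minimum=1, maximum=3):
    """A multi-select question should contain an allowed number of choices."""
    return minimum <= len(selected) <= maximum

respondent = {
    "q5_budget_split": [60, 30, 10],          # should sum to 100
    "q7_top_suppliers": ["Acme", "Globex"],   # pick between 1 and 3
}

print(sums_to_target(respondent["q5_budget_split"]))       # True
print(selection_count_ok(respondent["q7_top_suppliers"]))  # True
```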
One of the easiest ways to catch potentially bad records in the data is to quickly check the time stamp for each interview. Did someone manage to complete a 20-minute interview in 5 minutes? This would suggest someone has just clicked through the survey with a click-happy mouse finger, without giving due consideration to their answers.
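A simple sketch of that check, assuming a recorded completion time per respondent and treating anything under an arbitrary fraction of the intended interview length as suspect:

```python
# Flag interviews completed implausibly quickly; the thresholds are illustrative.
EXPECTED_MINUTES = 20
MINIMUM_FRACTION = 0.4   # e.g. anything under 8 minutes for a 20-minute survey

def too_fast(duration_minutes: float) -> bool:
    """Return True if the interview was completed suspiciously quickly."""
    return duration_minutes < EXPECTED_MINUTES * MINIMUM_FRACTION

durations = {"resp_001": 19.5, "resp_002": 5.0, "resp_003": 22.0}
speeders = [rid for rid, minutes in durations.items() if too_fast(minutes)]
print(speeders)   # ['resp_002']
```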
Another telltale sign to look out for is something termed “straightlining”. Has the respondent gone through a grid or battery of questions and given the same response every time?
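Spotting it is straightforward once the grid responses are to hand; a minimal sketch, with an invented six-statement grid:

```python
# Flag respondents who give the identical answer to every statement in a grid.
def is_straightliner(grid_ratings) -> bool:
    """Return True if every rating in the grid is the same."""
    return len(grid_ratings) > 1 and len(set(grid_ratings)) == 1

print(is_straightliner([4, 4, 4, 4, 4, 4]))   # True
print(is_straightliner([4, 2, 5, 3, 4, 1]))   # False
```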
We can also look for inconsistencies in the logic of the answers and for unlikely values in numeric questions. Part of this can be done during the survey and more checks can be run once the data have been collected.
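For example – and the question names and limits here are purely illustrative – a numeric answer can be tested against a plausible range, and two related answers can be tested against each other:

```python
# Illustrative range and consistency checks; names and limits are invented.
def in_plausible_range(value, low, high) -> bool:
    """An unlikely numeric value (e.g. millions of employees) fails this check."""
    return low <= value <= high

def spend_is_consistent(total_category_spend, spend_with_us) -> bool:
    """Spend with one supplier cannot logically exceed total category spend."""
    return spend_with_us <= total_category_spend

print(in_plausible_range(250, low=1, high=500_000))   # employee count: plausible
print(spend_is_consistent(total_category_spend=50_000, spend_with_us=80_000))  # False
```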
We could also add in questions purely for verification purposes, to allow us to judge whether or not the respondent is actually reading the questions. For example, in a grid of questions we could add one that simply says “for verification purposes please answer strongly agree”. Along the same lines, we could use data from any panels we have purchased and ask respondents to verify certain details. Comparisons can then be made between the original panel data and the data we collect online, and differences between the two could be viewed as suspect.
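A sketch of the first of those ideas, using an invented identifier for the verification question:

```python
# Check an attention-trap question of the kind described above
# ("for verification purposes please answer strongly agree").
TRAP_QUESTION = "q12_verification"    # hypothetical question identifier
EXPECTED_ANSWER = "Strongly agree"

def failed_trap(answers: dict) -> bool:
    """Return True if the respondent missed the instructed answer."""
    return answers.get(TRAP_QUESTION) != EXPECTED_ANSWER

print(failed_trap({"q12_verification": "Strongly agree"}))   # False
print(failed_trap({"q12_verification": "Agree"}))            # True
```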
Finally, we can look at responses to open questions. We can check that fields do not contain random characters, or single character answers. If we find this, can we be sure we can trust the other answers given?
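A few crude rules catch the most obvious offenders; a minimal sketch (the rules below are illustrative and deliberately conservative):

```python
# Flag open-ended answers that look like keyboard mashing: empty or one-character
# answers, a single repeated character, or strings containing no letters at all.
def suspect_open_end(text: str) -> bool:
    cleaned = text.strip()
    if len(cleaned) <= 1:
        return True
    if len(set(cleaned.lower())) == 1:            # e.g. "aaaaaa"
        return True
    if not any(ch.isalpha() for ch in cleaned):   # e.g. "123!!!"
        return True
    return False

print(suspect_open_end("Good delivery times but pricing is high"))  # False
print(suspect_open_end("x"))      # True
print(suspect_open_end("!!!!"))   # True
```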
Failing one of these checks does not necessarily mean the data is not to be trusted, but failing two or more may be grounds for removing that respondent. Perhaps the strongest single guide is the time taken to complete the survey, but whichever method or combination of methods is used, having the checks in place gives us added confidence in the findings we present to our clients.
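Pulling the individual flags together might look something like this, with invented respondent records and a threshold of two or more flags:

```python
# Combine the individual checks; one failed check alone is rarely conclusive.
def count_flags(record: dict) -> int:
    flags = 0
    flags += record["minutes_taken"] < 8        # sped through the survey
    flags += record["straightlined_grid"]       # same answer to every statement
    flags += record["failed_trap_question"]     # missed the verification item
    flags += record["suspect_open_ends"] > 0    # gibberish verbatims
    return flags

respondents = [
    {"id": "resp_002", "minutes_taken": 5, "straightlined_grid": True,
     "failed_trap_question": False, "suspect_open_ends": 2},
    {"id": "resp_003", "minutes_taken": 22, "straightlined_grid": False,
     "failed_trap_question": True, "suspect_open_ends": 0},
]

flagged_for_removal = [r["id"] for r in respondents if count_flags(r) >= 2]
print(flagged_for_removal)   # ['resp_002']
```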
I’m not sure we will ever be able to completely stop respondents clicking through an online survey and giving responses that are illogical, of poor quality and clearly not much use to market research. But knowing that there are telltale signs we can look out for, which can indicate a respondent we may need to exclude, is certainly reassuring.