Archive for the ‘Kyle Cockett’ Category
B2B International, which has chosen Manchester’s The Christie as its nominated charity of the year, has beaten its annual fundraising target of £1,000 in less than six months.
The business-to-business market research specialist – which, in addition to its Manchester base, has offices in London and in locations across mainland Europe, North America and Asia – now hopes to double its initial target.
Fundraising activities by the team have included helping at the Manchester United bucket collection at the club's final home game of the season; Research Executive Kyle Cockett completing the Manchester to Blackpool bike ride; and various cake-bakes, book-swaps and a company raffle.
B2B International Marketing Manager, Caroline Harrison, says, “We are thrilled to have reached our target so far ahead of schedule and are more committed than ever now in continuing to raise funds for this worthy cause. Many staff and their families have been touched by cancer and this effort shows the support to the charity.”
Natalie Pike, Corporate Fundraising Officer, from The Christie added, “We are delighted that B2B International nominated The Christie as its chosen charity this year and are grateful to both them and all our other fundraisers for every penny they raise.”
Find out more about the work The Christie does at: http://www.christie.nhs.uk/
Following the recent stock market flotation of Facebook, Kyle Cockett this week discusses the potential implications for the market research industry.
From a quick glance at some of the recent technology acquisitions reported in the press, it would be understandable to believe that we find ourselves on the cusp of a second dot-com boom. Microsoft has recently purchased the enterprise social networking service Yammer for $1.2 billion, while Facebook acquired the free photo-sharing application Instagram for the princely sum of $1 billion. In addition, Facebook recently had a well-publicised initial flotation on the NASDAQ stock exchange. The offering price of $38 per share valued the company at approximately $104 billion, and valued the stake of the company's founder, Mark Zuckerberg, at around $19 billion.
None of the acquired ventures listed above has a formal business model in place, and such high valuations are likely to leave some financial analysts scratching their heads. After all, the original dot-com boom start-ups, such as eBay, at least had some underlying plan for revenue generation. The acquisition of Instagram in particular raises questions. On paper, the application is merely a platform for users to share retro, Polaroid-styled photographs with friends or followers. It is likely that the sole objective of the purchase is to direct Instagram's loyal, highly engaged users towards Facebook's photo-sharing services. On the surface, any concerns about the lack of a proven business model do not appear to trouble Zuckerberg, who at the time of Facebook's flotation stated:
However, the public flotation of the company is likely to generate extra pressure to increase advertising revenues. Zuckerberg is likely to find himself between a rock and a hard place – torn between fulfilling his ambition of building something that makes a 'big change' to the everyday lives of Facebook users, and generating the advertising revenues that shareholders will demand, undoubtedly angering core Facebook users in the process.
On the face of it, things appear rosy in terms of revenue generation. Facebook's advertising revenues reached a record high of $3.1 billion in 2011, a 69% increase on the previous year. However, while these figures may appear impressive, Facebook's revenues are floundering in comparison with its nearest competitor. Google has half the page views of Facebook but generates ten times the revenue – roughly twenty times more revenue per page view. One of the main reasons for this gap is that Google users often have purchasing intent: they are frequently looking for products to buy when they type terms into a search engine. Facebook users, by comparison, are more likely to want to chat with friends or share media, without the distraction of display advertising. As a result, question marks have been raised over the effectiveness of Facebook advertising. General Motors recently cancelled a $10 million deal after deciding that paid Facebook advertising had little impact on its revenues.
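That twenty-fold gap can be derived from the two ratios alone. A quick back-of-the-envelope sketch, with Facebook's figures normalised to 1 rather than taken from any published accounts:

```python
# Back-of-the-envelope comparison of revenue per page view, using only the
# two ratios quoted above; the absolute values are normalised placeholders.
facebook_views = 1.0                     # normalise Facebook's page views
facebook_revenue = 1.0                   # normalise Facebook's ad revenue

google_views = facebook_views / 2        # "half the page views of Facebook"
google_revenue = facebook_revenue * 10   # "ten times more in revenue terms"

ratio = (google_revenue / google_views) / (facebook_revenue / facebook_views)
print(f"Google earns roughly {ratio:.0f}x more revenue per page view")  # ~20x
```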
As things currently stand, it is hard to envisage how Facebook will ever be able to remodel its offering so that its users show the same purchasing intent as Google's. It is therefore feasible that Facebook may now turn to alternative revenue streams to satisfy the thirst for shareholder dividends. While the purchase of Instagram generated the most hoopla in the business press, Facebook has also quietly acquired application developers such as Tagtile and the facial recognition software developer face.com. In terms of publicity and headlines, these purchases have crept under the radar, yet they represent the biggest hint at the future direction of the company. Tagtile is of particular interest – it is a customer loyalty system offering rewards to those who 'tap' their smartphones in store. It will add yet another layer of rich location and purchasing data to Facebook's user database.
So, what are the implications of these recent developments for the market research industry? Examining Facebook's current terms of service, two points stand out – data may be used to:
From these terms, we can conclude that Facebook is already being employed for promotional research purposes. Once Tagtile and other applications are fully integrated with user data, Facebook will be able to provide an even clearer picture of who views promotions, how many follow through with purchases, and where they buy. One of my recent blog posts covered the powerful benefits of harnessing 'big data' sources; Facebook has the potential to build the biggest, most powerful data source of all. Is it conceivable that, with a slight amendment to its privacy terms, Facebook could create a new revenue stream by selling this data to external agencies? Internal Facebook sources have already dropped hints about the company's potential as a giant worldwide consumer panel.
At present, the market research industry is only just beginning to face up to the use of social media research extraction tools – the debate still rages as to what is truly 'public' when shared through open-access social media spheres. Without doubt, research agencies would face similar ethical dilemmas should Facebook begin to open up user data – assuming, of course, that agencies aren't bypassed completely should this development happen. There are also likely to be question marks over the usability of such data: do the everyday actions of consumers reflect their adopted online personas?
I would be interested to hear the opinions of any fellow researchers or readers.
This week, Kyle Cockett takes a look at the growing trend of ‘big data’ in the research industry, and the potential future implications.
During my time as a Research Executive, I have found myself working with a growing number of data sources on quantitative research projects. With a growing appetite for actionable insights, I increasingly work in partnership with clients to source internal data that helps to provide quantifiable and conclusive findings. Harnessing such data is becoming progressively easier as a growing number of clients have well-managed CRM systems in place. Coupled with the quantity of data shared on social media – the ongoing ethical debate notwithstanding – this means there is often an abundance of data to examine and analyse during the reporting stage of projects.
The ever-expanding size of datasets is not a new phenomenon – datasets characterised by large amounts of complex data from disparate sources have existed for many years. Tesco is often cited as one of the forerunners of large-scale data mining with its Clubcard scheme, which gathers data on the purchasing habits of millions of customers. Despite this, only recently has the term 'big data' risen to prominence within the industry to describe such datasets, perhaps prompted by the ever-increasing number of data sources – social media, smartphones and blogs are just a few examples of relatively new data streams. Google Trends reveals that use of the term 'big data' has grown exponentially over the past few years – and it is expected to grow further still. Ray Poynter, of Vision Critical, positions big data as the 'one big trend' at the forefront of the market research industry, ahead of twelve other strands of expected change. As Poynter indicates, this prediction is firmly backed by the latest industry reports:
This does not necessarily mean the end of traditional quantitative research techniques such as telephone surveys. However, findings from such surveys are expected increasingly to be used in conjunction with data from other sources – they will become one of many scores or metrics fed into the big dataset. This will not be an easy transition: many researchers currently work with data in the order of megabytes, while most big datasets run to terabytes or more. Poynter believes the move towards big data will be 'bumpy and a not altogether pleasant one for many market researchers'. If that is the case, what is the benefit of gathering such large datasets? According to a research report by the McKinsey Global Institute, big data is:
To conquer this frontier, it is essential that clients exploit the full potential of big data, which provides huge scope for actionable insight and predictive modelling. Through big data analytics it is possible to create models that predict changes in revenue, develop targeted customer value propositions, build advanced segmentations and identify where to focus resources among customers, among other possibilities. There are many examples of such data mining already present in retail – Tesco has a successful strategy of sending targeted discount vouchers to customers based on their typical basket of goods, while many online retailers offer recommendations based on customers' past purchase history. These personalised touches often give retailers the extra edge over their nearest rival. Nor are applications limited to customer satisfaction or retention – by developing bespoke offerings, businesses can capture customers' full willingness to pay by accurately targeting premium offerings at the appropriate segments. McKinsey estimates that a retailer using big data has the potential to increase operating margins by 60 per cent.
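To make the segmentation idea less abstract, here is a minimal sketch of the kind of basket-based clustering that sits behind such targeted vouchers, using k-means from scikit-learn. The customer metrics and segment labels are entirely hypothetical – an illustration of the technique, not a production model:

```python
# A minimal customer segmentation sketch: cluster shoppers on basic basket
# metrics, the building block behind targeted vouchers and recommendations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer metrics: annual spend (£), visits per year,
# average basket value (£)
customers = np.array([
    [5200, 104, 50],
    [450,   10, 45],
    [3100,  52, 60],
    [6100, 150, 41],
    [800,   12, 66],
    [900,   15, 60],
])

# Standardise so no single metric dominates the distance calculation
features = StandardScaler().fit_transform(customers)

# Group customers into three segments (e.g. frequent, occasional, lapsing)
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print(model.labels_)  # one segment label per customer
```

In practice the segments would be profiled and named before any targeting decision is made; the clustering itself is only the first step.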
While many of these examples are heavily focused on retail, big data analytics also has the potential to shift approaches in business-to-business markets. Though B2B companies cannot mine data from social media sources to the extent that consumer companies can, they often have well-developed CRM systems that can provide a wealth of information on their customers. Who knows – it may take only one extra measure to provide the insight that delivers a competitive edge.
To find out more about how we can extract insight from your big data, click here.
Kyle Cockett this week considers the growing popularity of behavioural economics in the research industry.
As an econometrician by training, I am naturally interested in the behavioural, cognitive and statistical biases found in research. Often, it is a deeper understanding of these underlying aspects of research that allows us to provide more explanatory and actionable findings to clients at the concluding stages of projects. But just how strong is our understanding of the underlying drivers of human decision-making?
At a recent research conference, I listened to Phyllis McFarlane, Managing Director of the international market research agency GfK NOP, speak of the growing need for a change of approach in research. The argument was clear: it is time to move away from the stale, outdated assumption of human beings as rational decision-makers. Indeed, we have learnt as researchers that the theory of rationality does not always hold – under certain circumstances, such as intense pressure or a lack of information, respondents can be prone to making unconscious, irrational decisions. With this in mind, can we always expect to gain a full understanding of respondents from quantitative scaling questions alone? Is there a case for probing more deeply into the cognitive biases and psychological heuristics behind decision-making?
I recently read an article by Andrew McCormick of Brand Republic on the growing use of behavioural economics as a research and marketing tool. Behavioural economics, also known as 'nudge theory' in the political arena, has long been mooted as an alternative to the neoclassical assumption of rational human behaviour. The theory is built around the central tenet of bounded rationality: the rationality of individuals is limited by the information available to them, the cognitive limits of their minds and the finite amount of time available to make decisions. In McCormick's words:
Some of the key heuristics behind behavioural economics include:
• Loss aversion – the tendency to prefer avoiding losses to acquiring gains. It implies that a respondent who loses £100 will lose more satisfaction than a person who finds £100 will gain; the sketch below makes this asymmetry concrete.
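Loss aversion has a standard formalisation in Kahneman and Tversky's prospect theory. The sketch below uses their published 1992 parameter estimates (α = β ≈ 0.88, λ ≈ 2.25) purely for illustration:

```python
# Prospect-theory value function: losses loom larger than equivalent gains.
# Parameters are Tversky & Kahneman's 1992 estimates, used for illustration.
def subjective_value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha             # diminishing sensitivity to gains
    return -lam * (-x) ** beta        # losses weighted ~2.25x more heavily

print(subjective_value(100))   # ~57.5: the felt gain from finding £100
print(subjective_value(-100))  # ~-129.4: the felt loss from losing £100
```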
Behavioural economics has long remained a 'fringe' discipline. However, according to McCormick, a growing number of brands are now seeking to harness it, even if the 'chasm between knowledge and implementation is still significant for many'. The research industry also appears to be taking note – Research Magazine has dedicated a new series of articles to the topic. McCormick cites many recent examples of brands employing behavioural economics, including Hyundai seeking to reduce consumers' fear of loss when purchasing new cars, and Lloyds Bank 'nudging' customers towards its money manager tool. Although these examples are based on consumer products, we often see evidence of irrationality in business-to-business markets too. Many projects for utility suppliers show clear evidence of status quo bias – respondents are quite happy to remain with the same supplier for long periods before they feel compelled to change.
So what does the rise of behavioural economics hold for the future? McCormick states a belief that:
My personal belief is that the same outcome is likely to apply to the research industry. At the fieldwork stage, there is perhaps a case for saying that well-constructed research has long been uncovering the behavioural economics underlying decisions, through projective techniques at the qualitative stage and derived measures at the quantitative stage. However, as research agencies such as ourselves at B2B International assume a more consultative role, it is crucial that we learn to inform and advise clients on how to give their customers those crucial 'nudges' at the concluding stages of projects.
In his first Thursday Night Insight, Kyle Cockett examines the dangers of taking statistics at face value.
Although I have been working in market research for a relatively short time, I have quickly realised the value of a well-executed piece of quantitative research. When correctly designed, using a valid, reliable sample, quantitative research can provide clients with strong, conclusive findings, often enhanced by inferential statistical techniques. Correlation analysis, CHAID analysis and factor analysis are all examples of techniques that can add value to the overall conclusion – for instance, by proving or disproving a prior hypothesis the client holds, such as 'is group x significantly more satisfied than group y?'
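As a simple illustration of that last hypothesis, the sketch below runs a two-sample (Welch) t-test on hypothetical 1–10 satisfaction scores – the data are invented for the example, not drawn from any project:

```python
# Testing "is group x significantly more satisfied than group y?" with a
# Welch two-sample t-test. The 1-10 satisfaction scores are hypothetical.
from scipy import stats

group_x = [8, 9, 7, 8, 9, 8, 7, 9, 8, 8]
group_y = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]

# One-sided alternative: group x scores higher than group y
t_stat, p_value = stats.ttest_ind(group_x, group_y, equal_var=False,
                                  alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Group x is significantly more satisfied at the 95% level")
```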
Almost every day we come across the findings of various types of research reported in the media, often on a range of weird and wonderful topics – but how many of us actually question the research method used and where the findings have come from?
Recently, a story made the national news concerning a Plymouth school teacher, Richard Gribble, who reported that his pupils were finding it increasingly difficult to concentrate in class. The headline read:
Children addicted to computer games ‘unfit for school’
The headline was augmented by a passage stating that 'games addicted children are missing meals, talking about computer games during lesson times, tired and show poor concentration according to new research'. Investigating further, I found that the same damning verdict on our nation's children was delivered by several other media outlets, with one reporting that 'primary pupils are falling asleep at their desks after playing computer games until 4am'. Only on delving deeper into the article did it emerge that the sample for the research was in fact Mr Gribble's own primary school class of 26 pupils – hardly a representative sample of the population, especially when the Office for National Statistics estimates there are around 11.5 million under-sixteens in the UK! Unfortunately, this point appeared to be lost on the majority of online commentators, for whom the power of the headline overshadowed the reliability and validity of the methodology used to arrive at the finding.
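It is easy to quantify just how shaky a sample of 26 is. Using the standard margin-of-error formula for a simple random sample – and generously assuming the class were a random sample at all, which it was not:

```python
# The 95% margin of error for a proportion estimated from n = 26 pupils,
# at the worst-case (maximum-variance) proportion of 50%.
import math

n = 26      # Mr Gribble's class size
p = 0.5     # worst-case proportion
z = 1.96    # z-score for 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"±{margin_of_error:.1%}")  # ±19.2% - any headline figure could be
                                  # almost 20 points out either way
```

A margin of nearly twenty percentage points makes any precise claim about the nation's children meaningless.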
Indeed, it seems that the quote often attributed to Albert Einstein – 'if the facts don't fit the theory, change the facts' – has become all too literal when it comes to general media reporting of research findings. In fact, the National Health Service has now dedicated the news section of its website to investigating and reporting on the statistical significance and causality behind many of the tenuous links reported in the media – can saucepans really cause early menopause? Can a pill cure your fear of heights? Is breastfeeding linked to school grades?
Michael Blastland, writing in his 'Go Figure' column for the BBC, perfectly sums up the problem of taking numbers at face value without reading between the lines. He describes a photograph, taken just prior to the general election, showing a well-known potato crisp company handing out free packets of crisps bearing the images of the three prospective prime ministers. At face value, the image implies that Nick Clegg is the most popular of the three candidates, based on the apparent popularity of his bags of crisps, depicted by the emptiest bin. We might be surprised by this but, at the same time, ask why the image would lie. Further thought, however, raises a number of competing explanations: has the Gordon Brown bin just been refilled? Are the crisps different flavours, with Nick Clegg's the most popular flavour? Which candidate was supplied with the most boxes of crisps initially?
I suppose the point I am trying to make is that anybody can take numbers at face value. Blastland himself professes that 'clever people – and newspapers and politicians – say outrageously daft things, often, with them and about them'. Without some form of added insight, figures in their own right are often of little value. Click here to find out how B2B International can use statistical techniques to add value to your research project.