It’s worth spending 10% of your ad budget on research if it makes the other 90% more effective.
A manufacturer of barbed wire and field fencing once faced a new season with warehouses full of products. The options were fairly restricted since:
- (a) all fencing is made to a British Standard and there is little difference in quality or design between the products of the different manufacturers;
- (b) the objective is to sell in as much as possible to the key distributors (who sell on to small agricultural merchants who in turn supply the farmer).
Our manufacturer could increase distributors’ discounts, but this might provoke retaliation from competitors and thus lower profits. He could step up his sales efforts, though in practice the team might already have been doing all it could, and excessive pressure in this genteel market might boomerang.
The manufacturer finally decided to differentiate the barbed wire by a promotion which featured a prize for the farmer who best completed a simple competition; there was another prize for the merchant who sold the fencing to the lucky winner. Access to the competition could be gained only by enclosing a label from one of the fencing products along with the entry form. The advertising agency’s views were sought, and it checked out the idea with Farmers Weekly. There appeared to be no doubt about it: farmers are suckers for competitions. The idea had to be a good one.
The promotion was launched with cut-out entry forms in Farmers Weekly and loose forms available in the agricultural merchants. Sales to distributors rocketed – at last they had something different to sell. The agricultural merchants sold the fencing out to farmers, and without doubt the campaign was a great success. It had achieved the objective of building up market share.
Taking A Step Back – Using Research To Diagnose Campaign Problems
Unfortunately the story doesn’t end there. The ads had worked, but could they have worked harder? The company’s advertising manager certainly thought so: fewer than 100 farmers actually entered the competition – yet Farmers Weekly sells over 100,000 copies a week. And a few weeks earlier it had carried a competition which, with a smaller budget, had generated thousands of entries.
In an attempt to get to the bottom of the mystery, market research was commissioned. The findings were revealing. The prize for the successful competitor was a Mediterranean holiday for two. Farmers, with a year-round workload, saw the prize as a disincentive and did not enter. They had, nevertheless, bought the fencing because it was easily available from the merchants, and their awareness of the fencing manufacturer and its brands had been raised. They had at least read the advertising copy.
Even so, the increased market share that had been gained clearly owed more to push by the distributors, who had stocked up, than to the pull of direct demand by farmers. A more attractive prize would have yielded more entries and hence a still larger increase in sales and market share.
With hindsight the reason for the competition’s failure to draw a satisfactory response appears obvious. At the time the choice of a prize was being considered, however, it seemed a good idea to offer the farmer a chance to get away from it all. None of the many experts involved in designing the campaign was aware of the pitfall because at no stage did any of them consult the farmer.
The Return On Investment Of Advertising Research
Suppliers of industrial goods and services are notoriously sceptical as to the value of researching their advertising. Indeed, for a small industrial company with a budget of between £20,000 and £40,000 per annum the cost of research must seem daunting. What is sometimes forgotten is that the advertising budget recurs annually, whereas the ad research budget need not. It can make sense to consider the ratio of ad research to ad expenditure on a five year rather than an annual scale.
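The arithmetic behind the five-year view can be sketched quickly. The figures below are purely illustrative (a £30,000 annual ad budget and a one-off £10,000 study are assumptions, not numbers from the text):

```python
# Illustrative figures only (assumed): a mid-range £30,000 annual ad
# budget and a one-off £10,000 research study.
ad_budget_per_year = 30_000
research_cost = 10_000  # spent once, not annually

# Against a single year's budget the research looks daunting...
one_year_ratio = research_cost / ad_budget_per_year
# ...but against five years of recurring ad spend it is modest.
five_year_ratio = research_cost / (ad_budget_per_year * 5)

print(f"{one_year_ratio:.0%}")   # → 33%
print(f"{five_year_ratio:.1%}")  # → 6.7%
```

On this (assumed) arithmetic, a one-off study that looks like a third of one year's budget is well under the 10% threshold suggested at the outset once spread over five years.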
In practical terms, what can an industrial marketer do to ensure that his advertising is as effective as possible? The first task is to set quantitative objectives. This may sound obvious, and yet it is seldom done. A simple example is to set a target of a certain number of enquiries from a campaign. Measurement is easy, and the effectiveness of ads can be established from variations between journals and between responses at different times of the year. The target for the response set in advance of the campaign is the criterion for judging it.
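The journal-by-journal comparison described above amounts to a simple cost-per-enquiry calculation. A minimal sketch, with hypothetical journal names and figures (none of these numbers come from the text):

```python
# Hypothetical campaign figures (assumed) for two journals carrying
# the same ad, compared on cost per enquiry generated.
campaigns = {
    "Journal A": {"cost": 2_000, "enquiries": 80},
    "Journal B": {"cost": 3_500, "enquiries": 70},
}

for journal, c in campaigns.items():
    cost_per_enquiry = c["cost"] / c["enquiries"]
    print(f"{journal}: £{cost_per_enquiry:.0f} per enquiry")
# → Journal A: £25 per enquiry
# → Journal B: £50 per enquiry
```

Run against a pre-set enquiry target, a table like this makes it immediately clear which placements earned their keep.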
Many campaigns do not have as their principal objective the winning of enquiries. They may aim to improve or rectify a company image, increase awareness of a product, educate the user or effect a change in his attitudes. Quantitative objectives should still be set to ensure there are some criteria by which to judge the effectiveness of the advertising.
On most occasions this is easier said than done, as quantitative objectives can normally be set only by using past experience as the guideline. Furthermore, in order that the effectiveness of the campaign can be appraised it becomes necessary to measure image, attitudes, awareness or knowledge both before and after the campaign. The change that has taken place over the period is the measure of the success of the campaign.
The small samples which can provide adequate information in other areas of industrial marketing research (such as assessing market size, shares, etc.) are not sufficient in measuring advertising effectiveness. As one is comparing changes in the target market it is necessary to have a large enough data base to know that the shifts are significant and not due to chance. Two hundred pre-campaign and two hundred post-campaign interviews should be regarded as the minimum. The cost of such an exercise need not be large, as telephone interviewing using mailed out “clutters” of ads has proved a suitable and relatively inexpensive technique.
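The "significant and not due to chance" test above is, in statistical terms, a comparison of two proportions. A minimal sketch of such a check, using the 200-interview samples from the text but assumed awareness figures, via a standard two-proportion z-statistic (|z| above roughly 1.96 indicates significance at the 5% level):

```python
import math

def z_shift(pre_hits, post_hits, n_pre=200, n_post=200):
    """Two-proportion z-statistic for a pre/post awareness shift.

    pre_hits / post_hits: respondents aware of the brand in each wave.
    """
    p_pre = pre_hits / n_pre
    p_post = post_hits / n_post
    # Pooled proportion under the null hypothesis of no real change.
    pooled = (pre_hits + post_hits) / (n_pre + n_post)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_pre + 1 / n_post))
    return (p_post - p_pre) / se

# Assumed figures: 30% aware before the campaign, 45% after.
print(round(z_shift(60, 90), 2))  # → 3.1  (well above 1.96: a real shift)

# A 30% -> 35% shift on the same samples could easily be chance.
print(round(z_shift(60, 70), 2))  # → 1.07 (below 1.96: not significant)
```

This also shows why the small samples used elsewhere in industrial research will not do here: halving both samples inflates the standard error, and modest shifts disappear into the noise.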
Research need not be confined to pre and post advertising studies. It can identify the best sales platforms for the ads and which visuals are likely to be most effective. Such research is especially valuable as it greatly improves the chances that the forthcoming campaign will succeed. Clearly research into proposed ads is best carried out face to face. But unlike the measurement of changes in awareness caused by the advertising, copy testing can be done on a much smaller sample. Interviews can be in-depth with individuals or, should circumstances permit, in the form of a group discussion.