
From Good To Great – Best Practices In Benchmarking

In most b2b markets, customers source from multiple suppliers and inevitably draw comparisons between providers. It is dangerous for b2b brands to become complacent about their performance, even when scores are high: there is always the risk that another supplier breaks the loyalty or inertia by offering a solution that is better in some way.

The management consultant Peter Drucker argued that if something can’t be measured, it can’t be managed. We would take this a step further and argue that if you can’t benchmark your brand, you can’t better it. With this in mind, we have identified 8 tips and insights on how to better your brand through benchmarking, drawing on best practices and insights from our global databank of several hundred b2b brands.

  1. Track the trends: Tracking surveys such as customer satisfaction and brand trackers capture brand performance on a recurring basis. The value of tracking research comes from monitoring the impact of investments on KPIs such as brand awareness, satisfaction or advocacy. Determining appropriate sample sizes is critical in this type of research, so that wave-on-wave differences reflect genuine change rather than the margin of error of too small a sample.
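
    As a rough illustration, the sketch below estimates the margin of error around a tracked metric, assuming a simple random sample and a 95% confidence level; the 40% awareness figure and the sample sizes are hypothetical.

    ```python
    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """Approximate 95% margin of error for a proportion from a simple random sample."""
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical tracking waves: 40% brand awareness measured on two sample sizes.
    for n in (100, 400):
        print(f"n={n}: 40% +/- {margin_of_error(0.40, n):.1%}")
    # n=100 -> +/- 9.6%, n=400 -> +/- 4.8%: wave-on-wave movements smaller than this
    # are indistinguishable from sampling noise.
    ```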

  2. Benchmark against best-in-class: Richer insights come from comparisons with competing brands, especially versus the “best-in-class” brand. Although this sets the bar in terms of the brand to beat, it doesn’t necessarily reflect the best possible performance: if every brand in the category is mediocre, the best-in-class brand is simply the “best of a bad bunch”.

  3. Compare apples to apples: It is advisable to compare performance against more direct competitors addressing the same target audience – for example, Toyota benchmarking its performance against Ford and Honda rather than against Mercedes or BMW, which might be best-in-class but arguably serve different markets. This is not to dismiss learning from more peripheral (or even unrelated) brands, however, as this is where transferable success could be identified.

  4. Outperform the norm: On average, a typical b2b brand has a brand efficiency score of 15%; in other words, of those aware of the brand, 15% are brand advocates. Brand efficiency is important because strong performance on this metric indicates that advertising and communications, as well as product and service delivery, are effective in driving delighted customers to recommend the brand. In order to overtake competitors, good brands should aim to convert around one-in-five of those aware into brand advocates; better brands would convert around one-in-four; the best brands would convert around one-in-three. This naturally becomes more difficult as awareness increases, because brands have to work harder to appeal to a wider audience in order to drive higher consideration, usage and therefore advocacy.
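
    As a minimal sketch of the calculation, with hypothetical survey counts, brand efficiency is simply advocates as a share of those aware:

    ```python
    def brand_efficiency(aware: int, advocates: int) -> float:
        """Share of respondents aware of the brand who actively advocate it."""
        return advocates / aware

    # Hypothetical counts: 600 respondents aware of the brand, 90 of them advocates.
    print(f"Brand efficiency: {brand_efficiency(aware=600, advocates=90):.0%}")  # 15%
    # Targets cited above: good ~20% (one-in-five), better ~25%, best ~33% (one-in-three).
    ```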

  5. Anchor with the industry average: For a more representative view of a market and to set more realistic goals, industry norms are recommended: they provide relevant context against similar brands and prevent results being skewed by outlier brands. Brands in education, construction and engineering, for example, typically enjoy strong NPS results. Compare this with the very weak advocacy scores in the energy sector, where the product is seen as undifferentiated (beyond price) and the customer is locked in (as opposed to loyal). Benchmarking against just the overall b2b NPS average of 24 therefore doesn’t take these industry variations into account.
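
    For reference, the sketch below applies the standard NPS definition (promoters rate 9 or 10 on likelihood to recommend, detractors 0 to 6); the rating distribution is hypothetical and chosen to land on the overall b2b average of 24 cited above.

    ```python
    from collections import Counter

    def net_promoter_score(ratings: list[int]) -> float:
        """Standard NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
        counts = Counter(ratings)
        promoters = sum(counts[r] for r in (9, 10)) / len(ratings)
        detractors = sum(counts[r] for r in range(7)) / len(ratings)
        return 100 * (promoters - detractors)

    # Hypothetical sample of 100 likelihood-to-recommend ratings.
    ratings = [10] * 30 + [9] * 14 + [8] * 20 + [7] * 16 + [6] * 8 + [5] * 6 + [3] * 4 + [1] * 2
    print(f"NPS: {net_promoter_score(ratings):.0f}")  # 44% promoters - 20% detractors = 24
    ```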

  6. Avoid certain country comparisons: In international research, it is necessary to benchmark by geographic region (often by country), recognizing that there are usually cultural norms influencing how people rate brands. For example, respondents in emerging markets tend to give much higher scores on likelihood to recommend (the advocacy question used for calculating the NPS). The highest NPS results for b2b brands are typically found in countries like Mexico, Turkey, Brazil, Taiwan, Russia and India (ranging from the high 60s to the high 40s). The least enthusiastic scorers are in developed markets, particularly Canada, Belgium and Japan (where the average NPS for b2b brands ranges from 6 to 9). Without recognizing these cultural differences, a comparison of brand performance in Mexico versus Japan, for instance, could falsely conclude that the brand is performing better in Mexico than in Japan. Stick to comparisons within the same country, or compare within the geographic region where appropriate.

  7. Accept that metrics reflect different – and sometimes unrelated – aspects of performance: Our proprietary value metric, the Net Value Score (NVS), measures how well a brand delivers benefits relative to the price charged (i.e. its value). Positive performance has been linked to growth in market share, and this metric indicates the actions required to increase value, e.g. strengthening the customer value proposition, increasing the resonance of the benefits, or adjusting prices. High perceived value does not equate to strong advocacy, and vice versa. For example, the fibers and fabrics sector performs well on NPS but poorly on NVS, probably because it’s easier to deliver a strong customer experience in this sector but challenging to differentiate what is essentially a commodity product. In order to grow, brands in the fibers and fabrics industry therefore need to build their value propositions, as increasing advocacy is not enough.

    Interestingly, brands in APAC score very highly on value perceptions, despite moderate scores on NPS. APAC is a traditionally cost-focused region, and brands there typically place more importance on price than on building brand advocacy. This may explain why value perceptions are higher in APAC: audiences there may believe they are getting a better deal. It also suggests that cultural idiosyncrasies need to be recognized when explaining differences in metrics, and that an improvement to one KPI won’t necessarily benefit another.

  8. Raise the bar: According to McKinsey, improving a customer experience from average to exceptional (where the customer is wowed in some way) can lead to a 30 to 50 percent increase in likelihood to purchase another product. In other words, raising the bar clearly impacts the bottom line and so brands must continually seek to improve the KPIs measured in market research.

    Bear in mind that the higher the score, the more difficult it is to make improvements. For example, it is relatively easy to obtain a satisfaction score of 7 out of 10, but difficult to improve satisfaction above this score. Incremental increases as small as +0.2 are considered noteworthy for scores above 7 out of 10.
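
    Whether such a small movement reflects real change or just sampling noise depends on the sample size; the sketch below is a rough two-sample check, with hypothetical wave sizes and standard deviations.

    ```python
    import math

    def change_exceeds_noise(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
        """Rough two-sample z-test: is the wave-on-wave change larger than sampling noise?"""
        threshold = z * math.sqrt(sd1**2 / n1 + sd2**2 / n2)
        return abs(mean2 - mean1) > threshold, threshold

    # Hypothetical tracker waves: mean satisfaction moves from 7.6 to 7.8 out of 10 (sd ~1.5).
    significant, threshold = change_exceeds_noise(7.6, 1.5, 600, 7.8, 1.5, 600)
    print(f"+0.2 vs a noise threshold of {threshold:.2f}: "
          f"{'likely a real change' if significant else 'within sampling noise'}")
    ```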

    Tracking research provides a means through which targets can be set, and the goals depend entirely on the resources available to address the required improvements. A single survey alone may not deliver all the answers on the actions needed. For example, low advocacy scores might require a deeper dive to understand whether improvements are needed to the product or to some other variable, such as availability to purchase.

    It is nevertheless feasible to set targets based on an annual improvement (e.g. 5% per annum) or on absolute goals (e.g. an NPS of 30 to be reached as quickly as possible). The bar should be set fairly high to encourage teams to strive towards excellence, but not so high that goals are impossible to meet.