Measuring customer satisfaction is a delicate affair for B2B companies. I recently wrote about three ways to measure it.
One of the options I covered was online surveys. B2B companies like them because they can be inexpensive. That's true, although companies have to be careful that an online survey makes sense for them (i.e., is it an appropriate way to measure satisfaction given their industry and client base, and can they get useful information?). For example, a law firm that works with corporate executives and charges millions of dollars for its services shouldn't use an online survey to measure client satisfaction. Likewise, a company that has just five clients shouldn't use one either.
But there are many B2B companies that do use online surveys. I see more companies using them as a systematic way to measure customer satisfaction (CSAT) and benchmark performance. These surveys are appealing because they provide quantitative data on specific issues. They can be deployed consistently year over year to measure progress. And standard questions can be benchmarked across divisions, and even against other companies, to give leaders a sense of how their organization is doing relative to others.
While CSAT assessments can be a good way for B2B companies to know what they've done well and what they can work on, companies have to be careful about how they do their assessment. An assessment done poorly is worse than no assessment at all.
There are two big risks for B2B companies in doing a CSAT assessment.
The first is doing an inappropriate assessment that nets useless or misleading results. Even worse is when those results are used to make policy changes to customer service. CSAT is an area where the old adage "measure twice, cut once" applies. Refer to my previous post on CSAT assessment for B2B companies for a few guidelines on which approach makes sense for you.
The second risk of doing a CSAT assessment is doing nothing with the results. Companies do this all the time and I have to shake my head. Why go through all the hassle and expense of doing the assessment if you aren't going to make any changes? And even worse—why raise expectations among customers that changes may be coming if no changes will be made?
This mistake has been made by so many companies over such a long period of time that customers are now very skeptical. Many don't bother participating in CSAT assessments because they assume nothing will be done with the results. One interesting question that more companies are asking as part of their CSAT process is, "Do you expect changes will occur as a result of this assessment process?"
What a great question. It cuts to the heart of the issue and tells firms whether or not they have a reputation for responding to customer needs. The results on this question can skew high, because those who do participate in the CSAT are more likely to expect change than those who don't, but it's still a very compelling question. It also places a clear burden of responsibility on the company, and on those who commission the CSAT, to actually do something with the results.
So if you're considering doing a CSAT assessment, think about these two risks and how you're going to avoid their pitfalls. Make sure your CSAT will deliver meaningful results for your customers and your organization for many years to come.
Lisa Shepherd is author of Market Smart: How to Gain Customers and Increase Profits with B2B Marketing and president of The Mezzanine Group, a business-to-business strategy and marketing company based in Toronto. She was the youngest female CEO of a PROFIT 200 company in 2007 and 2008 and is a frequent public speaker on B2B marketing strategy and execution.