Software start-ups almost never build a technical support team at the outset; the salespeople handle support and training at first.

As the company grows, it becomes obvious to everyone that the salespeople could sell more if they weren't interrupted so often by support and training duties. That's when the company needs me.

I know the tools and processes for starting technical support and professional services teams. I understand the software product lifecycle, and how customers' service needs change through those lifecycle stages.

The articles on this blog will cover the whole process of building and managing these services teams, across the full product and customer lifecycle.

I know a lot, but not everything. Ask me your tough questions. Challenge my assumptions. I look forward to learning from you.

I am available for contract work, if you want to talk to someone about the specifics of your situation.

-Randy Miller | william.randy.miller (at) gmail.com

Wednesday, October 27, 2010

Managing support quality

Question: How do you measure the quality of the work that your technical support team is doing?

Answer: I have experimented with quality measurements several times. There is a fundamental limitation to measuring quality: quality is the judgment of the customer, and the customer rarely cares to take the time to tell you how you did. Quality measurements almost always skew toward bad grades, because the customers who are unhappy are the ones most likely to take that time.

I define quality with three metrics:
a. Time to resolution
b. Accuracy
c. Customer experience

Time to resolution is measured as part of the standard metric set, which is discussed in a separate article.
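
As a rough illustration (not the standard metric set itself), here is a minimal sketch of how time to resolution could be computed from exported case records. The field names, timestamp format, and sample data are assumptions for the example, not what any particular helpdesk product exports.

    # Minimal sketch: mean and median time to resolution from exported case records.
    # The 'opened'/'closed' ISO-8601 fields are hypothetical; a real helpdesk export
    # (RightNow Web or otherwise) will have its own schema.
    from datetime import datetime
    from statistics import mean, median

    cases = [
        {"id": 101, "opened": "2010-10-01T09:15:00", "closed": "2010-10-01T11:45:00"},
        {"id": 102, "opened": "2010-10-01T10:00:00", "closed": "2010-10-04T16:30:00"},
        {"id": 103, "opened": "2010-10-02T08:20:00", "closed": "2010-10-02T08:50:00"},
    ]

    def hours_to_resolve(case):
        opened = datetime.fromisoformat(case["opened"])
        closed = datetime.fromisoformat(case["closed"])
        return (closed - opened).total_seconds() / 3600.0

    times = [hours_to_resolve(c) for c in cases]
    print(f"mean time to resolution:   {mean(times):.1f} hours")
    print(f"median time to resolution: {median(times):.1f} hours")

The same per-case numbers can then be bucketed by week or by CSR when you want to see where the averages are hiding problems.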

Accuracy and customer experience are measured with post-case follow-ups.  I have done those follow-ups in four different ways.
a. I had my helpdesk software (RightNow Web) send a ‘please tell us about your experience’ email with a short poll after each case was closed. The response rate was very low and the results were skewed severely to the bad.

b. For a time I randomly selected cases closed the previous day and called those customers to ask about their experience. It was time-consuming. The answers indicated we were doing a great job on 90% of cases and floundering badly on the other 10%.

c. I developed a short questionnaire and assigned each CSR the task of calling one customer each day. Each CSR picked one case at random from a report of all cases closed the previous day by CSRs other than themselves. The CSRs got bogged down fielding unrelated questions from the customers they contacted.

d. I engaged a small polling company to call every client who had at least one support case the previous month.  The 90% to 10% breakdown held up.  We were able to get (just) enough data to analyze trends and find commonalities in the 10% failures, and we began a series of process improvements to fix those problems.
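
A roll-up like the one described above can be sketched in a few lines. The 1-to-5 score scale, the free-text 'issue' field, and the satisfaction cutoff below are assumptions made up for the illustration, not the polling company's actual format.

    # Sketch: splitting poll responses into good/bad experiences and tallying
    # the complaints reported by the unhappy customers.
    # The 1-5 'score' and free-text 'issue' fields are hypothetical.
    from collections import Counter

    responses = [
        {"case_id": 201, "score": 5, "issue": None},
        {"case_id": 202, "score": 4, "issue": None},
        {"case_id": 203, "score": 2, "issue": "slow response"},
        {"case_id": 204, "score": 1, "issue": "wrong answer"},
        {"case_id": 205, "score": 5, "issue": None},
    ]

    SATISFIED = 4  # assumed cutoff: a score of 4 or 5 counts as a good experience

    unhappy = [r for r in responses if r["score"] < SATISFIED]
    pct_good = 100.0 * (len(responses) - len(unhappy)) / len(responses)
    print(f"good experiences: {pct_good:.0f}%")

    # Commonalities among the failures: which complaints come up most often?
    for issue, count in Counter(r["issue"] for r in unhappy).most_common():
        print(f"{count:3d}  {issue}")

The same tally, run month over month against the baseline, is what tells you whether a process change actually helped.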

My goal with each of these processes was to develop a performance baseline and then to work towards improvement. That meant I had to keep the same measurement process running through each process change, so the before-and-after numbers stayed comparable.
