Software start-ups almost never build a technical support team at the outset. At first, the salespeople handle support and training.

As the company grows, it becomes obvious to everyone that the salespeople could sell more if they weren't interrupted so often by support and training duties. That's when the company needs me.

I know the tools and processes for starting technical support and professional services teams. I understand the software product lifecycle and how customers' service needs change through its stages.

The articles on this blog will cover the full process of building and managing these services teams, across the product and customer lifecycle.

I know a lot, but not everything. Ask me your tough questions. Challenge my assumptions. I look forward to learning from you.

I am available for contract work, if you want to talk to someone about the specifics of your situation.

-Randy Miller | william.randy.miller (at) gmail.com

Friday, October 29, 2010

Technical support metrics

Question: What are the right metrics to measure for a technical support team?

Answer: Metrics are primarily a function of your helpdesk software, so think through your metrics needs before you select a package. In my experience, metrics are a weak point in most helpdesk packages. (I'm working on my own independent evaluations of all of the major packages, and I'll include metrics as a core component of each review.)

There are several components of performance that must be measured simultaneously, because when the open case backlog grows you need to be able to isolate the cause. For instance, if cases opened by email take longer, on average, to close than cases opened by phone, then you should not be surprised to see the backlog grow when the volume of email cases grows.

These are the components that I measure:
a. Communication method (phone, email, helpdesk, product-internal messaging, etc.)
b. Product & version (when we are supporting multiple products or versions)
c. Severity
d. Type (application bug, performance problem, usage question, feature request, etc.)
e. Assigned staff member

For each of these components I measure these values (a code sketch of this breakdown follows the list):
a. Number of items created per day
b. Number of items open at the end of the day
c. Ages of the open items at the end of the day
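
To make the breakdown concrete, here is a rough Python sketch of the kind of computation I have in mind. The Case record and its field names are my own illustration, not the schema of any particular helpdesk package.

```python
# A rough sketch of the breakdown described above. The Case record and its
# field names are illustrative assumptions, not any particular helpdesk schema.
from dataclasses import dataclass
from datetime import date
from collections import Counter, defaultdict
from typing import Optional

@dataclass
class Case:
    opened: date
    closed: Optional[date]   # None while the case is still open
    channel: str             # phone, email, helpdesk, product-internal, ...
    product: str
    severity: str
    case_type: str
    assignee: str

def daily_metrics(cases: list[Case], today: date, component: str):
    """For one component (e.g. 'channel' or 'severity'), return the number of
    cases created today, the number open at end of day, and the ages (in days)
    of the open cases."""
    created, open_count, ages = Counter(), Counter(), defaultdict(list)
    for c in cases:
        key = getattr(c, component)
        if c.opened == today:
            created[key] += 1
        if c.closed is None or c.closed > today:
            open_count[key] += 1
            ages[key].append((today - c.opened).days)
    return created, open_count, ages
```

Run it once per component (channel, product, severity, type, assignee) and a jump in the backlog can be traced to whichever dimension moved.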

My standard procedure is to run a single report about half an hour before the end of the normal workday. That report lists the open items, their ages, and each of those five components. Because I look at it daily, big problems stand out immediately. If the backlog is elevated, or if severe cases are aging too long, I dig in to understand why. I might ask some staff members to stay late, or stay late myself, to get us caught up.
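
A minimal sketch of that end-of-day listing, again with made-up field names, might look like this:

```python
# Sketch of the end-of-day open-item listing: age plus the five components,
# oldest items first. Field names are assumptions, not a specific product's schema.
from datetime import date

def end_of_day_report(open_cases: list[dict], today: date) -> None:
    rows = []
    for c in open_cases:
        age_days = (today - c["opened"]).days
        rows.append((age_days, c["channel"], c["product"], c["severity"],
                     c["type"], c["assignee"]))
    rows.sort(reverse=True)  # oldest (most worrying) items at the top
    print(f"{'AGE':>4}  {'CHANNEL':<10}{'PRODUCT':<14}{'SEVERITY':<10}{'TYPE':<18}ASSIGNEE")
    for age_days, channel, product, severity, case_type, assignee in rows:
        print(f"{age_days:>3}d  {channel:<10}{product:<14}{severity:<10}{case_type:<18}{assignee}")
```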

The team also separately monitors case age by severity so that it can escalate cases per the terms of our SLAs.
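
As a hypothetical example of that kind of check, with thresholds that are purely illustrative and not taken from any real SLA:

```python
# Hypothetical per-severity escalation thresholds; every number here is an
# illustration, not a recommendation or a real SLA.
from datetime import date

ESCALATE_AFTER_DAYS = {"critical": 1, "high": 3, "medium": 7, "low": 14}

def cases_to_escalate(open_cases: list[dict], today: date) -> list[dict]:
    """Return open cases whose age exceeds the escalation threshold for
    their severity."""
    return [c for c in open_cases
            if (today - c["opened"]).days > ESCALATE_AFTER_DAYS[c["severity"]]]
```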

I also tracked the amount of time we spent on support for each customer. There isn't much point in collecting this data until the support team's processes are working well and customers are overwhelmingly happy with the support they receive. But once you reach that point, this data can provide important insights into your profitability by customer.

At Journyx we evaluated the common characteristics of customers with high and low support burdens. We found a significant number of clients who cost us more to support than they paid for their annual maintenance contracts. We presented our findings to the rest of the company as profiles of our most and least profitable clients. Those profiles led to changes in pricing, product strategy, marketing focus, and sales focus, all of which helped Journyx attract more profitable customers and fewer unprofitable ones.
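
If you want to run the same analysis, the arithmetic is simple. In this sketch the $95-per-hour loaded support cost is an assumed figure for illustration, not a Journyx number:

```python
# Simplified per-customer profitability check. The loaded cost per support
# hour is an assumption for illustration; use your own fully loaded rate.
LOADED_COST_PER_HOUR = 95.0

def support_margin(customers: dict[str, dict]) -> list[tuple[str, float]]:
    """customers maps each customer name to its support hours and annual
    maintenance revenue for the same period; returns (name, margin) pairs,
    least profitable first."""
    margins = [(name, c["maintenance_revenue"] - c["support_hours"] * LOADED_COST_PER_HOUR)
               for name, c in customers.items()]
    return sorted(margins, key=lambda pair: pair[1])

# Example: a customer paying $2,000 a year but consuming 40 support hours
# shows a negative margin and surfaces at the top of the list.
print(support_margin({
    "Acme":   {"support_hours": 40, "maintenance_revenue": 2000.0},
    "Globex": {"support_hours": 3,  "maintenance_revenue": 5000.0},
}))
```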
