Software start-ups almost never build a technical support team from day one. The salespeople handle support and training at first.

As the company grows it will become obvious to everyone that the salespeople could sell more if they weren't interrupted so much with support and training duties. That's when the company needs me.

I know the tools and processes for starting technical support and professional services teams. I understand the software product lifecycle, and how customers' service needs change through its stages.

The articles on this blog will cover the whole process of building and managing these services teams, over the whole product and customer lifecycle.

I know a lot, but not everything. Ask me your tough questions. Challenge my assumptions. I look forward to learning from you.

I am available for contract work, if you want to talk to someone about the specifics of your situation.

-Randy Miller | william.randy.miller (at) gmail.com

Friday, April 8, 2011

Electronic Document Signing

I evaluated electronic signature tools for a client.  I found four reasonable options.  Each has strengths and weaknesses.

From a services standpoint, it is clearly worth the effort to ramp up and use one of these tools for your vendor and client contracts.  Stop killing trees.  Stop filling filing cabinets.  Save time.


Summary:

  • If your priority is ease of use: Adobe e-Sign (free)
  • If your priority is both ease of use AND documents that look like what you are used to: EchoSign ($15/month)
  • If your priority is documents that look like what you are used to, and each of your documents is unique: RightSignature ($14/month)
  • If your priority is documents that look like what you are used to, and many of your documents are similar (templates): DocuSign ($25/month)


Details:

Electronic signatures are fundamentally different from ink signatures.  Ink signatures establish the signer's identity through handwriting.  Electronic signatures establish identity based on the email account, login, and password used to sign in to the electronic signature tool.  The tool's audit trail records that a certain person approved ("signed") a certain document at a certain date and time.
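
To make that concrete, here is a minimal sketch (in Python) of what an audit-trail entry boils down to.  The fields and names are illustrative only, not any vendor's actual data model:

    # Minimal sketch of an e-signature audit-trail entry -- illustrative only,
    # not any vendor's actual data model.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class SignatureEvent:
        document_id: str     # which document was approved
        signer_email: str    # the account that logged in and clicked "sign"
        signer_ip: str       # where the approval came from
        signed_at: datetime  # when the approval happened

        def summary(self) -> str:
            return (f"{self.signer_email} approved {self.document_id} "
                    f"at {self.signed_at.isoformat()} from {self.signer_ip}")

    event = SignatureEvent("MSA-2011-042", "client@example.com",
                           "203.0.113.7", datetime.now(timezone.utc))
    print(event.summary())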

The complexity and cost of the more expensive tools come from trying to make electronic signatures LOOK like ink signatures.  You are paying only for aesthetics--paying in both money and in the time it takes to sign and send each document.  Generally speaking, the cheaper options are also easier and faster for your co-signers to use.


1. Adobe e-sign
Free
By far the fastest and easiest to use.
Only supports PDF documents.
Does not allow for filling in any fields.
Looks nothing at all like ink.
Shows a clear audit trail on a new page that it attaches to the end of the document.

2. EchoSign
$15/month
Very fast and easy to use.
Supports all common document formats (Word, PDF, etc.).
Does not allow for filling in any fields.
Looks a little like ink.
Adds the ink-like signature, full name, date, and email address to the bottom of the document.  You have to log in to the tool to see the audit trail.

3. RightSignature
$14/month
A little bit cumbersome to use at first.
Supports all common document formats (Word, PDF, etc.).
Allows filling in fields, but this feature will add several minutes of work for you each time you send a document.
Looks just like ink.
Automatically signs documents for you when you send them.
Shows a clear audit trail on a new page that it attaches to the end of the document.

4. DocuSign
$25/month
Exactly like RightSignature, except:
- When you send a document that is basically the same as one you have sent before, it recognizes the form fields and sets them up for you automatically.
- Does not let you automatically sign documents you send--it's rather clunky about that.
- You have to log in to see the audit trail.
This is the only company that required a credit card in order to demo, and I have received lots of marketing email from them.

My customer chose EchoSign.  She is not a highly technical person, and she seems to be very happy with the service.

Friday, October 29, 2010

Supporting custom code

Question: How can a technical support team handle custom code, like that developed by a professional services team?

Answer: This was a tough problem.  I tried several solutions over the years.  Our final solution to this was fantastic, but I can't take much credit for it.  Two of my lead staff members, one from PS and one from Tech Support, did the real work of building an elegant solution.

I was managing both technical support and professional services.  When we built out our custom code business I was very hands-on in all aspects of it.  I did everything but the coding.  So, at first, I handled all of the support for custom code myself.  But that couldn't scale.

I had ideas and intentions for rolling out a program whereby the support team would take over supporting the custom code.  It was something that I talked about with both teams off-and-on for quite some time.  So both teams had plenty of time to consider the problem before the problem really presented itself.

It turned out that we became victims of our own success.  We had improved the capabilities of our custom code to the extent that it was completely blended into the core application.  End-users were completely oblivious to the fact that they were using one-off custom code, which is exactly how it should be.  But once we reached that level of integration, the customers using custom code began approaching the support team for help instead of calling me.  My CSRs began falling down rabbit holes trying to diagnose problems that made no sense to them because they involved custom code.

Finally, my CSRs began telling me that they needed a better way to know about custom code in advance.  We began tracking custom code as a separate type of support case, and measuring the metrics on those cases.  There weren't that many cases, and we weren't spending much more time on each one.  But there were usually extra emails going back and forth, which caused resolution to take significantly longer than for normal cases.

And when I looked at that report I realized that all of these cases were for our most important clients.  That's when I handed the problem off to my two team leads and challenged them to work out a solution.  The solution they designed was elegant and scalable.

I chaired a meeting of the technical support and professional services teams.  We openly discussed the problem.  We drew a map on the whiteboard of the current workflow process for the creation and delivery of custom code.  The teams came up with two changes to that process, and one change to the process of supporting custom code.  Both teams were excited about the benefits that they would receive from the changes.

The professional services team began building long-term test sites for every custom code customer.  Prior to this they would only place custom code on temporary sites that they had used during coding, testing, and training.  Then the sites would go away.  Now these sites became permanent, and the support team was given full access.

The professional services team also began including a member of the technical support team in the training webinar for the customers.  During the training the support team member would make notes about the custom code: the on-screen verbiage, click locations, and affected database fields that would tell them that a support call involved the custom code.  These notes all went into the internal knowledgebase in our helpdesk, along with the client's docs on the custom code, the link to the test site, and the location of the source code in our code management system.
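
To give a feel for it, here is a hypothetical sketch of what one of those knowledgebase entries captured.  The field names are invented for illustration; they are not our helpdesk's actual schema:

    # Hypothetical shape of a custom-code knowledgebase entry -- field names
    # are invented for illustration, not our helpdesk's actual schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CustomCodeEntry:
        client: str
        summary: str                 # what the custom code does
        screen_markers: List[str]    # on-screen verbiage and click locations that identify it
        affected_fields: List[str]   # database fields the custom code touches
        test_site_url: str           # permanent test site with the code installed
        source_location: str         # path in the code management system
        client_docs: List[str] = field(default_factory=list)

    entry = CustomCodeEntry(
        client="Acme Corp",
        summary="Nightly export of approved timesheets to Acme's ERP",
        screen_markers=["'Send to ERP' button on the Approvals screen"],
        affected_fields=["timesheet.status", "export_batch.sent_at"],
        test_site_url="https://test.example.com/acme",
        source_location="svn://code.example.com/custom/acme/erp-export",
        client_docs=["acme-erp-export-spec.pdf"],
    )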

With all of that data at their fingertips, the technical support team took over L1, L2, and L3 support of the custom code.  When a case came in for a customer with custom code the CSR could immediately see that the client had custom code.  They could see what the custom code did, and they could immediately log in to a test site where they could see the custom code in action.  They had the full documentation of the custom code, and could examine the source code themselves, if needed.

The technical support team no longer involved the professional services team in support at all.  If the custom code was broken then the support team just fixed it and issued a patch, like they would for any other bug.  More often, though, the problem was that the client had changed their configuration or requirements.  In those cases it was easier to explain to the client that changes are billable as the case was passed from technical support to professional services.

The time to resolution for ‘custom code’ cases plummeted to the point that it matched other types of cases.  Because the support team was ready for the custom code it took no longer to support than anything else.

One of the great side-effects of this change was that the support team began making and suggesting improvements to the professional services team's core code library.  These changes mostly involved error handling.  For instance, instead of erroring out when required configuration data was missing, the support-improved code would present a simple message like, "Users _____, _____, and ______ have no value assigned for their _____ field.  Please click here to update those users, and then re-run this report."
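
Here is a rough sketch of the kind of change I mean.  The function and the data are invented for illustration:

    # Illustrative sketch: validate configuration before running a report and
    # return an actionable message instead of erroring out.  Names are invented.
    def missing_field_message(users, field_name):
        """List users missing `field_name`, or return None if everyone has a value."""
        missing = [u["name"] for u in users if not u.get(field_name)]
        if not missing:
            return None
        return (f"Users {', '.join(missing)} have no value assigned for their "
                f"{field_name} field. Please update those users, then re-run this report.")

    users = [{"name": "Alice", "department": "Sales"},
             {"name": "Bob"},                          # missing department
             {"name": "Carol", "department": ""}]      # blank counts as missing

    problem = missing_field_message(users, "department")
    if problem:
        print(problem)          # friendly guidance instead of a stack trace
    else:
        print("Running report...")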

Another great side-effect is that the professional services team spent much less time on support, which freed them up to do more pre-sales work and bill more project hours.

Technical support metrics

Question: What are the right metrics to measure for a technical support team?

Answer: Metrics are primarily a function of your helpdesk software, so you should think through your metrics needs before you select your helpdesk software.  My experience with helpdesk software packages is that metrics are a glaring weakness in most of them.  (I'm working on my own independent evaluations of all of the major packages.  I'll include metrics as a core component in each review.)

There are several components of performance that must be measured simultaneously.  When the open case backlog grows you need to be able to isolate the cause of the problem.  For instance, if cases opened by email take longer to close, on average, than cases opened by phone, then you should not be surprised to see the backlog grow when the share of cases opened by email grows.

These are the components that I measure:
a. Communication method (phone, email, helpdesk, product-internal messaging, etc.)
b. Product & version (when we are supporting multiple products or versions)
c. Severity
d. Type (application bug, performance problem, usage question, feature request, etc.)
e. Assigned staff member

For each of these components I measure these values:
a. Number of items created per day
b. Number of items open at the end of the day
c. Ages of the open items at the end of the day

My standard procedure is to run a single report about half an hour before the end of the normal workday.  That report shows a list of the open items, their ages, and each of those five components.  I look at the report daily, so big problems stand out immediately.  If the backlog is elevated, or if there are severe cases that are aging too much, then I dig in and understand the reasons for the aging.  I might ask some staff members to stay late, or stay late myself to get caught up.
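
If your helpdesk cannot produce that report directly, it is easy enough to build from a case export.  Here is a minimal sketch; the field names are assumptions, not any particular helpdesk's export format:

    # Minimal end-of-day backlog report -- field names are assumptions, not any
    # particular helpdesk's export format.
    from collections import Counter
    from datetime import date

    open_cases = [  # one record per case still open at end of day
        {"id": 101, "opened": date(2010, 10, 25), "method": "email",
         "product": "App 5.1", "severity": "high", "type": "bug", "assignee": "dana"},
        {"id": 102, "opened": date(2010, 10, 28), "method": "phone",
         "product": "App 5.1", "severity": "low", "type": "usage question", "assignee": "lee"},
    ]
    today = date(2010, 10, 29)

    print(f"Open cases at end of day: {len(open_cases)}")
    for component in ("method", "product", "severity", "type", "assignee"):
        print(f"  by {component}: {dict(Counter(c[component] for c in open_cases))}")
    for c in sorted(open_cases, key=lambda c: c["opened"]):
        age = (today - c["opened"]).days
        print(f"  case {c['id']}: {age} days old, {c['severity']} severity, assigned to {c['assignee']}")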

The team also separately monitors the case age by severity in order to perform escalations, as per the terms of our SLAs.

I also tracked the amount of time we spent on support for each customer.  There isn't much point in collecting this data until the support team's processes are working well and customers are overwhelmingly happy with the support they receive.  But once you reach that point, this data can provide important insights into your profitability by customer.

At Journyx we evaluated the common characteristics of customers with high and low support burdens.  We found a significant number of clients who cost us more to support than they paid for their annual maintenance contracts.  We presented our findings to the rest of the company as profiles of our most profitable and our most unprofitable clients.  These profiles led to changes in pricing, product strategy, marketing focus, and sales focus, all of which helped Journyx attract more profitable customers and fewer unprofitable ones.
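
The arithmetic behind that analysis is simple.  Here is a sketch with invented numbers; the cost rate and the sample customers are illustrations, not Journyx data:

    # Sketch of the per-customer profitability check -- the cost rate and the
    # sample customers are invented, not Journyx data.
    LOADED_COST_PER_SUPPORT_HOUR = 65.00   # assumed fully loaded cost of a CSR hour

    customers = [
        {"name": "Customer A", "annual_maintenance": 4000.00, "support_hours": 22},
        {"name": "Customer B", "annual_maintenance": 1200.00, "support_hours": 40},
    ]

    for c in customers:
        cost = c["support_hours"] * LOADED_COST_PER_SUPPORT_HOUR
        margin = c["annual_maintenance"] - cost
        status = "losing money" if margin < 0 else "profitable"
        print(f"{c['name']}: maintenance ${c['annual_maintenance']:,.0f}, "
              f"support cost ${cost:,.0f}, margin ${margin:,.0f} ({status})")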

Wednesday, October 27, 2010

Management style

Question: What is the best management style for a services team?

Answer: I can't say what is best.  I can only say what I have done.  I had very low turnover and our customer satisfaction ratings continually improved--so what I did must have worked at some level.

I hire smart people.  I put them in a position to succeed.  I actively maintain the lines of communication with each person.  I monitor their progress and help when needed; but I stay out of their way as much as I can.  My job is to help and support them as they do their jobs.

I am tremendously committed to the old adage "praise in public and correct in private."

In my experience, these are the things I have found necessary to put my people in a position to succeed:
1. They must understand the goals and boundaries of their positions.
Goals are obvious.  I see no need to cover those here.

Every stated goal needs an anti-goal, too.  Anti-goals are the failure conditions.  It's not enough to say that the goal is to deliver their assigned widget within 3 days.  The anti-goal needs to say that widgets that are not delivered within 12 days will bring certain consequences.

Boundaries are less clear-cut than goals, but just as important.  The easiest example of a boundary is in procurement: how big of a check can you sign?  Boundaries for services people usually involve situations where customers are unhappy.

For instance, in my support team the level 1 and level 2 CSRs do not have the authority to decide whether or not we will patch a given bug.  They have an escalation process for patch requests.  If a customer asks them, they are instructed to say, "Building patches is a complicated process, and a decision cannot be made until a developer actually looks at the broken code.  I am working as fast as I can to get all of the necessary information to the right people so that decision can be made."  If the customer pushes, they escalate the case.  But they do not have to be responsible for making that decision.


2. They must understand the company’s mission.
This seems like it should go without saying, but I've met many services people who could not articulate their company's mission.  It's not just that they will meet people and need to do good word-of-mouth marketing.  They will be making many little decisions, and understanding the mission will help them make the right ones.

You should also remember that they constantly communicate with your customers.  They need to be able to support the company mission with everything they say.


3. They need the right tools.
I am a big believer in high-tech gadgets and tools like multiple monitors, Bluetooth headsets, wikis, and VPNs.  Whiteboards are also tremendously useful for collaboration.  (Yes, I'm a committed early adopter.)


4. They need a reliable and efficient workflow process.

I am an INTJ.  I think in terms of processes and systems.  It's just how I'm wired.  This is the core reason I think of myself as a manager.  I'm sure that other personality types can be great managers.  I have difficulty describing what I do in terms of process because it comes so naturally to me.  But process is very important in services.

Services is 90% soft skill and 10% tangible deliverable.  What I mean is that the services team is usually delivering some tangible thing, like a software patch or a configured piece of software.  But that thing is only about 10% of the job.  The other 90% is the communication about the thing.  And because services is so soft, it is very, very easy to drop the ball.  Good process is all about keeping track of all of the balls that are in the air.  If the process fails and a ball hits the floor, the staff member who touched it last will take the arrow, even though they are often not to blame.

The services processes have to ensure that no ball hits the floor, ever.  But that level of process can easily mushroom to become so much overhead that no one has time left to actually do the work--hence the need for efficiency.

My default strategy is to build the minimum amount of process to get by, and then to adapt it every time a ball hits the floor.  My mantra in this is "never repeat a mistake."  



5. They need plenty of warning before each change.
This is about keeping them motivated and happy.  Surprises at work are about as welcome as surprises at the dentist's office.  I understand the need to keep strategy secret.  But whenever possible, warn your services people before every business change.  Even big, scary organizational changes can go smoothly if people are warned.


I maintain the lines of communication through these ongoing projects:
a. I take time to talk informally and privately with each team member.  When budget allows I take each team member to lunch once a month, or so.
b. I actively communicate company news to the team.
c. I regularly ask the team for ideas to improve their toolset and workflow processes.  When they have good ideas I attempt to implement them and give them credit.

I also engage the rest of the company on behalf of my team, with two standing requests:
a. When they receive praise for someone on one of my teams, please share it publicly.
b. When they receive criticism of someone on one of my teams, please bring it to me privately for resolution.




Managing support quality

Question: How do you measure the quality of the work that your technical support team is doing?

Answer: I experimented with quality measurements several times.  There is a fundamental limitation to quality measurement: quality is the judgment of the customer, and the customer rarely cares to take the time to tell you how you did.  Quality measurements therefore tend to skew towards bad grades, because the customers who are unhappy about quality are the ones most likely to take the time to tell you.

I define quality with three metrics:
a. Time to resolution
b. Accuracy
c. Customer experience

Time to resolution is measured in the standard metric set, discussed in a separate article.

Accuracy and customer experience are measured with post-case follow-ups.  I have done those follow-ups in four different ways.
a. I had my helpdesk software (RightNow Web) send a ‘please tell us about your experience’ email with a short poll after each case was closed.  Response rates were very low and the results skewed severely negative.

b. For a time I randomly selected cases closed in the past day and called those people to ask about their experience.  It was time-consuming.  The answers indicated we were doing a great job in 90% of cases, and floundering badly with the other 10%.

c. I developed a short questionnaire and assigned each CSR the task of calling one customer each day.  They each picked one case randomly from a report of all of the cases closed the previous day by CSRs other than themselves.  The CSRs got bogged down answering other questions for the people they contacted.

d. I engaged a small polling company to call every client who had at least one support case the previous month.  The 90% to 10% breakdown held up.  We were able to get (just) enough data to analyze trends and find commonalities in the 10% failures, and we began a series of process improvements to fix those problems.
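
However the responses are collected, the analysis step is straightforward.  Here is a sketch with an invented rating scale and invented sample data:

    # Sketch of summarizing post-case survey results -- the rating scale and
    # sample responses are invented for illustration.
    responses = [  # one record per completed survey
        {"case_id": 501, "accurate": True,  "experience": 5},
        {"case_id": 502, "accurate": True,  "experience": 4},
        {"case_id": 503, "accurate": False, "experience": 2},
    ]

    positive = [r for r in responses if r["accurate"] and r["experience"] >= 4]
    negative = [r for r in responses if r not in positive]

    print(f"{len(positive)} of {len(responses)} responses positive "
          f"({100 * len(positive) / len(responses):.0f}%)")
    print("cases to review for common causes:", [r["case_id"] for r in negative])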

My goal with each of these processes was to establish a baseline of performance and then work towards improvement.  That meant I had to keep the same measurement process running through the entire improvement effort.

When to add a services team

Question: When is the right time to add a services team (either technical support or professional services)?

Answer: The short answer is to wait as long as you can.  That is what most software companies do instinctively, and that instinct is correct in most cases.  But you will have to build out a services organization in order to reach the mass markets.

Salespeople can usually do level 1 tech support and basic implementation training.  Developers can do level 2 and level 3 technical support.  The founder, product manager, or some other product visionary can do advanced implementation work and deeper training, if any of your customers even ask for those services.

This will work while a company is in start-up mode.  When those people get too bogged down or frustrated with wearing the additional hats, then start hiring service people tactically.

You also need to recognize that the clients you can attract will vary based upon the maturity of your product and service offerings.  The theory here is called 'diffusion of innovations' (this is the root theory behind Geoffrey Moore's 'Crossing the Chasm'.)  This theory states that different types of people and organizations are willing and able to adopt innovations at different rates.  The theory breaks people and organizations into five categories:
* Innovators
* Early Adopters
* Early Majority
* Late Majority
* Laggards

The percentages commonly attached to those categories (roughly 2.5%, 13.5%, 34%, 34%, and 16%) refer to market size.  Those are just rough estimates.  The market size per category will vary greatly from industry to industry and segment to segment.  But every study I have ever seen has come out with a bell curve of basically this shape.

New products only appeal to members of the innovators category.  As the company and product mature, the product will appeal to each new group in order: early adopters, early majority, late majority, and laggards.

I'll describe each of those 5 categories in separate articles, soon.  For now it should suffice to say that members of each category have certain expectations for the sales process and for post-sale support.  They will look at your website and immediately recognize whether or not your company is ready for them.

If you don't have a technical support team in place then you are only selling to innovators and, perhaps, early adopters.  Whether you reach the early adopters depends upon how mature your technical support is, or how talented your salespeople are at covering technical support.

If you don't have a professional services team you will have a hard time selling to early majority customers.  It is doable if your salespeople are experienced in performing training and helping with implementations.  You will not make serious inroads into the early majority market without a dedicated professional services team, though.  And you will definitely not sell to the late majority or laggards without it.
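
To make the stakes concrete, here is the rough arithmetic, using the commonly cited category sizes from Rogers' research (roughly 2.5%, 13.5%, 34%, 34%, and 16%).  Your market's actual split will differ:

    # Rough addressable-market arithmetic using the commonly cited Rogers
    # category sizes -- real markets will differ from these percentages.
    CATEGORY_SHARE = {"innovators": 2.5, "early adopters": 13.5,
                      "early majority": 34.0, "late majority": 34.0, "laggards": 16.0}

    REACHABLE = {
        "no services teams":           ["innovators"],
        "technical support only":      ["innovators", "early adopters"],
        "support + professional svcs": ["innovators", "early adopters",
                                        "early majority", "late majority"],
    }

    for stage, categories in REACHABLE.items():
        share = sum(CATEGORY_SHARE[c] for c in categories)
        print(f"{stage}: roughly {share:.1f}% of the market is open to you")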

The bulk of a product's revenue will come from the majority categories.  My experience is in taking a product that only sold to innovators and early adopters, and adding the service components to make it palatable to the early and late majorities.  That is what this blog is all about.

Further Study: Carnegie Mellon's Software Engineering Institute has a practice that studies process and performance improvement.  These are the people behind the original Capability Maturity Model (CMM) standards.  They do a lot of research on these topics.  Whatever you might think of the detailed processes and systems that they seem to always develop, their research is fascinating.  They developed a Capability Maturity Model Integration (CMMI) for services organizations.  They also have a CMMI for software development.

Tuesday, October 26, 2010

Motivating support staff

Question: How do you keep technical support people motivated to do good work?

Answer: Technical support is a grinding, thankless job, and the turnover rate is usually high.  I have three strategies for motivating technical support staff members (a.k.a. customer service reps, or CSRs) to perform better.

First, I hire the right people.  In my experience, the best way to motivate a person is to tap into something that they already want to do, and then empower them to do it.  Technical support revolves around two distinct needs: helping people and figuring out technical problems.  So I always start by hiring people for technical support who are internally motivated to either help people or solve problems.  I like to have a mix of these two qualities on the team, and I vary the mix according to how much need there is for figuring out new problems.

I search for this internal motivation during the screening and interviewing process.  I ask prospects to provide their Myers-Briggs personality types.  I don't exclude people based upon their reported type, but it is one data point towards them being the type of person I want.

These 6 M-B personality types excel at customer service:
ISFJ
INFJ
INFP
ESTJ
ESFJ
ENFJ

These 3 M-B personality types excel at problem-solving:
ISTP
ENTP
INTJ

I also specifically look for this internal motivation in my interview questions.  There are several questions that give the person an opportunity to demonstrate their desire to solve problems or help people:

  • If you won a hundred million dollars, what would you do with your time?
  • Tell me about the last time you helped someone else with no expectation of receiving anything in return.
  • What is the most interesting problem you ever solved?

My experience with these types of people is that they demonstrate a very high level of dedication to solving problems and helping customers.


Second, I engage my CSRs to get their ideas on how to improve the team's performance.  I set the strategic direction and challenge the team to work out the tactics required to achieve the goal.  I have found that when they come up with the improvement ideas they are strongly motivated to see the improvements work.  I make a point of giving credit when the plans succeed.  I never lay blame when the plans don't work out.

On many occasions my CSRs have come up with ideas that were both better than mine and required more from them than mine would have.  And these ideas have almost always worked better than expected.  My CSRs changed their triage process and made it work twice as fast.  They changed their case-closing process several times, each time bringing measurable improvements in customer satisfaction.  A few of the case-closing changes were tedious and difficult, but they took emotional ownership of the changes and did the hard work to make them stick.


Third, I work to find opportunities for growth and possible promotion for my CSRs.  Technical support is often viewed as a dead-end job.  I actively work to make it not so.

I talk with each CSR and we identify a next step for their career.  I do not mince words in explaining to them that I will help them achieve their next step in proportion to their dedication to achieving my goals.

I had to deliver on my promises a few times before my reputation was cemented.  I found side-projects that my staff members could do that would demonstrate progress towards their career goals--real experience and resume stuffing.

I even sent one CSR to a week-long crash-course on programming in order to help him make progress towards his goal of becoming a developer.  When he returned I began leaning on him for little tools, and giving him time to work on writing bug fixes.