
Using Balanced Metrics as Service Support Indicators

Deciding what to measure will cause real-world changes in behavior, writes ITSMWatch columnist Rob England.
Dec 28, 2009
By Rob England

When measuring service support, it is important to get a balanced picture across many metrics and not get too hung up on one or two; otherwise you will distort behaviour. The primary purpose of metrics in service support is to monitor how well we are performing. There are many metrics in use, and some fine books on the topic. I will not try to canvass them all here but, as an example, service support measurements might include a wide variety of metrics such as the following (a sketch of how a few of them might be computed appears after the lists):

End-to-end, or IT as a “black box”:

  • Number of incidents and requests closed per week
  • Mean time to restore service
  • Customer satisfaction

By support team:

  • Number of assignments closed per week, per queue
  • Percentage of assignments open, by age, per queue
  • Mean time to respond to an assignment, by priority
  • Knowledge items contributed, as percentage of assignments

Specific to the service desk:

  • First call resolution
  • Calls abandoned
  • Average time of service desk calls
  • Mean time between user contacts
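
To make a few of these concrete, here is a minimal sketch, in Python, of how some of the metrics above might be computed from raw ticket records. The Ticket fields and function names are invented for illustration; they do not come from any particular ITSM tool's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical ticket record; the field names are illustrative only.
@dataclass
class Ticket:
    opened: datetime            # when the ticket was logged
    restored: datetime          # when service was restored
    queue: str                  # team/queue the ticket was closed in
    first_call_resolved: bool   # resolved on first contact with the desk?

def mean_time_to_restore(tickets):
    """Mean time to restore service, in hours, across all tickets."""
    return mean(
        (t.restored - t.opened).total_seconds() / 3600 for t in tickets
    )

def first_call_resolution_rate(tickets):
    """Fraction of tickets resolved on the first call to the service desk."""
    return sum(t.first_call_resolved for t in tickets) / len(tickets)

def closed_per_queue(tickets):
    """Number of tickets closed per queue for the reporting period."""
    counts = {}
    for t in tickets:
        counts[t.queue] = counts.get(t.queue, 0) + 1
    return counts
```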

Where to Begin

The study of measurement is a broad and complex one. Two factors come up a lot in discussions with my service support clients: getting a balanced picture, and choosing metrics that are fit for purpose. In this article I will look at balance; a future article will look at fitness for purpose.

All metrics change behaviour in undesired as well as desired ways. Look at the example metrics above. If a team is measured on how many assignments they closed for the week, then simply reassigning all their incidents to another team will give them an exemplary metric, but it isn’t exactly a behaviour we want to encourage.

Incidentally, something else to note in those metrics above: at the service desk, measurement is all about calls; at the queue or group or team level, it is all about assignments; and at the overall level, it is all about … hmm, we don't have a single word for it. We need a collective word for what ITIL splits into Incidents and Requests. I like the term Responses: everything the service desk needs to respond to, whether it comes from users or from automated monitoring systems or from regular scheduled tasks or wherever. Measure Responses.

For another example of undesirable impacts on behaviour, consider Calls Abandoned (where phone callers give up before the phone is answered). If we focus too heavily on Calls Abandoned as a metric, staff will cut short the calls they are on in order to answer new ones. Abandoned calls are always a bad thing, but are they worse than hustling an existing caller off the phone when it gets busy? Surges happen, even on the quietest of days; you will inevitably have times when incoming traffic exceeds the service desk's ability to answer it. You cannot get Calls Abandoned all the way to zero without very expensive excess telephone-answering capacity.

And of course it is in the very nature of service desks for all hell to break loose. Statistical theory tells us something that may seem counter-intuitive at first (it did to me): evenly spread data is by definition not random. It has a pattern; it is evenly spread and hence predictable. When things cluster (a surge of calls to the service desk, for example) it may mean one of two things: (a) something has happened to make them all come at once, or (b) nothing has happened, and the cluster is simply chance. Find a better statistician than me to explain how you calculate whether a cluster goes beyond what one might expect from a random event (the sketch below shows one simple approach). But rest assured: even in the most stable, peaceful IT environment you will get surges of calls. You must set a realistic target for Calls Abandoned.
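
By way of illustration only, here is a minimal sketch of one such calculation: assuming calls arrive as a Poisson process, we can compute the probability that a surge of a given size arises by pure chance. The call rate and interval size are invented numbers, not benchmarks.

```python
from math import exp

def poisson_tail(k, lam):
    """P(X >= k) for a Poisson-distributed call count with mean lam per
    interval: the chance a surge at least this big is pure coincidence."""
    # P(X >= k) = 1 - sum_{i=0}^{k-1} e^(-lam) * lam^i / i!
    term, cdf = exp(-lam), 0.0
    for i in range(k):
        cdf += term              # add P(X = i)
        term *= lam / (i + 1)    # advance to P(X = i + 1)
    return 1.0 - cdf

# Hypothetical desk averaging 5 calls per 10-minute interval:
# how surprising is a surge of 12 calls in one interval?
print(poisson_tail(12, 5.0))  # ~0.005: rare in any single interval,
                              # yet inevitable across weeks of intervals
```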

So, how hard do we push the issue of the level of abandoned calls? Enough to resort to “log and flog” (also known as “tag and bag”)? That is, do we degrade the service to those who do get through by taking their details and abandoning them until later with a promise of a call-back, or do we lose callers in the queue? The answer is that both are undesirable, and over-emphasising either metric (Calls Abandoned or First Call Resolution) will introduce negative behaviours. We need a balanced approach.

Balanced Scorecard

Unintended distortion introduced by focusing too much on a single metric can be reduced by using a balanced portfolio of metrics, such as the popular balanced scorecard concept (as recommended by both ITIL and COBIT).

There have been several discussions of balanced scorecard for IT in general and service management in particular (e.g., see ITIL Continual Service Improvement 5.4.1). Here is another, slightly different perspective that combines service management essentials with the original Kaplan/Norton concept of customer, internal, learning, and financial quadrants, and with the “Four Ares” of Val IT: Are we doing the right things? Are we doing them the right way? Are we doing them well? And are we getting the benefits?

Combining these, we get a scorecard that looks like this:

[Figure: a two-by-two balanced scorecard crossing the Kaplan/Norton quadrants with the Val IT questions]

Choose your metrics to give a balanced “portfolio” of measures in each quadrant. Real behaviours are a balance between conflicting priorities: for example, getting through as many calls as possible whilst resolving as many as possible; restoring service quickly while identifying underlying problems; controlling costs while satisfying customers. Select the metrics to reflect the same mix of priorities.

Then derive a score for each quadrant, and from them an overall score. Organisations that get into balanced scorecard in a big way spend time working out the correct weighting of each metric in the formula for the overall score. You will have multiple scorecards, and they can “nest” inside each other at different organisational levels: the scorecard above applies equally to the whole of IT, to the service support area, or just to the service desk.
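
As a concrete illustration, here is a minimal sketch of how weighted quadrant scores and an overall score might be derived. The quadrant names follow Kaplan/Norton; the metrics, their normalised 0-to-100 scores and the weights are all invented for illustration:

```python
def quadrant_score(metrics):
    """Weighted average of normalised metric scores within one quadrant.
    `metrics` maps metric name -> (score_0_to_100, weight)."""
    total_weight = sum(w for _, w in metrics.values())
    return sum(score * w for score, w in metrics.values()) / total_weight

def overall_score(quadrants, quadrant_weights):
    """Weighted average of the quadrant scores, giving one overall score."""
    total = sum(quadrant_weights.values())
    return sum(quadrant_score(m) * quadrant_weights[q]
               for q, m in quadrants.items()) / total

# Invented example scorecard; every number here is hypothetical.
scorecard = {
    "customer":  {"customer satisfaction": (78, 3), "calls abandoned": (85, 1)},
    "internal":  {"mean time to restore": (70, 2), "first call resolution": (66, 2)},
    "learning":  {"knowledge items contributed": (55, 1)},
    "financial": {"cost per response": (80, 2)},
}
weights = {"customer": 3, "internal": 2, "learning": 1, "financial": 2}
print(round(overall_score(scorecard, weights), 1))  # ~73.8
```

Because the output of one scorecard is just another number, the same functions support nesting: the service desk's overall score can appear as a single metric on the scorecard for service support as a whole.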

Besides nesting at multiple levels, scorecards can serve as a real-time dashboard for monitoring or as a periodic report card for evaluating; and they can be for different audiences. These data are required by managers, customers, auditors and executives. A good way to cluster all these stakeholders manageably is into three audiences: investors (paying for it), controllers (policing it) and deliverers/providers (doing and managing it). Each audience probably wants a slightly different view, a different scorecard, and they want both to monitor and to evaluate.

In each case, try for balance. For every metric you use, ask whether there is another metric you would introduce with “On the other hand …”

Rob England is an author, commentator, and consultant. More thoughts from Rob can be found on his blog at www.itskeptic.org.

Tags:
metrics, ITIL, ITSM, COBIT, balanced scorecard


