Designing a Metric for Software Delivery or Incident Management

The Fundamentals of Metrics in Technology Projects

George Marklow

--


Introduction

The word metric often gets, at best, a mixed reception. By definition, it is:

A standard for measuring or evaluating something, especially one that uses figures or statistics.

Metrics make process issues apparent, ensuring that risks are identified more quickly and prompting management to take corrective action.

However, many people are fearful of metrics, afraid they will be singled out as points of failure in a process.

In this post, I’ll summarise the basics of how to design and combine metrics, and provide suggestions on how to make them less scary. I’ll also give examples of useful metrics for software delivery and incident management.

Motivations for Measuring Something

Let’s say we want to design a metric to track and interpret the ‘number of urgent bug fixes required after deployment.’

There are good reasons for this: a high number of post-release bug fixes indicates problems with:

  • application development
  • quality control
  • pre-release smoke-testing
  • post-release checks

High-priority bugs are a considerable burden for IT departments, requiring call-outs at unsociable hours and pulling developers and testers away from their current projects. They might also cause SLA breaches if not fixed on time.

The worst-case scenario is reputational damage, including coverage in the media, lost revenue, and fines in some regulated cases, e.g., a bug in a bank's software that causes all credit card payments to fail.

Defining a Metric

There are six parts to a metric:

  1. Definition
  2. Justification
  3. Audience
  4. Calculation (where appropriate)
  5. Interpretation
  6. Reporting
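
As a rough sketch of how these parts fit together, a metric document could be captured in a simple record. The Python below is purely illustrative; the class and field names are my own, mirroring the list above:

    from dataclasses import dataclass

    @dataclass
    class Metric:
        """The six parts of a metric, as a minimal record."""
        definition: str      # what is measured; must be specific and measurable
        justification: str   # why the metric is worth the cost of reporting
        audience: str        # who will follow the metric
        calculation: str     # how the value is computed, where appropriate
        interpretation: str  # Pass/Warning/Fail thresholds and boundary conditions
        reporting: str       # who owns the report and how frequently it is produced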

Let’s now define our metric that monitors the number of post-release bug fixes.

1. Definition

This section documents basic details about the metric, such as its Id, Name, and Definition. Metrics must be specific and measurable.


2. Justification

Given the time and resources required, metrics can be expensive to report. Therefore, we need a justification to explain why a particular metric is wanted.


3. Audience

Who will follow this metric?


4. Calculation

This particular metric requires no calculation. In other cases, such as Percentage of Bugs Unresolved, a simple percentage would be calculated, taking minimum and maximum values of 0 and 100, respectively.
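
For instance, a minimal sketch of that calculation (the function and variable names are my own):

    def percentage_unresolved(unresolved_bugs: int, total_bugs: int) -> float:
        """Percentage of Bugs Unresolved, bounded between 0 and 100."""
        if total_bugs == 0:
            return 0.0  # avoid division by zero when no bugs were reported
        return 100.0 * unresolved_bugs / total_bugs

    percentage_unresolved(5, 40)  # 12.5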

5. Interpretation

This section defines when the metric has Passed or Failed, and a range in between where a Warning should be issued. We also describe the metric's boundary conditions, such as its maximum and minimum values.

Before we can assign any target values, benchmarking should be done to get an idea of the current performance statistics. Then, we can work out the gap between where we are now and our goal position and provide training and awareness for staff about the new metric.


We might want to award a special status for exceptional work to overcome any resistance or negativity encountered when introducing a new metric. For example, we could add a Gold Star score, awarded if a release causes no significant bugs.

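To make this concrete, here is one way the interpretation might look in code. The thresholds (a Pass at 2 or fewer urgent fixes, a Fail above 5) are hypothetical; real values should come from the benchmarking exercise described above:

    def interpret(urgent_fixes: int) -> str:
        """Classify a post-release urgent bug-fix count (thresholds are illustrative)."""
        if urgent_fixes == 0:
            return "Gold Star"  # exceptional: a release with no significant bugs
        if urgent_fixes <= 2:   # hypothetical Pass threshold
            return "Pass"
        if urgent_fixes <= 5:   # hypothetical Warning range
            return "Warning"
        return "Fail"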

This modified metric does two things:

  1. It keeps the metric realistic
  2. It counters the temptation to track only failing metrics, recognizing and rewarding excellent teamwork and leadership.

6. Reporting

The metric document should describe the reporting process, including ownership and how frequently to report.

The report should visually explain:

  1. Past performance
  2. Current performance (a snapshot of performance so far)
  3. Performance relative to other processes

Combining Metrics

You can combine metrics to deliver an overall performance score in a simple, two-step calculation:

  1. Give each metric result a score of 1 for a Pass, 0 for a Warning, and -1 for a Fail.
  2. Multiply that score by a weight indicating how much importance should be given to the metric, as in the sketch below.
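
A minimal sketch of this calculation (the function name and data shape are my own):

    SCORES = {"Pass": 1, "Warning": 0, "Fail": -1}

    def overall_score(results: list[tuple[str, int]]) -> int:
        """Step 1: map each result to a score; step 2: weight it and sum."""
        return sum(SCORES[result] * weight for result, weight in results)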

Example

In the following table, each metric has been given a weight and scored as a Pass, Warning, or Fail (a Warning falls between the Pass and Fail thresholds).

An Example of Weighted Metrics (metric names are placeholders):

  Metric     Weight   Result    Score   Weighted Score
  Metric A   3        Pass       1       3
  Metric B   2        Warning    0       0
  Metric C   1        Fail      -1      -1

From this table, we can tell that:

  1. The overall score is 2, i.e. (3 + 0 - 1)
  2. The highest possible score is 6, i.e. (3 * 1) + (2 * 1) + (1 * 1)
  3. The lowest possible score is -6, i.e. (3 * -1) + (2 * -1) + (1 * -1)
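
Using the sketch above, the table's figures can be reproduced:

    overall_score([("Pass", 3), ("Warning", 2), ("Fail", 1)])  # 2, the overall score
    overall_score([("Pass", 3), ("Pass", 2), ("Pass", 1)])     # 6, the highest possible
    overall_score([("Fail", 3), ("Fail", 2), ("Fail", 1)])     # -6, the lowest possible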

Management can now easily switch priorities by adjusting the weighting without changing the underlying metrics themselves.

Examples of Metrics

Here are some examples of metrics to consider in a software development and support division:

  • # incident tickets escalated to management
  • # high-priority bug fixes deployed
  • # times the Support Team used a Breakglass account to access a production server
  • % of on-time releases
  • % of bugs resolved
  • % of bugs returned to Development by QA more than once
  • # defects QA finds during release testing
  • # days deployment delayed
  • % of time servers were unavailable

Thanks for reading! Let me know what you think in the comments section below, and don’t forget to subscribe. 👍

Sources:

  • The IT Service Management Forum: Metrics for IT Service Management, 2nd Edition, Van Haren Publishing, 2006

--

George Marklow

George is a software engineer, author, blogger, and abstract artist who believes in helping others become happier and healthier.