
SDLC Metrics: An Approach for Ranking and Selecting Metrics

Not all metrics are good, and using poor metrics can have a serious negative impact on your organization. Maximize the value of your software development lifecycle (SDLC) metrics by using this thoughtful and judicious approach to ranking and selecting them.

In a metric-fixated world, it is easy to fall prey to the siren song of “what gets measured gets done,” but the truth is that blind faith in metrics is a recipe for disaster. So how does IT leadership select and manage SDLC metrics in a way that maximizes their value while minimizing the risks? Follow these simple steps:

  • UNDERSTAND the risks of metrics
  • KNOW the rules of good metrics practices
  • RANK AND SELECT the right metrics to meet your prioritized goals

This is the last in a series of three tech briefs on the topic of SDLC metrics. In the first brief (UNDERSTAND), we presented the dark side of metrics, and warned of the serious dangers associated with poor selection and management of metrics. In the second (KNOW), we presented a set of simple-to-follow rules to avoid common pitfalls associated with bad metrics. In this tech brief (RANK AND SELECT), we present a straightforward approach to helping your organization evaluate and select the most effective SDLC metrics for your needs.

Our Take

The secret to effective selection of SDLC metrics is to choose the fewest metrics possible while selecting those that maximize value and minimize the serious risks associated with bad metrics. Never use them for reward or punishment; instead, use them to help your development team(s) solve problems and achieve goals.

To do this, start by identifying your development team’s prioritized, business-aligned goals (i.e., the goals the team needs to achieve to support the business’s goals, ranked in order of importance). Work collaboratively with the business to understand its needs and translate them into a prioritized list of development team goals. Keep this list short (two to four goals as a guide), because your team can only focus effectively on a few goals at once. As an example, below is a hypothetical set of prioritized team goals that have been aligned to the business:
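
For concreteness, here is a minimal sketch (in Python) of how such a list might be recorded. The goals shown are hypothetical placeholders, not recommendations; replace them with goals drawn from your own conversations with the business.

    # Illustrative only: a short, prioritized list of development team goals,
    # each tied to the business goal it supports (highest priority first).
    prioritized_goals = [
        {"team_goal": "Reduce defects escaping to production",
         "supports": "Protect customer retention"},
        {"team_goal": "Improve predictability of release dates",
         "supports": "Hit committed launch windows"},
        {"team_goal": "Shorten cycle time from commit to deploy",
         "supports": "Respond faster to market feedback"},
    ]

    for rank, goal in enumerate(prioritized_goals, start=1):
        print(f"{rank}. {goal['team_goal']}  (supports: {goal['supports']})")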

With your prioritized goals in hand, it’s time to identify potential metrics for your team. Begin by compiling a list of candidates, ignoring any metrics that are not relevant to achieving your prioritized business-aligned goals. For each potential metric, capture a name and a brief description, then identify how the metric would be gathered and reported (think carefully about how to gather each metric in a way that reduces the risk of gaming and unintended consequences). As an example, the table below captures five potential SDLC metrics (in practice, your team should identify 10-15 candidates to evaluate):
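
One lightweight way to capture this catalog is as a set of simple records, as in the sketch below. The metric names and fields are illustrative assumptions, not a prescribed set.

    from dataclasses import dataclass

    @dataclass
    class CandidateMetric:
        """One potential SDLC metric captured during the brainstorming step."""
        name: str          # short, recognizable name
        description: str   # what the metric measures and why it matters
        how_gathered: str  # data source and collection method (consider gaming risk)
        how_reported: str  # audience, frequency, and presentation

    # Two hypothetical candidates; in practice, aim for 10-15.
    candidates = [
        CandidateMetric(
            name="Escaped defects",
            description="Defects found in production per release",
            how_gathered="Pulled automatically from the defect tracker, tagged by release",
            how_reported="Trend chart reviewed by the team at each retrospective",
        ),
        CandidateMetric(
            name="Cycle time",
            description="Elapsed time from work started to work deployed",
            how_gathered="Derived from workflow timestamps in the work-tracking tool",
            how_reported="Rolling median shown on the team dashboard",
        ),
    ]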

Now it’s time to evaluate and rank your list of potential SDLC metrics so that you can shortlist the best ones for your team’s needs. To do this, your team will score each metric against a set of defined “pros” and “cons.” We recommend the following pro/con criteria (but feel free to adjust these to the individual needs of your organization):

Now, using a five-point scale from low to high (low, med-low, medium, med-high, and high), have your development team score each of the potential metrics against the pro/con criteria. In the example below, we have scored each of the five metrics relative to one another (we used our prioritized business-aligned goals from above to determine the Value to Team scores). Note that we have used traffic light color coding (i.e., green is “good,” red is “bad”) to help visually compare scores:
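
If you want to work with the ratings programmatically, a simple sketch is to map the five-point scale to numbers. The criteria names below are illustrative assumptions, not the brief’s prescribed pro/con criteria.

    # Map the five-point scale to numbers so ratings can be compared and aggregated.
    SCALE = {"low": 1, "med-low": 2, "medium": 3, "med-high": 4, "high": 5}

    # Hypothetical ratings for one candidate metric against example pro/con criteria.
    escaped_defects = {
        "pros": {"value to team": "high", "ease of gathering": "med-high"},
        "cons": {"risk of gaming": "med-low", "reporting effort": "low"},
    }

    # Convert the qualitative ratings to numeric scores for later sorting.
    numeric = {
        side: {criterion: SCALE[rating] for criterion, rating in ratings.items()}
        for side, ratings in escaped_defects.items()
    }
    print(numeric)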

Once scoring is complete, have your development team roughly sort the list of potential metrics from best to worst (this sorting has already been carried out in the table above). You can “eyeball it” based on colors or use a simple formula such as a weighted average of the pro/con scores for each metric (you don’t need to be precise here, as long as the metrics are roughly ordered from best to worst).
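
As one possible formula, the sketch below ranks metrics by the average of their pro scores minus a weighted average of their con scores. The weights, scores, and metric names are illustrative assumptions; the point is a rough ordering, not precision.

    def ranking_score(pros, cons, pro_weight=1.0, con_weight=1.0):
        """Average of pro scores minus weighted average of con scores (higher is better)."""
        return pro_weight * sum(pros) / len(pros) - con_weight * sum(cons) / len(cons)

    # Hypothetical 1-5 scores (1 = low ... 5 = high) as (pro scores, con scores).
    candidates = {
        "Escaped defects": ([5, 4], [2, 1]),
        "Cycle time": ([4, 3], [3, 2]),
        "Lines of code": ([2, 5], [5, 4]),
    }

    # A rough best-to-worst ordering to seed the team discussion.
    ranked = sorted(candidates, key=lambda name: ranking_score(*candidates[name]), reverse=True)
    for name in ranked:
        print(f"{name}: {ranking_score(*candidates[name]):.2f}")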

Now, working from top to bottom, have your team openly discuss each metric, compare it to the others, and decide which to include in your team’s new set of SDLC metrics. You won’t necessarily take the very top-scoring metrics on the list, because some of them may overlap or measure much the same thing. We suggest reducing your list to the 3-5 best SDLC metrics for the team (feel free to adjust this range for your organization’s specific needs, but remember that each new metric you add will diminish the value of every other metric, because your team can only focus on a few things at a time).

At this point, your team will have collaboratively settled on the smallest set of SDLC metrics they believe will help them the most in achieving their prioritized business-aligned goals. This buy-in from the team is critical to effective adoption and use of SDLC metrics. Remember never to use these metrics for punishment or reward (which will only undermine their effectiveness). Your team can now put these metrics into place.

As one last step, schedule a follow-up SDLC metrics discussion with the team (three to six months later is suggested, but do what works best for your circumstances) to review the metrics for effectiveness and decide whether any changes are needed, whether to which metrics are gathered or to how they are gathered. Remember that the team’s needs and goals can change over time, so your SDLC metrics should be reviewed and updated regularly to maintain their effectiveness.

If you would like to know more about this approach to metrics, speak to your Info-Tech Engagement Representative about the soon-to-be-released SDLC Metrics blueprint.


Want to Know More?

GitHub at SoftwareReviews

JIRA at SoftwareReviews