#42 - Breaking Down Metrics to Build Up Success
Using the KPI Tree Approach to Create Impactful Working Agreements
When we immerse ourselves in data points, insights, and signals as part of our work, it’s easy to get overwhelmed. Deciding what to focus on and what to ignore can be challenging. This challenge is present in almost every role: how a Product Manager chooses the key metrics to track, how a Sales Engineer scopes and prioritizes an implementation, how a Salesperson identifies a customer’s core dilemmas, and how an Engineer pinpoints the root cause of an incident.
A Gartner survey found that knowledge workers use an average of 11 software applications (along with additional tools they may create for themselves). For example, in the case of a software incident, consider the multitude of applications, logs, and messages one needs to sift through to get to the root of the problem. Root Cause Analysis (RCA) is a methodology that involves digging deep until the actual cause of an issue is identified.
In this post, I will discuss how a KPI tree employs a similar approach—only from the other direction—to help you stay focused and improve what truly matters.
I’ll focus on the Change Lead Time metric in software development in the example below. This metric measures the time it takes to implement and deploy a piece of software to production. Improving it in isolation is nearly impossible, much like trying to resolve an incident from a vague report without first pinpointing its cause. To tackle this, we’ll use the KPI tree technique to break it down.
First, we’ll deconstruct the metric by identifying the stages contributing to the total lead time. We’ll ask ourselves: What are the specific steps that make up the entire lead time cycle?
Change lead time is the sum of all the steps in the process. We can ask ourselves, “How can I improve the time it takes to review a PR?” This brings us closer to the root cause and a leading metric we can actually improve.
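The additive breakdown can be sketched in a few lines of Python. The stage names and durations below are hypothetical, chosen only to illustrate the decomposition:

```python
# Hypothetical stage durations (in hours) for a single change.
# In practice these would come from your CI/CD and Git tooling.
stages = {
    "understand_requirements": 3.0,
    "coding": 8.0,
    "pr_review": 6.0,
    "ci_pipeline": 1.0,
    "deploy": 0.5,
}

# Change lead time is simply the sum of every step in the process.
change_lead_time = sum(stages.values())
print(change_lead_time)  # 18.5
```

Once the metric is written this way, “improve change lead time” becomes “shrink the largest term in the sum,” which immediately points at PR review in this made-up example.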
But we can go even further:
The time to open a PR is the number of commits in the PR multiplied by the average time per commit. This makes the metric specific and actionable. But we won’t stop here:
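This multiplicative step is worth writing out, because it shows there are two independent levers: fewer commits per PR, or less time per commit. The numbers below are illustrative:

```python
# Hypothetical figures; in practice, derive these from your Git history.
commits_per_pr = 5
avg_time_per_commit_hours = 1.5

# Time to open a PR = number of commits x average time per commit.
time_to_open_pr = commits_per_pr * avg_time_per_commit_hours
print(time_to_open_pr)  # 7.5
```

Halving either factor halves the sub-metric, so the team can pick whichever lever is cheaper to pull (e.g., smaller PRs versus clearer requirements).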
After breaking it down further, we can now focus on improving even smaller metrics, like reducing the average coding time or the time to understand requirements. The point is clear: the more we dissect the primary metric, the more actionable it becomes.
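The whole decomposition can be modeled as a small tree: leaves hold measured values, and inner nodes roll their children up, summing sequential stages and multiplying for the commits-times-time-per-commit step. This is a minimal sketch with made-up numbers, not a real metrics tool:

```python
import math
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """One node in a KPI tree: a measured leaf or a derived inner node."""
    name: str
    value: Optional[float] = None            # set on leaves only
    children: list["Node"] = field(default_factory=list)
    combine: str = "sum"                     # "sum" or "product"

    def total(self) -> float:
        if not self.children:
            return self.value
        vals = [child.total() for child in self.children]
        return math.prod(vals) if self.combine == "product" else sum(vals)

# Illustrative Change Lead Time tree (all numbers are invented, in hours).
tree = Node("change_lead_time", children=[
    Node("time_to_open_pr", combine="product", children=[
        Node("commits_per_pr", value=5),
        Node("avg_time_per_commit", combine="sum", children=[
            Node("understand_requirements", value=0.5),
            Node("coding", value=1.0),
        ]),
    ]),
    Node("pr_review", value=6.0),
    Node("deploy", value=0.5),
])

print(tree.total())  # 5 * (0.5 + 1.0) + 6.0 + 0.5 = 14.0
```

The payoff of the tree shape is that improving any leaf (say, shaving time off understanding requirements) propagates mechanically up to the primary metric at the root.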
Next, we’ll work in reverse, exploring how optimizing these lower-level metrics can significantly impact the high-level objective.
The more precise the requirement, the less time it takes to understand it. To ensure clarity, I’ve added criteria for an explicit requirement, such as limiting it to one user story and including at least two examples. These working agreements are designed to influence the primary metric by improving its underlying sub-metrics.
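A working agreement like this becomes enforceable once it is checkable. A simple sketch of such a check follows; the field names and thresholds are illustrative, not any real tool’s API:

```python
def is_explicit(requirement: dict) -> bool:
    """Check a requirement against the (hypothetical) working agreement:
    exactly one user story, and at least two concrete examples."""
    return (
        requirement["user_stories"] == 1
        and len(requirement["examples"]) >= 2
    )

good = {"user_stories": 1, "examples": ["happy path", "invalid input"]}
vague = {"user_stories": 3, "examples": ["happy path"]}

print(is_explicit(good))   # True
print(is_explicit(vague))  # False
```

A check like this could run as part of backlog grooming, flagging requirements that will likely inflate the “time to understand requirements” leaf of the tree.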
We can continue this process with the remaining metrics and evaluate the results over time. Based on their impact, working agreements can be refined and adapted. The key takeaway is that we are now laser-focused on improving our primary metric.