SMART PRACTICES

Principles for planning in software development

Brad Hipps

6-3-2024

There are any number of big-picture planning approaches for software teams and the broader organization. Scaled Agile Framework (SAFe). Objectives and Key Results (OKRs). Vision, Value, Methods, Obstacles, and Measures (V2MOM)… Each of these has its own merits and must-haves. But if you were to boil them down to their essence, what are they after?

A reliable way to name, prioritize, size—and then to track engineering progress against—the primary initiatives of the business.

Is this really so difficult? Well, let’s take a look at the primary steps in planning, with an eye toward some best practices for each.

Deciding plan scope

To speak plainly: a plan consists of a short list of objectives to be delivered in an agreed timeframe (a quarter, say). The relevant parts here are:

  • Objective. An objective is anything that adds value for your customer. Objectives give us a single, understandable unit of measure for all stakeholders.

  • Short list. We want a short list of objectives at any given time, stack ranked by priority. We’re not interested in a bottomless backlog packed with every theoretically bright idea we’ve ever had. Backlogs aren’t parking lots. We want to be a value-delivery machine, not an idea-generation machine.

We also don’t want to bite off more than we can chew. At Socratic, our planning follows a quarterly cadence. This means simply that each quarter, we identify the highest priority objectives that can be delivered within the quarter. 

For each quarter, we also create objectives for the “fix and finish” work that’s part of any software application. Usually, we set aside one epic per month (“March fix & finish”, “April fix & finish”, etc.) to capture the inevitable bugs and “quick wins” that don’t fit any of the other quarterly objectives.

Historically, deciding how many of your priority objectives will fit in a given quarter is a guessing game. Engineering is asked to provide an approximate duration for each, which involves trying to size the work as well as deciding how much team capacity exists to deliver it. To call this “back of the napkin” work is an understatement: in a matter of hours or a few short days, we’re trying to decide the work of an entire quarter.

This is crazy.

The old ways of estimating, whether by story point, Fibonacci number, or wet finger in the air, are a titanic waste of engineering time. We use Scenarios, Socratic’s AI-powered forecasting capability, to understand how much work is achievable by when.

This means that in very short order, we have an understanding of how much work we can likely get done for the quarter. With Scenarios, we can easily experiment with different versions of scope, as well as the teams who’ll do the work, to understand what gives us the best chance of hitting our quarterly target.

Plan meets reality

No plan survives first contact with the enemy, as the old military proverb goes. Quarterly objectives that began life as tidy rows in a Google sheet or neat boxes on a slide have now splintered into hundreds of tickets across dozens of sprints in Jira. What was agreed to at a high level has met the realities of development: unforeseen scope creep, new dependencies, sudden changes in staff availability, and so on.

The problem isn’t that plans change. The problem is that too often we can’t see where, how, or why the plan is changing. Consider the questions we try to answer in a given day or week…

  • “How long will this take to finish?” Hard to know. We’ll poll some folks for their gut feel.

  • “Are we making good progress?” Well, there are lots of tickets assigned, and plenty of them have been started. Does that count as progress?

  • “What’s at risk and why?” I’ve asked around, and summarized what I heard on this slide. See the red, yellow and green circles. (I hope this actually reflects reality…)

  • “How are teams doing?” Hm. Does story point velocity mean anything?

With Socratic, we use signal data from the work activity in Jira and GitHub to tell the story. This means our plan shows us how each objective (i.e. epic) is progressing. 

When evaluating the health of active work, there are three primary things we want to know from the data. From simplest to most advanced, these are:

  1. Percent of work complete: that is, raw progress. This is straightforward enough. We want to see at a glance the total number of tasks/tickets for a given epic, how many are complete, and how many are actively being worked versus still waiting in backlog.

  2. Momentum: or the rate at which work is getting done. Because progress is only ever a point-in-time value, it doesn’t show you how—or if—progress is changing over time. This is where momentum comes in. Each time a task is added to or completed for a given body of work, we look at how this changes the ratio of completed tasks to open ones. We then compare that new ratio to what the ratio was prior to the change.

  3. Forecast completion date: this one is possible only with AI. Socratic uses Monte Carlo simulation, combined with personalized historical actuals, to render a predictive date range by which all work will be complete. 
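The momentum signal in item 2 can be sketched in a few lines. This is an illustrative simplification, not Socratic’s actual implementation: it just computes the completed-to-open ratio before and after a change and reports the delta.

```python
def task_ratio(completed: int, open_tasks: int) -> float:
    # Ratio of completed tasks to open ones. If nothing is open,
    # treat the body of work as fully burned down.
    return float("inf") if open_tasks == 0 else completed / open_tasks

def momentum(prev_completed: int, prev_open: int,
             completed: int, open_tasks: int) -> float:
    # Positive delta = the epic is gaining momentum;
    # negative = work is being added faster than it completes.
    return task_ratio(completed, open_tasks) - task_ratio(prev_completed, prev_open)

# An epic with 10 tasks done and 10 open completes one more task:
delta = momentum(10, 10, 11, 9)  # ratio moves from 1.0 to ~1.22
```

Note that adding a new task to the epic produces a negative delta, which is exactly the early-warning behavior you want from a momentum signal.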

Taken together, progress, momentum and forecast give a real-time perspective on the health of every plan objective. In this way, our plan isn’t some point-in-time thing now gathering dust, but a living, breathing reflection of the daily realities of engineering.
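Socratic’s internals aren’t public, but a Monte Carlo completion forecast of the kind described above generally works like this sketch: repeatedly resample historical throughput until the remaining work is burned down, then read percentiles off the simulated outcomes. The weekly cadence and percentile choices here are assumptions for illustration.

```python
import random

def forecast_completion_weeks(remaining_tasks, weekly_throughput_history,
                              simulations=10_000, rng=None):
    """Monte Carlo forecast: resample historical weekly throughput
    (which must contain at least one positive sample) until the
    remaining tasks are exhausted, repeated `simulations` times.
    Returns the 10th/50th/90th percentile week counts."""
    rng = rng or random.Random()
    outcomes = []
    for _ in range(simulations):
        left, weeks = remaining_tasks, 0
        while left > 0:
            left -= rng.choice(weekly_throughput_history)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    pick = lambda p: outcomes[int(p * (len(outcomes) - 1))]
    return pick(0.10), pick(0.50), pick(0.90)

# 10 tasks left; historically we finish 2-4 tasks a week:
p10, p50, p90 = forecast_completion_weeks(10, [2, 3, 4])
```

The output is a date *range* rather than a single date, which is the honest way to present a forecast: the spread between the 10th and 90th percentiles tells you how much to trust the median.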

Of course, this raises the question: what do you do when one of these signals of progress, momentum, or forecast suggests something is off?

Using data to plan & manage better

When reporting on the committed work of a plan, what you’re really after are the outliers: what’s trending late, or otherwise looks off track, and why? The signals described above (progress, momentum, forecast) are a data-driven way to spot outliers. But having spotted potential problem areas, you now want to see why a particular plan objective may be off track.

For this, we use Socratic Trends. Trends puts your historical activity data to work for you. As tasks complete, Trends shows how work productivity is changing over time. By surfacing period-over-period changes, alongside benchmarking to historical averages, you can see at a glance what’s improving and what may need attention. The aim of Trends is to use data to answer questions that would otherwise require lots of digging and manual compilation, or that aren’t answerable at all.

Let’s take an example from our own work. Here, we see this particular epic is forecast to be late. Naturally, we want to understand why.

By clicking the epic, we can see trends in productivity specific to its tasks. Across the top we see:

  • The amount of new work demand (i.e., scope);

  • The average speed at which that work is completed (i.e., cycle time);

  • The total amount of work delivered (i.e., throughput).
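For a given period, all three measures fall out of basic task records. The sketch below shows one way to compute them; the field names are illustrative, not Socratic’s actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Task:
    created: date             # when the task entered scope
    started: Optional[date]   # when work began (None if not started)
    done: Optional[date]      # when it was completed (None if open)

def period_metrics(tasks, start, end):
    """Scope, average cycle time (days), and throughput for [start, end)."""
    # Scope: new work demand created in the period.
    new_scope = sum(1 for t in tasks if start <= t.created < end)
    # Throughput: work delivered in the period.
    finished = [t for t in tasks if t.done and start <= t.done < end]
    # Cycle time: started-to-done duration, averaged over finished tasks.
    cycles = [(t.done - t.started).days for t in finished if t.started]
    avg_cycle = sum(cycles) / len(cycles) if cycles else None
    return new_scope, avg_cycle, len(finished)
```

Comparing these numbers period over period is what surfaces a trend like the scope explosion described below: scope climbing while throughput holds flat is the signature of a forecast slipping late.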

In this particular case, the primary culprit for the late forecast is clear. Scope creep. Maybe “scope explosion” is a better descriptor: the data show that we’ve more than doubled the number of new tasks each month over the past three months!

Planning in software development is tough. There are a lot of moving parts. When we talk about making the planning and execution process AI-powered or data-driven, we’re not interested in simply grabbing the latest hype phrase. To be data-driven means something much more straightforward. It means using data to answer hard questions. It means using data to get better.

When all is said and done, data is a simplifier!