Building product at Tegus

Brad Hipps


Ram Bolaños is an engineering manager at Tegus, the leading investment research platform, which streamlines the information investors need to move quickly, build conviction, and make better decisions to outperform the market. Ram's Data Integrity team is responsible for key areas of the Tegus solution offering, including Tegus' modeling product and Tegus' data acquisition product.

I was curious to learn how Ram approaches software engineering generally, and specifically where/how this thinking has informed his team’s work at Tegus. What stood out:

  • Eyeing the number of handoffs in an organization as a predictor of bottlenecks

  • Giving teams the autonomy to own the full delivery of their work to customers

  • Piggybacking on John Kotter’s concept of Big Opportunity to let people work on what naturally drives them

  • Orienting around good flow, and making every team member an owner of it

Thanks again to Ram for taking the time to talk!

Let’s start generally. Pretend you’ve joined a new team. What does Day 1 look like?

My motto is always: start slow, and ask lots of questions. Why do we work the way we work? Why these specific processes? What’s the history behind them? It’s really just trying to get to the deeper Why behind things. Sometimes you’ll find people may not even know why things are done the way they’re done: it’s just always been that way!

A specific example I’ve seen of this is the build process. It’s not uncommon to find that everyone complains about the build taking 30 minutes, an hour, two hours, but nothing gets done about it, right? There’s a fear factor. The build process has become this mysterious, brittle thing that no one wants to crack open. Everyone's petrified to touch it.

And I always ask, Why? Why can't we get training on it? Why can't we expense a few days to understand our build process and to modify it to actually make it work?

You have that relatively short period to come in and be the person who knows nothing, which allows you to look more holistically at what’s going on, before you’re consumed by the legacy processes and they just become mechanical for you.

Are there other things that draw your eye?

Yes, interestingly enough. A couple.

One is handoffs. Handoffs, to me, are like “smells.” If you have a software development team, and a DevOps team, and a test team, and a release management team… that smells a little bit to me, because it signals an organization with lots of silos and limited autonomy for each of the participating teams.

Another thing that I often look for is turnover, and how we onboard people. Do we have good onboarding processes, and do we ever exercise that muscle? I've seen teams that become stale with people who are tenured five, ten years. It sounds great, but when it’s time to bring someone new on, there’s just no feel for how to do it. The current team is so steeped in the code base and its history, they can’t even imagine what it means to get a new person up to speed.

Looking at handoffs is interesting. It’s like a risk indicator for bottlenecking, I suppose…

That’s exactly it. Handoffs are a preventer of flow. It’s just an opportunity for things to stall out, or get missed.

Handoffs also tend to undermine autonomy. If a piece of work has to pass through five different teams’ hands on its way out the door, who really is responsible? No one team is accountable for getting it into customer hands. Each team could do their part perfectly, but maybe it still takes months to get things delivered. That also means months before you’re getting feedback from customers. It’s like each team is winning their battle, but collectively we’re losing the war.

How does this inform planning and building at Tegus?

I lead the Data Integrity team. For roadmap planning, we have what we call “DI Day.” The whole team comes into the office, and everyone pitches ideas of things they want. They make these pitches to the product manager. All these ideas end up written on a board.

Once we’ve had a chance to get our ideas out, we then talk about the needs of the business. The product manager will give us the context for things like sales, business targets, go-to-market strategy, and so forth. And then we work to align our ideas with the go-to-market strategy.

From there, we come up with a list of the things we want to tackle. A list of objectives, with some initial, coarse-grain sizing: small, medium, large. In our world, small is roughly two to three weeks, medium is three to six weeks, and large is six to eight weeks.

On a quarterly basis, we also set aside what we call “engineering time.” This is time for all the work that doesn’t tie neatly to one of our quarterly objectives—tech debt, defects, etc. Some quarters we budget a fair amount of time for it; other quarters might have little or none. It just makes for a less stressful, more impactful quarter when we’ve got agreement with our product manager on how much time can go towards it.

So you have your prioritized set of quarterly objectives. What happens from there?

We follow the pull model—meaning, people are selecting objectives from the prioritized list.

In his book XLR8, John Kotter talks about what he calls Big Opportunity, which is something that gives a person a real sense of urgency, because you’re passionate about what this thing you’re working on will make possible. That really spoke to me, because I remember my days as an individual contributor where, if I got a chance to learn something new or to work on something I chose, my intensity and energy showed in my work.

So we want people to claim objectives based on their interests. When somebody raises a hand and says, ‘Hey, I really would like to learn more about this,’ or ‘I feel super passionate about this,’ they get to lead that objective. And while I have team leads, while I have seniors, there is no rule that says an objective has to be led by a senior. An objective has to be led by a person. And it doesn't matter your tenure. What matters is your passion for the subject.

Size-wise, we like our objectives to be no more than four weeks in duration. Because you want, again, that principle of flow. That principle of incremental. It just creates the conditions where the product manager gets to see something pretty fast. Three to four weeks is always the timeframe where she can see something real, get something tangible in her hands and say, ‘Yes’ or ‘Not there yet.’

What sort of data are you using to understand how work is going?

The forecast is one of the things I look at in Socratic on a daily and weekly basis. And yes, as we start populating our objectives with tasks, I don't know how you do it, but that forecast feature is pretty accurate! Socratic has learned pretty well how the team works.

We recently had some unplanned work crop up, so we had to go through an exercise of rebalancing some objectives from Q2 to Q3—what could we realistically get done, what would we need to push? I did that exercise in less than a minute on a live call, using Socratic’s forecast data to tell me what was possible, and how to get back on track. And then we moved on from that subject. We didn't have to go into a heavy planning session. We didn't have to have some long discussion.

Data-wise, I also want to know what our flow is, and where is work getting stuck. Cycle time is the other thing I look at on a weekly basis.

Merge request (MR) time is obviously a key component of cycle time.* It can become a big interruptor of flow. If you think about an MR taking three days, for example, what's the developer going to do during those three days? The natural thing is to pick up another task. And if your average is three days, that means that maybe they're done with the second task before that MR is complete. Well now their work-in-progress (WIP) is two. And if they're still waiting, their WIP goes to three. All right, where does this end?
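Ram’s arithmetic here can be sketched as a toy simulation (my illustration, not anything from Tegus or Socratic; the day counts are hypothetical): a developer finishes a new task every couple of days and opens an MR, while each open MR sits in review for a fixed number of days. When review wait exceeds task time, open MRs pile up.

```python
# Toy sketch of the WIP arithmetic: if merge requests wait several days
# for review while each new task takes ~2 days to code, a developer's
# open-MR count (their WIP) grows until review throughput catches up.

def wip_over_time(days: int, task_days: int = 2, review_wait_days: int = 3) -> list[int]:
    """Return the developer's open-MR count at the end of each day."""
    open_mrs: list[int] = []  # day on which each still-open MR was submitted
    wip = []
    next_task_done = task_days
    for day in range(1, days + 1):
        # An MR's review finishes once it has waited review_wait_days
        open_mrs = [d for d in open_mrs if day - d < review_wait_days]
        # Developer finishes coding a task and opens a new MR
        if day == next_task_done:
            open_mrs.append(day)
            next_task_done = day + task_days
        wip.append(len(open_mrs))
    return wip

print(wip_over_time(12))                       # 3-day review wait
print(wip_over_time(12, review_wait_days=5))   # slower reviews, higher WIP
```

With a three-day review wait, WIP settles around two; stretch the wait to five days and it hovers around three, which is exactly the creep Ram is describing.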

When we talk about how long it should take a developer to review code, we like to see something around 15 to 20 minutes. So ideally, your MRs are of a size that they can be reviewed in less than 30 minutes. Again, working incrementally as much as possible, keeping that flow going.

I like empowering development teams to be accountable for taking value to the market. I want my teams to feel accountable to good flow, and that starts with having enough autonomy over the end-to-end process. In my team we’ve created a few norms around how long it should take to ship a small feature, a small bug, and how we create flow and a continuous release cadence. We’ve found success in shipping bug fixes within a day or less, small features within a week, and projects within 3 to 6 weeks.

I want to work with product owners who have identified value for customers, and then have a group of teammates who are responsible for turning that into software. They should have the freedom and the responsibility to work with the product owner to define the idea, build it, and ship it!

*Socratic’s most recent data show merge time is indeed the largest chunk for teams, accounting for nearly a third of total cycle time, on average.