Perhaps you know the following dialog or have even had it yourself:

Product Manager: “We have to get this out as quickly as possible. What’s your estimate of the effort?”

Engineer: " Tough. The code is really complicated by now. We have to clean it up first, at least [significantly more time than expected]."

Product Manager: “Can’t we make it simpler?”

Engineer: “It could be done somehow, but we have already accumulated so much technical debt…”

The discussion can go on for as long and with as many people as you like. In the end, they usually agree to implement the feature in some (sometimes reduced) form and let the technical debt be technical debt for now. Until it becomes too much and the decision is made to rebuild everything.

Dilemma

At its core, the discussion revolves around the following conflict:

  • In order to deliver new features quickly in the future, we must reduce technical debt.
  • In order to deliver new features quickly now, we must not reduce technical debt.

To build a successful business in the long term, we must deliver new features quickly both now and in the future. So both perspectives are completely valid, which means we must both reduce technical debt and not reduce it.

As a result, companies often oscillate between the two extremes (‘We’ll clean this up later.’ vs. ‘We’re doing a rewrite right now…’). Others try to help themselves with an X% rule (e.g. ‘We spend 20% of our time reducing technical debt.’). But can that really be the solution?

Assumptions

Underlying this dilemma are the following assumptions, among others:

  • If we reduce technical debt now, the code base and architecture are significantly better in the future.
  • We can either reduce technical debt or build new features.

Of course, there are plenty of other assumptions that could be discussed at this point. But we’ll limit ourselves to these two here and put them to the test.

If we reduce technical debt now, the code base and architecture are significantly better in the future.

How often has the code or architecture turned out no better, or even worse, after a major rebuild? Just because one person (or a small group) finds something better or easier doesn’t mean others will. Nor does it necessarily mean that future features will be easier to implement on top of the rebuilt code.

This is a hard pill to swallow for most engineers (myself included): a rebuild doesn’t necessarily improve code quality.

For a rebuild to have the desired positive effect, there must be an overarching understanding of “better”, i.e. of the desired target state. Until that exists, any rebuild remains a gamble.

We can either reduce technical debt or build new features.

Put another way: ‘Reducing technical debt and feature development are mutually exclusive.’ To challenge this assumption, we can refer to Martin Fowler[1]:

“When I need to add a new feature to a codebase, I look at the existing code and consider whether it’s structured in such a way to make the new change straightforward. If it isn’t, then I refactor the existing code to make this new addition easy. By refactoring first in this way, I usually find it’s faster than if I hadn’t carried out the refactoring first.”

Refactorings are small, behavior-preserving transformations [2] and as such are part of daily work. Done correctly, they do not increase the effort of feature development; quite the opposite. However, they are also not done separately from feature development, but as an integral part of it.
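
To make Fowler’s refactor-first approach concrete, here is a minimal sketch in TypeScript. All names (carriers, prices, functions) are hypothetical and invented purely for illustration:

```typescript
// Before: adding a new carrier means extending a tangled conditional.
function shippingCostBefore(carrier: string, weightKg: number): number {
  if (carrier === "dhl") {
    return 4.99 + weightKg * 0.5;
  } else if (carrier === "ups") {
    return 5.49 + weightKg * 0.4;
  }
  throw new Error(`Unknown carrier: ${carrier}`);
}

// Step 1, the refactoring (behavior-preserving): extract the pricing rules
// into a data structure. No feature is added yet; existing tests stay green.
type Pricing = { base: number; perKg: number };

const pricingByCarrier: Record<string, Pricing> = {
  dhl: { base: 4.99, perKg: 0.5 },
  ups: { base: 5.49, perKg: 0.4 },
};

function shippingCost(carrier: string, weightKg: number): number {
  const pricing = pricingByCarrier[carrier];
  if (!pricing) throw new Error(`Unknown carrier: ${carrier}`);
  return pricing.base + weightKg * pricing.perKg;
}

// Step 2, the actual feature: the new carrier is now a one-line change.
pricingByCarrier["fedex"] = { base: 6.99, perKg: 0.35 };
```

The refactoring step changes no behavior and carries little risk; the feature that motivated it then becomes almost trivial.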

Solution

This leads to two key points for resolving the dilemma:

  1. We need a common understanding of the desired target state among all engineers.
  2. We need a good procedure for continuous refactoring in the context of feature development.

These goals can be achieved with the introduction of Blueprints and an actively practiced Scout Rule.

We define Blueprints as in-house design patterns and fundamental architectural decisions that describe how recurring problems in software development are solved within the company (or the department or the team, whatever scope makes sense). For example, they can dictate how APIs are implemented in the backend. This includes simple things, like the structure and naming of URLs, but also fundamental architectural decisions, such as which models the API exposes, how they are constructed, and how the logic is partitioned.
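
To give a flavor of what such a Blueprint might pin down, here is a hedged sketch in TypeScript. The route, models, and conventions below are illustrative assumptions, not an actual Blueprint:

```typescript
// Hypothetical excerpt from a "Backend API" Blueprint, expressed as code.

// Convention 1: URLs are versioned, plural, and kebab-case.
const ROUTE = "/api/v1/purchase-orders/:id";

// Convention 2: API models (DTOs) are kept separate from domain models;
// domain objects are never serialized directly.
interface PurchaseOrderDto {
  id: string;
  status: "open" | "shipped" | "cancelled";
  createdAt: string; // dates are always serialized as ISO-8601 strings
}

interface PurchaseOrder {
  id: string;
  status: "open" | "shipped" | "cancelled";
  createdAt: Date;
}

// Convention 3: the mapping logic lives next to the DTO, not in the handler.
function toDto(order: PurchaseOrder): PurchaseOrderDto {
  return {
    id: order.id,
    status: order.status,
    createdAt: order.createdAt.toISOString(),
  };
}
```

The specific conventions matter less than the fact that they are written down and shared, so that “better” means the same thing to everyone.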

The Scout Rule essentially says: “Whenever you touch a piece of code, leave it in a better state than you found it.” With Blueprints in place, there is now agreement on what exactly “better” means, and each engineer can improve the code they touch in that direction.
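
Continuing the illustrative Blueprint sketch above, the Scout Rule in action could look like this minimal example (assuming an Express-style backend; findOrder is a hypothetical repository lookup):

```typescript
import express from "express";

// ROUTE, PurchaseOrder and toDto come from the Blueprint sketch above;
// findOrder is a hypothetical lookup, assumed to exist in the codebase.
declare function findOrder(id: string): PurchaseOrder;

const app = express();

// Before (violating the Blueprint), the handler leaked the domain model:
//   app.get(ROUTE, (req, res) => res.json(findOrder(req.params.id)));
//
// Scout Rule: since this file is being touched for a feature anyway, we align
// it with the Blueprint in the same change, one small step toward the target state.
app.get(ROUTE, (req, res) => {
  const order = findOrder(req.params.id);
  res.json(toDto(order)); // return the API model, as the Blueprint prescribes
});
```

The improvement is deliberately small: it rides along with the feature work instead of becoming a separate refactoring project.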

Conclusion

With this combination of Blueprints and the Scout Rule, refactorings can be performed as part of normal feature development without slowing down actual development.

This sounds pretty simple, and not really new either; we thought so too at first. However, it is anything but easy to create good, useful Blueprints that all engineers support. It is also a real challenge to apply and improve the Blueprints during everyday work, and following the Scout Rule is anything but a given. There are so many degrees of freedom and uncertainties that questions and discussions keep coming up.

So: it’s not as easy as it sounds at first, but it does bring gradual clarity and improvement. Development enters a positive upward spiral in which the code base moves closer and closer to the target state over time, instead of further and further away from it.

[1]: Martin Fowler, Refactoring Home Page, https://refactoring.com

[2]: Martin Fowler, RefactoringMalapropism, https://martinfowler.com/bliki/RefactoringMalapropism.html
