A Scope Heuristic

One of my first exposures to TDD was through a code kata. I was mortified.

The performer test-drove his way to a simple function for factoring numbers into primes. He added tests one by one:

  • It should factor 1 correctly
  • It should factor 2 correctly
  • ...
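
For readers who have not seen the kata: below is a minimal sketch of the kind of function those tests converge on. This is my reconstruction in Python, not the performer's actual code.

    def prime_factors(n):
        """Return the prime factorization of n, smallest factors first."""
        factors = []
        divisor = 2
        while n > 1:
            while n % divisor == 0:
                factors.append(divisor)
                n //= divisor
            divisor += 1
        return factors

    assert prime_factors(1) == []           # "It should factor 1 correctly"
    assert prime_factors(2) == [2]          # "It should factor 2 correctly"
    assert prime_factors(12) == [2, 2, 3]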

To me, the order in which the tests were added did not suggest any premeditated approach. It did not suggest any deep understanding of the problem domain. The performer did not seem to think that writing a factorization function required any forethought at all! "If I were to test-drive this problem," I thought, "I would at least start by test-driving the helper function that my algorithm requires." When the kata was over, nobody voiced any of the criticisms I had. I was upset and confused.

My feelings on TDD have evolved somewhat since then. I have become significantly more enthusiastic about the practice, but I am still trying to understand how exactly it is helpful. In particular, that day's kata illustrated a tension which I have not resolved for myself.

This tension is between thinking far ahead and focusing on immediate concerns when test-driving a change. In some cases, it's clear that you need to step away from the computer and think through your options. In others, it's clear that you need to start typing and let the software evolve.

Some of my most frustrating software development experiences have been rooted in this tension. I have been burned by a lack of foresight: it has let poor design decisions cement. Conversely, I have spent hours fretting over which approach to take, only to find that my worries were ultimately irrelevant.

Perhaps you are thinking to yourself that the answer is obvious: always put a lot of thought into your changes. If so, I agree with you, but you are missing the point! One of the benefits of TDD is that it can keep me from thinking too far ahead; it does not cause me to think any less. Sometimes, by narrowing my sights, TDD prevents me from writing complicated things that I don't need, or from worrying about non-issues.

I am slowly learning to distinguish the situations where I can reap this benefit from those where I cannot.

I would like to suggest a rough classification of the situations where it makes sense to embrace short-term thinking.

  1. When a coding decision is tightly coupled to the structure of existing code, and is cheap to revise, you should try to reap the benefits of short-term thinking.
  2. When a coding decision is costly to revise, you should not.

Decisions coupled to the problem domain, for example, often fall into category 2. There is often a high price to pay for misunderstanding the problem domain, because this understanding informs your high-level approach. Moreover, thinking deeply about the entire problem domain can yield high returns.

Refactoring tasks often force you to make decisions which fall into category 1. When you refactor incrementally, it should be cheap to revise any mistakes that you make. Moreover, it is hard to accurately anticipate the outcome of a refactor. You have to interact with the codebase to know what approach to take, and the best way to do this is by test-driving small changes.

Where does this classification hold up?

Suppose, for example, that you would like to write an AI module for a zero-sum two-player board game. This task is tied to the problem domain, and long-term thinking can pay dividends. If you choose the wrong algorithm, you will have to rewrite essentially the entire module; this task therefore falls into category 2.

Without thinking far ahead, you could test-drive your way to a module that works pretty well. Unfortunately, you are liable to write tests haphazardly, producing an inelegant pile of control-flow statements. Once you choose an algorithm grounded in game theory (minimax, say), your approach crystallizes. It becomes easy to work methodically towards a solution, with tests along the lines of the following (sketched in code after the list):

  • The AI looks 0 moves into the future.
  • The AI looks 1 move into the future.
  • The AI looks arbitrarily into the future.
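
To make the game-theory approach concrete, here is a hedged sketch of what those tests might drive out: a depth-limited negamax, a symmetric formulation of minimax. The Game interface and all names are hypothetical; a real module would define them for its own game.

    from typing import Iterable, Protocol

    class Game(Protocol):
        """The game interface this sketch assumes; a real game supplies it."""
        def moves(self) -> Iterable[int]: ...     # legal moves, as move IDs
        def play(self, move: int) -> "Game": ...  # the position after a move
        def is_over(self) -> bool: ...
        def score(self) -> int: ...               # from the player to move's view

    def negamax(game: Game, depth: int) -> int:
        """Value of the position for the player to move, `depth` moves ahead."""
        if depth <= 0 or game.is_over():
            return game.score()  # out of lookahead: just evaluate the position
        # A child position is scored from the opponent's point of view, so
        # negate it. Assumes every non-terminal position has a legal move.
        return max(-negamax(game.play(m), depth - 1) for m in game.moves())

    def best_move(game: Game, depth: int) -> int:
        """The move an AI looking `depth` moves into the future would choose."""
        return max(game.moves(), key=lambda m: -negamax(game.play(m), depth - 1))

One plausible mapping from the tests to this code: each test in the list raises `depth` by one, which is what makes the incremental approach methodical.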

Now, suppose you anticipate duplicating code in the next feature that you add to this two-player game. Maybe you think that the "game over" function in your main game loop will resemble the "stop searching into the future" function of your AI module. The anticipated refactor will require you to think about the structure of the existing code. This refactor will also (hopefully) be cheap to revise; it therefore falls into category 1.

It would be a bad idea to jump ahead and spec out a function which eliminates duplication. It makes more sense to focus on immediate concerns, adding the feature so that the duplicated code exposes itself. Even as I write this article, I am creating and eliminating duplication.
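
For instance, here is a sketch of how that duplication might look once the feature exists (all names here are hypothetical):

    def game_over(game):
        # used by the main game loop to decide when to stop playing
        return game.has_winner() or game.board_full()

    def should_stop_searching(game, depth):
        # used by the AI module to decide when to stop looking ahead
        return depth <= 0 or game.has_winner() or game.board_full()

Only once both functions exist is it obvious which parts are truly shared.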

Where does this classification fail?

This classification essentially falls apart when it comes to deciding how information should travel through a program, which I will refer to as software "workflow". Workflow decisions do not fall cleanly into either category, and I am not sure how to approach making them. Why don't they fit?

First, workflow decisions are very costly to revise. Changing existing workflow can be extremely difficult because it involves updating interfaces; this can break your dependency inversion measures, requiring you to make cascading changes [1]. Workflow, therefore, fits into category 2.

Second, workflow is tightly coupled to the code that you write. Your workflow decisions dictate which modules exist and what their interfaces look like. Consequently, you cannot think very deeply about workflow without getting tripped up by complications in the existing code. So workflow decisions also partially fit category 1.
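
As a hypothetical illustration of that coupling, consider where the two-player game's state lives; each choice forces a different interface on the AI module (the names are invented for the sketch):

    # Workflow A: the game loop owns the state and pushes each position in.
    # The AI's interface is a pure function of a position.
    def choose_move(position):
        ...

    # Workflow B: the AI tracks the game itself and is notified of moves.
    # The interface becomes stateful, and every caller changes with it.
    class Ai:
        def observe(self, move):
            ...

        def choose_move(self):
            ...

Switching from A to B is not a local edit: it redraws the module boundary, which is why workflow revisions cascade.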

What can we learn from our inability to classify workflow decisions? I think it explains why workflow is so hard to get right: workflow is complicated, so it demands incremental evolution, yet existing workflow is extremely hard to alter, so that evolution is costly and difficult.

Conclusion

It is unfortunate that my heuristic does not apply to every challenge a developer faces. Still, I think it will offer some value to developers who, as I once was, are confused about the benefits of test-driven development.

[1] For more on why workflow is costly to revise, see the discussion of the StableDependencies principle in Robert C. Martin's Agile Software Development: Principles, Patterns, and Practices (PPP). In the nomenclature of that text, changes to workflow often result in changes to "stable" pieces of software.