What makes a software project a joy to work on? Conversely, what can make working on a software project excruciating? For me, I'm happiest when I feel I'm able to deliver real value, and I've found that good practices, followed strictly by the team, allow me to accomplish this. In particular, I've found that a project where all of the developers are committed to Test Driven Development (TDD) is radically easier to work on.
An untested codebase is like a game of Jenga: changes become harder and harder to make as time goes on, and the entire application turns into an unstable, precarious mess that requires special care and attention.
A Sense Of Progress
Manually verifying that our code works can be a relatively time-consuming task. For example, if we're writing a web application, verifying some new functionality may mean doing a page refresh. Moreover, getting our web app into a working state so we can reload the page can require a significant amount of time spent coding before we can run it again.
TDD enables us to break down our work into small, manageable chunks, where code is only in a "broken" state for a short period of time. This allows us to focus on one aspect of the problem at a time, without having to deal with the complexity of anything else. I find that this makes software development far easier and more satisfying to undertake, and seeing failed unit tests pass shows that we're making progress. Furthermore, these unit tests give us much faster feedback on the code we are writing. With the web app example, there is no need to open up a web browser and refresh the page; we can simply run the tests with a keyboard shortcut.
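As a minimal sketch of a single red-green cycle (the `slugify` helper and its test are hypothetical, not from any particular project): we write the test first, watch it fail, then write just enough code to make it pass.

```python
import re
import unittest


def slugify(title):
    """Convert a post title into a URL slug."""
    # Lowercase, collapse runs of non-alphanumerics into a hyphen, trim ends.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


class SlugifyTest(unittest.TestCase):
    # Written first, this test fails ("red") until slugify exists,
    # then passes ("green") -- one small, visible unit of progress.
    def test_replaces_spaces_and_punctuation_with_hyphens(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")
```

Running the test file with `python -m unittest` from a keyboard shortcut closes the feedback loop in a second or two.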
Software maintainability is arguably the most important goal that we as developers must strive for. How easy is it to add new features or to fix issues when they arise? The key difference I've found in projects that lack TDD is just how difficult and tedious minor tasks can be.
When I want to understand what some piece of code does (for example, a class or a function), I usually turn to its unit tests as a source of documentation. A well-written unit test briefly summarizes the intent of the code under test, its inputs and outputs, and possibly the collaborators it depends on. This allows me to quickly get the gist of what the code is all about.
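As an illustration (a hypothetical `ShoppingCart` class, invented for this sketch), a test like the one below documents the intent, inputs, and expected output at a glance:

```python
import unittest


class ShoppingCart:
    """Hypothetical example class, used only to illustrate the point."""

    def __init__(self, tax_rate):
        self.tax_rate = tax_rate
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        return round(subtotal * (1 + self.tax_rate), 2)


class ShoppingCartTest(unittest.TestCase):
    # The test name states the intent; the body shows the inputs
    # (item prices, a tax rate) and the expected output, so a reader
    # can get the gist without tracing the implementation.
    def test_total_includes_tax_on_all_items(self):
        cart = ShoppingCart(tax_rate=0.10)
        cart.add(10.00)
        cart.add(5.00)
        self.assertEqual(cart.total(), 16.50)
```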
Conversely, when there are no tests, you have the burden of investigating the functionality of a particular class or function yourself. You need to go through the code, line by line, to see what it is doing. In my experience, code that has not been test driven tends to be overly coupled, which requires more mental effort to unravel. I may also need to see how it's actually used across the application. This can take a substantial amount of time, far more than peeking at a set of unit tests.
When a bug does emerge, this body of unit tests plays a critical part in diagnosing the problem. For example, let's say we have observed that some specific user input into our web application results in a failure somewhere on the backend. How can we go about diagnosing this? With a well-tested system, we can find the unit tests that deal with user input and experiment with their inputs to see if we can replicate the issue. We can run isolated tests in a sandbox environment and get to the root of the problem more directly.
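A sketch of what that can look like, using a hypothetical `parse_quantity` input handler: the suspect input from the bug report is fed straight into an isolated test next to the existing ones, with no need to boot the whole application.

```python
import unittest


def parse_quantity(raw):
    """Hypothetical input handler suspected in a bug report."""
    return int(raw.strip())


class ParseQuantityTest(unittest.TestCase):
    # An existing test documenting the happy path.
    def test_accepts_surrounding_whitespace(self):
        self.assertEqual(parse_quantity(" 3 "), 3)

    # Reproducing the reported failure: the problematic input
    # becomes a new test case we can run in isolation.
    def test_raises_on_empty_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("")
```

Once the failing input is captured as a test, the fix can be verified the same way, and the test remains as a regression guard.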
And if we don't have this body of unit tests? Under these circumstances, our choices are limited to a few options: primarily using a debugger or "print line" statements, and then repeatedly running the entire system to investigate the problem. This can be a cumbersome ordeal.
Changing Code With a Safety Net
When it comes to change, one thing we need to ensure is that we do not break any existing functionality. A comprehensive set of unit tests acts as a safety net for change: if I change something and make a mistake along the way, I expect the unit tests in place to notify me.
Untested codebases lack this safety net, and changes have to be made painstakingly. We need to manually verify that the new change is correct and that we haven't broken any existing functionality. The latter can be very difficult if the codebase is large and complex.
Encourages Loosely Coupled Design
When we're writing code without tests and we need to depend on a database or an external API, there's nothing really stopping us from adding those low-level dependencies directly into the code we're writing. Separating these responsibilities into their own components may not seem entirely beneficial at first. After all, if all we're doing is manually testing the entire system, the net result will be the same.
However, when we let the tests inform the design, the issues with this approach manifest themselves as slow, hard-to-write unit tests. If we're test driving some application logic, and during each test we're calling out to an external API, those tests are going to become slow and painful to deal with. We soon discover that it's easier to pull the external API calls into their own class and to inject a test double when testing the application logic instead. Fast unit tests mean faster feedback, and a more optimal workflow.
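A minimal sketch of this, assuming a hypothetical `PriceConverter` whose application logic depends on an external exchange-rate API: the API client is injected through the constructor, so the test can substitute a double.

```python
import unittest
from unittest import mock


class ExchangeRateClient:
    """Wraps the slow external API (hypothetical); extracted into its own class."""

    def rate(self, currency):
        raise NotImplementedError("would perform an HTTP call in production")


class PriceConverter:
    # The client is injected rather than constructed internally,
    # so the application logic never has to touch the network in tests.
    def __init__(self, client):
        self.client = client

    def to_local(self, amount_usd, currency):
        return round(amount_usd * self.client.rate(currency), 2)


class PriceConverterTest(unittest.TestCase):
    def test_converts_using_current_rate(self):
        # A test double stands in for the real API client,
        # keeping the test fast and deterministic.
        fake_client = mock.Mock()
        fake_client.rate.return_value = 0.80
        converter = PriceConverter(fake_client)
        self.assertEqual(converter.to_local(10.00, "GBP"), 8.00)
```

The same seam that makes the test fast also gives the production code a single, obvious home for all exchange-rate calls.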
When code is split into logical, cohesive components, it becomes more obvious what each component does. If, in the example above, we discover a bug in database persistence, all the database persistence calls are now in one place rather than scattered across the codebase.
In general, I find that systems designed with testability in mind are easier to work with and remain more maintainable throughout the lifetime of the product. Software is inherently complex, and making things easier for ourselves as developers is a crucial part of delivering high-quality software.