How do you evaluate the health of your test suite?
Here are three common symptoms that indicate your test suite is sick, dying, or dead.
```
$ ./run-tests
Test Results:
0 Tests Found
0 Tests Failed
$
```
So, you may not have to worry about reviving your test suite. But before you leave today, don't forget to submit your budget sheets for 2015. Make sure to include the few hundred thousand dollars you'll need for all the human capital that will be running manual tests through all of 2015. You may want to add another one or two hundred grand for all the bugs that will pay your software a visit thanks to the lack of an automated test suite. Happy New Year!
It's 2015, people! Are you seriously still writing large-scale systems without any tests? Your developers may be brilliant and very talented, but the complexity of today's software is far beyond any one person's mental capacity. Start the new year off right: write some tests for your code.
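If you're starting from zero, the first test can be this small. Here's a minimal sketch using Python's built-in unittest runner; the `add` function is just a stand-in for whatever your real code does:

```python
import unittest

def add(a, b):
    # Stand-in for your real code under test.
    return a + b

class FirstTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    # exit=False so the script can continue after the run, if needed.
    unittest.main(exit=False)
```

One test file and the built-in runner is all it takes to go from zero tests to one.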
```
$ ./run-tests
Test Results:
283 Tests Found
105 Tests Commented Out
0 Tests Failed
$
```
Congratulations! All the problems in the test suite have been swept under the rug.
Question: why do those 105 commented-out tests exist? Were they once useful but are no longer necessary? Were they broken by developers who didn't know how to fix them, so they commented them out? Are they "TODO: write this test" placeholders that never got implemented?
If the tests really are useless or no longer necessary, then just get rid of them (by deleting them, not commenting them out). You have source control, so you can easily get them back if necessary. Don't make other developers sift through hundreds of lines of useless, commented-out tests. Just dump them. It's going to be okay.
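And if a test really must stay disabled for a short while, an explicit skip with a reason beats a comment block, because it shows up in every test report instead of vanishing. A sketch using Python's unittest (the test names are made up for illustration):

```python
import unittest

class PaymentTests(unittest.TestCase):
    # An explicit, dated skip is visible in every run's output,
    # unlike a commented-out test that silently disappears.
    @unittest.skip("broken by the 2014-12-18 hotfix; fixing first thing tomorrow")
    def test_refund_rounds_to_cents(self):
        self.fail("not reached while skipped")

    def test_charge_amount_is_positive(self):
        self.assertGreater(100, 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The runner reports the skip (and its reason) on every run, so the debt stays loud until someone pays it down.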
If the tests were broken by developers who didn't know how to fix them, then ask yourself, "What are those developers doing working on this codebase?" A developer who doesn't know how to fix tests that run against a system he's working on probably shouldn't be working on that system—at least, not without close supervision. Would you want someone who doesn't know how to troubleshoot a pacemaker performing open-heart surgery on your loved ones? I certainly wouldn't.
Now, okay, maybe they were commented out because of a hotfix implemented two weeks ago and you just haven't had time to fix them yet. "Yes, officer, I am perfectly capable of operating a motor vehicle. I just haven't had time to go to the DMV office to pick up my license." If you haven't found time to get them running again in two weeks, the chances are really slim that you'll get them running anytime soon, if ever.
So how about this proposition: if you break them, fix them now. If it's 6 p.m. and your brain is toast and you just want to get some rest, that's fine, but fix them first thing tomorrow. If it takes two days to fix them, then take two days to fix them. Wash the dishes after dinner, not the day after.
```
$ ./run-tests
^C
Uncaught Exception: UserInterrupt
Tests not run. User cancelled after waiting 5 minutes for them to start.
$
```
Tests that take a long time to start (or run) are very sick tests. People have gotten used to getting quick responses, and developers are people, no different from the rest. They want answers now; they want test results now.
So what happens when your test suite gets slower and slower and slower? Simple: developers won't run it. And then this will happen:
- They will commit code that causes tests to fail.
- Then other developers will pull down that code.
- Other developers (who care about the code) will start seeing those failures. This will waste time for those developers.
- The developers who care will start to believe that it was their changes that caused those failures, and will begin looking in the wrong place for a fix. This will be another waste of time for these developers.
- They will eventually kindly ask the developer who broke the tests to fix them.
- The developer who broke them will, by this point, be in the middle of another task. This developer will now have to switch context. This wastes yet more time.
One day later ...
- By the time that developer has fixed the broken tests, someone else will have committed conflicting code.
- More time will need to be spent fixing those conflicts.
Moral of the story? Keep your tests short and fast. Even with wonderful coverage, slow tests waste time. Even if everyone on your team is disciplined enough to run the entire test suite before every commit, they're wasting time before each commit waiting for it to finish.
Disclaimer: fast tests aren't free, and they don't come easily. As your test suite and codebase grow, your tests will get slower and slower unless you spend the time to keep them fast. Yes, it is extra effort, but it's well worth the investment. Be on the lookout for future posts; I may cover a few tips and tricks for keeping your test suite fast as it grows.
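One common trick, sketched here with Python's unittest (the environment-variable name and test names are my own invention, not a standard): gate the slow tests behind a flag so the default run stays fast, while CI can still run everything.

```python
import os
import unittest

# Slow tests only run when explicitly requested, e.g. RUN_SLOW_TESTS=1 in CI.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

def slow(test_item):
    # Skip unless explicitly requested, so the default run stays fast.
    return unittest.skipUnless(RUN_SLOW, "slow; set RUN_SLOW_TESTS=1 to run")(test_item)

class SearchTests(unittest.TestCase):
    def test_tokenize(self):
        # Fast: runs on every invocation.
        self.assertEqual("foo bar".split(), ["foo", "bar"])

    @slow
    def test_full_reindex(self):
        # Slow: stands in for an expensive end-to-end test.
        self.assertEqual(sum(range(10**6)), 499999500000)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Developers get sub-second feedback on every run, and nothing is lost: the slow tests still run on every CI build.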
So what now?
Get to work! If your project has broken tests, fix them. If your project has commented-out tests, delete them or revive them. If your project has no tests ... tsk, tsk, tsk ...