The True Metrics of Unit Testing
What is the Return on Investment of becoming Agile?
Well, we don’t really know. We all know it’s good in the long run, and we can usually say how much it’s going to cost right now. To make matters worse, the worth of a unit test becomes apparent only when the test breaks. So it’s like gambling: you write tests now in the hope that sometime in the future one of them will catch a bug.
When we started developing Typemock Tracker, we thought about metrics that would help us prove that unit testing works for us. One way to answer is by comparison: compare a team doing unit testing to one that doesn’t. At the beginning we thought that number would be bug-fix time, but after some dogfooding we found out that it isn’t. So we did some re-thinking.
The first thing that came out was the bugs-caught metric. This is an absolute number you can actually calculate savings on. If you collect metrics (not everyone does, by the way), you can estimate the time QA spends reproducing the bug, the time the developer spends reproducing it on his machine, and then the fix itself (assuming one iteration between QA and the developer; the more iterations you track, the more precisely you know the cost). Then QA needs to retest. Take all this time, put a cost on it (taken right out of the QA and developer paychecks), and you’ve got your savings: the number of dollars saved because a test you wrote in the past caught the bug. We also thought of taking it a bit further by giving more weight to the age of the test, but we’re not there yet.
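To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of that bugs-caught calculation. The hours, hourly rates, and function name are made-up illustrative assumptions, not Tracker’s actual formula or output.

```python
# Hypothetical sketch of the bugs-caught savings estimate described above.
# All numbers and names are illustrative assumptions, not Tracker's formula.

def bug_savings(qa_repro_hours, dev_repro_hours, fix_hours, retest_hours,
                qa_rate, dev_rate, iterations=1):
    """Estimate the cost avoided when a unit test catches a bug early.

    Assumes each QA/developer iteration repeats the reproduce-fix-retest
    cycle, so more iterations mean a bigger saving.
    """
    qa_cost = (qa_repro_hours + retest_hours) * qa_rate
    dev_cost = (dev_repro_hours + fix_hours) * dev_rate
    return iterations * (qa_cost + dev_cost)

# Example: one QA/developer round-trip with invented numbers.
saved = bug_savings(qa_repro_hours=2, dev_repro_hours=1, fix_hours=3,
                    retest_hours=1, qa_rate=40, dev_rate=60)
print(f"Estimated saving for this bug: ${saved:.2f}")  # $360.00
```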
The next metric is about reducing waste, and the biggest differentiator between manual testing and unit testing is the waste of debugging. Unit tests run faster, they are more focused, and they save you a lot of time in reproducing, debugging, and fixing the bug. We decided that if we were to compare the two approaches, it would be by this factor. And once again, once this is measurable, you can put a price on the saved waste.
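A hedged sketch of how that waste comparison could be priced follows; the per-bug times, bug rate, and hourly rate are invented for illustration, and the point is only the shape of the calculation, not real measurements.

```python
# Illustrative comparison of debugging waste: manual testing vs. unit testing.
# All figures below are assumptions, not measured data.

manual_debug_hours_per_bug = 4.0     # assumed: reproduce + step through a debugger
unit_test_debug_hours_per_bug = 0.5  # assumed: a focused failing test points at the cause
bugs_per_month = 20                  # assumed team-wide bug rate
dev_hourly_rate = 60                 # assumed blended developer rate, in dollars

saved_hours = (manual_debug_hours_per_bug - unit_test_debug_hours_per_bug) * bugs_per_month
print(f"Debugging waste avoided: {saved_hours:.0f} hours "
      f"(~${saved_hours * dev_hourly_rate:,.0f}) per month")
```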
So there you have it. Tracker is out (in beta form) for your evaluation. And this is where you come in. Your mission, should you decide to accept it, is to start working with it. It’s an easy install, and quite graphical. For the first time, you have a way to measure the savings of unit tests, as well as the speed of adoption (track the ratio of production code to manual testing; improvement means saving). Play with your numbers and see if the calculations make sense. And let us know if you think other metrics can help.
And if you find out you’ve saved megabucks, don’t keep it to yourself. Tracker helps sell the idea of unit testing, using real numbers. Use it.