Most self-respecting teams have a software product to track defects. So do we, but it's another of those things that has always smelled a little fishy to me - though I had accepted it and didn't really think much more about it. Until tonight.
Following Paul's excellent response to my last post, with a link to Kent Beck's statement of what really matters, I was looking around Ward Cunningham's site and came across another interesting article: why do we feel the need to track bugs?
Defects are transitory in nature; all we really want to do with them is fix them and move on, surely. Well, I suppose we could measure something about bugs, or use the information to blame others - but neither of those things helps us build software of higher value to our clients, and more importantly, neither gets to the root cause.
Everyone makes mistakes - we're all human - but if we adopt the novel idea of fixing things as we go, then the need for recording and tracking (really an unnecessary, time-consuming task) goes away. Aha, you say, but what if I have lots of defects and they swamp the team? We have to record them so we can remember what they are. I used to subscribe to this argument, but if you think about it, this is a symptom of a deeper issue: quality is not built into your process from the start. Note that I am deliberately distinguishing between defects and requirements changes. When this pattern occurs, it is often due to a lack of tests (assuming you have a good team of programmers). This is one of the reasons user stories are phrased in terms of tests - to improve quality.
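To make the idea concrete, a story like "a customer can withdraw funds up to their balance, but no more" can be written directly as an executable test before the feature ships. This is just a sketch - the Account class and its methods are invented for illustration, not taken from any real codebase:

```python
import unittest

# Hypothetical domain class, invented purely to illustrate the idea.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Reject overdrafts rather than allowing a negative balance.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# The user story "a customer can withdraw funds up to their balance,
# but no more" expressed as tests that must pass before the story is done.
class WithdrawalStoryTest(unittest.TestCase):
    def test_can_withdraw_up_to_balance(self):
        account = Account(100)
        account.withdraw(100)
        self.assertEqual(account.balance, 0)

    def test_cannot_overdraw(self):
        account = Account(100)
        with self.assertRaises(ValueError):
            account.withdraw(101)
```

If the story's tests pass from day one and stay passing, there is nothing to log in a tracker later - the defect never gets a chance to accumulate.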
This is a very contentious subject, and one thing is for sure: these tools are not going to disappear. However, I hope this has provided a little food for thought, and will encourage thinking in terms of root causes rather than symptoms.