Monday, May 5, 2008

Agile Acceptance Testing

If you haven't already, read the post on InfoQ about test automation tooling. It is an interesting post and I think there is a lot more mileage in this subject.

On a project I was involved with recently, we employed a heavyweight commercial record/playback acceptance test tool. Something did not smell right about that solution, but I did not give it enough consideration at the time. I don't want to repeat the points in Elisabeth Hendrickson's original post, but rather to add to or confirm her findings for myself.

If user interface elements change, the team is always tempted to recreate all of its tests. Why? The majority of the test code should remain unchanged, but the perception is that the tool provides all the code via test generation, and the tool guides people towards wholesale regeneration. The result is a longer turnaround and less of a refactor-quickly mentality when the user interface changes. In the acceptance testing world there is still a large contingent of practitioners who tend towards manual testing, and far too many who do not understand how to use scripting languages such as Python or Ruby, both of which offer a simple approach to writing tests, either by hand or with a library such as PAMIE or WATIR.
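To make that concrete, here is a minimal sketch of the kind of hand-written test I mean, in plain Python (2.x, current as I write) using only the standard library. The URL and the markup being checked are assumptions for illustration, not from a real project:

    import unittest
    import urllib2  # Python 2.x standard library HTTP client

    class LoginPageTest(unittest.TestCase):
        """One small, hand-written acceptance check for a single feature."""

        BASE_URL = 'http://localhost:8080'  # assumed local deployment

        def test_login_page_presents_the_expected_form(self):
            # Fetch the page and assert only on the markup we care about.
            html = urllib2.urlopen(self.BASE_URL + '/login').read()
            self.assertTrue('name="username"' in html)
            self.assertTrue('name="password"' in html)

    if __name__ == '__main__':
        unittest.main()

A test like this is trivial to read, diff and version alongside the code it exercises, and a browser-driving library such as PAMIE or WATIR slots into the same structure when you need to exercise the page through a real browser.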

Unfortunately I cannot articulate many more concrete reasons why these scenarios strike me as bad; it's just that this activity does not seem able to keep up with the rate of development, and that isn't right. I want to see very small, specific test artifacts built alongside each feature, whereas many of these tools bring along a bunch of extra baggage that seems like overkill.
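As a sketch of what I mean by small artifacts that survive user interface change, the fragment below keeps every concrete element name in one dictionary, so a renamed field is a one-line edit rather than a regeneration of the whole suite. All the element names and the URL here are hypothetical:

    import unittest
    import urllib
    import urllib2

    # The only place that knows concrete UI names (all hypothetical).
    # In a real suite this map would live in its own module, shared by tests.
    LOGIN_FORM = {
        'username': 'user_name',  # HTML name attribute of the username box
        'password': 'pass_word',
        'submit': 'login_go',
    }

    class LoginAcceptanceTest(unittest.TestCase):
        BASE_URL = 'http://localhost:8080'  # assumed deployment

        def test_valid_user_reaches_welcome_page(self):
            # Build the form post from the locator map, never from literals.
            data = urllib.urlencode({
                LOGIN_FORM['username']: 'alice',
                LOGIN_FORM['password']: 'secret',
            })
            html = urllib2.urlopen(self.BASE_URL + '/login', data).read()
            self.assertTrue('Welcome' in html)

    if __name__ == '__main__':
        unittest.main()

When the user interface changes, the team edits the map and reruns; nothing gets regenerated.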
