Saturday, May 3, 2008

TDD and User Interfaces

This is more a discussion point than me being my normal opinionated self. I buy into the TDD/BDD thing in a big way - I believe in it - although I would not claim to be as proficient at it as I would like to be. As I see it, there are only two options:

1. View the UI as a thin layer on top of the code that matters and therefore don't bother testing it; it's a fast-changing, disposable asset
2. Treat user interface elements as first-class units and find a way to test them

Personally, I'm not sure I can buy into option 1, as I believe the user interface is every bit as critical to the application as any other part. However, test-driving domain code can lead to design improvements; can we gain similar benefits for user interfaces?

Automated acceptance tests should be the norm in almost any project (I can't think of any exceptions), but can or should we unit test user interfaces, or pieces and components of them?
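To make the question a bit more concrete, here is one shape such a test could take: a rough sketch, assuming a "humble view" / Model-View-Presenter style in which the screen is reduced to a thin interface and the behaviour behind it is driven out against a hand-rolled fake. Every name here (LoginView, LoginPresenter, the fake, the error message) is invented for illustration, not taken from any real framework or codebase.

    // LoginPresenterTest.java - a minimal, illustrative sketch (JUnit 4).
    import org.junit.Test;
    import static org.junit.Assert.*;

    // The screen itself, reduced to an interface the presenter can talk to.
    interface LoginView {
        String getUserName();
        String getPassword();
        void showError(String message);
        void goToHomePage();
    }

    // The logic behind the screen - this is the part we can test-drive.
    class LoginPresenter {
        private final LoginView view;

        LoginPresenter(LoginView view) {
            this.view = view;
        }

        // Called when the user clicks "Log in" on the real screen.
        void onLoginClicked() {
            if (view.getUserName().isEmpty() || view.getPassword().isEmpty()) {
                view.showError("User name and password are required");
            } else {
                view.goToHomePage();
            }
        }
    }

    public class LoginPresenterTest {

        // A fake view that simply records what the presenter asked it to do.
        static class FakeLoginView implements LoginView {
            String userName = "";
            String password = "";
            String lastError;
            boolean wentHome;

            public String getUserName() { return userName; }
            public String getPassword() { return password; }
            public void showError(String message) { lastError = message; }
            public void goToHomePage() { wentHome = true; }
        }

        @Test
        public void rejectsEmptyCredentials() {
            FakeLoginView view = new FakeLoginView();
            new LoginPresenter(view).onLoginClicked();
            assertEquals("User name and password are required", view.lastError);
            assertFalse(view.wentHome);
        }

        @Test
        public void navigatesHomeOnValidInput() {
            FakeLoginView view = new FakeLoginView();
            view.userName = "alice";
            view.password = "secret";
            new LoginPresenter(view).onLoginClicked();
            assertTrue(view.wentHome);
            assertNull(view.lastError);
        }
    }

A sketch like this doesn't answer the harder question of whether the pixels look right, but it does put the decisions and wiring behind the screen under test, which is where a design benefit might come from.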

I really don't know what the right answer is here (if there is one), and I would love to hear from people on this subject.

1 comment:

Brad Wiederholt said...

My gut feeling is that you can go a little way down this path, but you need a human or a model to do so. I am going to assume we want to test UIs from the point of view of "do they work, are they valuable, can users understand them?"

First off, I was involved in building a system for NASA called KADD - Knowledge-Aided Display Design. It allowed UI designers to draw out screen displays, and an expert system component then evaluated the display against some human-factors rules (e.g., the tick marks are too close together on this gauge, this component should be vertical instead of horizontal, etc.). The rules were very basic but based, for the most part, on human perception and cognition experiments. That project was over 20 years ago, but I don't know of any commercial product that does this sort of evaluation (maybe there is one; I just haven't looked).

Second, and this is definitely old-school thinking, some sort of human information-processing modelling framework like GOMS (Goals, Operators, Methods, Selection rules) could be used as the basis for 'simulating' particular human behaviors and then seeing how your system would serve them. This would involve some pretty heavy-duty task and information-processing analysis, way more than is done in typical business environments. For example, I was involved with Intelligent Tutoring Systems research through most of the 90s, and I remember a fellow named Kurt Van Lehn who was trying to build a 'student model', in the sense of mimicking the way students learn, and then use that model to test various computerized teaching techniques. I am not sure what came of it, but I think building these models is extremely hard; they are still at the early research stage, though they may have some potential. Again, I have not kept up with all that research, but I'm not aware of commercial products that do such things.

That probably leaves us with humans to be involved in testing the UIs, which brings us back to the different levels of usability testing, from expert review, to simple user review, to full-blown usability experiments with control groups and the lot. I have seen some innovation here recently in the area of sites keeping multiple versions of their pages (differing in very minor, controllable aspects), serving the different versions to different users, and collecting performance data as actual users use the site over time. A sort of live experiment. This is mostly done by the bigger, more heavily trafficked sites.

Now back to the first paragraph: if we are talking about testing UIs from the point of view of being able to simulate a user clicking through a workflow and checking the results and maybe the timing, then sure, this can be done, and it's a lot easier than any of the above. It addresses functional issues and might be a good place to start. I think the more interesting questions, though, are "is it useful, appealing, worth my time?", and today we need to ask real people (or maybe their nascent model replacements) those questions.
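For what it's worth, that kind of workflow-driving test might look something like the sketch below. Selenium WebDriver is used purely as one example of such a tool (the comment above doesn't name any tool in particular), and every URL and element ID here is made up for illustration.

    // CheckoutWorkflowTest.java - an illustrative workflow test (JUnit 4 + Selenium WebDriver).
    import org.junit.Test;
    import static org.junit.Assert.*;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class CheckoutWorkflowTest {

        @Test
        public void userCanSearchForAnItemAndAddItToTheCart() {
            WebDriver driver = new FirefoxDriver();   // drives a real browser
            try {
                long start = System.currentTimeMillis();

                // Click through the workflow as a user would.
                driver.get("http://localhost:8080/shop");
                driver.findElement(By.id("search")).sendKeys("widget");
                driver.findElement(By.id("searchButton")).click();
                driver.findElement(By.linkText("Blue Widget")).click();
                driver.findElement(By.id("addToCart")).click();

                // Check the result of the workflow...
                assertEquals("1", driver.findElement(By.id("cartCount")).getText());

                // ...and, crudely, the timing.
                assertTrue("workflow took too long",
                           System.currentTimeMillis() - start < 5000);
            } finally {
                driver.quit();
            }
        }
    }

This checks "does it work" end to end; it says nothing about whether the screen is useful or appealing, which is exactly the gap described above.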