Wednesday, April 30, 2008

XP and Scrum

There is an interesting thread going on right now on the Yahoo XP group - XP and Scrum. Before being introduced to a real agile project, I had read up on Scrum; it made a lot of sense at the time, but I certainly did not understand it. I suspect some people can figure out what agility actually means just by reading about it - but many, including me, have to experience it before they 'get it'.

Both approaches use what I like to think of as common sense - but common sense is actually a misnomer, it's not that common. They share many ideals, but the real difference is in the XP technical practices. For me, this is absolutely key. I believe that many of the failings of the numerous practices and methodologies out there are due to the lack of column space dedicated to great techniques to apply to programming. Usually methods specify what documents and pictures should be created and who needs to sign them off - but on many occasions these artifacts can be viewed as waste. As Kent Beck puts it, testing, programming, listening and designing - that's all there is to it - anything else and someone is trying to sell you something.

So, by no means do I view Scrum as a bad thing, but I do think that you stand a better chance of success by following XP alone than Scrum alone - which is purposely vague when it comes down to programming.

Of course, the point is moot, because you don't have to adopt a single approach, you can have both.

Monday, April 28, 2008

Defensive Programming

In a post from a few years ago, Offensive Coding, Michael Feathers discusses the usefulness of so-called defensive coding practices. People are often taught to code defensively so that the program is more robust - right? What if the problem is addressed the other way around - have the caller check that it is doing the right thing before making the call, and the need for such defensive behavior disappears, reducing clutter and complexity and increasing readability.

I have been through this exercise many times, creating objects that return some meaningful state even when there is nothing to return, or an error has occurred. A great example is returning an empty list or an empty string - no need for the caller to check for null.

Null Object is an interesting pattern which can be used very effectively. I used to believe in defensive programming, but consider the effect of doing it: null checks multiply throughout the code very fast. Contrast that with validating information at (often) a single point of entry, and I think you will agree that the scattered checks are unnecessary.
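To make that concrete, here is a minimal Null Object sketch in Java. The Customer names are my own invention for the example, not taken from Michael's post:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical domain types for the sketch.
interface Customer {
    String name();
    List<String> orders();
}

class RealCustomer implements Customer {
    private final String name;
    private final List<String> orders;
    RealCustomer(String name, List<String> orders) {
        this.name = name;
        this.orders = orders;
    }
    public String name() { return name; }
    public List<String> orders() { return orders; }
}

// Returned instead of null when a lookup fails, so callers can use the
// result without any null checks at all.
class NullCustomer implements Customer {
    public String name() { return ""; }                              // empty string, not null
    public List<String> orders() { return Collections.emptyList(); } // empty list, not null
}

public class NullObjectDemo {
    static Customer find(String name) {
        // The lookup always fails in this sketch; a real repository
        // would query a data store and fall back to NullCustomer.
        return new NullCustomer();
    }

    public static void main(String[] args) {
        Customer c = find("nobody");
        // No null check needed: the loop simply does nothing for a NullCustomer
        for (String order : c.orders()) {
            System.out.println(order);
        }
        System.out.println("orders printed: " + c.orders().size());
    }
}
```

The caller's code path is identical whether the customer was found or not - which is exactly how the null checks disappear.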

Read Michael's post for more on the subject, but it does irritate me when I see lots of checks for null in code - in 2008.

We need more teaching of good programming practice. I am looking forward to Uncle Bob's book - Clean Code. I would like to think that any self-respecting programmer would love to understand how to put together nice, clean code - in whatever language - but unfortunately, I believe such programmers are in the minority.

Sunday, April 27, 2008

One Thing at a Time

I was thinking about this subject the other day, as the pressure was really on to deliver several projects at once. Some team members were being asked to work on several projects simultaneously to deliver functionality in the very near future, but I would rather have them focus on a single project, deliver it, and then move on to the next priority in line.

Initially I thought that it was acceptable, since sometimes we have slower times than others, when we are waiting on a dependency beyond our control - I know it shouldn't happen but many times things are just outside your sphere of influence.

Then I thought back to the classic book 'The Goal' by Eliyahu Goldratt. If you have not read this book, I thoroughly recommend it. It suggests that we shouldn't try to optimize individual parts of the system, but rather optimize the system as a whole. It also says that any system whose resources are all busy 100% of the time will suffer; it's ok to have some resources idle some of the time - as long as it's for the good of the system as a whole.

Some iterative methodologies deal with these issues by laying down ground rules based on the iteration length - nothing can change during that window of time. It's odd, though, how these things can creep up on you in an unsuspecting manner and you find yourself context switching so much, you feel you're not doing a good job of anything.

Oh well, I've done my bit to demotivate for the day.

Thursday, April 24, 2008

More on Value Added Tools

Today, I briefly worked with a colleague on a part of the system that uses a code generator. Such tools are sold primarily with a productivity spin, so I was quite interested to see for myself how it worked. Based on my previous posts, you will expect me to lambast the product and I certainly don't want to disappoint. Of course it did not deliver on its promises, but I want to think about why.

First of all, the language - it uses familiar languages, but not in a familiar way. Pieces of code are joined together using graphical tools, which is an alien metaphor for most developers, so it takes some time to figure out even the simplest thing. Even if you are used to other graphical tools, each is built for a specific purpose, and they have little in common with one another.

Oddly enough, it would have been quicker to put the code together in a very basic IDE using only an editor than with these allegedly simpler techniques. Perhaps that's an unfair comparison though, because I would be relying on previous experience.

Another issue I noticed was that there was no code completion or help inside a code block. This is a feature I consider basic in any programming environment, and I felt quite lost without it. Much time was therefore spent in web-based documentation pages trying to figure out how to make the tool do what I wanted.

Then there are the little idiosyncrasies: accessing the value of a field on a form, for example, was not as straightforward as one would think - at least in the context of our problem.

I did not spend as much time today as I would have liked to explore it a little more, so I will probably dive deeper tomorrow. For now though, I think my current feelings could be summed up as uncomfortable and clunky. If my opinions change drastically tomorrow I will report more.

Maybe non-programmers would be better suited to such graphical code generating tools? Anything is possible, but I doubt this - I had to call on my experience to figure out how to do things, so I think non-programmers would find it very difficult - then again that's just my opinion - as ever.

This is definitely something that I want to talk about further; I have a much better case in mind - I just wanted to get some feedback on thoughts/experiences from others first.

Tuesday, April 22, 2008

In the mood for more history...

As I'm in the groove, I thought I would briefly cover another subject that crosses my mind regularly. Over 20 years ago, Fred Brooks' landmark paper 'No Silver Bullet - Essence and Accidents of Software Engineering' was published. Brooks argues that there are two types of complexity: accidental, which is man-made and thus largely of our own making, and essential - programming is just plain hard.

It is these ideas that I keep revisiting in my mind with some of the products that we work with today. We have all seen the slick marketing droids in action, representing large software organizations, with their promises of miracle solutions that save you money through fast time to market and a 'dumbed down' developer community. It always amuses me how they only want to talk to managers who have long since forgotten how hard it is to deliver a product for today's fast-paced, high-expectation user community. There's a reason for that.

No matter what someone tries to sell you, if it sounds too good to be true - IT IS!

Revisiting Brooks' original point, I actually believe that many of these tools and products aimed at increasing productivity while downgrading brain power have the contradictory, negative effect of increasing accidental complexity. Keep it simple: get the best developers you can afford and a lightweight, simple tech stack, and forget the (often very costly) gimmicks.

All the ingredients for a much more palatable and productive programming experience are in place, as a few diligently embrace some of the values, principles, practices and products out there that can have a significant impact on accidental complexity. However, the large corporations are constantly on the lookout, trying to discover the next big thing to destroy mediocre IT budgets in one fell swoop. I fear many will fall prey.

There is no silver bullet.

Lost Our Way

It's amazing what you can discover out there on the interweb. I was just looking at a blog that had a link to the design principles behind Smalltalk. This is some really powerful stuff, and I am completely speechless - how could we have lost our way so badly? According to the preamble, the paper was published in Byte magazine in 1981, so really great thinking about what a language should be was present 25-30 years ago! Though I know very little about Smalltalk, the principles described in the paper were (and in many respects still are) revolutionary.

What a shame we have ended up in a world with so many disappointing languages. Wish I could have been involved with Smalltalk.

Monday, April 21, 2008

Gettin' Back in the Game

Just had a nasty experience and feel duty bound to report it. For some time now, I have been a little too detached from everyday business as far as development is concerned. A good opportunity recently presented itself for me to get involved with the team, which can help me both understand better how the tech stack works and understand the pain points for the team.

As far as the latter is concerned, I soon realized that 'pain' is an understatement. Central to the gargantuan stack (for those Tarantino fans - yes, I rarely have the opportunity to use that word in a sentence) is the portal - I hasten to add that this was not a lifestyle choice for the team, rather a corporate constraint.

Now the normal development environment is an IDE supplied by the portal vendor, and it is painfully slow at starting up its built-in server, which means that the flow of development - well, doesn't. If you make a quick one-line code change and then try to test it - well, go get yourself a coffee and come back later.

We have been investigating the opportunity to use Flex inside a portlet in an attempt to deliver business value much faster (believe me, I'm not reaching for the sky here), using Java to serve up data over HTTP as JSON.

Because deployments are not consistent, we need to restart every time we redeploy to make sure it works first time, every time. Since the aforementioned portal takes upwards of 5 minutes to start (on a good day, with a tail wind), we are considering using Tomcat and plain Eclipse as a local development environment. Sounds ok so far, until you consider that the portal uses an old version of the Java JVM - and not a standard Sun JVM at that, but the vendor's own. To try to get some consistency, we downloaded that JVM, but it wouldn't install on Windows XP for some reason. We then decided to use a Sun version, which reaches end of life later this year, but no matter.

Summing up, because the stack is so heavyweight, we cannot iterate quickly, so we make a pragmatic choice to enable us to move at a bearable pace. The cost of doing this though, is an inconsistent deployment environment and inconsistent JVMs. In addition, our development and target deployment procedures also have to be completely different.

It's definitely been a learning experience.


Thursday, April 17, 2008

Defect tracking

Most self-respecting teams have a software product to track defects. So do we, but it's another of those things that smells a little fishy to me - though I had accepted it and not really thought much more about it. Until tonight.

Following Paul's excellent response to my last post, with a link to Kent Beck's statement of what really matters, I was looking around Ward Cunningham's site and came across another interesting article: why do we feel the need to track bugs?

Defects are transitory in nature; all we really want to do with them is fix them and move on - surely. Well, I suppose we could measure something about bugs, or use the information to blame others - but neither of those helps us build software of higher value to our clients and, more importantly, neither gets to the root cause.

Everyone makes mistakes, we're all human, but if we use the novel idea of fixing things as we go, then the need for recording and tracking (really an unnecessary, time-consuming task) goes away. Aha, you say, but what if I have lots of defects and they will swamp the team? Surely we have to record them so we can remember what they are. I used to subscribe to this argument, but if you think about it, it is a symptom of a deeper issue: quality is not built into your process from the start. Note that I am deliberately distinguishing between defects and requirements changes. When this pattern occurs, it is often due to a lack of tests (assuming you have a good team of programmers). This is one of the reasons user stories are phrased in terms of tests - to improve quality.

This is a very contentious subject, and one thing is for sure, these tools are not going to disappear. However, I hope it provided a little food for thought and will encourage more appropriate thinking in terms of root cause rather than symptoms.

Wednesday, April 16, 2008

What to do, what to do

I think it must be me, because most people don't seem to see anything in it. How can you possibly have any idea how to build something before you know what to build? It's like walking into a store and the assistant handing you a pound of apples without waiting to hear that you actually wanted a loaf of bread. On almost every project I get involved with, someone seems to know that we're going to need a 'cluster of this' or an XML schema for that - before we even know what it is that our customer wants. Do we have some kind of psychic powers, honed ever so carefully, which allow us technologists a previously unknown level of insight into our customers' needs?

Of course, sometimes your customer tells you what technology they want as well - this is interesting and may or may not be bad - it depends. The thing is not to take anything for granted: question everything. Blindly accepting that you have to modify your product to fit a single client's unique needs will often be madness for you - and might not even be best for the client.

Call me old school, but I believe there is much truth in the saying 'the customer is always right', so why don't we listen closely to what they have to say, and then actually try to understand it, before we start thinking about possible solutions? The real downside of making assumptions about solutions is that it stifles the thought process, limiting your options before you've even started. Choice is a wonderful thing, and by delaying technology decisions until the last minute - sort of a just enough/just in time thing - you're not closing down potentially interesting avenues too early.

Dealing with requirements is one of the most complex things that developers have to cope with. It's so easy to introduce a subtle point whose implications could be huge in terms of time and cost. For this reason I like developers to be involved every step of the way when eliciting needs from customers, so that they can weigh the value of very costly features against less costly ones. Clients are almost always unaware that option B might cost them half what option A does, while being only marginally less optimal.

So, should developers be involved in client dialog? Absolutely. When deciding what to build, it's necessary, so that the client can make informed decisions and hopefully gain better value for their investment.

I have a pet hate of analysis paralysis, and it's certainly easy to end up on that road - but that doesn't mean you shouldn't understand the problem - or part of it - before diving in with a solution. Requirements are really only a mechanism to promote a shared understanding of the business problem we want to solve - but that process is invaluable, and attempting to skip it will end in disaster.

Monday, April 14, 2008

Introducing Agility

Many blogs and articles have been written that cover this subject and I just wanted to add my own two cents. Why is it so hard to introduce agility in the workplace? There are many reasons, but I have to say that top of the list is that it's just plain hard to change people. Most folk have a comfort zone beyond which they simply don't feel happy going. Agility is so much more than just another process; it's a culture change more than anything else, and it's very hard to bring about culture change.

Whatever you're trying to change, you're always going to face resistance, because change could affect someone's role in a way that alters their stake and they (sometimes justifiably) fear the unknown will land them in a less desirable situation than the one they're currently in. Before doing anything in your organization though, analyze the situation, don't introduce something just for the sake of it, there has to be a good reason.

The implications of introducing agility will be far-reaching and, at least initially, very uncomfortable for many. Also consider whether you actually have the raw materials to enable agility to happen. For example, are you going to have ready access to your customer? If not, this is a huge problem for any agile method - which works on the premise that frequent face-to-face communication is one of the best ways of loading the dice in your favor. In this case, a good idea (I believe) would be to dip your toe in the water: run a small project with all the customer support you need and see if the idea is welcomed. If not, don't even bother trying to introduce agility yet; your company is simply not ready to make the commitment. Most organizations are in this stage, and most of those that claim to be agile are not, whether they realize it or not.

I used to believe that it was an all or nothing proposition - and as far as declaring 'am I agile' is concerned, it definitely still is. However, when it comes to introducing it into organizations, it's far too much for most people to stomach at one sitting. So is it possible to introduce elements of agility? I think so, as long as you don't blame the principles and practices if they don't work for you, because most are designed to work cohesively together to produce results. Of course there is an element of danger in breaking these up, because if you don't understand how things work together you could be staring trouble in the face. For example, refactoring and test driven development go hand in hand; try refactoring without tests and it's like walking a tightrope without a net. Ideally, principles and practices should be used as they are, so that you can learn to crawl and then walk before you run - but it's tough to introduce things in a big bang fashion.

Sunday, April 13, 2008

Business Managers and IT

A recent article in Information Week discusses the issue of business managers bypassing IT managers to get things done. It's an interesting piece, something I have also witnessed, and it got me thinking. The trend seems to be happening more often - but I question whether it is right, wrong or neither.

Part of the reason we have arrived at this situation is that development teams and departments are seen to have consistently under-delivered on business expectations. This is sometimes true, and very often merely a perception.

However, there is the counter argument, that business heads have unrealistic expectations of what it takes to build software, which leads to even more negative perceptions.

My belief is that both of these arguments are true, but this situation is not going away any time soon. If IT departments cannot better meet the needs of the business, then look at the reasons why - I have seen strategy or architectural choices choke the ability of programmers to deliver anything. Of course the business manager doesn't care why so he's not going to wait for an explanation, he just wants his projects now!

Conversely, when business departments bypass internal groups and choose an outside IT partner, it can be a lottery - partner with the wrong guys and it's going to be a nightmare. Integration may be impossible, maintenance very costly, and so on.

It's incumbent on managers on both sides to meet in the middle to get things done. Technology managers could be much more effective by taking on more of a coaching and advisory role. Business managers need to be more open-minded and work with people who understand how to make IT work - the trouble is, they may not have such people in their organization, and the business manager wouldn't know either way.

This is a tough one - opinions anyone?

Wednesday, April 9, 2008

What's an object anyway?

In his interesting post from a year ago (Objects, I know that already), Paul discussed objects. Back when I studied for my computer science degree, we were taught the object-oriented paradigm - yet I feel quite strongly that some things cannot be understood through classroom teaching alone. My journey has been one of apprenticeship, and I find that most good people I have worked with over the years also have a level of humility that accepts we must always be willing to learn from others.

So I see myself as disadvantaged, because I didn't begin to understand objects until years later. Exposure to Smalltalk might have helped. In Smalltalk everything is an object and messages are sent to objects, which promotes loose coupling; in fact any message can be sent to a target object, and it's up to the object to decide, at runtime, whether or not it understands how to handle the message. This subtle change in the thought process has a profound effect. Viewing an object from the outside, as a consumer, allows us to focus on things in a different way.

So back to classes and objects. Classes should be defined to represent things that relate to your problem domain, so a class used in one project could be completely different from a class of the same name used in a different project - it depends on context.

Classes provide us with services - imagine a service provided by a third party that you consume without knowing how it works. In fact I try to apply the same thinking when I put classes together: a class should be self-contained, I shouldn't need to know how it works, and when I ask to use its services, I shouldn't have to change the way I use it if the class changes internally. This property is called encapsulation.
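As a tiny sketch of that property (the Temperature class is invented for illustration): consumers only ever see the query methods, so the internal representation could switch units tomorrow and no caller would have to change.

```java
// Encapsulation sketch: the stored unit is an internal detail.
class Temperature {
    // Could be changed to store Fahrenheit tomorrow; only the two
    // methods below form the contract with the outside world.
    private final double celsius;

    Temperature(double celsius) { this.celsius = celsius; }

    double inCelsius() { return celsius; }
    double inFahrenheit() { return celsius * 9.0 / 5.0 + 32.0; }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Temperature boiling = new Temperature(100.0);
        System.out.println(boiling.inFahrenheit()); // prints 212.0
    }
}
```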

Hierarchies of classes can be put together based on another OO property - inheritance. This enables us to send the same message to similar objects (as long as they are part of the same hierarchy), and the response can be radically different depending on each object's type. This is a powerful technique. Imagine a collection of graphical entities that you want to render on a page. As long as they are all members of a parent class PageComponent, you can send each item in the collection the 'render' message and each item will duly render itself.
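A minimal Java sketch of that very example (the Heading and Paragraph subclasses are my own invention):

```java
import java.util.Arrays;
import java.util.List;

// One message, different behavior per type - no switching on type needed.
abstract class PageComponent {
    abstract String render();
}

class Heading extends PageComponent {
    private final String text;
    Heading(String text) { this.text = text; }
    String render() { return "<h1>" + text + "</h1>"; }
}

class Paragraph extends PageComponent {
    private final String text;
    Paragraph(String text) { this.text = text; }
    String render() { return "<p>" + text + "</p>"; }
}

public class RenderDemo {
    public static void main(String[] args) {
        List<PageComponent> page = Arrays.asList(
                new Heading("Objects"), new Paragraph("Send messages."));
        // Each item receives the same 'render' message and responds
        // according to its own type.
        for (PageComponent component : page) {
            System.out.println(component.render());
        }
    }
}
```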

Some years ago, a friend told me the real power of OO is delegation, not inheritance (thanks Rob). On a practical level, this means doing only one thing well in a class and, for anything else, delegating to other classes. Following this idea leads to simpler code that is easier to read and understand, less likely to go wrong, easier to maintain, more loosely coupled and easier to extend. For me, simplicity means everything. Ironically, it takes more commitment and effort to get there - but it's worth it.
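A small sketch of that idea (all class names invented for the example): the printer does one job, formatting, and delegates the gathering of figures to a collaborator instead of doing both jobs itself.

```java
// The collaborator's single responsibility: supply the numbers.
interface SalesFigures {
    int total();
}

class QuarterlyFigures implements SalesFigures {
    public int total() { return 1250; } // canned value for the sketch
}

// ReportPrinter's single responsibility: formatting. The figures come
// from the injected delegate, keeping the two concerns loosely coupled.
class ReportPrinter {
    private final SalesFigures figures;
    ReportPrinter(SalesFigures figures) { this.figures = figures; }

    String print() { return "Total sales: " + figures.total(); }
}

public class DelegationDemo {
    public static void main(String[] args) {
        ReportPrinter printer = new ReportPrinter(new QuarterlyFigures());
        System.out.println(printer.print());
    }
}
```

Because the printer depends only on the SalesFigures interface, swapping in a different source of numbers requires no change to the printing code at all.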

There is no magic to beginning to understand the usefulness of objects - the more you work with them, the higher the chance you'll start to understand. It took me years to get to a level of understanding (and I'm still learning), but it was definitely worth it. For you, it might take weeks - if you're really blessed with genius. One of the simplest pieces of advice I can offer is simply to think. Think about what an object should do and be responsible for, use a technique such as CRC cards, and try to work with team members who have a level of understanding. Never stop learning.

There are a number of code smells associated with misunderstanding objects. Watch out for lots of setters and getters, big classes with several responsibilities, and a lack of collaboration with other classes. Also look at heavy use of built-in types with suspicion - it may indicate broken encapsulation.

Don't look to OO as a panacea, and as ever, what you get out of it is only as good as what you put in. However, all things being equal, it -

Can help us understand the problem domain and can be a useful communication mechanism
Can help classify things in the problem so that we can deal with similar things in similar ways
Can encourage thinking in terms of very small, loosely coupled parts

In conclusion, I guess I am from the old school - the shopping list of skills I see on resumes these days counts for very little to me. I am far more concerned with depth of understanding of objects and other solid development practices than with current skill trends. The real benefits that can be gained from objects are only realized by the thought process of the diligent programmer who has an understanding of objects.

Tuesday, April 8, 2008

Thoughts on TDD

Having just gone back and read an old article by Michael Feathers entitled 'Emergent Optimization in Test Driven Design', I have been rethinking the whole test driven development argument.

I first started using a TDD approach a few years ago and quickly realized that (at least for me) the 'test' part of TDD was actually a very nice secondary effect. The real power behind the technique is its ability to allow the programmer to work 'from the outside in' as Michael Feathers puts it - leading to better design. His paper actually focuses on the optimization argument, but for me, the primary effect is that it helps me to design an application from a consumer's perspective, as if I were making a library or API for other programmers' consumption.
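To illustrate that outside-in feel with a generic sketch (the BoundedStack class is invented for the example): main() below plays the role of the test that was written first, from the consumer's perspective, and the class was then driven into existence to satisfy it.

```java
public class OutsideInSketch {

    // Hypothetical class whose API (push/pop/isEmpty) was dictated by
    // the consumer-side checks in main(), not designed up front.
    static class BoundedStack {
        private final int[] items = new int[10];
        private int size = 0;

        void push(int value) { items[size++] = value; }
        int pop() { return items[--size]; }
        boolean isEmpty() { return size == 0; }
    }

    public static void main(String[] args) {
        // These checks were written first; they name the operations the
        // consumer wants, before any thought about implementation.
        BoundedStack stack = new BoundedStack();
        if (!stack.isEmpty()) throw new AssertionError("new stack should be empty");
        stack.push(7);
        if (stack.isEmpty()) throw new AssertionError("should not be empty after push");
        if (stack.pop() != 7) throw new AssertionError("pop should return the pushed value");
        System.out.println("all checks passed");
    }
}
```

The tests are a pleasant by-product; the design pressure - having to name and use the API before it exists - is the real payoff.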

Using my design argument, I then thought about the traditional order of development tasks - design comes first, then the code is written - so if TDD could be viewed as part of the design process, it might gain more widespread acceptance. One of the things inhibiting the practice of TDD is the old stigma around testing and how some programmers have learned to despise the very idea, often citing time and cost constraints as justification. Wouldn't it be great to change the terminology and eliminate the word 'test'? Then I had a realization - that's probably what the BDD movement is all about.

Up until now, I have largely ignored the BDD thing, but I decided that it's time I took a more serious look at it. So I now return to writing this post having watched Dave Astels' video on Google about behaviour driven development - and yes, this is exactly the intent of BDD. I would like to see this approach to programming become more widely accepted, but I wonder how easy it is to change old habits. The enlightened and inquisitive will probably accept, and indeed move on with, these ideas; for the many, I fear nothing will change.

At least I learned a valuable lesson: I need to do more research - I slept on this one for too long!

Monday, April 7, 2008

Traditional Project Roles

I have often thought that there is a glut of staff crowding a project. This idea was recently brought to the forefront of my mind when I looked at the costs associated with one project. I had worked on this project at its inception; with one other developer, the two of us wrote all the requirements, produced a software architecture document, wrote tests and developed much of the software.

The project was dogged with issues, I moved into a different role on another project, the other developer moved on to a new company and other developers came and went, all over the period of about a year. Additionally, we were dependent on a number of third party vendors, all of which had legal agreements and other time consuming hurdles which had to be overcome before we could proceed - the project largely dealt with the integration of external systems. Together with a couple of QA staff, the project proceeded at a slow, steady pace until it finally went into production a few months ago.

It was around this time that I happened to stumble on the financial figures for the project, which I don't normally bore myself with, but someone had been talking costs on this and I couldn't figure why it would ever be an issue. Turned out that around 30 people had booked a considerable amount of time to the project, thus causing a high and somewhat disturbing bottom line report.

Even with full possession of the facts, I could not avoid jumping to the conclusion that it was a costly project, providing poor value for money to the client. But it did not seem like it at the time. I looked more closely at the staff on the report.

There were a number of BA staff, QA, PMs, process analysts, numerous managers from different departments, and a host of other people, some of whom I had never heard of. With the exception of several QA staff, I couldn't remember one of these people actually having made a contribution to the project. Granted, the Project Manager had a 30-minute 'is it done yet?' meeting once a week.

What would have been the outcome if just the two developers and one or two QA staff had been the only ones working the project? I suspect exactly the same, only the company would have saved itself hundreds of thousands - the cost of another small project.

Why are we convinced that we need people with all these different titles on a project? Perhaps it makes organizations feel comfortable to follow age-old process ideas which dictate that we must get smart people to tell the working-class 'coder' what to do - because everyone knows coders can't do it on their own. I think this is related to the same disease built into the social fabric of the western world - command and control: tell the 'workers' what to do. This attitude is still prevalent 100 years after Frederick Taylor's scientific management was first put together. The sad irony is that most good developers I know could actually do a better job of each of the other roles than those whose role was their full-time job. Developers have to step up to the mark, however, and play the various roles - too often, I see developers who don't want to engage with the client and understand that we are actually 'building a cathedral', not 'cutting stone'.

Why is it that being a developer on a project always seems to take second place to all other roles? What is the most important thing a project produces?
a) Documents
b) Diagrams
c) Timescales
d) Cost projections
e) Working software

For those who need it spelt out, working software is the ONLY thing that matters. Now, I am not advocating that we do none of the other things, merely that given the limited resources available to us, we choose our priorities carefully.

Sunday, April 6, 2008

Application Server Value Proposition

Some years ago, when I first got into the Java/J2EE programming game, I started to learn about application servers, what they were and why I would want to use them. This was in the 2001-2003 era, when vendors such as BEA and IBM dominated this lucrative market and open source solutions were not quite ready for prime time.

Now, the whole premise of the application server was sold on the basis that, as a developer, it was there to make your life easier, and as a manager, you could save costs by hiring less expensive developers (dumbing down) - you don't need super smart guys because our product almost writes the hard parts for you!

Bear in mind that my background was mostly rich client development with C++/Sybase - a traditional two-tiered architecture. In that environment I felt productive and could turn on a dime when changes were requested of me.

When I emerged from the wreckage at the end of that first application server project, I felt battered and bruised, and hadn't felt as unproductive since I graduated some 8 or so years previously. The simplicity sell had not materialized; delivering anything had been tough. This is often the point at which a consultant is brought into the mix, and their advice will often be: you need more of our very expensive consultancy, because your staff don't have strong WebWhatsit 'Firkin' skills. Much later I learned that this is merely the standard consultant response to anything - make more money. In some ways, I can't even blame them.

Why had the project really been so tough? I had been asking myself this question constantly. We had smart people working on it and they had attended training, but it hadn't been enough. Complexity does not just go away in the development game; it simply moves somewhere else. In this project, one of the places it had moved to was configuration. Integration with application servers is also more complex: in the process of trying to offer more choice (most people see this as a good thing), with 'best of breed' vendor solutions being pluggable at every point in the stack, all that happens is that more complex integration points are introduced, none of which seem to work as advertised.

Also ripe for consideration - and I now see these as more serious contributors than I did at the time - were the seemingly small things: a hefty development environment and expensive toolsets that were slow and hard to use. Testing was much harder with the container deployment model; starting the container and testing inside it takes considerable time and effort and is a much slower process than testing outside the container.

All in all, configuring the numerous descriptors, server configuration files etc in order to make the thing work was a bit of an ordeal, and I (perhaps unbelievably) quite enjoyed it, viewing it as a personal challenge at the time. I was stupid - that is not what software development is all about. Would I say my company got value from that project? Absolutely not.

Unfortunately for me, I had not yet learned my lesson. We progressed to the next, much bigger project at the company, management convinced that our learning curve now put us in a position for success. I will save that one for the next blog.