Monday, December 29, 2008

The culture of success

It seems to be accepted as the norm these days that software development projects are a big gamble. You may succeed, but all the planets have to be in perfect alignment to achieve it - and what is meant by success is also open to interpretation.

I find myself in disagreement with the status quo, however: the dice can be loaded in favor of a form of success, as long as the right attitude and culture are in place - and that 'success' is understood by all to mean the same thing.

There are many different paths that can be chosen during development of a new product. Once a decision is made to walk down a less desirable path, every other step from that point forward, no matter how sure footed, is at a disadvantage because it is still headed toward the bad side of town. The only way to correct this situation is to double back (i.e. throw work away) and take a more appropriate path.

Success can be as hard or as easy as you want it to be, but if you don’t instill and install the right culture, you just won’t get it.

Agile values and principles help like-minded individuals to understand the common parts of what makes up the right culture, but the agile label itself is under threat from misinformation and misunderstanding. The lowest common denominator that underpins any successful project is working code - and in order to understand what a healthy culture means, you have to understand what it means to be responsible for delivering it. If you have this prior context, agility speaks for itself.

Friday, December 19, 2008

Last comment on the subject - maybe

At the risk of degrading what I view as an interesting and valid debate, I have decided to comment further here, rather than offend.

Paul Beckford has just posted his latest comment on the discussion and I strongly recommend reading it; there are some very profound points in it. He referred to a great quote by Konosuke Matsushita, the founder of Panasonic. My search for the origin of Konosuke's speech led me to a slightly different version, and I don't know which is correct, if either, but the words are very pertinent so I thought I would show them here -

"We will win and you will lose. You cannot do anything about it because your failure is an internal disease. You firmly believe that sound management means executives on one side and workers on the other. On one side, men who think; on the other side, men who only work. For you management is the art of smoothly transferring the executives' ideas to the workers' hands. We, in Japan, are past that stage. We are aware that business has become terribly complex. Survival is very uncertain in an environment filled with the unexpected and complications. Therefore, a company must have the commitment of the minds of all its employees to survive. For us, management is the intellectual commitment by the entire work force, without self-imposed functional or plastic barriers."


Reading around a little further, I found another great quote -

"The untrapped mind is open enough to see many possibilities, humble enough to learn from anyone or anything, forbearing enough to forgive all, perceptive enough to see all things as they really are and reasonable enough to judge their true value."


I am only recently beginning to realize and appreciate just how many great people have distilled knowledge and wisdom down to just a few very meaningful and powerful words and thoughts.

Building on some of Paul's comments, there must be many great examples of creative craftsmanship in other disciplines out there. Thinking of a chef, I love watching Gordon Ramsay in action in his 'Kitchen Nightmares' TV show. His knowledge of business and holistic thinking about his craft are what make him so successful. A chef cannot be someone who entrusts others to locate the best farms, cut, prepare and season all his ingredients and place them in front of him so he can just cook them.

Changing the focus a little, could it be that even management itself is actually just a label for the collective responsibility? This is a literal interpretation of the speech above, but I think perhaps it is. I question its validity as a profession in its own right, as much as that of software architect. In fact, anything you cannot easily define is questionable. If someone tells me they are a pilot, I know what they do. Ask a manager and you will get a very hazy definition, and each manager will say something different. What is the job of a manager? It's wide open to interpretation, but from my perspective, it is to support your team and remove the problems that stand in their way. Stopping to ask yourself why the problems got there in the first place is even more valuable; the root cause is often some other organizational anti-pattern, much of which could be avoided by having committed minds act as a collective force. In a small startup there is little room for these additional roles - it's all hands on deck to produce business value so you can start to turn a profit as soon as possible.

Some time back, I read an excellent book called "The Goal" by Eliyahu M. Goldratt. The one lesson to take from this book, above all else, is that you are in business to make money. Whenever I see all the extra 'busy' work people in an organization can create, I stop for a minute and ask - does this contribute to the goal of the company? It is surprising how often the answer is no.

Saturday, November 29, 2008

Does Agile Mean Anything Anymore?

Revisiting Steve Yegge's post from way back about Good and Bad Agile got me thinking again.

Steve is absolutely right that it's much more about *being* agile (with a small 'a') rather than blindly *following* any one of the Agile methods. Somehow, I suspect that many of the early thought leaders would agree. However, it is valuable to use names, monikers, labels - whatever you want to call them - to describe concepts, so that like-minded folk can also share the good ideas. Back in the day, before the moniker was invented, many different kinds of methods were being used to successfully deliver software. At the heart of the successes were developers who had strongly held beliefs (values) that tie in with being agile, oftentimes bending and breaking the rules of the incumbent process to deliver in spite of it.

There have been a few discussions about the value of SCRUM recently, possibly as a reaction to the marketing and certification hype that SCRUM seems to be generating. As a part of what you do, it's not inherently good or bad - but will it make you agile in its own right? Of course not. What really makes a difference is the wet-ware between the ears that you employ to sit behind a desk and craft working software. There is money to be made in all the processes, methods and tools out there. The next time someone visits your company to sell you the latest fad, ask yourself one thing - why don't they want to present it to a developer audience? Because developers will see straight through the marketing hype and dismiss it for the costly distraction it probably is. Not to dismiss all products, but let the developers make the choice - after all, they are the ones responsible for delivering working software.

People over process - it's all about state of mind, values, culture. It's much harder to change these things than to just pick a method off the shelf and then blame it when it adds to the long line of project failures.

I intend to focus on the promotion of agility as a way to share its culture, values, principles and practices, rather than specific methodologies. It's not that I think there's anything wrong with any of the methods per se, it's just that they're not the things that really distinguish agility from other approaches.

Even if people agreed with this viewpoint, I don't think it would change a damn thing. Many people don't want to change, cultures are built up over years and centuries and entire disciplines have grown around doing things that don't directly contribute towards the goal of creating valuable working software.

Monday, September 22, 2008

Product Owner Responsibility

This article posted on InfoQ talks about the problems faced by a SCRUM/XP team and, unfortunately, it all sounds too familiar.

One of the biggest problems facing the agile community is convincing the potential customer or product owner to take responsibility for their actions. This is partly a matter of education; most product owners don't come from a software development background, and development teams need to be understanding and helpful in these cases.

Another major contributor, however, is that they have no motivation to collaborate. Most of these people are used to working on failed or failing projects, and as soon as they see where the process is headed, they realize that they are actually supposed to do something - and doing something means opening themselves up to potential blame when things go wrong. I think there are numerous reasons - including lack of knowledge of the product itself, lack of vision of where they want it to go, and fear of blame in the event of failure. In many ways, I cannot blame such individuals; it's human nature, and one of those situations I can't think of an easy way out of.

Oh well, not all problems can be solved!

Friday, September 5, 2008

Google Maps Flash/Flex API

I've been playing around with Google's Flash API with a simple Flex example. I am very impressed at just how simple it is to get something going. I started off with a simple panel and added a child map container -

<mx:UIComponent id="mapContainer" initialize="init(event);" resize="resizeMap(event)" width="100%" height="100%"/>

Then, in the init method, I created a new Map object and added it to the container just defined. Note that you have to get a key from Google's Maps API web site.

public function init(event:Event):void {
    map = new Map();
    map.key = "your_gmap_key_goes_here";
    map.addEventListener(MapEvent.MAP_READY, onMapReady);
    mapContainer.addChild(map);
}

When the map has been rendered, the MAP_READY event handler registered above is called. This is the clever part: Google's geocoder allows you to enter an address or partial address, and it magically converts it into its best attempt at a latitude/longitude.

public function onMapReady(event:MapEvent):void {
    geocoder = new ClientGeocoder();
    geocoder.addEventListener(GeocodingEvent.GEOCODING_SUCCESS, onGeocodingSuccess);
    geocoder.setBaseCountryCode("US");
    geocoder.geocode("Los Angeles, CA");
}

Finally, tying it all together -

public function onGeocodingSuccess(event:GeocodingEvent):void {
    var placemarks:Array = event.response.placemarks;
    if (placemarks.length > 0) {
        var marker:Marker = new Marker(placemarks[0].point);
        map.addOverlay(marker);
        marker.openInfoWindow(new InfoWindowOptions({title: "Address", content: placemarks[0].address}));
        // One setCenter call is enough; passing the zoom level and map type
        // here makes a separate, plain setCenter call redundant.
        map.setCenter(marker.getLatLng(), 15, MapType.NORMAL_MAP_TYPE);
    }
}


Yes, I know it's a very simple, contrived example, but I can't get over how quick and easy it was to get this all going. There is a geocoder web service too, so any language can use it. I'm going to try this with another language and then maybe explore some more APIs.
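To illustrate the web-service angle, here is a minimal Python sketch of the client-side work: build a request URL and pull the best match out of a geocoding-style JSON response. The response shape and field names below are invented for illustration (a canned sample), not copied from Google's actual service.

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint and parameters - check the service docs for the real ones.
def build_request_url(address, key):
    return "http://example.com/geocode?" + urlencode({"q": address, "key": key})

# A canned response roughly in the shape a JSON geocoding service might return.
sample_response = json.dumps({
    "status": "OK",
    "results": [{
        "formatted_address": "Los Angeles, CA, USA",
        "geometry": {"location": {"lat": 34.0522, "lng": -118.2437}},
    }],
})

def best_match(response_text):
    """Return (address, lat, lng) for the first placemark, or None."""
    data = json.loads(response_text)
    if data.get("status") != "OK" or not data.get("results"):
        return None
    first = data["results"][0]
    loc = first["geometry"]["location"]
    return (first["formatted_address"], loc["lat"], loc["lng"])

print(best_match(sample_response))
```

The parsing side is the same idea as the Flash `onGeocodingSuccess` handler above: take the first placemark and use its point.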

Saturday, July 26, 2008

Reusability Revisited

Reading an article just now, something sparked off a thought process. I remember talking to my manager years ago about things in general, and he came out with something like 'object technology never lived up to its promise of reuse'.

Apart from the fact that I disagree that object technology has anything to do with the reuse argument, his comments about reuse were (and still are) a sore point with some folk. Reuse has long been a holy grail of software development. We long to be able to plug together components like those clever electronics engineering types do, so that we can drastically reduce the time it takes to build software.

Attempting to achieve reuse takes time and investment, something most projects cannot afford. On every project I have worked on, there was a targeted deliverable, and on those where the IT staff lost sight of this, the results were disastrous. But even if the project were willing to spend the extra time and money investing in an attempt to build reusable parts, what is the return on investment? I have no doubt it can be made to work - but when I have seen this, there has always been a price to pay: code is very often in worse shape after the refactoring changes required to make it work in a second context.

So, even if we could (i.e. we have the funding and the commitment from the project) should we do it? There are only two scenarios here I can think of -
1. We build something to fulfill an expected need and try to design for all occasions
2. We build something that fulfills the needs of a real product and try to adapt it to other solutions

The problems with 1 are that we're in endless assumption mode; we don't know when we're done, because we have no sound business reason for building something, and it's impossible to foresee what real projects will actually need. While I favor item 2 because it's grounded in reality and emerging needs, it's still often impossible to reuse that part on another project. The more coarse-grained the component, the harder it will be to reuse - because business rules are always different across projects.

This brings me to my next point: I believe the level of abstraction is key to reuse. If we build some software to manipulate lists, it is much more likely that we can reuse it than if we build a class to calculate commission for a salesman. Why should this be? There are basic building blocks in life that can be used no matter what. Bricks, mortar and other building materials can be used to build a large array of different structures, but a house or a kitchen cannot be used to build a shopping mall. The closer software gets to a business domain, the more specific to that particular solution the code becomes, because most people in business actually want to do things differently from their competitors - in the belief that they can gain advantage by doing so.
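The list-versus-commission contrast can be sketched in a few lines of Python (the rates and thresholds below are invented purely for illustration):

```python
# A generic list utility: it makes no business assumptions,
# so it can be reused in any project on any domain.
def partition(items, predicate):
    """Split items into (matching, non_matching) lists by predicate."""
    matching, non_matching = [], []
    for item in items:
        (matching if predicate(item) else non_matching).append(item)
    return matching, non_matching

# A domain-specific rule: the rate and threshold are invented here,
# and that is exactly the point - every business wants different ones,
# so this function travels badly between projects.
def commission(sale_amount):
    rate = 0.05 if sale_amount < 10000 else 0.08
    return sale_amount * rate

evens, odds = partition([1, 2, 3, 4], lambda n: n % 2 == 0)
```

The first function sits at the 'bricks and mortar' level of abstraction; the second is a kitchen built for one particular house.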

Reuse takes many forms: software (especially libraries and fine-grained, typically technology-oriented APIs), tooling, platforms and skills. The single most important item from which reuse can be gained is the brain; knowledge and learning should be foremost, and everything else will fall into place.

Thursday, July 10, 2008

TDD and User Interfaces Revisited

In my original post on the subject I considered the role of TDD in user interface development. There is an interesting post by Dean Wampler that also considers the subject, and it's interesting to get his perspective. The best thing to do when not sure about something in this business is to think for yourself - keep an open mind. Dean postulates that one should consider what to test and what NOT to test when test-driving UIs. I would be interested to hear what Uncle Bob thinks about this, and whether it fits with his strong views on TDD, but I feel that much UI development should be left as nimble as possible - in fact, some UI could be considered throwaway - yet still built to production quality, and very easily changeable.

While business rules can and do change, the user presentation should be able to change with a quick click of the mouse in front of the customer. In fact, why not be able to have multiple flavors of UI, for different types of user/customer? I know this will be somewhat provocative, but I just don't see the practicality of test driving everything in the presentation layer. Of course, I still believe in automated acceptance testing - and this can still be test driven - just perhaps removing the emphasis on unit. What is a UI unit anyway?

I am playing devil's advocate a little here, but it would be interesting to hear more opinions.

Erlang and the Object Oriented Viewpoint

A blog post by Robert some time ago pointed me to Ralph Johnson's article talking about the object-oriented properties of Erlang.

This really intrigues me, as I am a big (in every sense) OO fan. Getting a little more serious about learning Erlang recently, I am thinking more about the points Ralph put forward. Traditional OO thinking states that a language is object oriented if it supports inheritance, encapsulation and polymorphism. However, I don't know where this originated, or from whom. Is it supposed to be taken literally, or is there a wide berth for interpretation of this loose definition? Message passing is also a central tenet, one generally overlooked by many supposed OO languages (Java, C++), which prefer other mechanisms to achieve similar results - mechanisms that I feel somewhat miss the point.

So, you cannot write an Erlang process that 'derives' from another process - but does that even matter? As Robert once said to me, the power of the OO paradigm is delegation, not inheritance, and that is something most people don't get. I also feel that OO systems help me to understand how things work and hence translate from the real world to that of computers. Encapsulation and message passing are easy to achieve and I could argue that polymorphism is possible - what about inheritance?
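Delegation is easy to show concretely. Here is a minimal Python sketch (class and method names invented for illustration): the logger gains buffering behavior by forwarding to a collaborator it holds, chosen at runtime, rather than by deriving from it.

```python
# The collaborator: any object with a write() method will do.
class ListSink:
    def __init__(self):
        self.lines = []
    def write(self, message):
        self.lines.append(message)

class BufferedLogger:
    """Buffers messages, then delegates the actual writing to a sink."""
    def __init__(self, sink, batch_size=2):
        self.sink = sink            # delegate, swappable at runtime
        self.batch_size = batch_size
        self.pending = []

    def log(self, message):
        self.pending.append(message)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        for message in self.pending:
            self.sink.write(message)  # delegation, not inheritance
        self.pending = []

sink = ListSink()
logger = BufferedLogger(sink)
logger.log("first")
logger.log("second")  # batch full: both lines delegated to the sink
```

Nothing here requires a class hierarchy; the relationship between logger and sink is pure composition plus message sending, which is also the shape an Erlang process pair would take.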

I really like the way message selection works with pattern matching, and the total reliance on recursion, matching and higher-order functions to eliminate the need for many control constructs such as conditionals or iteration. This is central to the language's much-vaunted reliability.

Sunday, July 6, 2008

Actor and Object Models

Please bear with my ignorance of concurrency models - I am very new to this. I am just reading up on the actor model, which I believe both Erlang and Scala are based on. According to Wikipedia --

"The Actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages, but differs in that object-oriented software is typically executed sequentially, while the Actor model is inherently concurrent."

I did not think that there was any such limitation of the object model paradigm.

Is this correct?

Scala/Erlang Differences

I recently read this article at InfoQ and wondered if anyone could shed any more light on the debate.

Having looked at Erlang for a while, it seems like a refreshing approach to the accelerating needs for concurrency, but I confess to neither being confident with Erlang, nor having done much research as yet with Scala. I am not convinced that the fact that Scala is built on the JVM is actually an advantage, but it is a differentiator. Perhaps it will just be a case of paying your money and making a choice.

Monday, June 23, 2008

Silo Driven Development (SDD - an antipattern)

In Johanna Rothman's latest post, handoffs don't work, the hand-off of requirements to development teams is discussed, and I think she makes a great point. I believe that this behavior is part of a larger issue of silos in development organizations. Unless organized along the lines of product development, silos are anti-patterns - whether BAs working on requirements, DBAs working as part of a database team, testers working for a QA team or architects working in an architecture/strategy group.

So, let's take a minute to look at the requirements example, because this one is probably the most common problem. Now, I tend to think that business analysis is part of a developer's everyday job, so the role itself is important; but why do we use a business analyst?

- Do they have better domain knowledge? Often not, they are usually skilled in business analysis, so like a developer, their knowledge of the domain is secondary. Additionally, developers tend to ask many more questions than their BA counterparts.

- Are they better communicators? I don't think there is any difference (in my experience)

- Do they save developers time? Well, undoubtedly, if the developer starts work without talking to the customer at all - but is this what we want? This, I think, is a serious problem, and it leads to the developer missing a key opportunity to pick up on the domain. The more they can talk directly to the customer the better; having a proxy in between is just a recipe for miscommunication and misunderstanding.

- Don't they get it right first time? No, unfortunately not. Johanna's post describes this much better than I can. Often subconsciously, customers use empirical feedback to steer their requirements along the journey. Expecting them to sign a requirements document in blood, never to be changed, has long since been dismissed as naive at best.

So, why do I seem to be committed to the development perspective and not feel for the poor BA? Quite simply, it's a matter of responsibility. A business analyst has to deliver a paper document, and judging how good or otherwise that document is proves to be quite subjective. A developer, on the other hand, takes responsibility for delivering working code. Did you ever hear of a BA getting in trouble for not delivering a document on time? Do documents fail testing or cause null pointer exceptions during use? Many customers read requirements documents and think to themselves, 'I have no idea what this all means - but I am confident everything will come out exactly as I expect it to.'

Even if you work with a BA, cutting out the silo and having the BA work with the development team - and the team all working with the customer - should, all other things being equal, result in a better product.

Saturday, June 21, 2008

Distributed Development Disappointment

In this post from last year, Mark Levison discusses some of the issues that arise from having a distributed development team. This really hit home for me, since I completed a project where we had an unusual case of working out of two distributed locations.

I knew that it was going to be a challenge and I did not like what I was getting myself into, but there was some history behind it, and before going further I should provide a little background to explain why a distributed strategy was chosen. Unless you have a really deep understanding of software development, it can sound like common sense, so I can see how this situation arose.

One team at location 'A' was strong in specific skills and had built up a product over a number of years. Our strategy was to change the direction of the product, introducing a new technology which was familiar to developers at office 'B'.

The project was extremely tough and did manage to deliver a working product, but the teams at the different locations never really gelled. It's not surprising, really -

From team 'A' perspective -
1. Felt threatened
2. Left out of important decision making
3. Couldn't hear many of the meetings on the phone call - effort led from team 'B' location (many more staff and office space available)
4. Didn't understand why some decisions were made when they had much better background knowledge


But, there were also some other reasons from the other team's perspectives.

From team 'B' perspective -
1. Team left to figure out integration issues on their own
2. Little help was forthcoming
3. Felt like the only ones who wanted the project to succeed
4. Couldn't understand the resistance

Now, you're saying, so why not ship the folk at office 'A' to office 'B'? Well - family life and other commitments. Why didn't we do something about the communications problem? It was raised, a new phone was ordered, and then it was promptly turned down based on cost.

In my opinion, there is no simple one-size-fits-all remedy for this kind of thing. It simply doesn't work. Yes, we managed to succeed, against the odds, but it was like pulling teeth and I wouldn't wish it on anyone. I do not blame or take sides at all in this; it was just circumstances.

Of course there are consultancies out there that claim they can make it work for you, and maybe they truly can make the experience less painful, but I'm guessing the real reason is because they want to bill you for their time.

Retrospectively, I would have ensured that all the information was in the hands of one location or the other, but I was not involved in the earlier stages of the project, and there are a few uncomfortable hurdles to overcome before that suggestion could become a reality. As usual, everything comes down to people problems and how they communicate. Excessive use of email, IM and even the phone as poor second cousins to face-to-face communication has done more damage to human relations than anything else - especially in the corporate world. Having said that, given the way my daughter miscommunicates over the web, maybe it's really lousy in the social networking world as well. Guess I'm just from a different generation.

Tuesday, June 17, 2008

What No Getters?

I was intrigued reading an article by Michael Feathers recently. Although the main topic of conversation was flawed thinking in the TDD world, I was struck by the point about writing OO code with no getters. This sounded like an interesting idea to me, as I have long thought that setters and getters are very much an OO anti-pattern, exposing the details of an object unnecessarily much of the time.

Many blogs are covering the subject right now, and I am still reading through them and trying to remain open minded. Here are a few examples -

http://peripateticaxiom.blogspot.com/2008/06/tdd-mocks-and-design.html
http://moffdub.wordpress.com/2008/06/16/the-getter-setter-debate/

Martin Fowler has a slightly different perspective -

http://martinfowler.com/bliki/GetterEradicator.html

Sunday, June 15, 2008

Who's to blame?

What happened in software development in the last 50 years? Can we say our business has improved or worsened over time? 

I would say that we are at stalemate. 

PERCEPTION
In years gone by, computers were used heavily for scientific and mathematical problem solving. As time moved on, business domain users became more the focal point, and with that shift came a large change in one of the most important attributes of a developer - the ability to communicate effectively with average, non-computer-literate humans. We could look back over the years and it would seem that more is achieved now through more advanced human user interaction models, but I don't really buy this. Great ideas in HCI have been around for decades - yet core computer science problems are just the same as they ever were.

SKILL
Many put far too much emphasis on very specific language or framework skills, or understanding of a particular API. This subject brings me back to my earlier post referencing 'prefer design skills'. General skills in the key areas - design, business analysis, working with customers, understanding what it takes to build quality into a product, and a good sense of architecture - these are the only things we should be focusing on. Specific skills come naturally and can be picked up by the right type of people.

ROLES
We have fragmented roles for BAs, QA, architects etc., and I for one find it difficult to reconcile development - the creative production of code - with some of these other roles. I don't feel that great results can be achieved when these are viewed from a separated, isolationist standpoint. Most people in these roles have never had to deliver software and don't understand what it takes to do so. Developers, though, need to understand these various perspectives intimately in order to deliver a great product. The best scenarios are generally found when experienced developers have sufficient experience and knowledge to integrate all these perspectives into their everyday activities.

EDUCATION 
It would not be acceptable for law, medical or business leaders to enter their chosen fields without passing an applicable exam - typically at undergrad or postgrad level. Why is it commonplace that graduates of other disciplines can become computer scientists? Don't misunderstand - I am definitely not bigoted here. I know one or two very good people who don't have any formal computer science background, but they are incredibly motivated individuals - exceptions rather than the rule. However, I have met many more from physics, math or other degree disciplines who just don't get it, and haven't been motivated to get it. So is this a problem with education, or a general misunderstanding that it really doesn't matter how well someone understands computer science in order to do the job?

Sometimes, education itself can contribute to the problem, but it largely depends on the curriculum and teaching staff. Generally speaking, I don't feel they are guilty of any bad intent - their main purpose should always be to encourage open-mindedness, and my experience with my college was a good one.

BUSINESS
The big players in the business of selling software, hardware and services influence us more than we would admit, but I can't say I blame them; after all, it's just business. More fool us for paying through the nose for a product that isn't a good fit - more often than not, it's because the wrong people are involved in the decision-making process: marketing rather than development. Smart marketing is at the heart of big business's approach - playing to the fears of senior managers is not hard when development track records are highlighted. Many desperately want to believe in a silver bullet, a giant-slayer that solves all our problems, but there simply isn't one - will there ever be? I doubt it.

ROOT CAUSE
This is pure speculation, but my feeling is that software development is an incredibly complex mix of social, creative and technical abilities, and it is very hard to find great people with this combination. There is massive, widespread misunderstanding and underestimation of the complexity involved in creating a software product. Great products are built by people who have this understanding, and such people generally spend most of their time trying to make it simpler and easier to produce something - often by changing the rules of the game and simplifying the inputs.

There was a stigma associated with the discipline - the real techie geek type locked in a big server room with thick plastic glasses taped together - really through to the 1990s. Then it became more socially acceptable, HTML hit the mainstream, and all of a sudden even little Johnny could put together a web site in his bedroom in 5 minutes - so it must be easy, this IT stuff, right?

Software development is undoubtedly viewed by most as pretty much a blue-collar, pass-it-along-the-production-line type of work activity. Many managers still look for ways to reduce costs and replace more expensive, valuable staff with cheaper ones who have a painting-by-numbers mindset. The best people in the business are not a commodity; they think creatively and are not generally constrained by the ideas of the masses. One of these people is often worth 10 cheaper staff, yet is often paid only 20% more.

IN CONCLUSION
Enough of all this waffle. Is there a hard and fast answer to the question? No. How can things be changed? I have no idea. All I know is that good people are out there who can identify with many of the things I have mentioned in this post, and they know the path to tread through the minefield to achieve a good degree of success.

Tuesday, June 10, 2008

Estimation

I had an interesting discussion some time back about the relative merits of attempting to give a SWAG (silly wild-ass guess) at the level of effort to build some features for a product. It was the very early stages of product definition, and a rough level of effort was required from some participants.

My first reaction was to suggest that we put more meaning around the two- or three-word features that were listed out, because I did not even know what the words meant, let alone how long it would take a team to build them. If I do not know what these things actually mean, how can I provide any form of estimate? There was not enough information. Yet, despite my stance, there was still a general insistence that the information be provided.

This got me thinking more deeply about estimates and estimation in general. There are many estimation techniques out there, such as COCOMO, Wideband Delphi and function point analysis, some of which I have tried, some I have not. But I ask myself now, is there any value in pursuing any of these? Are they any more accurate than a 'gut feeling' (I guess that is synonymous with a SWAG)?

So, can estimation techniques provide any kind of reasonable output? I would say the answer to that is a guarded - it depends. There are many factors that govern predictability.

- PEOPLE - team size, mix, skills, talent, effectiveness, bonding, business domain knowledge
- PROCESSES - how the team works, rigidity, willingness to change, working environment
- TECHNOLOGY - equipment and tools, choice of libraries, languages, frameworks

Sure, there are many more than I have listed here, point is - the more that is known, or understood, the more likely that those involved will be able to provide meaningful estimates. 

So, for example, if a team is asked to provide estimates to build something in a business domain that they understand very well, with a technology stack they have prior knowledge of, and with a good team mix and the right input, environment, tools etc., they can provide something meaningful. However, I would still consider their output with a healthy dose of skepticism, because users/customers/product owners are prone to changing their minds. Even developers change their minds during the course of a project.

Where to now? The problem with the above estimation ideas is that they are based on the assumption that things remain static during a project. But projects are a creative activity, not a production line. It's like asking an artist to estimate how long it will take him to paint a picture - he may get half way, clear his canvas and start again from scratch if he doesn't like the way it's taking shape. He has that prerogative - it's an artistic process.

Now, can teams at least commit to a specific capacity? To a degree I think they can - a team who have worked together before may know that all things being equal, they can deliver 20 story points per iteration. Does this mean anything in terms of estimates - I would say a qualified yes - if everything remains unchanged. If circumstances change (and they will) then the team should be able to use that information to advise customers/users/product owners that project parameters have changed and that scope or time to market adjustments should be made. The more work a team delivers on a project, the more they learn about themselves, the tools and the domain and the more accurate their estimates become over time. In my opinion, this places an even higher emphasis on open/honest communication channels with executives and stakeholders to allow them to make sensible funding decisions.
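The capacity idea above is really just arithmetic: given a backlog size and a few recent velocity samples, you can forecast the iterations remaining, and re-forecast each iteration as new data arrives. A minimal sketch - the function name and the numbers are invented for illustration:

```python
import math

def iterations_remaining(backlog_points, recent_velocities):
    """Forecast iterations left using the average of recent velocity
    samples; re-run this every iteration as new data arrives."""
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partial iteration still costs a whole one.
    return math.ceil(backlog_points / avg_velocity)

# 120 points of backlog remain; the team delivered 20, 18 and 22
# points in its last three iterations.
print(iterations_remaining(120, [20, 18, 22]))  # 6
```

The point is not the formula but the feedback loop: each completed iteration replaces guesswork with observed data, which is what makes the conversation with stakeholders honest.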

Monday, June 9, 2008

Generic Agile

In this post on Generic Agile, Rachel Davis talks about the idea of mixing up different types of agile methods to arrive at something useful that fits the organization's style and timeframe. Rachel presents some great points, and for the most part I agree - it is in fact what I am trying to do myself.

The interviewer asked what advice and recommendations Rachel would give teams that are looking to change their processes. In response, Rachel said that they should read around many different flavors and not get too hung up on following practices exactly. While I agree that reading around various flavors is a good thing, I can't help but think that during early adoption it would be better to call in a coach who has worked successfully with lightweight processes to help the seed to grow. If a team experiments with different pieces of various processes without first having a working knowledge of them, they risk pulling apart harmonious practices that are not so effective individually.

In my judgement I am possibly being over-harsh about the ability of others to pick up on this - but I doubt I would have so easily grasped it myself had I not been fortunate enough to work with a great coach.

Sunday, June 8, 2008

The Essence of Agility

I am trying hard to avoid the 'A' word these days, and I am not alone in this endeavor. There are so many examples of misinterpretation it's really quite sad. Of course, there is nothing different here to almost any other major phenomenon in the tech world: as soon as any label/buzzword reaches critical mass, everyone wants to get on the bandwagon.

Yet agility is one of the more interesting examples - it isn't a technology that can simply be learned and I don't think it is a case of looking at a set of steps in a book on your favorite flavor - XP, Scrum, DSDM, Crystal etc. It is extremely hard to understand the essence without first witnessing it by working in a team.  

I will say this only once - the essence is really down to mindset and attitude.

One of the easier ways to understand it is to have experienced what development should not be, then move to a team with agile values to instantly see the difference - it can be a real 'road to Damascus' experience. Unfortunately there are many in the business who have not been in the position of direct responsibility for delivering working software so it is much harder for these people to understand the mind shift required to adopt an agile way of working/thinking. 

A simple mind adjustment, then, is all that is required; a good reading list and practice are all that remain to complete the transition. Except note the difficult part: it's incredibly hard to change attitudes, especially those entrenched in ideas taught, read and practiced over years. Couple this with a little ego and discomfort with change, and the odds are really stacked against you.

There were many out there that were agile before the 'A' word, and there are many out there now who practice it without giving it a label. Smart people who want to deliver maximum business value have always been around, but the addition of some good writing under the 'A' label adds some great principles, values and practices to the smart team's arsenal. For me, I just like to remove the label to avoid any preconceived baggage.

Thursday, May 15, 2008

Defensive Programming

Enough negativity after the last post, let me get on to a more interesting topic. When I was in college, I learned how important it was to code defensively. I'm sure many folk who studied computer science were also exposed to this way of thinking. In case you don't know what I mean, I am referring to the practice of checking values, typically parameters that are passed into methods, so that the method does not blow up if it encounters a null, for example.

void renderShape(Shape s) {
    if (s != null)
        s.draw();
}

So this is a very trivial example, but when this effect is compounded, adding all the checks for validity generates a considerable percentage of extra code, none of which is actually necessary - in fact, just the opposite: it adds complexity and makes programs harder to read and maintain. It's all about trust. The correct place to check for validity is at the entry point to your system, whether it be a computer-computer or human-computer interface. Doing this results in data validity being checked only once, and it is then valid all the time.

Consider an object's contract. If the contract contains a method that states 'give me a valid X and I will produce a valid Y', then anything less than a valid X and the caller is to blame.
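The check-once-at-the-boundary idea can be sketched in a few lines: validation lives only at the entry point, and everything behind it trusts its inputs. This is an illustrative sketch - the function names are invented, not from any real system:

```python
import math

def parse_radius(raw):
    """System entry point: the only place that distrusts its input."""
    value = float(raw)  # raises ValueError on garbage input
    if value <= 0:
        raise ValueError("radius must be positive")
    return value

def circle_area(radius):
    """Internal code: trusts the contract - radius is already a valid
    positive number, so no defensive checks are needed here."""
    return math.pi * radius * radius

# Validation happens once, at the edge; the internals stay clean.
area = circle_area(parse_radius("2.0"))
print(round(area, 2))  # 12.57
```

If `circle_area` is handed something invalid, the caller broke the contract - exactly the blame assignment described above.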

I found this great description from years back -

3.13 Do not program "defensively"

A defensive program is one where the programmer does not "trust" the input data to the part of the system they are programming. In general one should not test input data to functions for correctness. Most of the code in the system should be written with the assumption that the input data to the function in question is correct. Only a small part of the code should actually perform any checking of the data. This is usually done when data "enters" the system for the first time, once data has been checked as it enters the system it should thereafter be assumed correct.

Example:

%% Args: Option is all|normal
get_server_usage_info(Option, AsciiPid) ->
    Pid = list_to_pid(AsciiPid),
    case Option of
        all    -> get_all_info(Pid);
        normal -> get_normal_info(Pid)
    end.


The function will crash if Option is neither normal nor all, and it should do that. The caller is responsible for supplying correct input.

Wednesday, May 14, 2008

Tired of doing the right thing

Some days, I just get weary of trying to keep things simple and honest. Don't know how others feel about this, but two or three mad phone calls or meetings with folk who simply want to add to the problem, rather than focusing on a solution makes my blood boil.

This is nothing specifically to do with software development; there are people like this in all walks of life. I would wager a pint or two, however, that great organizations stamp on this kind of thing, and that's exactly what makes them great! Sure, it's human nature, protectionism etc., but it's still painful to deal with.

When a day contains one too many conversations where
- waste over sufficiency
- confusion over clarity and comprehension
- speculation over conversation/resolution
- complexity over simplicity
- procrastination over immediate action

have been prominent, I lapse into a state of indifference.

As someone once said to me, just relax and take the money.

Pass me my drink :)
Ahhhh. I'm calm.

Wednesday, May 7, 2008

Agile Management

Is there such a thing? Is this a misnomer? Is there a need for managers in agile teams? I don’t think there is a simple answer to this one, and coming from an agile development background, I have since converted to be a development manager. What does that mean exactly? There is no hard and fast definition of what a dev manager is, but I think there is a need for someone who comes from a development background and understands the real issues that development teams face – many PMs that I have worked with simply don’t understand the problems. Obviously, these job titles are simply labels, but they do carry baggage and assumptions with them.

So, what do I do as a development manager? Well, firstly, I try to act as a communication bridge as much as is possible, and like it or not, there is definitely a gap here within most teams. Often development, business teams, and leaders don't communicate enough - probably for seemingly valid reasons, but this communication is crucial to the success of a company. Business leaders don't understand (and neither would I expect them to) how complex software development is, so it's the responsibility of people like me to work with them and help them understand all about value and prioritization. Of great importance to me is also vision and customer assignment on a project; without these key pieces in place, developers feel like they are in the dark - they want to know they are building a 'cathedral'.

Most of all, the development team manager must remain humble and remember that they are there to facilitate great team working - you are only as good as your team. My goal is to have my team reach its full potential, and currently I consider myself privileged to be working with a very talented team. It's early days for me, and I am far from where I want myself and my team to be, but I am excited about the future and know that I can create a fun and safe atmosphere within which team members can excel.

Of course, there is an irony here: the longer I am away from the harsh realities of the coal face of software development, the more I forget how tough it is to deliver something. So it's my goal to work on home projects, where possible with friends, to keep me 'honest' - but I see it as an almost impossible conundrum. It's almost as if I would like to work for a little time as a manager, then get back to development for a time, and so on.

I do believe however, that there is a valid role for a manager. If nothing else, to remove the pressure on the team from outside influences to allow them to do the right thing without fear – thus getting the best and most creative talent to reveal itself in the team members.

Monday, May 5, 2008

Aloof Agility

Yes, I suspect this might be a contentious post; however, I think it's a subject that affects many people these days, with offsite and offshore working habits on the increase. On a number of occasions I have heard arguments that remote teams can work very well together, and the reason I bring up this topic is to gauge opinion out there. There is no need, for the sake of this argument, to even consider language or cultural barriers - I would like to keep the discussion purely down to the communication aspects of geographically dispersed teams.

It is harder to communicate with individuals or parts of a team who are dispersed - when I am deeply engrossed in a problem, I want to be able to quickly convene and discuss the items round a whiteboard or go to Starbucks - where many great ideas have been born. I am prepared to concede that some of the items listed below are due to my lack of understanding of how to make them work - I genuinely want to know if folk out there think it can succeed. Indeed can we call it agile if a team is not collocated?

Following are a few of the things I consider potential problems with distributed teams -
1. Try to call them - they can easily ignore the call
2. IM them - can easily be ignored
3. Post on a message board - delayed communication
4. Video link - can easily be switched off, ignored
5. Can't easily share ad-hoc conversations
6. Much harder to pair up for work or coach people
7. Tends to lead to less of a team and more of a rivalry culture
8. Much harder to use low-fidelity, low-tech approaches such as information radiators and story cards
9. Almost any form of agility puts the customer at the center - if the customer is only on one site, how can the people working away from that site be working in an agile manner?
10. Phone and video systems are prone to technical problems

My feeling is that all things being equal, a completely collocated team using big visible charts and other low-fi practices stands a greater chance of success than a distributed team who are forced to use hi-tech, computerized and error prone forms of communication. Simple techniques such as whiteboards, post it notes, index cards and flip charts help to short circuit communication and get the point across in a faster, simpler and less ambiguous way.

I have persevered with many different techniques with offsite personnel and none quite seems to fit the bill for me - maybe I am just not trying hard enough. Perhaps over time it's possible to learn to be much more effective and communicate as well using complex communication mechanisms as we do face-to-face.

Thoughts?

Agile Acceptance Testing

If you haven't already, read the post on infoq about automation test tooling. This is an interesting post, and I think there is a lot more mileage in this subject.

In a project I have been involved with recently, we employed a commercial heavyweight record/playback style of acceptance test tool and something did not smell right about using this solution, but I did not give it enough consideration at the time. I don't want to repeat points in the original post by Elisabeth Hendrickson but rather try to add or confirm Elisabeth's findings for myself.

If user interface elements are changed, the team is always tempted to recreate all their tests - why? The majority of test code should remain unchanged, but the perception is that the tool provides all the code via test generation, which guides people towards wholesale regeneration - and hence longer turnaround and less of the refactor-quickly mentality when the user interface changes. In the acceptance testing world there is still a large contingent of practitioners who tend toward manual testing, and far too many who do not understand how to use scripting languages such as Python or Ruby, both of which offer a simple approach to writing tests - either manually or via the use of a library such as PAMIE or WATIR.
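One reason hand-written tests survive UI change better than regenerated scripts is that they can funnel all widget knowledge through one thin layer. A minimal page-object sketch in Python - `FakeBrowser` is a hypothetical stand-in, not the PAMIE or WATIR API:

```python
class FakeBrowser:
    """Hypothetical browser driver for the sketch; a real one would
    drive Internet Explorer or Firefox via a library."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        self.clicked.append(element_id)


class LoginPage:
    """The only place that knows UI element names. If the designers
    rename 'user' to 'username', one constant changes - every test
    that uses LoginPage stays untouched."""
    USER_FIELD = "user"
    PASS_FIELD = "pass"
    SUBMIT_BUTTON = "submit"

    def __init__(self, browser):
        self.browser = browser

    def login(self, user, password):
        self.browser.type_into(self.USER_FIELD, user)
        self.browser.type_into(self.PASS_FIELD, password)
        self.browser.click(self.SUBMIT_BUTTON)


# The acceptance test reads at the level of intent, not widgets.
browser = FakeBrowser()
LoginPage(browser).login("alice", "secret")
print(browser.clicked)  # ['submit']
```

Record/playback tools bake the widget details into every generated script, which is exactly why a UI change triggers wholesale regeneration instead of a quick refactor.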

Unfortunately, I cannot think of other great reasons why I see these scenarios as particularly bad; it's just that this activity does not seem able to keep up with the rate of development, and that just isn't right. I want to see very small, specific pieces of test artifacts built alongside the feature, and many of these tools seem to bring a bunch of extra baggage along that just seems overkill.

Sunday, May 4, 2008

BPM Frameworks

Business process modeling is something I always thought of as simply a knowledge-sharing exercise amongst a group of folk who want to understand either how things operate in an organization, or how they would like things to operate - I guess there are thousands of shades of gray in between.

Back in 2003, I began working on a project where a specific type of technology was employed where the programmer used business flow charts to actually build software. This particular product was supplied by a major provider of application server software, but was by no means unique to them. 

During that project, I had a problem reconciling my understanding and beliefs on system construction and this BPM/code generation model. I have often thought back to this project and struggled with it ever since, and I would like to hear from others who have experienced this feeling.

It just didn't sit well with me, because I like to believe that objects are a reasonable way to model the world in order to understand what a business wants - but I am willing to accept that I might be completely wrong here (if there is a right or wrong answer). However, I temper this last statement with the fact that I worked with a few other good people, all of whom also had some very serious reservations concerning the viability of such approaches to solving problems.

Since it has been some years since I last tried to use this approach to building software, I reiterate that I feel I am not as qualified as those who have more recently been through it, so please let me know your experiences. I find it hard to put into words some of the reasons why this way of working seemed so alien and ineffective to me, but here goes.

1. Modeling only in terms of the dynamic aspects of a business flow precludes the static view of the system, which is a very powerful tool for considering relationships between things in the problem domain. Yes, we can do this as a separate exercise, but (certainly at that time) there was no mechanism to tie this view of the world to the strictly flow-chart-oriented view.

2. This approach led to lots of code duplication because no real world objects could be generated using a purely dynamic view of our business world.

3. One of the arguments for introducing such tools was that less experienced developers could produce working code less expensively. Actually the converse was true. With good, experienced developers working on the project, we struggled to deliver working software, simply because of the scale of complexity involved. Many mechanisms did not work out of the box and lots of tweaking was required by very smart people to do even the simplest of operations.

4. Another argument was time to market. Due to the largely visual tools, it was claimed that time to market could be reduced, because developers could be many times more productive. Just because tools are visual doesn't mean that things can be produced more rapidly. 

At the time I felt that viewing software as simply flow charts was not taking the many different viewpoints of software construction into account, and that the sheer complexity of such tools (even though I don't think they have to be so complex) is actually a developer's worst enemy.

When you put good developers on a problem and they struggle, I know something is wrong. Non-developers would have quickly given up in desperation. Call me cynical, but I really saw this as a nasty marketing exercise - convince me I was wrong.

Saturday, May 3, 2008

TDD and User Interfaces

This is more a discussion point than me being my normal opinionated self. I buy into the TDD/BDD thing in a big way - I believe in it - although I would not claim to be as proficient as I would like to be. There are only two options -

1. View a UI as a thin layer on top of the code that matters and therefore don't bother doing anything, its a fast changing, disposable asset
2. Treat user interface elements as first class units and find a way to test them

Personally, I'm not sure I can buy into point 1), as I believe the user interface is every bit as critical to the application as any other part. However, test driving domain code can lead to design improvement - can we gain similar benefits for user interfaces?

Automated acceptance tests should be the norm in almost any project (i.e. I can't think of any exceptions), but can/should we unit test user interfaces or pieces/components of the user interface?

I really don't know what the right answer should be (if there is one) for this and I would love to hear from people on this subject.

Wednesday, April 30, 2008

XP and Scrum

There is an interesting thread going on right now on the yahoo XP group - XP and Scrum. Before being introduced to a real agile project, I had read up on Scrum and it made a lot of sense at the time, but I certainly did not understand it. I suspect it's possible for some to figure out what agility actually means by reading about it - but many of us, me included, have to experience it before we 'get it'.

Both approaches use what I like to think of as common sense - but common sense is actually a misnomer; it's not that common. They share many ideals, but the real difference is in the XP technical practices. For me, this is absolutely key. I believe that many of the failings of the numerous practices and methodologies out there are due to the lack of column space dedicated to great techniques to apply to programming. Usually methods specify what documents and pictures should be created and who needs to sign them off - but on many occasions these artifacts can be viewed as waste. As Kent Beck puts it: testing, programming, listening and designing - that's all there is to it - anything else and someone is trying to sell you something.

So, by no means do I view Scrum as a bad thing, but I do think that you stand a better chance of success by following XP alone rather than Scrum alone - which is purposely vague when it comes down to programming.

Of course, the point is moot, because you don't have to adopt a single approach, you can have both.

Monday, April 28, 2008

Defensive Programming

In a post from a few years ago, Offensive Coding, Michael Feathers discusses the usefulness of so-called defensive coding practices. People are often taught to code defensively so that the program is more robust - right? What if the problem is addressed the other way around - why not have the caller ensure it is passing the right thing? Then the need for such behavior disappears, reducing clutter and complexity and increasing readability.

I have been through this exercise many times, creating objects that return some meaningful state even though nothing, or an error, has occurred. A great example of this is to return an empty list, or an empty string - no need to check for null.

Null Object is an interesting pattern, which can be used very effectively. I used to believe in defensive programming, but when you consider the effect of doing it, null checks multiply throughout code very fast. Contrast this with checking information at (often) a single point of entry, and I think you will agree that the scattered checks are unnecessary.
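The empty-collection idea mentioned above is easy to sketch: have the query return an empty list rather than null, and every caller's null check disappears. A minimal illustration - `find_orders` is a made-up example function, not from any real codebase:

```python
def find_orders(orders_by_customer, customer_id):
    """Return the customer's orders, or an empty list - never None.
    Callers can iterate unconditionally; no null check required."""
    return orders_by_customer.get(customer_id, [])

orders = {"alice": ["book", "lamp"]}

# Both calls are safe to iterate - no 'if result is not None' clutter.
for item in find_orders(orders, "alice"):
    print(item)   # book, then lamp
for item in find_orders(orders, "bob"):
    print(item)   # unknown customer: loop body simply never runs
```

The same thinking gives you Null Object proper: return a do-nothing implementation of an interface instead of null, and the caller's branching evaporates.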

Read Michael's post for more on the subject, but it does irritate me when I see lots of checks for null in code - in 2008.

We need more teaching of good programming practice. I am looking forward to Uncle Bob's book - Clean Code. I would like to think that any self-respecting programmer would love to understand how to put together nice, clean code - in whatever language - but unfortunately, I believe they are in the minority.

Sunday, April 27, 2008

One Thing at a Time

I was thinking about this subject the other day, as the pressure was really on to deliver several projects at once. Some team members were being asked to work on several projects at once to deliver functionality in the very near future, but I would rather have them focus on a single project, deliver that and then move on to the next priority in line.

Initially I thought that it was acceptable, since sometimes we have slower times than others, when we are waiting on a dependency beyond our control - I know it shouldn't happen but many times things are just outside your sphere of influence.

Then I thought back to the classic book 'The Goal' by Eliyahu Goldratt. If anyone has not read this book, I thoroughly recommend it. The book suggests that we shouldn't try to suboptimize parts of the system, but rather optimize the whole system. It also says that any system where all of its resources are busy 100% of the time will suffer; it's ok to have some resources idle some of the time - as long as it's for the good of the system as a whole.

Some iterative methodologies deal with these issues by laying down ground rules based on the iteration length - nothing can change during that window of time. It's odd, though, how these things can creep up on you in an unsuspecting manner, and you find yourself context switching so much that you feel you're not doing a good job of anything.

Oh well, I've done my bit to demotivate for the day.

Thursday, April 24, 2008

More on Value Added Tools

Today, I briefly worked with a colleague on a part of the system that uses a code generator. Such tools are sold primarily with a productivity spin, so I was quite interested to see for myself how it worked. Based on my previous posts, you will expect me to lambast the product and I certainly don't want to disappoint. Of course it did not deliver on its promises, but I want to think about why.

First of all, the language - it uses familiar languages, but not in a familiar way. Pieces of code are joined together using graphical tools, which is an alien metaphor for most developers, so it takes some time to figure out even the simplest thing. Even if you are used to other graphical tools, they are all built for specific purposes and share nothing in common.

Oddly enough, it would have been quicker to put code together with a very basic IDE using an editor only than using the allegedly simpler techniques. Perhaps that's an unfair comparison though, because I would be relying on previous experience.

Another issue I noticed was that there was no code completion or help inside a code block. This is a feature that I consider basic in any programming environment and I felt quite lost without it. Therefore much time was spent in web based documentation pages trying to figure out how to make it do what I wanted.

Then there are those little idiosyncrasies: accessing a value in a field on a form was not quite as straightforward as one would think - in the context of our problem.

I did not spend as much time today as I would have liked to explore it a little more, so I will probably dive deeper tomorrow. For now though, I think my current feelings could be summed up as uncomfortable and clunky. If my opinions change drastically tomorrow I will report more.

Maybe non-programmers would be better suited to such graphical code generating tools? Anything is possible, but I doubt this - I had to call on my experience to figure out how to do things, so I think non-programmers would find it very difficult - then again that's just my opinion - as ever.

This is definitely something that I want to talk about further, I have a much better case in mind - just wanted to get some feedback on thoughts/experiences from others.

Tuesday, April 22, 2008

In the mood for more history...

As I'm in the groove, I thought I would briefly cover another subject that crosses my mind regularly. Over 20 years ago, Fred Brooks' landmark paper was published - No Silver Bullet: Essence and Accidents of Software Engineering. Brooks argues that there are two types of complexity: accidental, which is man-made and thus largely of our own making, and essential - programming is just plain hard.

It is these ideas that I keep revisiting in my mind with some of the products we work with today. We have all seen the slick marketing droids in action, representing large software organizations with their promises of miracle solutions to save you money, with fast time to market and a 'dumbed down' developer community. It always amuses me how they only want to talk to managers who have long since forgotten how hard it is to deliver a product for today's fast-paced, high-expectation user community. There's a reason for that.

No matter what someone tries to sell you, if it sounds too good to be true - IT IS!

Revisiting Brooks' original point, I actually believe that many of these tools or products aimed at increasing productivity and downgrading brain power have a contradictory and negative effect by increasing accidental complexity. Keep it simple: get the best developers you can afford and a lightweight, simple tech stack, and forget the (often very costly) gimmicks.

All the ingredients for a much more palatable and productive programming experience are in place, as a few diligently embrace some of the values, principles, practices and products out there that can have a significant impact on accidental complexity. However, the large corporations are constantly on the lookout, trying to discover the next big thing to destroy mediocre IT budgets in one fell swoop. I fear many will fall prey.

There is no silver bullet.

Lost Our Way

It's amazing what you can discover out there on the interweb. I was just looking at a blog that had a link to the design principles behind Smalltalk. This is some really powerful stuff, and I am completely speechless - how could we have lost our way so badly? According to the preamble, the paper was published in Byte magazine in 1981. So really great thinking about what a language should be was present 25-30 years ago! I know very little about Smalltalk, but the principles described in the paper were (and in many respects still are) revolutionary.

What a shame we have ended up in a world with so many disappointing languages. Wish I could have been involved with Smalltalk.

Monday, April 21, 2008

Gettin' Back in the Game

Just had a nasty experience and feel duty bound to report it. For some time now, I have been a little too detached from every day business as far as development is concerned. A good opportunity has recently presented itself for me to get involved with the team, which can both help me understand better how the tech stack works and also understand the pain points for the team.

As far as the latter is concerned, I soon realized that 'pain' is an understatement. Central to the gargantuan (for those Tarantino fans - yes I rarely have the opportunity to use that word in a sentence) stack is the portal - I hasten to add that this was not a lifestyle choice for the team, rather a corporate constraint.

Now, the normal development environment is an IDE supplied by the portal vendor, and it is painfully slow at starting up its built-in server, which means that the flow of development - well, doesn't. If you make a quick one-line code change and then try to test it - well, go get yourself a coffee and come back later.

We have been investigating an opportunity to use Flex inside a portlet at an attempt to be able to deliver business value much faster (believe me I'm not reaching for the sky here), using Java to serve up data over HTTP courtesy of JSON.

Because deployments are not consistent, we need to restart every time we redeploy, to make sure it works first time, every time. Since the aforementioned portal takes upwards of 5 minutes to start (on a good day, with a tail wind), we are considering using Tomcat and plain Eclipse as a local development environment. Sounds ok so far, until you consider that the portal uses an old version of the JVM - not only that, but rather than use a standard Sun JVM, they use their own. To try to get some consistency, we downloaded that JVM, but it wouldn't install on Windows XP for some reason. We then decided to use a Sun version, which reaches end of life later this year, but no matter.

Summing up, because the stack is so heavyweight, we cannot iterate quickly, so we make a pragmatic choice to enable us to move at a bearable pace. The cost of doing this though, is an inconsistent deployment environment and inconsistent JVMs. In addition, our development and target deployment procedures also have to be completely different.

It's definitely been a learning experience.

Sigh.

Thursday, April 17, 2008

Defect tracking

Most self-respecting teams have a software product to track defects. So do we, but it's another of those things that just smells a little fishy to me - but I have accepted it and didn't really think much more about it. Until tonight.

Following Paul's excellent response to my last post, with a link to Kent Beck's statement of what really matters, I was looking around Ward Cunningham's site and came across another interesting article: why do we feel the need to track bugs?

Defects are transitory in nature; all we really want to do with them is fix them and move on - surely. Well, I suppose we could measure something about bugs or use the information to blame others - but neither of these things helps us build software of higher value to our clients and, more importantly, neither gets to the root cause.

Everyone makes mistakes, we're all human, but if we use the novel idea of fixing things as we go, then the need for recording and tracking (really an unnecessary, time-consuming task) goes away. Aha, you say, but what if I have lots of defects and they will swamp the team? We have to record them so we can remember what they are. I used to subscribe to this argument, but if you think about it, this is a symptom of a deeper issue: quality is not built into your process from the start. Note that I am deliberately distinguishing between defects and requirements changes. When this pattern occurs, it is often due to a lack of tests (assuming you have a good team of programmers). This is one of the reasons user stories are phrased in terms of tests - to improve quality.

This is a very contentious subject, and one thing is for sure: these tools are not going to disappear. However, I hope this provides a little food for thought and encourages thinking in terms of root causes rather than symptoms.

Wednesday, April 16, 2008

What to do, what to do

I think it must be me, because most people don't seem to see anything in it. How can you possibly have any idea how to build something before you know what to build? It's like walking into a store and the assistant hands you a pound of apples, without waiting to hear that you actually wanted a loaf of bread. Almost every project I get involved with, someone seems to know that we're going to need a 'cluster of this' or an XML schema for that - before we even know what it is that our customer wants. Do we have some kind of psychic powers that we're honing ever so carefully now, which allow us technologists a previously unknown level of insight into our customers' needs?

Of course, sometimes your customer tells you what technology they want as well - this is interesting and may or may not be bad - it depends. The thing is not to take anything for granted; question everything - blindly accepting that you have to modify your product to fit a single client's unique needs will often be madness for you - and might not even be best for the client.

Call me old school, but I believe there is much truth in the saying 'the customer is always right', so why don't we listen closely to what they have to say, and then actually try to understand it, before we start thinking about possible solutions? The real downside of making assumptions about solutions is that it stifles the thought process, limiting your options before you've even started. Choice is a wonderful thing, and by delaying technology decision-making until the last minute - sort of a just enough/just in time thing - you're not closing down potentially interesting avenues too early.


Dealing with requirements is one of the most complex things that developers have to cope with. It's so easy to introduce a subtle point, the implications of which could be huge in terms of time and cost. For this reason I like developers to be involved every step of the way as far as eliciting needs from customers is concerned, so that they can weigh the value of very costly features against less costly ones. Clients are almost always unaware that option B might cost them half as much as option A while being only marginally less optimal.

So, should developers be involved with any client dialog? Absolutely. When deciding what to build, it's necessary, so that the client can make informed decisions and hopefully gain better value for their investment.

I have a pet hate of analysis paralysis, and it's certainly easy to end up on that road - but that doesn't mean don't understand the problem - or part of it - before diving in with a solution. Requirements are really only a mechanism to promote a shared understanding of the business problem that we want to solve - but that shared understanding is invaluable, and attempting to skip the process will end in disaster.

Monday, April 14, 2008

Introducing Agility

Many blogs and articles have been written that cover this subject and I just wanted to add my own two cents. Why is it so hard to introduce agility in the workplace? There are many reasons, but I have to say that top of the list is that it's just plain hard to change people. Most folk have a comfort zone beyond which they simply don't feel happy going. Agility is so much more than just another process; it's much more of a culture change than anything else, and it's very hard to bring about culture change.

Whatever you're trying to change, you're always going to face resistance, because change could affect someone's role in a way that alters their stake and they (sometimes justifiably) fear the unknown will land them in a less desirable situation than the one they're currently in. Before doing anything in your organization though, analyze the situation, don't introduce something just for the sake of it, there has to be a good reason.

The implications of introducing agility will be far reaching and very uncomfortable for many, at least initially. Also consider whether or not you actually have the raw materials to enable agility to happen. For example - are you going to have ready access to your customer? If not, this is a huge problem for any agile method - which works on the premise that frequent face-to-face communication is one of the best ways of loading the dice in your favor. With this example, a good idea (I believe) would be to dip your toe in the water and run a small project with all the customer support you need, to see if the idea will be welcomed. If not, don't even bother trying to introduce agility yet; your company is simply not ready to make the commitment. Most organizations are at this stage, and most of those who claim to be agile are not, whether they think so or not.

I used to believe that it was an all or nothing proposition - and as far as declaring 'am I agile' is concerned, it definitely still is. However, when it comes to introducing it into organizations, it's far too much for most people to stomach at one sitting. So is it possible to introduce elements of agility? I think so, as long as you don't blame the principles and practices if they don't work for you, because most are designed to work cohesively together to produce results. Of course there is an element of danger in breaking these up, because if you don't understand how things work together you could be staring trouble in the face. For example, refactoring and test driven development go hand in hand; try refactoring without tests and it's like walking a tightrope without a net. Ideally, principles and practices should be used as they are, so that you can learn how to crawl and then walk before you run, but it's tough to introduce things in a big bang fashion.

Sunday, April 13, 2008

Business Managers and IT

In a recent article in Information Week, the issue of business managers bypassing IT managers to get things done is discussed. This is an interesting piece, something that I have also witnessed, and it got me thinking. It seems to be a trend that is happening more often - but I question whether this is right, wrong or neither.

Part of the reason that we have arrived at this situation is that development teams/departments are seen to have consistently under-delivered on business expectations. This is sometimes true, and very often just a perception.

However, there is the counter argument, that business heads have unrealistic expectations of what it takes to build software, which leads to even more negative perceptions.

My belief is that both of these arguments are true, but this situation is not going away any time soon. If IT departments cannot better meet the needs of the business, then look at the reasons why - I have seen strategy or architectural choices choke the ability of programmers to deliver anything. Of course the business manager doesn't care why, so he's not going to wait for an explanation - he just wants his projects now!

Conversely, when business departments bypass internal groups and choose a poor IT partner, it can be a lottery - partner with the wrong guys and it's going to be a nightmare. Integration may be impossible, maintenance very costly, etc.

It's incumbent on managers on both sides to meet in the middle to get things done. Technology managers could be much more effective and take on more of a coaching and advisory role. Business managers need to be more open-minded and work with people who understand how to make IT work - the trouble is, they may not have such people in their organization - and the business manager wouldn't know either way.

This is a tough one - opinions anyone?

Wednesday, April 9, 2008

What's an object anyway?

In his interesting post a year ago (Objects, I know that already), Paul discussed objects. Back when I studied for my computer science degree, we were taught about the object oriented paradigm - yet I feel quite strongly that some things cannot be understood by classroom teaching alone. My journey has been one of apprenticeship, and I find that most good people I have worked with over the years also have a level of humility that accepts that we must always be willing to learn from others.

So I see myself as disadvantaged, because I didn't begin to understand them until years later. Exposure to Smalltalk may have helped. In Smalltalk, everything is an object and messages are sent to objects, promoting loose coupling; in fact any message can be sent to a target object, and it's up to the object to decide, at runtime, whether or not it understands how to handle the message. This subtle change in the thought process has a profound effect. Viewing an object from the outside, as a consumer, allows us to focus on things in a different way.
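
Java is far stricter than Smalltalk, but a rough analogy can be sketched with reflection - the 'message' is just a name, and the receiver only finds out at runtime whether it understands it, a little like Smalltalk's doesNotUnderstand: hook. The names here are purely illustrative:

```java
import java.lang.reflect.Method;

// A rough Java analogy of a Smalltalk-style message send: the message is
// just a name, and we discover at runtime whether the receiver understands it.
public class MessageSend {
    // "Send" the named message to the target; fall back if not understood,
    // loosely mimicking Smalltalk's doesNotUnderstand:.
    static Object send(Object target, String message) {
        try {
            Method m = target.getClass().getMethod(message);
            return m.invoke(target);
        } catch (NoSuchMethodException e) {
            return target + " does not understand #" + message;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String s = "hello";
        System.out.println(send(s, "toUpperCase")); // HELLO
        System.out.println(send(s, "fly"));         // falls back: not understood
    }
}
```

In Smalltalk none of this machinery is needed - every call is already a message send - which is exactly why the thought process it encourages feels so different.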

So back to classes and objects. Classes should be defined to represent things that relate to your problem domain. So a class of object used for one project could be completely different to a class of the same name used in a different project - it depends on context.

Classes provide us with services - imagine a service you consume that's provided by a third party - you don't know how it works. In fact I try to apply the same thinking when I put classes together: a class should be self-contained, I don't want to know how it works, and when I ask to use its services, I don't want to have to change the way I use it if the class should change internally. This property is called encapsulation.
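
A minimal sketch of what I mean, in Java (the class and its internals are invented for illustration): callers only ever touch the public methods, so the internal representation could change - say, from a long to a BigDecimal - without any consumer noticing.

```java
// A small sketch of encapsulation: consumers use the service through its
// public methods only, so the internals can change without breaking them.
public class BankAccount {
    // Internal representation: cents as a long. This could later become a
    // BigDecimal without touching the public interface below.
    private long cents;

    public void deposit(long amountCents) {
        if (amountCents <= 0) throw new IllegalArgumentException("deposit must be positive");
        cents += amountCents;
    }

    public void withdraw(long amountCents) {
        if (amountCents <= 0 || amountCents > cents)
            throw new IllegalArgumentException("invalid withdrawal");
        cents -= amountCents;
    }

    // Consumers ask for a formatted balance; they never see the raw field.
    public String balance() {
        return String.format("%d.%02d", cents / 100, cents % 100);
    }

    public static void main(String[] args) {
        BankAccount a = new BankAccount();
        a.deposit(1250);
        a.withdraw(250);
        System.out.println(a.balance()); // 10.00
    }
}
```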

Hierarchies of classes can be put together based on an OO property - inheritance. This enables us to send a message to similar objects (as long as they are part of the same hierarchy) and the response could be radically different depending on the object's type. This is a powerful technique. Imagine a collection of graphical entities that you want to render on a page. As long as they are all members of a parent class PageComponent, you can send each item in the collection the 'render' message and each item will duly render itself.
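
The page example might be sketched in Java something like this (the class names other than PageComponent are my own invention):

```java
import java.util.List;

// The PageComponent idea from the text: each item in the collection
// receives the same 'render' message and responds in its own way.
abstract class PageComponent {
    abstract String render();
}

class Heading extends PageComponent {
    private final String text;
    Heading(String text) { this.text = text; }
    @Override String render() { return "<h1>" + text + "</h1>"; }
}

class Paragraph extends PageComponent {
    private final String text;
    Paragraph(String text) { this.text = text; }
    @Override String render() { return "<p>" + text + "</p>"; }
}

public class Page {
    static String renderAll(List<PageComponent> components) {
        StringBuilder sb = new StringBuilder();
        for (PageComponent c : components) {
            sb.append(c.render()); // same message, type-specific response
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(renderAll(List.of(new Heading("Hi"), new Paragraph("Body"))));
        // <h1>Hi</h1><p>Body</p>
    }
}
```

The loop never needs to know which concrete kinds of component exist - adding a new one requires no change to it at all.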

Some years ago, a friend told me the real power of OO is delegation, not inheritance (thanks Rob). On a practical level, this means do only one thing well in a class and for anything else, delegate to other classes. Following this idea will lead to simpler code that is easier to read and understand, less likely to go wrong, easier to maintain, more loosely coupled and easier to extend. For me, simplicity means everything. Ironically, it takes more commitment and effort to get there - but it's worth it.
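
A tiny, made-up Java sketch of the idea: the printer does one thing and hands formatting off to a collaborator, instead of inheriting it from a parent class.

```java
// Favoring delegation over inheritance (illustrative names): the printer
// does one thing (printing) and delegates formatting to a collaborator.
interface Formatter {
    String format(String text);
}

class UpperCaseFormatter implements Formatter {
    public String format(String text) { return text.toUpperCase(); }
}

public class ReportPrinter {
    private final Formatter formatter; // a collaborator, not a parent class

    public ReportPrinter(Formatter formatter) { this.formatter = formatter; }

    public String print(String text) {
        return formatter.format(text); // delegate the part we don't own
    }

    public static void main(String[] args) {
        ReportPrinter p = new ReportPrinter(new UpperCaseFormatter());
        System.out.println(p.print("quarterly report")); // QUARTERLY REPORT
    }
}
```

Swapping in a different Formatter changes behavior without touching ReportPrinter - which is the loose coupling the inheritance version would have baked in.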

There is no magic to beginning to understand the usefulness of objects - the more you work with them, the higher the chance you'll start to understand. It took me years to get to a level of understanding (and I'm still learning) but it was definitely worth it. For you, it might take weeks - if you're really blessed with genius. One of the simplest pieces of advice I can offer is simply to think. Think about what an object should do and be responsible for - use a technique such as CRC cards and try to work with team members who have a level of understanding. Never stop learning.

There are a number of code smells associated with the misunderstanding of objects. Watch out for lots of setters and getters, big classes with several responsibilities, and a lack of collaboration with other classes. Always look at built-in types with suspicion - over-reliance on them may indicate broken encapsulation.
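
As a made-up illustration of the built-in types smell: money passed around as a raw number scatters its rules everywhere, whereas a small class keeps them in one place.

```java
// A small value class instead of a raw double for money (illustrative
// names): the rules - currency matching, formatting - live in one place.
public class Money {
    private final long cents;
    private final String currency;

    public Money(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    public Money plus(Money other) {
        // With a raw double, nothing stops you adding USD to EUR.
        if (!currency.equals(other.currency))
            throw new IllegalArgumentException("currency mismatch");
        return new Money(cents + other.cents, currency);
    }

    @Override public String toString() {
        return String.format("%d.%02d %s", cents / 100, cents % 100, currency);
    }

    public static void main(String[] args) {
        Money a = new Money(1999, "USD");
        Money b = new Money(1, "USD");
        System.out.println(a.plus(b)); // 20.00 USD
    }
}
```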

Don't look to OO as a panacea for anything, and as ever, what you get out of it is only as good as what you put in. However, all things being equal, it -

Can help us understand the problem domain and can be a useful communication mechanism
Can help us classify things in the problem so that we can deal with similar things in similar ways
Can encourage thinking in terms of very small, loosely coupled parts

In conclusion, I guess I am from the old school - the shopping list of skills I see on resumes these days counts for very little with me. I am far more concerned with depth of understanding of objects and other solid development practices than with current skill trends. The real benefits of objects are only realized through the thought process of the diligent programmer who understands them.

Tuesday, April 8, 2008

Thoughts on TDD

Having just gone back and read an old article by Michael Feathers entitled 'Emergent Optimization in Test Driven Design' (found at http://www.objectmentor.com/resources/publishedArticles.html) I have been rethinking the whole test driven development argument.

I first started using a TDD approach a few years ago and quickly realized that (at least for me) the 'test' part of TDD was actually a very nice secondary effect. The real power behind the technique is its ability to allow the programmer to work 'from the outside in' as Michael Feathers puts it - leading to better design. His paper actually focuses on the optimization argument, but for me, the primary effect is that it helps me to design an application from a consumer's perspective, as if I were making a library or API for other programmers' consumption.

Using my design argument, I then thought about the traditional order of development tasks - design comes first, then write the code - so if TDD could be viewed as part of the design process, it might gain more widespread acceptance. One of the things inhibiting the practice of TDD is the old stigma about testing and how some programmers have learned to despise the very idea, often citing time and cost constraints as justification. Wouldn't it be great to change the terminology and eliminate the word 'test'? Then I had a realization - that's probably what the BDD movement was all about.

Up until now, I have largely ignored the BDD thing, but I decided that it's time I took a more serious look at it. So I now return to writing this post having watched Dave Astels' video on Google about behaviour driven design - and yes, this is exactly the intent of BDD. I would like to see this approach to programming become more widely accepted, but I wonder how easy it is to change old habits. The enlightened and inquisitive will probably accept, and indeed move on with, these ideas; for the many, I fear nothing will change.
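
To give a flavour of what the shift in terminology buys you, here's a tiny sketch in plain Java (no test framework, invented names; Astels' own examples use RSpec). The point is that the 'specs' read as sentences about behaviour rather than as tests:

```java
import java.util.ArrayDeque;

// A rough sketch of the BDD mindset in plain Java: the method names are
// sentences about behaviour, not 'testFoo' entries in a test plan.
public class StackSpec {
    static void check(boolean condition, String spec) {
        if (!condition) throw new AssertionError("FAILED: " + spec);
        System.out.println("ok: " + spec);
    }

    static void aNewStackShouldBeEmpty() {
        check(new ArrayDeque<Integer>().isEmpty(),
              "a new stack should be empty");
    }

    static void aStackShouldReturnTheLastItemPushed() {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        check(stack.pop() == 2, "a stack should return the last item pushed");
    }

    public static void main(String[] args) {
        aNewStackShouldBeEmpty();
        aStackShouldReturnTheLastItemPushed();
    }
}
```

Nothing about the mechanics has changed from TDD - only the vocabulary - but it describes the design from the consumer's side, which was the real point all along.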

At least I learned a valuable lesson, I need to do more research - I slept on this one for too long!

Monday, April 7, 2008

Traditional Project Roles

I have often thought that there is a glut of staff crowding a project. This idea was recently brought to the forefront of my mind when I looked at the costs associated with a project. Now I had worked on this project from its inception, and together with one other developer had written all the requirements, produced a software architecture document, written tests and developed much of the software.

The project was dogged with issues, I moved into a different role on another project, the other developer moved on to a new company and other developers came and went, all over the period of about a year. Additionally, we were dependent on a number of third party vendors, all of which had legal agreements and other time consuming hurdles which had to be overcome before we could proceed - the project largely dealt with the integration of external systems. Together with a couple of QA staff, the project proceeded at a slow, steady pace until it finally went into production a few months ago.

It was around this time that I happened to stumble on the financial figures for the project, which I don't normally bore myself with, but someone had been talking costs on this one and I couldn't figure out why it would ever be an issue. It turned out that around 30 people had booked a considerable amount of time to the project, producing a high and somewhat disturbing bottom line.

Even with full possession of the facts, I could not avoid jumping to the conclusion that it was a costly project, providing poor value for money to the client. But it did not seem like it at the time. I looked more closely at the staff on the report.

There were a number of BA staff, QA, PMs, process analysts, numerous managers from different departments, and a host of other people, some of whom I had never heard of. With the exception of several QA staff, I couldn't remember one of these people actually having made a contribution to the project. Granted, the Project Manager had a 30 minute 'is it done yet?' meeting once a week.

What would have been the outcome of the project if just the two developers and one or two QA staff had been the only ones working it? I suspect exactly the same, only the company would have saved itself hundreds of thousands - the cost of another small project.

Why is it that we are convinced that we need people with these different titles on a project? Perhaps it makes organizations feel comfortable that we follow some age old process ideas that dictate we must get smart people who can tell the working class 'coder' what to do - because everyone knows they can't do it on their own. I think this is related to the same disease we have built into our social fabric in the western world - command and control: tell the 'workers' what to do. This attitude seems to still be prevalent 100 years after Frederick Taylor's scientific management was first put together. The sad irony here is that most good developers I know could actually do a better job of each of the other roles than those for whom the role was actually their full time job. Developers have to step up to the mark, however, and play the various roles - oftentimes, I see developers who don't want to engage with the client and understand that we are actually 'building a cathedral' and not 'cutting stone'.

Why is it that being a developer on a project always seems to take second place to all other roles? What is the most important thing a project produces?
a) Documents
b) Diagrams
c) Timescales
d) Cost projections
e) Working software

For those who need it spelt out, working software is the ONLY thing that matters. Now I am not advocating that we do none of the other things, merely that given limited resources available to us, we choose our priorities carefully.

Sunday, April 6, 2008

Application Server Value Proposition

Some years ago, when I first got into the Java/J2EE programming game, I started to learn about application servers, what they were and why I would want to use them. This was in the 2001-2003 era, when vendors such as BEA and IBM dominated this lucrative market and open source solutions were not quite ready for prime time.

Now the whole premise of the application server was sold on the basis that, as a developer, it was there to make your life easier, and as a manager, hey, you could save costs by hiring less expensive developers (dumbing down) - you don't need super smart guys because our product almost writes the hard parts for you!

Bear in mind that my background was one of mostly rich client development with C++/Sybase - a traditional two-tiered architecture. In this environment I felt productive and could turn on a dime when changes were requested of me.


When I emerged from the wreckage at the end of the project, I felt battered and bruised, and hadn't felt as unproductive since I had graduated some 8 or so years previously. The simplicity sell had not materialized; delivering anything had been tough. This is the point at which a consultant is often brought into the mix, and their advice will usually be - you need more of our very expensive consultancy because your staff don't have strong WebWhatsit 'Firkin' skills. Much later I learned that this is merely the standard consultant response to anything - make more money. In some ways, I can't even blame them.

Why had the project really been so tough? I had been asking myself this question constantly. We had smart people working on it and they had attended training, but it hadn't been enough. Complexity does not just go away in the development game; it simply moves somewhere else. In the context of this project, one of the places it had moved to was configuration. Integration with application servers is also more complex. In the process of trying to offer more choice (most people see this as a good thing), with 'best of breed' vendors' solutions being pluggable at every point in the stack, all that happens is that more complex integration points are introduced, none of which seem to work as advertised.

Also ripe for consideration - and I now see these as more serious contributors than I did at the time - were the seemingly small things, such as a hefty development environment and expensive toolsets, slow and hard to use. Testing was much harder with the container/code deployment model; starting the container and testing inside it requires considerable time and effort and is a much slower process than testing outside the container.

All in all, configuring the numerous descriptors, server configuration files etc. in order to make the thing work was a bit of an ordeal, and I (perhaps unbelievably) quite enjoyed it and viewed it as a personal challenge at the time. I was stupid - that is not what software development is all about. Would I say my company got value from that project? Absolutely not.

Unfortunately for me, I had not yet learned my lesson. We progressed with the next, much bigger project at the company, management convinced that our learning curve now put us in a position for success. I will save that one for the next post.