Sunday, December 6, 2009

Design By Contract

I have wanted to post on this subject for some time. There is a general lack of awareness of this powerful concept, which is one of the most profound ideas a professional developer can embrace, along with sound object oriented principles. Next time you see a really clean piece of code - look carefully and you will probably notice DbC in action.

To start off this post, I will briefly talk about defensive programming, the antithesis of professional software development. At college, along with most other computer science folk, I was taught how important it was to program defensively. This means that if a parameter can be null, then it needs to be checked for that condition, otherwise the program could fail. In the case of the following example, if the ToUpper message is invoked on the string object and the string is null, what happens to the code? Typically it will crash, but that depends on the language.

string Convert(string str) {
    // This would crash if str is null
    if (str.ToUpper() == "XYZ")
        ...
}

So to counter this, developers typically do the following -
string Convert(string str) {
    if (str == null)
        throw new Exception("bad string parameter");

    if (str.ToUpper() == "XYZ")
        ...
}

To the best of my knowledge, it was Bertrand Meyer who formally introduced the software development community to the idea of design by contract, publishing it in his book "Object-Oriented Software Construction". There is an entire lengthy chapter dedicated to the subject, so I will try my best to paraphrase.

Code written to perform run time checks on parameters is unnecessary and more dangerous than the very issues it attempts to prevent. It leads to duplication, with the same values being checked multiple times in different methods, and it adds cyclomatic complexity to the program, which in turn decreases reliability, readability and comprehension.

Meyer: "In relations between people or companies, a contract is a written document that serves to clarify the terms of a relationship. It is really surprising that in software, where precision is so important and ambiguity so risky, this idea has taken so long to impose itself."

To achieve reliability, and hence quality, simplicity is crucial. If there are one or two of these 'null' checks in each of thousands of methods in a system, then the overall complexity of the system has been increased massively, with thousands of new potential failure points being introduced.

So, in essence, design by contract says: do not be concerned with performing validity checks at the beginning of a method (the most common example being to check an object reference for null); rather, let the caller know that your method expects a valid object. If the caller sends the message to Convert with a null parameter, the client is at fault, not your program! Remember that this is a contract, and the customer has obligations, just as in any other type of contract. This may seem like a simple statement, but at the most basic level, it's pretty much all there is to know. Its implications, though, are huge.

Looking at the earlier example in a new light -
string Convert(string str) {
    // This will still crash if str is null, but the caller
    // is the problem - our program expects a valid string object.
    if (str.ToUpper() == "XYZ")
        ...
}

Several points on this. Applying this style tends to push validation code as high up in the system as it can reach
- don't allow bad data into your system in the first place
- don't pass null values around
- don't allow users to select invalid values

If there is a case where processing can happen without that object being present, overload the method and write one that doesn't require that object.
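The overloading idea can be sketched as follows. A language like C# or Java would use true method overloading; this Python sketch (the function names are mine, not from the post) expresses the same intent with two separate entry points, so neither ever needs a null check:

```python
def convert(s):
    # Contract: s must be a valid string - no null check needed here.
    return "match" if s.upper() == "XYZ" else "no match"

def convert_default():
    # A separate entry point for the case where no string exists,
    # instead of passing None into convert().
    return "no match"

print(convert("xyz"))      # the caller guarantees a real string
print(convert_default())   # no object available? call the variant that needs none
```

Each variant now has a simple, honest contract, and the "missing object" case is handled by choosing the right method rather than by defensive branching.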

Non-Redundancy Principle.

"Under no circumstances should the body of a routine ever test for the routine's precondition."

In the following example, the difficulty arises with the error handling step. What is it that needs to happen? Only the client knows, and typically the client would have to either handle an exception or check for an error result anyway. DbC says: why not just state the contract that 'p' is required to be valid, and then the checking can be done one time, by the caller, before it even reaches our code.

if (p == null)
    // error handling, logging, exception ...
// do something with p ...

Tolerant or Demanding Preconditions.

There are two ways to view preconditions. One is tolerant of any customer calling on it, and will try to guess at the right way to behave when unexpected input occurs. The alternative demands that input criteria are met by the customer, and therefore never needs to guess at what to do with incorrect input.
It is the client who knows what it really means when there is not enough information to supply the correct inputs, and what to do about it. If we are trying to decide how to handle a null reference, do we log, raise an exception, ignore the issue, or what?
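A minimal sketch of the two styles, in Python for brevity (the function names are invented for illustration): the tolerant version guesses on bad input, while the demanding version simply states its precondition and leaves checking to the caller:

```python
def sqrt_tolerant(x):
    # Tolerant: accepts anything and guesses what negative input "means".
    if x < 0:
        return 0.0  # an arbitrary guess - probably not what the caller wanted
    return x ** 0.5

def sqrt_demanding(x):
    # Demanding: the contract is simply "x >= 0". The assert documents
    # the contract; the body does no guessing or error handling.
    assert x >= 0, "precondition violated: x must be non-negative"
    return x ** 0.5

# The caller, who knows what a negative value means in its own context,
# checks once before calling:
value = 9.0
if value >= 0:
    print(sqrt_demanding(value))  # prints 3.0
```

The demanding version is shorter, simpler, and never has to invent behavior for inputs it was never meant to receive.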

Sidenote on complexity

In 1976, Thomas McCabe formalized a software metric called cyclomatic complexity. Basically, it counts the edges and nodes of a control flow graph through source code, with any condition (if/then/else/switch) or iteration (for/do/while) counting as additional complexity. I don't want to go into the details here, so check out Wikipedia for a more complete explanation.

Unlike this post, Meyer's chapter on Design By Contract is very detailed, and I recommend you read it with an open mind. Contrary to popular belief, the DbC mindset makes complete sense if a little thought is applied. You have a choice - try to recover from bad things, or don't allow bad things to occur. I hope I have managed to spark a little interest.

Friday, December 4, 2009

Getting Started with Cappuccino


You can get a prepackaged version of the code from the Cappuccino Download Page by grabbing the latest Cappuccino Tools zip file. Alternatively, you can use -

git clone git://

I recommend using github which will give you everything you'll need. While I appreciate all the hard work thus far on Cappuccino, it is a beta and as such a little patience is required.

Once you have the code, there seem to be two ways to get going with a project.
1. Download the Cappuccino Starter zip (on the same download page), which contains a standard application that you can edit
2. Use 'capp' to generate a boilerplate application structure

Using CAPP

If you chose the Github route you will have the 'capp' utility available to you.
Type capp with no parameters to get some help on usage. To create a new application, type

capp gen MyTestApp

After running the command, capp should have generated a project directory structure like the following -

drwxr-xr-x@ 10 340 Dec 4 19:41 .
drwxr-xr-x@ 3 102 Dec 4 19:41 ..
-rw-r--r--@ 1 984 Dec 4 19:41 AppController.j
drwxr-xr-x@ 7 238 Dec 4 19:41 Frameworks
-rw-r--r--@ 1 373 Dec 4 19:41 Info.plist
-rw-r--r--@ 1 770 Dec 4 19:41 Rakefile
drwxr-xr-x@ 3 102 Dec 4 19:41 Resources
-rw-r--r--@ 1 3578 Dec 4 19:41 index-debug.html
-rw-r--r--@ 1 3481 Dec 4 19:41 index.html
-rwxr-xr-x@ 1 299 Dec 4 19:41 main.j

Open up 'index-debug.html' in a browser and you will see the text "Hello World!".

Thursday, November 26, 2009

Introduction to Cappuccino

For many years I have been waiting for real web application software toolsets to arrive. Widespread abuse of HTML, CSS and scripting has created a slew of messy web applications written in technologies that were never designed to be used for such a purpose. They are well suited to web content, but not to building rich applications.

In the 90's, I was building rich client applications for the desktop that were far more usable and responsive than many modern web applications, but the reach and simplicity of deployment for the web model was too great and (almost) everyone jumped on the bandwagon.

Well, the guys over at 280north have been working hard on Cappuccino - a web development framework with a difference. This one provides a rich, object-oriented way of developing applications for the web that run in the browser without the need for plug-ins.

It has its own language called Objective-J, the syntax of which is similar to Objective-C. It adds the use of classes instead of relying on Javascript's pure prototype-based approach to OO. Unlike Objective-C, it only runs in garbage collected mode, losing the reference counting model. Other changes include dropping the pointer syntax and moving source code to one .j file instead of requiring a separate header. Classes in Objective-J have a 'CP' prefix instead of the 'NS' used by Objective-C. See this tutorial for a quick overview. Since the language is dynamically typed, unit testing and the discipline of keeping code clean are a must to retain one's sanity - but these are good things regardless IMHO.

Programs written in Objective-J are preprocessed into pure Javascript which can be executed in the browser or on the desktop - more on this in a later post.

If you're interested, go over to the cappuccino web site and look into it in more detail.

Although there are some incomplete features at this time, it's a lot of fun to work with - I have never been this excited about writing applications for the web before.

Friday, September 4, 2009

Objects 101

I have heard some interesting stories recently and am beginning to comprehend where my assumptions are going awry. My time spent in academia included lecture theatre discussions about the object oriented paradigm and why many believed it was a major step forward. Yes, I am going back in time a little here, but in my opinion, objects are still a very powerful way to understand both the problem and solution domains, and I very much believe in them - until something better comes along. Having said all that, I am interested in functional languages as well - although I'm not sure that they have to be classified under a completely different paradigm.

It has only recently struck me how fortunate I have been to have worked with many talented people. My 'apprenticeship', under the wing of some very experienced mentors, provided me with a solid grounding and a varied toolbox of techniques, approaches and strategies for delivering software - but the more experienced I become, the more I realize I have to learn.

Object technology is one of the most misunderstood concepts in computer science, while simultaneously one of the simplest. There are really very few key ideas behind objects, but it is a subject that only becomes apparent to open minded developers over time. I do not claim to be an object expert, just open minded enough to keep learning. Part of the problem is unlearning the procedural habits of the past, a way of thinking most modern languages apparently still encourage.

1. Objects have the great property that they try to model things of interest in the real world. As such, they can be used to model the problem domain (forget about computers for a minute) as well as the solution domain. Use objects to understand the problem domain first. Get up to the white board and ask your customer about the key abstractions in the problem domain and their relationships to one another.
2. Their key advantage lies in their ability to break down complexity and talk to one another to get the job done. When an object talks to another, it sends a message - if the receiving object understands the message, it selects an appropriate method to execute and sends the answer back to the caller. The caller does not need to understand what is going on, only that the message it has sent gets a response.
3. You can speak to a number of objects in the same way, e.g. if you have a number of sprite objects that all require drawing, you don't have to know which is which; you can just send them all the 'Draw' message and, if they respond to the same protocol, they will do as asked. This is usually achieved by inheritance or interfaces in most statically typed languages.
4. Objects should focus their efforts on doing one thing well - if you find an object doing a calculation, formatting some text and rendering some shapes on a screen, it's doing too much.
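Points 2 and 3 can be sketched in a few lines - here in Python, where the 'same protocol' is plain duck typing (the Circle and Square names are invented for illustration):

```python
class Circle:
    def draw(self):
        return "drawing a circle"

class Square:
    def draw(self):
        return "drawing a square"

# The caller sends the same 'draw' message to every sprite without
# knowing or caring which concrete class answers it.
sprites = [Circle(), Square()]
for sprite in sprites:
    print(sprite.draw())
```

In a statically typed language the same effect would require a common base class or interface; the principle - message in, response out, internals hidden - is identical.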

A number of enlightening books on the subject from the last 30 years contain more than enough information to make most developers' lives far less complicated. In my experience, the majority of issues developers face are man-made, and using objects sensibly can go a long way to minimizing this 'accidental' complexity. I have heard it said that the promise of object technology didn't deliver - in reality, it never had a chance in a world where many didn't seek to understand it and influential organizations downplayed it for financial advantage.

It seems as if the diligent use of the OO paradigm is regressing, perhaps to be viewed as a fad, while we slip further into a dark and distant procedural past.

Monday, August 24, 2009

Just Venting...

It's been a while, but tonight I feel the need to vent. What is it about the need to automate everything that leaves the customer with less say in what happens with their service or product than before the so-called 'progress' that computerization has brought?

Tonight, I ordered something online. During the process of ordering, I was asked to sign up and create an account. After doing this, the system then had me go back to my inbox, click on the confirmation email and then sign in to the web site. On doing so, I noted that the system had dutifully remembered the state of my cart - so I proceeded to the payment stage. After I had completed the sale, I read the confirmation email and noted with dismay that I had ordered the Spanish version.

When I called the customer service line (at least a human answered), I was told that I would have to let the system dispatch the package and ship it across America to my home address so that I could refuse to accept it - because she had no way to stop the transaction!

Whatever happened to a quick call to hand over your card details - over and done with?

Is this really progress - I'm not so sure.

Saturday, April 18, 2009

Bridging the gap

This subject is somewhat related to an earlier post on prior context, but I like the metaphor, so here goes.

I am currently involved in an effort to get a team of young, eager developers to migrate to a new, object oriented technology stack. This exercise will be fraught with difficulties: apart from the obvious paradigm change (procedural to objects), there is also a new language and a 'better-practices' oriented way of working. My focus is really on encouraging learning from sources that I know and trust - books, blogs and pair programming with experienced folk who have already been there. In many respects I am fortunate to be working with enthusiastic people who want to change, but on the other hand, it really is a mammoth task that I cannot do for them - they must find a way to do it for themselves with the support of others.

A while back I was trying to explain something and used the metaphor of a bridge, because helping people learn new things is never as simple as it seems; it is loaded with prior context, some good, some less so. It is impossible to bridge a large gap without building supports periodically along the way. Mentoring or teaching someone with a large gap in their knowledge of the history of computer science places very high expectations on the student, which is unfair. Recently I have realized that it is not only unfair, but also impossible. I find myself wanting to skip much of the knowledge I have gained over the years - to go straight from A-Z with these students - but I cannot. The span is too large. BTW - A-Z does not assume there is a specific endpoint or that I have reached one; I am merely referring to a learning process from 'just starting out' to 'knowing something useful'.

In my case, I learned a lot in the 90's with the whole OO boom, including methodologies, notations and a bunch of other stuff, most of which I don't even think about now. That learning helped me to understand how I do what I do now. My point is, without much of that earlier learning providing the supports, I don't know how easy it would have been for me to bridge the knowledge gap personally.

Many good people in this business have taught many great ideas and some of the best are buried away in history, waiting to be rediscovered. For anyone serious about software, ignoring the past is perilous.

Saturday, April 11, 2009

Flex/AS Unit Tests

There is something I like about Rich Internet Application technologies. Although I have not looked at Silverlight yet, I find Flex quite appealing. I think it has something to do with nostalgia, taking me back to the good old days of rich client development. Having 'played around' with Flex to discover its capabilities, I now want to take it to the next stage and see if I can build something of production quality. This is clearly possible, judging by the applications beginning to emerge on the scene, but I have not yet seen a nice simple way of unit testing with Flex.

In particular, I am looking for the simplest way to mock or stub collaborators so that units can be isolated. I have been reading around a number of blogs and articles to see if there is a simple, tried and trusted way of doing mocks, as with easymock or jmock in Java land. Actionscript does not have the dynamic proxy features of Java, which would permit runtime implementation of interfaces, so the current offerings out there look a little cumbersome.

Vaguely recalling that Ruby had a nice clean way of achieving this, I wonder whether a similar concept could be used for Actionscript, since it is also a dynamic language. Will do some more research.
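The Ruby-style idea - replacing a method at runtime on a live object, with no proxies or generated classes - might look like this in Python (a sketch only; the class and test names are invented, and this is not an ActionScript library):

```python
class PriceService:
    # Stands in for a real collaborator that would be slow or
    # unavailable inside a unit test.
    def fetch_price(self, symbol):
        raise RuntimeError("would hit the network in production")

def test_unit_in_isolation():
    service = PriceService()
    # Because the language is dynamic, the method can simply be
    # overwritten on the instance at runtime - the unit under test
    # never touches the real implementation.
    service.fetch_price = lambda symbol: 42.0
    assert service.fetch_price("ACME") == 42.0

test_unit_in_isolation()
```

If ActionScript's dynamic features allow the same kind of runtime member replacement, a very lightweight stubbing style like this could be possible there too.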

Anyone out there have any ideas or experience on this subject?

Friday, April 10, 2009

Going Backwards?

I am starting to question why we are so backwards in our profession. Don't misunderstand me: I am way too humble to ever assume my own skills should be held in high regard; in fact, just the opposite - I have much to learn. I guess my only defense is that at least I realize it, and I am more than willing to learn how to improve!

Software development is very much a profession in its infancy. I am watching the software craftsmanship threads with interest and like what I hear, but alas I fear the masses will start to pick up on it as the latest buzzword and screw that up as well. At least the name isn't as downright tempting to misconstrue as agility. Thing is, while the profession is extremely immature, there are plenty of good ideas out there for solving the majority of business problems that most software projects could be categorized within. Excluding most NASA missions that involve anything other than ordering out for a Big Mac, Google's search optimization, or figuring out how to scale Facebook, most projects still involve simple CRUD with a few business rules thrown in. So why does it seem so hard to get anything done?

My own viewpoint centers on the fact that merely doing mundane stuff well is not thrilling to many developers, who want to use the latest REST based cloud buses (BTW - in case anyone is thinking of using this as the latest hot idea, it's really only my sarcasm shining through - so DON'T).

Oh for the day when the headlines of technology magazines can shout with joy - 'Developers deliver project with good code' - which seems to me to be far too great a challenge for the majority of organizations out there.

Monday, March 23, 2009

Prior Context

I have had some conversations recently that got me thinking about the knowledge and experience I have gained over the years by constantly learning and changing what I thought was a better approach to creating software. How do new graduates in computer science learn to program effectively? Paul Beckford said something to me the other day, and I guess it's quite obvious if you take the time to stop and think: people who care about what they do will find the time to invest in their chosen discipline. My current thought process had been centered around values and culture, and I think that caring is right in the mix - are they interrelated? I am sure they are - if you come from a culture where people care about one another, your values will reflect it.

So for those out there who do care, there is so much noise that even if you do your due diligence, how do you know what information to trust? Especially when some of the biggest technology players are out there with their sales hats on. Who is going to teach the up and coming new folk? This is why I like the idea of the apprenticeship - popular in England until the late 80's - which emphasized learning your craft from a master who had practiced it for years. While I view this as a great way to learn your chosen craft, there are still problems. As far as I know, the western world in particular does not seem to place much value on any career path where one has to learn on the job for years before being set free to earn a living on one's own. Today's emphasis is on immediate returns - taking as much as possible without putting in the effort to learn and care about what you do.

Individual motives and values are really the only thing that will dictate how good someone will be in their chosen profession. Do you care about what you do?

My feeling is that there are few who really know what they're doing, and they tend to just get on with it - because they enjoy it and gain fulfillment from it - that's why I entered software development too.

I face the challenge of getting a number of very capable and enthusiastic developers onto the right path of developing valuable software, when they have little experience of doing so. Can one person succeed in achieving this goal? I don't know the answer, but something leads me to think that if they follow a simple, tried and trusted formula, and they care, then it's possible. However, it's more their choice than mine; I see myself as a catalyst, there to provide support and encouragement, and to help or point them in the direction of someone who can. It is an exercise in futility to assume that 15+ years of experience can be learned overnight, because my experience was formed by trial and error, and by empirical feedback of good and bad things over many years. I also have the advantage of having learned many different things over the years that all have their place and helped to load up my arsenal of 'tools' to use as and when needed. This prior context is impossible to shortcut - but is that necessarily a bad thing?

Saturday, February 14, 2009

Simplicity in an ever more complex world

Uncle Bob has me thinking about software craftsmanship, some of the simple home truths he talks about in that Infoq presentation are very profound. Every day I look over the headlines on technical news websites and read about the latest distributed-cloud-enterprise-service-buses (sounds like a cool new way of getting to work in the morning).

In my opinion, we are a community that enjoys a challenge a little too much; then of course, there is always the powerful self-serving aspect - can I make this fashionable new technology work so I can put it on my resume? Our focus should be on establishing ourselves as a profession first - by mastering the basics and providing business value. It's an old tune, but the basics are done very poorly in software development, and we will not truly progress until we learn that mastery of the most basic skills of software construction will do more to further the 'professional' aspect of our chosen discipline than any other factor.

Specialists vs Generalists

I have been doing some background thinking on this subject for some time now. Many projects I have worked on are set up the way of the specialist, yet something just doesn't ring true whenever I work on a project staffed this way.

Esther Derby has this take on the subject and it got me thinking. Johanna Rothman has a different perspective on the subject.

Quite often, folk from different specialist backgrounds set up yet more artificial organizational boundaries, even though they may report to the same technical manager. A user interface specialist's work will be "thrown over the wall" to the business domain logic guy, and his work in turn to the database guy. No matter how well these folk know their chosen specialisms, it doesn't compensate for the lost communication channels, which suffer greatly from the adoption of this model. Oftentimes, generalists have good enough knowledge for the project, and even where they don't, pairing specialists with generalists could be a potential solution to this dilemma. In the agile world, one of the truisms for me is common code ownership - it helps to reduce egos and instill a sense of teamwork, as well as bringing more practical benefits such as reducing the "truck number" for a team.

Friday, February 13, 2009

Great Presentation

Really enjoyed the presentation Craftsmanship and Ethics from Uncle Bob on InfoQ. In my opinion, Bob has got it spot on. There are many of us out there who consider ourselves professional, yet can we say that software development can really be viewed as a profession? Especially when we bear in mind the massive costly failures and mediocre performance of most teams.

We have had the tools to do a much better job for years, thought leaders like Beck, Fowler, Bob and many others have pointed the way to a much more successful future in our industry, yet we still fail to follow good advice - why?