Most, if not all, of the software developers I know, on hearing the name Fred Brooks, immediately and correctly identify him as the author of 'The Mythical Man-Month'. Of those, relatively few realise that the mythical man-month itself, important as it is, was only one of several ideas presented in that legendary tome.
Another phenomenally important idea, described by Brooks as paramount in system design, is conceptual integrity: the quality of a system that reflects a single set of design ideas, as opposed to a collection of independent and uncoordinated ones.
Conceptual integrity can only be maintained by separating architecture from implementation; that is, by ensuring that architectural decisions are made by a small number of people, preferably a single architect, with the authority to approve or deny the inclusion of features that do not fit seamlessly with the system architecture.
I think anyone with an IQ higher than their European shoe size recognizes this as common sense, which raises the question: why do so few systems exhibit this quality?
It all comes down to a lack of authority and the behaviour of inexperienced or perhaps arrogant developers in such an environment.
Having been asked to design a system, one expects to evolve the design as new features are added and deployment environments change, but it is imperative that the architect have the authority to deny or defer a feature until the system can accommodate it; it is the visibility of these features and environmental conditions that allows the system to evolve in a controlled and consistent manner.
If the authority of the architect is circumvented, the conceptual integrity of the system will inevitably fail and ultimately the system will become intractable and a candidate for yet another rewrite.
Without explicit authority, it is common for the initial design to be seen as the foundation upon which the house is to be built, a static, solid base, never to be seen again. The application is seen as incidental to the system, not part of it.
How does this happen? As you would expect, it comes down to human weakness; let's take a look at a couple of scenarios I have experienced.
The technical director
In some cases there exists, in the management chain, a beast. The beast will appear generous, and the beast will cajole with soft words and mighty bonus checks, but the beast has a secret! The beast used to be you: (s)he used to be a developer, probably a damn good one, but it is this very background that persuades the beast that (s)he understands the system and what it can or cannot tolerate. Combine that with the fact that the beast's authority supersedes the architect's, and the descent into hell begins. If the beast is amongst you there is very little you can do but look forward to the next rewrite. Note, not all technical managers are the beast, but the beast dwells within them.
Look before you leap
In some cases, the conceptual integrity of a system can be undone by a developer who refuses to become familiar with the system. For instance, if a particular subsystem does not exactly match the needs of the developer, it will be abandoned in favour of new code specific to the task. The developer delivers the feature on time, management is happy, but the system is now less than it was.
Look what I can do
We have all said or thought, at one time or another: wouldn't it be awesome if the system did THIS?
THIS being some function that is not part of the design but, to you at least, would make the product better.
The right thing to do with THIS is submit it as a feature request, and hope it makes it into the system.
I am not saying that you cannot engage in skunkworks projects, but unless you have enough experience to ensure that the feature is implemented in a manner consistent with the system design, you can guarantee that your must-have feature will be the source of significant bugs and instability.
Whether these activities are motivated by a genuine desire to make the system all it can be, or they are the arrogant actions of a DS-watching* scumbag, the potential for drama is the same. If you are going to engage in little side projects, it might be worth having a quick chat with the architect about your plans.
* DS watcher -- military phrase implying that a soldier only puts in effort when he knows his superiors (the Directing Staff) can see him. Commonly found bleeding in the shower block.
The next time you look at a code base and wring your hands, furrow your brow and vilify the original development team, try to remember that the system probably started life with conceptual integrity and was undone by ignorance of that quality; then modify your own behaviour accordingly.
Wednesday, January 30, 2008
Tuesday, January 22, 2008
The Efficient Market Hypothesis
Much of finance theory depends on an efficient market, essentially the condition whereby all participants have access to the same information at the same time, more or less. It is the operation of the efficient market that eliminates arbitrage opportunities.
In my youth I had an exceptional finance lecturer, Eve Hicks, whose position on the efficient market was that it clearly didn't exist but, in the absence of anything better, formed a conceptual framework upon which finance theory could be built. I have respected and subscribed to that point of view ever since, even believing that the fall of open outcry and the emergence of the electronic exchange has made the efficient market that little bit more real.
How then, if the efficient market is even to be considered a valid model, did the sub-prime collapse catch so many 'experienced' traders off guard, the collective genius at Goldman Sachs excluded of course? Surely, with equal access to the same information, every risk department of every bank and fund should have been hedging against the CDOs that ultimately triggered the current volatility.
The answer lies not in flaws of the efficient market hypothesis but in the plain fact that success flatters to deceive. For many, the CDO proved to be a license to print money, and while the good times are that good, one tends to overlook glaring compromises or risk factors that would otherwise be great cause for concern.
And so it is with technology: as a product becomes a runaway success, the impetus is to build on that success, and the suggestion that the foundation may need shoring up is eschewed by management and marketing in favour of 'more of the things that made us successful'.
As with CDOs, when reality hits and the wheels come off, do not expect management and marketing to offer a sincere 'my bad'; they were simply doing their job. As software engineers, the code base is our efficient market: when we see gathering problems, it is our responsibility to begin hedging against them, and refactoring is our derivative.
Refactoring needs to become an everyday part of your role as a software engineer; you will never be given large blocks of time specifically for refactoring, it is a day-to-day responsibility. Ignore it at your peril, unless of course there is someone else you can blame.
Sunday, January 13, 2008
The hundred year language
I have just finished reading a very interesting post from Tim Sweeney:
http://lambda-the-ultimate.org/classic/message6475.html#6501
While I don't necessarily agree with him regarding the adoption of ML (OCaml / SML / F#) and Haskell, I think the salient point he makes is that, in all likelihood, Lisp will outlive all the popular languages of today. That is not to say that Lisp will become the hundred-year language, merely that it will still have an enthusiastic user base long after C++ and Java et al. are pushing up daisies.
What is it about Lisp that is so compelling? They say that unless you have used it you will never know, and that once you have used it you will become a better programmer, regardless of language. From my own experience I would say that there is some truth to this; the barrier to adoption really comes down to syntax. Just as with Haskell, Erlang and OCaml, the syntax does not conform to the majority dialect and therefore faces an uphill battle for wide-scale adoption.
This is one of the reasons that Dylan, despite its banishment from Apple, continues to pique my interest. It's difficult to see a future for Dylan, but every time I decide to move on I find myself drawn back into the fold.
I think with a concerted effort, perhaps a Carbon/Cocoa back end for DUIM and a Dylan port of Uncommon Web, there may yet be traction. By the way, who is the patron saint of lost causes?
Thursday, January 10, 2008
A lost opportunity
Hindsight, as they say, is 20/20.
Clearly, the acquisition of NeXT positioned OpenStep and Objective-C as the immediate tool chain for Apple.
Don't get me wrong, I admire Objective-C and have enjoyed programming in it for many years; moreover, Apple has some exceptional people who are trying to drag the language into the 21st century, kicking and screaming.
But one of the casualties of the merger and, to be fair, other circumstances, was an alternative programming language which, at the time, may have seemed a little odd but amongst the current crop of scripting and functional languages seems positively normal: Dylan.
Dylan is effectively a Lisp with a combination of C and Pascal syntax. It is an object-oriented language which uses the same semantics as CLOS.
When Dylan was in its infancy it suffered many of the same problems as Java did at the time, namely performance and resource usage; Java, however, had marketing muscle and a robust Sun Microsystems behind it. Dylan was merely another project that was not critical to the survival of Apple.
While Objective-C continues to serve Apple well, one cannot help but wonder whether, now that Apple is no longer precariously balanced on the precipice, Dylan might be considered a much more appropriate choice for the future.
Many of the modern features that are being added to Objective-C are already present in Dylan, and it remains to be seen whether Objective-C is malleable enough to have those features added in a natural, seamless way.
Below is a contrived example from the Dylan wiki that serves to illustrate the sophistication of the Dylan programming language. I ask you only to review it and consider how similar facilities might be made available in Objective-C; my hypothesis is that you may find some significant holes.
define constant <zero> = singleton(0); // <zero> is a singleton type whose only instance is the integer 0.

define method factorial( n :: <zero> ) => ( n! :: <integer> )
  // Base case: chosen by dispatch when the argument is exactly 0.
  1
end;

define method factorial( n :: limited( <integer>, max: -1) )
  // Error case: chosen by dispatch for any negative integer.
  error( "factorial: Bad Argument" );
end;

define method factorial( n :: <integer> ) => ( n! :: <integer> )
  // General case: chosen for the remaining (positive) integers.
  n * factorial( n - 1 )
end;
The purpose of the above is not to hold up functional programming as the most appropriate paradigm, clearly we can be more efficient than this, but it is a concise example of how polymorphism can be used to avoid the conditional code that often plagues the behaviours attributed to objects in Objective-C.
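As an aside, and purely as an illustrative sketch of my own rather than anything from the Dylan wiki, the same three-way split reads quite naturally in Haskell (a language I mention elsewhere on this blog): the cases are separate defining equations and a guard, so the branching lives at the definition level rather than inside a single method body.

-- Illustrative sketch only: the same cases as separate equations and a guard.
factorial :: Integer -> Integer
factorial 0 = 1                                    -- base case, like the <zero> method
factorial n
  | n < 0     = error "factorial: Bad Argument"    -- reject negative arguments
  | otherwise = n * factorial (n - 1)              -- general recursive case

This is not multiple dispatch in the Dylan sense, but it makes the same point: callers of factorial never see the case analysis.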
Wednesday, January 9, 2008
Tipping point or tripping point
Yesterday Apple released a new base Mac Pro configuration with 8 processing cores. These are truly workstation-class machines at, albeit high-end, desktop prices.
If we take for granted that Leopard, Vista or Linux can make efficient use of the raw processing power available, any given application should perform better on these machines than on the previous generation; however, I think many people will be surprised by how modest those performance gains are. Unless the application was developed with concurrency in mind, most of the cores will likely sit idle.
To me these machines represent a tipping point, a point in time when concurrency becomes mainstream. While I accept that these machines are targeted at professional users, how long do you think it will be before your laptop has a similar core count? Two years, maybe three?
I hope that means that functional programming languages gain greater momentum and acceptance, but at the very least, we, as a community, need to accept that knowing what a mutex is does not a multithreaded programmer make. At a minimum we should be looking to trade in the mutex hammer for the Software Transactional Memory rapier.
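To make that trade concrete, here is a minimal sketch of my own using GHC's stm library (not anything discussed above): a transfer between two balances expressed as a single atomic transaction over transactional variables, with no lock to acquire, order correctly or forget to release.

import Control.Concurrent.STM

-- Minimal sketch using GHC's stm package: the whole transfer is one atomic
-- transaction composed from reads and writes of transactional variables.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)   -- runs indivisibly, retried on conflict
  readTVarIO a >>= print         -- 60
  readTVarIO b >>= print         -- 40

The point is composability: two transfers can be combined into one larger atomically block without worrying about lock ordering, which is exactly the property the mutex hammer lacks.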
Alas, for some software engineers this is not a tipping point but a tripping point: one paradigm shift too many in a career littered with the wreckage of objects, aspects, templates, functors and categories. There is no shame in this; to you I ask only this question: can I have a raise?
Friday, January 4, 2008
Finding my way
In June 1996 I cleaned my rifle for the last time and said goodbye to my platoon and brother officers to engage in a career in IT.
It was a difficult and probably poor decision, but one cannot look back too far before the mists of time confuse what was with what one wanted it to be.
One of the most difficult challenges has been maintaining an enthusiasm for my current pursuit of the kind that was taken for granted when one lived on an adrenalin drip.
For me, I found this in the ongoing discovery of programming paradigms and methodologies that, in one fell swoop, would increase my productivity and, along with it, my ability to tackle increasingly complex problems.
When I exhausted Visual Basic (not a difficult task) and moved on to C++, the transition from object-based to object-oriented was enlightening, exciting and, to a certain extent, empowering, leading me ultimately to Java and then Objective-C. From there I went through a long period of stagnation.
C# was interesting and I enjoyed it far more than Java, but it was nothing new.
Ruby seemed promising and Rails has certainly changed the web development framework landscape but, again, nothing new.
I dabbled with lots of different technologies while plying my stock in trade with various employers, but nothing really excited me until a long-time infatuation with Lisp was finally sated with Peter Seibel's excellent Practical Common Lisp and subsequently Paul Graham's On Lisp. My only problem was that I couldn't see a way of making any money with Lisp; however, I had been bitten by the functional paradigm that Lisp espouses but does not necessarily enforce.
Since then I have enjoyed, as time and tide allow, several dalliances with functional programming languages and more significant outings with OCaml and Haskell; it has been so much fun that it has piqued my interest beyond programming languages and encouraged some tentative steps into category theory and its application to computer science.
With the advent of multi-core CPUs and the promise of many-core processors in the near future, it occurs to me that my interest in functional programming languages could not have happened at a more opportune time. It is not that imperative programming languages are not as capable, merely that functional programming languages seem to be a more natural fit.
More to follow.