For the last two days Judd has been a passenger on the Usability Express here at Intuit. I was in Usability last week for one of my projects, but we only ran a handful of sessions over two days. Judd’s group is doing more than that for his project, and I’m currently sitting in the lab observing a subject. The labs here are pretty cool, and I think it’s great that the programmers get the chance to actually see end users interact with designs and prototypes.
Anyway, I have to ask any programmers out there the following question: if you were told that using syntactical language feature X in your code carries a higher risk of bugs, and if it were logically trivial NOT to use that feature, would you continue to use it? The answer for many programmers is apparently “yes.” What bothers me most is the opinion, “Well, feature X CAN be used in bad ways that promote defects, but we’re using it correctly, so we don’t have to worry.” In that case, should a bug manifest itself in that code, how will the programmer quickly establish root cause? How could they, period, if they’ve pre-emptively declared their most suspect code shippable?
I think that component and application design should flow from certain rules. Of course you CAN do X, but if it produces untraceable bugs a nontrivial percentage of the time, why would you even bother? Figure out a better way. If you’re chained to an existing codebase, you could at the very least do it differently from here on out. I’m not talking about pattern usage or some other overarching architectural concern. I’m talking about not using ‘goto’ in high-level languages, or not using for(;;) unless you are a certifiable madman.
I’m sure 99% of the readers won’t understand this, but part of the reason this blog exists is to vent my thoughts, and sometimes my thoughts don’t necessarily track with WWF and stupid, vapid movies like Baby Geniuses.