Is it true that ‘worse is better’? January 29, 2006. Posted by Matsu in Management, Technology.
I’m a slow reader. At least it takes me a long time to finish a book. One reason is I have so little time to devote to reading anything that is not necessary for my job. The other reason is I tend to read a lot of different books at once. It’s not that I lose interest or that one book bores me so I end up switching to another book. It’s not that, at all. Instead, I have many interests and so I end up reading parts of several books when I do have time for elective reading.
Today, I finished reading Hackers & Painters by Paul Graham. I found many parts of it interesting and amusing. I say amusing because he opens with some humorous insights on why nerds are the way they are and how they grow up to become hackers. He also makes some interesting arguments about software development and ends with the characteristics of a good programming language.
I am not a professional programmer, though I have written many programs in my life. In fact, earlier in my career I developed several custom applications for area businesses. I also developed an application that was marketed and sold nationwide to newspaper publishers. But, I have not done a lot of programming in the past 10 years. So, when I read in Graham’s book about the concept of ‘worse is better,’ it caught my attention. This evening, I stumbled across it while I was following a link in a blog. That’s twice in one day I read about the same programming concept. Now, I can’t help but comment on it.
Graham writes that one of the main ideas of the ‘worse is better’ philosophy of programming is that, if you are creating new software, you should get a prototype of the program in front of users as soon as possible. The programmer should not try to produce a polished product or conduct full testing and debugging first. Instead, they should produce a proof of concept and get that in front of the end user for feedback. This is the software developer’s equivalent of a writer’s first draft. I can see merit in such an approach, though I don’t see how it would work for large commercial projects. It would seem that this model would work best for corporate ‘in-house’ projects, or maybe for open source projects.
As I mentioned, while I was reading a blog post I ended up going down a rabbit trail that led me to the author of this concept of worse is better. Richard Gabriel first coined the phrase in a paper called “Lisp: Good News, Bad News, How to Win Big.” Since writing that, he has written and published a separate document on the topic, and that can be found here. Gabriel’s arguments are summed up on this Wikipedia page.
Here are the four primary attributes of Worse is Better, in order of priority:

1. Simplicity: the design must be simple, in both implementation and interface, with simplicity of implementation being the most important consideration.
2. Correctness: the design should be correct in all observable aspects, though it is slightly better to be simple than correct.
3. Consistency: the design must not be overly inconsistent; consistency can be sacrificed for simplicity in some cases.
4. Completeness: the design must cover as many important situations as is practical, but completeness can be sacrificed whenever implementation simplicity is jeopardized.
Gabriel makes an interesting argument that completeness can be sacrificed in order to maintain simplicity. In fact, completeness must be sacrificed in order to get the project in front of the users, but the other three attributes should not be sacrificed in the process.
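To make the tradeoff concrete, Gabriel’s classic illustration is the Unix read() system call, which is allowed to return fewer bytes than requested. The primitive stays simple and ‘incomplete,’ and the small burden of retrying is pushed onto the caller. Here is a minimal Python sketch of that caller-side pattern (the function name `read_exactly` is my own, purely for illustration):

```python
import io

def read_exactly(stream, n):
    """Caller-side retry loop: the underlying read() primitive is kept
    simple and may return fewer bytes than asked for, so the caller
    absorbs the extra work of looping until it has enough data."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = stream.read(remaining)  # may return fewer bytes than requested
        if not chunk:                   # end of stream: give back what we have
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

data = read_exactly(io.BytesIO(b"hello world"), 5)
print(data)  # b'hello'
```

The design choice is exactly the ‘New Jersey’ one: the interface is slightly worse for the caller, but the implementation underneath stays simple, and in practice everyone writes this little loop once and moves on.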
I think this approach has a lot of validity in today’s environment, where so much emphasis is placed on software that is accessed or used over the Internet. And where technology intersects with art in the form of web page design, I would argue that this approach is necessary for projects to be successful and for end users to be satisfied with the outcome.
Web design should not be created and refined in a vacuum any more than software should be developed that way. The developers of both the artistic and the technical aspects of a web page must involve the users to ensure the best possible outcome. Where that doesn’t happen, I believe the outcome is less than desirable. I think of this as more of a case of ‘Sooner is better’ or ‘Rougher is better.’ But ‘Worse is better’ catches one’s attention because of the apparent contradiction. So, I’ll stick with that.