Software has a real problem. Let me explain.
While a huge gulf separates the novices from the experts in every field, I like to think that the widespread simple knowledge in most fields is like writing as taught to elementary school students. Fifth graders use the same alphabet that I use; while my vocabulary is more extensive and my use of grammar more elaborate, in both cases what I do extends and builds upon what the kids do. Fifth graders don’t need to unlearn the “bad” letters when they get to high school. Books for children can be well-written, and understood and enjoyed by both children and adults. Books for adults are supersets of children’s books in terms of vocabulary, sentence structure and experience. And while professional writers may not make extensive use of notes written on 3 by 5 cards the way English teachers used to make kids do in high school, the principles of organization are the same.
In software, however, the doctrines and received methods taught in school and, worse, practiced among mainstream professionals are simply inadequate to support doing a really good job; the best people don’t extend them, they ignore them and in various ways violate their principles. The typical methods for performing the quality assurance and testing functions, for example, are counter-productive. They don’t need correction – they need complete replacement. They focus on the wrong issues, they rest on the wrong concepts, and the most widely used methods and tools simply don’t get the job done. The same is true of how the relationship between programs and data is taught and practiced – passing by reference and by value, implicit and explicit. Other books in this series go into detail on some of these subjects.
Things are so bad in software that the most experienced and accomplished people don’t even agree on what makes a program “good.” Given a set of programs that don’t crash and that meet a set of requirements, how do you rank them in order of goodness? Is goodness proportional to how deep the inheritance trees are? To how well the programs are documented? To how sparingly the evil “go to” mechanism is used? To the test coverage, or the number of unit tests? To how few machine resources they consume? To whether they’ve been written in this language or that?
I propose that there is, with a few common-sense qualifications, a measure of software goodness we can all agree to. I suggest this isn’t a new idea, but one that many people have sensed and acted on. The principle even makes common sense! The only thing it lacks is articulation and discussion. Here is a non-technical explanation of it. Here is one of my attempts to state the principle in a historic and more technical context. Here is a deeper explanation of the context and background.
This is not an abstract, nice-to-have concern. Unless you know what “goodness” is in a program, how can you measure whether you’re going in the right direction? Think about target shooting: without a target, how can you decide how close you are, and therefore how accurate a shooter you are? How can you compare two different shots?
Having an explicit agreement of what makes a program “good” is indispensable to making good software, and making it effectively and efficiently.