For a variety of understandable reasons, software is perceived as a science. In academia, it's taught in the Computer Science department, often as part of the math department. What could be more precise and scientific than that?
Whatever its pretensions, software is anything but scientific. It's mostly driven by fashions and fads, led by "experts" who promote theories that sound good when described -- but which lack any form of scientific process, testing or evidence.
Software diseases will continue to severely hamper our computer systems until we wake from our long, pre-scientific sleep. We will know we're making progress when software practice rises at least to the level medicine had achieved 150 years ago. See these examples: the history of scurvy; the history of bloodletting; Yahoo and Hadoop.
The Evolution of Science
Science is not one thing, although the core principles of hypotheses, testing and evidence are always the same. Sciences don't pop up full-grown from nothing. Each field of human endeavor evolves towards being a science (or not) at its own time and pace. There is typically a long gestation period during which resistance is deep and widespread. Physics is a classic example.
Even when an area of science is well-established, the temptation to simply declare and assert the truth of something without careful proof remains strong and happens all too often. Science is something that human beings do, so it's never "perfect." It's also never "done." The physics establishment's initial resistance to the now-accepted facts that time slows as an object approaches the speed of light, and that light has no mass but nonetheless exists and travels at a measurable speed, is a classic example of the fits-and-starts evolution of even the well-established science of physics.
It was a long, hard slog for medicine to emerge from its pre-scientific state. While there is plenty of room for improvement, as I have often pointed out in this blog, medicine has clearly and explicitly embraced scientific discipline in the large majority of its practices, with, of course, the occasional embarrassing slippage.
The non-evolution of software towards science
Let's compare the emergence of powered flight as a science to the methods for building software projects. Here are the key points:
- Powered flight was widely recognized as important more than 100 years ago. There were widely accepted experts, and the entire establishment gave them money and support. After a couple of spectacular failures by the experts, the obscure people who actually figured out how to create a heavier-than-air flying machine got it done, and their methods were soon universally followed. Here is the story in more detail.
- Building effective software is widely recognized as important today. There are widely accepted experts, whose methods are taught in schools, practiced in all major institutions and mandated by government regulations. After spectacular, repeated failures, everyone says "oh, that's software, what can you do," and moves on. Meanwhile, sometimes-obscure people build amazing new software quickly and well, most often using unauthorized methods. Their software is widely used, and their companies are acquired by the big organizations that can't build anything. Nothing changes. Here is an example and details.
To take another example, while there are many ways that the science of medical drug development could be improved, there is little doubt that it is a scientific venture. In terms of science, in spite of its problems, limitations and inefficiencies, drug development is probably a hundred years ahead of "computer science" in general and software development in particular. See this for a comparison.
If software were a science
Think of a list of established principles in software -- the things that, if software were in fact a science, would be like the basic equations of non-relativistic motion. What's on the list? I suspect it includes things like: object-oriented programming, comprehensive test automation, architecting for scalability.
Now think of a list of hot new methods or techniques in software -- things that are widely praised but still early in widespread adoption. The list may include, depending on the circles in which you travel, things like micro-services, the Clojure language, Agile methodology with Scrum, and test-driven development.
Which of the items on either list -- your version of the lists, not mine -- went through anything like this process:
There was a bad problem that accepted methods weren't solving. People hypothesized an underlying cause and/or a cure, and tested it first on a small scale and then on a larger scale. The evidence from A:B comparisons was overwhelming that the cure was effective, so it became accepted.
Or,
There were observations that didn't fit existing theories, data that wasn't explained, or discrepancies that couldn't be accounted for. Someone came up with a theory that made sense out of the rogue data. Others formulated the theory exactly and conducted careful experiments; the results were made public, and maybe there was a period of refinement. Finally, the new theory was accepted, because it was experimentally proven to account for all the measured data, something the old theory could not do.
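For concreteness, here is a minimal sketch -- in Python, with numbers and names that are entirely my own invention -- of the arithmetic behind the first kind of vetting: a two-proportion z-test comparing defect rates under an accepted practice (A) and a proposed cure (B).

```python
import math

def two_proportion_z(defects_a, n_a, defects_b, n_b):
    """z-statistic comparing defect rates under practices A and B."""
    p_a = defects_a / n_a                       # observed rate, practice A
    p_b = defects_b / n_b                       # observed rate, practice B
    p = (defects_a + defects_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Entirely hypothetical data: defective modules out of modules shipped
# by a team using the accepted method (A) and one using the proposed cure (B).
z = two_proportion_z(defects_a=30, n_a=100, defects_b=18, n_b=100)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```

There is nothing exotic here; it's the sort of comparison medicine has required for roughly a century. The point is that, in software, nobody bothers to run anything like it before a practice is declared a "best practice."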
Anyone with reasonable software experience knows the answer to this simple question: software doesn't work that way! Not even a little bit! Instead, new practices are invented, promoted and sometimes accepted into common practice. In no case is there a scientific vetting process! People just accept the theory because
- it makes sense to them, or
- it's what they've been taught, or
- it's required by the mandated practice of the group in which they work, or
- it somehow advances their career or enhances their prestige
Sometimes, happily, software fads simply fade away, with as little reason as they started. A fairly recent example is pair programming, which I describe and examine here.
In the face of this evidence, you may swallow hard, admit that software may not be a science, and insist that it is nonetheless an established discipline with widely accepted standards and processes -- as you can see, for example, in the FDA software regulations. Sadly, that makes it even worse, if such a thing is possible to imagine. The standards and processes that constitute modern software practice were taken from other fields and jammed onto software. They don't fit and they don't work.
Conclusion
No testing. No hypothesis with controlled experiment. No evidence. No process that resembles either medicine (we know there's a problem, we have a possible solution, let's prove it works before using it widely) or physics (we have this data we can't explain, let's propose a theory that accounts for it and run experiments that will prove it right or wrong) or anything else.
You can say that building software is part of "computer science" until you're blue in the face. You can require CS degrees for your new hires. But the evidence is that software is, without question, pre-scientific.
We need to at least start building towards a true Science of Software.