There’s loads of talk about “innovation.” Lots of people want to do it, lots of people think they’re doing it, consultants run courses on how to be innovative, and large organizations claim to promote innovation and to be innovative themselves. The assumption behind most of this “innovation” talk is that a wonderful bright idea that will change the world (or at least your organization or startup) can pop into anyone’s head. It’s new! It’s brilliant! We’re going to win big with this great new idea!
When you study software evolution, you get an entirely different picture of software-based “innovation.” Software evolution shows you that new ideas that work are extremely rare. Oh sure, there’s a flood of new ideas popping into people’s heads all the time. Mostly, they’re not new, and the new ones rarely work. The software concepts that make it big are, in the vast majority of cases, clear examples of existing patterns of software evolution, and have in most cases already been implemented in a different context.
I first encountered the mystery of software evolution in my first programming job.
I started programming in a course I took starting in 1966 in high school, taught by a math teacher who couldn’t himself program, but had convinced the school to let him teach the course, and had convinced a local company, a pioneering rocket engine company called Reaction Motors, to give us computer time on Saturdays. I had a textbook about FORTRAN, a steady stream of programming assignments and a computer on which to test my programs. It was great! I continued programming the following summer, as part of an NSF math camp I was able to attend. As I was nearing high school graduation the following year, I got lucky; Diane, a high school friend, talked about me with her father, who got me connected with a nearby company, and I landed a job there for the summer before starting at Harvard College in the fall of 1968.
The company was EMSI, Esso Mathematics and Systems, Inc., in Florham Park, NJ.
It was a service company, one of roughly 100 units of the Standard Oil of NJ (Esso) family of companies, devoted to applying math and computers to improving every aspect of the business. I was immediately thrown into the group that was developing optimization models for oil refinery operation. Our focus was the giant refinery in Venezuela.
What we had running was an implementation of a classic OR (Operations Research) technique called LP (linear programming), solved via the simplex algorithm first devised by George Dantzig in 1947. In this kind of model, there is a goal equation and a set of constraints. The goal equation calculated profit, using hundreds of contributing variables, including the prices you could sell products for and the costs of various inputs. The constraints were greater-than/less-than inequalities, each essentially describing some tiny aspect of how the refinery worked. The algorithm found the values of the variables that maximized the goal equation (profit) while satisfying all the constraint equations.
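To make that concrete, here is a minimal sketch of the same idea using SciPy’s `linprog`. Everything in it is invented for illustration: the two products, the per-barrel profits, and the supply and capacity numbers are hypothetical, not the EMSI model. It just shows the shape of an LP: a profit equation to maximize and a handful of inequality constraints to satisfy.

```python
# Toy refinery LP sketch (hypothetical numbers, not the EMSI model).
# Decision variables: barrels of jet fuel (x0) and heating oil (x1) to produce.
# Maximize profit subject to crude-supply and distillation-capacity constraints.
from scipy.optimize import linprog

# linprog minimizes, so negate the assumed per-barrel profits of $5 and $3.
profit = [-5.0, -3.0]

# Constraints (A_ub @ x <= b_ub):
#   crude used:    1.2*x0 + 1.0*x1 <= 1000 barrels of crude available
#   distillation:  0.5*x0 + 0.4*x1 <=  450 unit-hours of capacity
A_ub = [[1.2, 1.0],
        [0.5, 0.4]]
b_ub = [1000.0, 450.0]

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal product mix (barrels):", res.x)
print("maximum profit: $", -res.fun)   # undo the sign flip
```

The real model had hundreds of variables and constraints instead of two of each, but the mechanics are identical: the solver finds the one corner of the feasible region where profit peaks.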
The model was constantly being modified to make it more precise and more applicable to actual refinery operations. I had a variety of jobs, including writing new code and fixing bugs.
Since prices were such a key input to the LP model, we had a separate program, a Monte Carlo model, to forecast what they were likely to be in the future. I made enhancements and fixed bugs in this body of code as well.
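The Monte Carlo idea is simple enough to sketch in a few lines. Again, this is my own toy, not the EMSI program; the starting price, drift, volatility, and horizon are all assumptions. The technique is just: simulate many random future price paths and average the endpoints.

```python
# Minimal Monte Carlo price sketch (hypothetical parameters, for illustration).
# Estimate the expected price some months out by averaging many random paths.
import random

def simulate_price(start=3.0, months=12, drift=0.002, volatility=0.05):
    """One random price path: each month the price moves by a random factor."""
    price = start
    for _ in range(months):
        price *= 1.0 + drift + random.gauss(0.0, volatility)
    return price

trials = [simulate_price() for _ in range(100_000)]
expected = sum(trials) / len(trials)
print(f"estimated price in 12 months: ${expected:.2f}")
```

Feed distributions like this into the LP’s price variables and you get refinery settings that are optimal against expected future markets, not just yesterday’s.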
I was fortunate to be able to hitch a ride to work and back with a PhD who worked there and lived in my town. During the ride I would tap his deep knowledge. He pointed me to the journals in which advances in the various forms of OR were described, and I dove into them. The math was often above my head, but I was motivated to teach it to myself on a rolling, as-needed basis.
I thought this was a really cool way of running things. There were all sorts of controls in the refinery, controls that let you create more airline fuel and less heating oil, or any number of other trade-offs. It was amazing that you could compute the setting of all the control knobs that would produce the best mix of products that the market needed. Why would you ever run any operation any other way?? It would be simply ignorant and stupid.
I finished school, learned more about the way the world worked, and searched high and low for implementations of LP. Anywhere! If they were out there, they were doing a great job of hiding.
I was confused. How could this be? LP was math, doggone it. It yielded a provably optimal way of running a business. It had been proven in real-life production at Esso. Any other way of operating a business was clearly seat-of-the-pants, wet-finger-in-the-wind amateur hour; anyone whose operation was sizable enough to justify the effort would have to use this method if they weren’t plain-and-simple incompetent. But no one seemed to be using it! What was going on here??
This mystery was on my mind while I participated in one of the periodic AI crazes that have swept the world of with-it people. While I was still in college, Winograd published his MIT SHRDLU research, in which an “intelligent” robot would converse with a human in English about a world consisting solely of blocks. You could ask SHRDLU to “put the red block on top of the blue block” and it would do it; you could ask it questions about block world, and it would answer. Amazing! Super-practical! While this was happening, I wrote and submitted a thesis about how to structure knowledge inside an intelligent robot. All of it useless compared to LP and the associated OR techniques.
The craze was AI, and the mania lasted a few more years, generating a steady stream of "promising" results, none of them in the same universe of practicality and benefit as LP or any other OR optimization technique, which continued to be used only in "secret" little islands of astounding efficiency and productivity.
Later, in the 1970s, I first applied for a home mortgage. A key part of getting the mortgage was the interview with the loan officer. You had to pass muster to get the mortgage! Another ten years passed before OR-type models started to be used for credit decisions. When I next applied for a mortgage in 1981, I was still interviewed by a loan officer. By my next mortgage in 1987, the process was at least partly automated.
When I got into venture capital in the 1990s, I discovered that inventory levels and locations for high-value repair parts had only recently been optimized. I looked in detail at the pioneering company ProfitLogic, which did inventory and sale optimization for retail stores, answering questions like when which products should be put on what kind of sale, questions that had traditionally been answered by the local marketing “expert,” just as oil refineries had previously been run exclusively by experienced experts.
Only in the late 2010s did exactly the same LP models start being applied to medical scheduling, to optimize the use of things like infusion centers and operating rooms. Just as oil refineries in 1968 produced much more value from exactly the same crude oil inputs as a result of LP models, so infusion centers now handle 30% more throughput with exactly the same capital and human resources, using the very same LP models.
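The infusion-center version is the same mathematics wearing scrubs. Here is a hedged sketch, with every capacity number and appointment type invented for illustration: maximize patients served per day, subject to chair-hours and nurse-hours, with an integrality condition since you can’t book half an appointment.

```python
# Toy infusion-center scheduling LP (hypothetical numbers, not any real model).
# Decide how many 1-hour, 2-hour, and 4-hour infusions to book in a day so as
# to serve the most patients within chair-hours and nurse-hours capacity.
from scipy.optimize import linprog

c = [-1, -1, -1]              # maximize total patients; linprog minimizes, so negate
A_ub = [
    [1.0, 2.0, 4.0],          # chair-hours consumed by each appointment type
    [0.5, 0.75, 1.5],         # nurse-hours consumed by each appointment type
]
b_ub = [60.0, 24.0]           # e.g. 15 chairs * 4 usable hours; total nurse-hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3,
              integrality=[1, 1, 1],   # whole appointments only (SciPy >= 1.9)
              method="highs")
print("appointments per type:", res.x)
print("patients served:", int(round(-res.fun)))
```

Swap “barrels of jet fuel” for “infusion appointments” and the 1968 refinery model reappears, half a century later, essentially unchanged.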
Here’s the mystery: why did it take so blankety-blank long for LP models to be applied??? There has been no theoretical breakthrough. Yes, computers are less expensive, but given the scale of the opportunity, cost was never the obstacle. WHY??!! The answer is simple: there is no good answer. Except, of course, for the ever-relevant one of human ignorance, stupidity and sloth.
The example of LP optimization and its agonizingly long roll-out through different applications and industries over more than 50 years – a roll-out that is far from complete! – is a prime example of the reality of computer/software evolution. Among other things, it illustrates the point that many of the most impactful “innovations” are really “in-old-vations,” things that are just sitting there, proven in production and waiting for someone to apply them to one of the many domains that would benefit from them.
Here are a few cornerstones of computer software evolution:
- Software evolution resembles biological evolution only a little. Not-very-fit software species thrive in broad areas of application, unchallenged for years or decades, while vastly superior ones rule the roost not far away. The only reason why the superior software species don’t migrate to the attractive new place appears to be human ignorance and inertia.
- Software evolution resembles the much-derided theory of “intelligent design” quite a bit, if you make a slight edit and call it “un-intelligent, un-educated design.” A “superior” (HA!) being does indeed do the designing of the software, in the form of highly paid software professionals.
- When software appears in a new “land” (platform, business domain), it most often starts evolution all over again, first appearing in classic primitive forms, and then slowly re-evolving through stages already traversed in the past in other “lands.” This persistent phenomenon supports the “unintelligent design with blinders on” theory of software evolution.
I will explain and illustrate these points in future posts and a forthcoming book.