When a new “platform” emerges (UNIX, Windows, Web, Apps), look at any application area and how it evolved on prior platforms: the application’s functionality will emerge on the new platform in roughly the same order, though often on a compressed timescale. Which functionality is relevant depends on the particular application area. This concept applies to both system software and application software.
The pattern is: functionality emerges on a new platform in roughly the same order as it emerged on earlier platforms. The timescale of the emergence may be compressed; the important aspect of the pattern isn’t the timing but the order. The pattern means that functional steps are rarely skipped – what was next last time is also next this time. The pattern also means that when someone tries to introduce functionality too soon, before the functionality that preceded it on prior platforms is generally available, the market will not accept it.
While this pattern takes a good deal of knowledge and judgment to notice and apply, I have consistently been impressed by its predictive power. By following the pattern, you can be pretty confident that you’re building proven functionality, and that you’re following a pattern of success.
I have noticed a couple of danger points here. When a company is too aware of the pattern, it is easy to “get ahead of itself” and, more importantly, ahead of the market by solving problems that certainly will become important, but that the market doesn’t know it has yet. The “same order” part of the pattern is important; building ahead of the market appears to yield few business benefits.
On the other hand, without knowledge of the pattern, it is easy to make up all sorts of things you think people might want and build them, only to find out later that you wasted time and money because the functionality you built never becomes part of the accepted function set.
Really great companies, the ones with lots of creative people and the ability to execute, who listen closely to market reactions to what they’re doing, don’t “need” to know about this pattern. However, for the rest of us mortals down here, having a “cheat sheet” for the important features the market will be demanding next can prove awfully helpful.
Basic Example: operating systems
IBM famously created an operating system for its mainframe line of computers, OS/360. It had the capability of running multiple jobs at once, and its multi-tasking abilities grew and became more sophisticated through the 1970s.
Eventually a transaction monitor, CICS, was written and became a de facto part of the operating system for applications with lots of interactive users. As the operating system was used, it became evident that various access methods for storage and communications needed to be separate from the core, and so clear interfaces were created, and the notion of completely self-contained access methods (for example, a file system) as replaceable units was supported. A strong security system was not part of the early versions of the operating system; as the need for one became critical, strong external modules were written and support for security was added to the core. While there was a “main” operating system, alternative operating systems were written for various purposes, and a virtual operating system was written to actually run on the “bare metal.” With VM (the virtual OS), you could devote most of a machine’s resources to the production users, while letting some of the users run on a completely different operating system.
While all this was taking place, people were studying and experimenting in university environments, deciding just what an operating system was and was not, what the best ways to build one were, and so on.
Before very long, minicomputers were invented; these were basically stripped-down mainframes, with all sorts of features and functions missing – but they were cheap. And, since each had a unique instruction set, each minicomputer needed an operating system. Programmers were hired, and those programmers, of course, ignored the mainframe operating systems and built simple, cheap OS’s to go along with the cheap hardware. Surprise, surprise, those cheap OS’s resembled nothing so much as – the first generation of mainframe operating systems! But people quickly discovered the limitations, just as they had before, and set about making the same set of enhancements that the previous generation of pioneering programmers had made. Within ten years, they had re-invented many of the important mainframe OS concepts and were on the way to building the rest.
With all this knowledge of operating systems floating around and pretty easily available, what do you suppose happened when people took the early microprocessor chips and made them into microcomputers? Naturally, they understood the state of the art of operating systems theory and practice, and either adapted an existing OS (operating systems were starting to be built in high-level languages by then) or built one that took all this knowledge and applied it, with appropriate adjustments, to the new environment, right? Bzzzzt! Of course not!
What the kids who were faced with the task did was start from scratch, not only in terms of code but also in terms of knowledge. They didn’t stand on the shoulders of giants; they didn’t learn from the experiences of the many who preceded them; they built OS’s as though they were the first programmers who had ever tried to do such a thing. And the result was pretty much like early mainframe (even pre-360!) operating systems. There was no serious memory protection or address mapping; there was no real concept of multiple users, multiple levels and types of users, or any real security; no multi-tasking; the access methods were hard-wired in; and so on. The limitations and problems emerged pretty quickly, and so did add-on rubber-band-and-baling-wire patches, just like in earlier days.
It’s a good thing that IBM came along at this point and brought commercial order and education to the emerging microcomputer market. When they came out with the IBM PC, they not only legitimized the market; they also had deep history with mainframes and minicomputers. They employed true experts who knew operating systems inside and out. They had a research division, where there were people who could tell you what the operating systems of the future would look like. So it makes sense that they would get those experts together, and that those experts would create a small but efficient multi-tasking kernel, common interfaces for installable access methods, and many other appropriate variations on all the modern operating systems concepts. The last thing a company as smart, educated, and astute as IBM would do was make an exclusive deal with a small company that had never built an operating system and had just bought, on the cheap, the rights to a half-baked excuse for a primitive first-generation OS, and make that the IBM-blessed … Wait! … that’s what they did do! Arrrgggghhh!
Explanation
One might well ask, how can a pattern like this continue to have predictive power? Why wouldn’t the people who develop on a new platform simply take a little time to examine the relevant applications on the older platforms, and leap to the state of the art? Why wouldn’t customers demand it?
It is hard to know for sure, but I think there are a couple of main factors at work, and there is evidence for the relevance of each of them.
The first factor is the developers. It is well known that most developers learn a platform and then stick with it for an extended period of time, basically as long as they can. The reason is simple: they are experts on the platform they already know, and therefore have prestige and make more money than they would as novices on a platform they’re just learning. I speculate that this is one of the many factors contributing to the rapid migration of ambitious programmers into management, where they can advance without being tied to a platform quite so much. So who learns the new platforms? With few exceptions, new people. If you’re just entering the industry, you are counseled to learn the hot new languages; you tend to be sensitive to where the demand and rising salaries are. Still, you expect, and are paid, entry-level wages, along with most other people (except managers). Why should the experienced programmers jump to the new platform? They would have to compete with hard-working young people, their knowledge of the older platform would be considered a liability, and on top of everything else, they’d have to take a pay cut.
The result is that no one working on the new platform has an in-depth, working knowledge of the applications on the older platform, and at least in part because of this, everyone considers knowledge and use of the new platform to be vastly more important than knowledge of an old application on an obsolete platform. So they ignore it! As a result, they dive in and attempt to automate the application area “from scratch.” Their attempts usually end up quite close to every first-generation program for that application on past platforms, because it turns out that the determining factor isn’t the platform, it’s the business problem. They proceed to re-discover, step by step, the reasons why the first generation was inadequate and had to be supplanted by a second generation, and so on.
The second factor is the buyers. When a new platform emerges, most buyers simply ignore it. Why pay attention? It’s a toy, there are no good applications, and so on. The few buyers who do pay attention tend to be crazy, early-adopter types who just love the experience of trying new things. Like the programmers, they tend to care about the platform more than the application – otherwise, they wouldn’t even consider buying what is typically a seriously immature application on the new platform. But they can only buy what’s being sold, and so they choose among the inadequate applications. Because they don’t care about applications as much as platforms, they don’t even ask for features they know could only be present in the mature applications for older platforms – they press the application vendors for the “next” obvious cool feature, within the narrow universe of the new platform.
The reason why application evolution repeats itself on the new platform, then, is that nearly everyone involved, builder and buyer, is ignorant of the past and wears what amounts to blinders. It’s as though they are building and buying the application for the first time. Therefore, they respond most strongly to the same business pressures that people involved in computer automation tend to see first, and then next, and so on. It’s as though there’s a “natural,” most climbable path up a mountain, and successive waves of climbers approach the climb from the foot of the mountain, completely ignorant of the experiences of those who came before them; but confronted with the same options, they tend to make the same choices, and so everyone takes roughly the same route up the mountain.
Why don’t smart groups who know all this leap-frog their way to the most advanced application functionality? I have seen groups try, and it’s the buyers who tend to rain on this kind of parade. The buyers tend to be familiar with the range of applications in a category, and those applications tend to address a highly overlapping set of problems in ways that vary only slightly. The “far-seeing” builder then comes along with all sorts of solutions to problems that the buyers don’t even know they have! The buyers look around in confusion: why isn’t anyone else talking about this? “I kind of understand what you’re talking about, but I feel silly thinking that something’s important when no one else does. I think I’ll wait.” And they do. So getting too far ahead of the buyers is just as much of a problem as being too far behind the competition. The result: the application evolution repeats itself on the new platform, in roughly the same order each time, and no “cheating” or “running ahead” is allowed.
This all sounds incredibly common-sense when I read it written down, but I have to admit that this particular piece of common sense is not only uncommon, it took me personally decades and multiple failures to finally get this simple thought into my thick skull. The key thought is this one: you may think you know a person has a problem; the problem might be a severe one, and cost the person a great deal of money and trouble; you may even be entirely right in this judgment. However, if the person in question does not think he has a problem, why should he pay for a solution – in fact, why would he go to the trouble of implementing a solution even if it were free? Even worse, why would he even waste time talking with you, once he got the idea that you thought he had a problem he didn’t think he had? Why would he listen to a chimney sweep’s pitch if he lives in a house without chimneys?

The air quality in Los Angeles in 1970 was terrible, but there was no market for catalytic converters on automobile exhaust systems at that time. The problem that converters solve existed, and it was getting worse. It was obvious for anyone to see. It was even talked about. But in people’s minds at the time, having a car that contributed to air pollution was not a problem most people accepted they had (even though we know that, objectively speaking, they did have it).