I’ve talked in detail about the supposed progress in computer software languages, and explained the two major advances that have been made. All modern software languages are called “high-level languages.” As is typical in the non-science of Computer Science, no one bothers to define exactly what is meant by “high.” This is hilarious, since a single missing or misplaced character can cause a giant piece of software to crash – if there’s anything in this world in which precision matters, it’s programming computers. But somehow the academic field and the vast majority of the practitioners don’t bother with little details like defining precisely what is “high” about a high-level language.
There was no such lack of precision from the person who invented the first widely used high-level language. John Backus first proposed FORTRAN to his superiors at IBM in late 1953. A draft specification was completed in 1954 and the first working compiler was delivered in early 1957. Wikipedia states it clearly, simply and correctly:
> While the community was skeptical that this new method could possibly outperform hand-coding, it reduced the number of programming statements necessary to operate a machine by a factor of 20, and quickly gained acceptance. John Backus said during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy. I didn't like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."
According to the creator of the first successful high-level language (echoed by his buddies and collaborators), “high level” in the context of software languages means “takes fewer statements and less work” to write a given program. That’s it! My blog post on the giant advances in software programming languages explains and illustrates this in detail.
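To make “fewer statements” concrete, here is a minimal sketch (in C++ rather than FORTRAN, with variable names I made up purely for illustration) of the kind of formula-heavy calculation FORTRAN was built to express in one statement instead of a long run of hand-written machine instructions:

```cpp
#include <cstdio>

int main() {
    // Sample inputs; the names and values are invented for illustration.
    double initial_velocity = 30.0;
    double acceleration = -9.8;
    double time = 2.0;

    // One high-level assignment statement. Hand-coded for a 1950s machine,
    // the same formula would be a series of separate instructions: load each
    // operand, multiply, add, store, plus deciding which registers and
    // storage cells hold every intermediate value.
    double distance = initial_velocity * time + 0.5 * acceleration * time * time;

    std::printf("distance = %f\n", distance);
    return 0;
}
```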
The whole point of FORTRAN was to make writing numeric calculation programs quicker and easier. It succeeded. But it didn’t make it possible to write just ANY assembler language program in it. As I explain here, the invention of COBOL filled the gaps in FORTRAN for business programming, and C filled the gaps for systems programming.
How did the HLLs achieve their productivity?
The early HLLs were centered on wonderful statements like assignments and if-then-else that both saved writing time and increased readability. In addition, the early creators were thoroughly grounded in the fact that the whole point of software was … to read data, do some tests and calculations and produce more data, a.k.a. results. Each of the amazing new languages therefore included statements to define the data to be read and written, and other statements to perform the actions of reading and writing. COBOL, for example, had and has two major parts to each program:
- Data Division, in which all the data to be used by the program is defined. When data is used by multiple programs, the definitions are typically stored in one or more separate files and copied by reference into the Data Division of any program that uses them. These are usually called “copy books.”
- Procedure Division, in which all the action statements of the program are defined. These include Read and Write statements, each of which includes a reference to the data definitions to be read or written.
This way of thinking about things was obvious to the early programmers. They had data; they had to make some calculations based on it and make new data. Job one was to define the data. Job two was to perform tests and calculations with reference to the data defined. For example, reading in deposits and withdrawals, and updating current balances.
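Here is a minimal sketch of that two-part shape. It is C++ rather than COBOL, and every name and number in it is invented for illustration: first the data is defined, then the procedure reads it, does its tests and calculations, and writes the results.

```cpp
#include <cstdio>

// In the spirit of a Data Division: define the records the program works on.
// All field names and values below are invented for illustration.
struct Account {
    int    number;
    double balance;
};

struct Transaction {
    int    account_number;
    double amount;   // positive = deposit, negative = withdrawal
};

int main() {
    // A small in-memory stand-in for the master file of accounts.
    Account accounts[] = { {1001, 250.00}, {1002, 75.50} };

    // The day's deposits and withdrawals, as they might be read in.
    Transaction transactions[] = { {1001, 100.00}, {1002, -25.50}, {1001, -40.00} };

    // In the spirit of a Procedure Division: for each transaction, find the
    // matching account, apply the amount, then write out the new balances.
    for (const Transaction& t : transactions) {
        for (Account& a : accounts) {
            if (a.number == t.account_number) {
                a.balance += t.amount;
                break;
            }
        }
    }

    for (const Account& a : accounts) {
        std::printf("account %d balance %.2f\n", a.number, a.balance);
    }
    return 0;
}
```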
After the Great Start...
Of course, things were not sweetness and light thenceforth. Minor variations on the basic theme of high-level languages generated huge amounts of noise and absorbed all the attention of software language people. No one EVER argued for or measured how many fewer statements or how much less work the new languages required. I guess a little birdie deep inside most of the participants would chirp “don’t go there” whenever one of the language fashionistas was tempted to actually measure what difference their new language made.
I’m not going to go into detail about the claims of virtue made during the last 50 years of pointless language invention work, any more than I would go through the contents of garbage trucks to count and measure the differences in what people throw out. In the end, it’s all garbage. I’ll just mention these:
There’s a large group of inventors who claim that use of their new wonder-language will prevent programmers from making as many errors. This claim is a favorite of the people who make programs HARDER to write by eliminating pointers. Any studies or measurements? Anyone notice, even anecdotally, a big uplift in software quality? Anyone?
Advocates of the O-O cult like to talk about how objects keep things private and prevent bugs. They suppress all talk of the hoops that programmers have to jump through to make programs conform to O-O dogma, with the resulting increase in lines of code, effort and bugs.
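For a deliberately small illustration of the kind of ceremony in question, here is a sketch (all names invented) in which both versions do exactly the same thing; the “encapsulated” one simply takes several times as many lines. Whether that machinery pays for itself in bigger programs is precisely the claim that never gets measured.

```cpp
#include <cstdio>

// Plain data: one line to define the field, one line to use it.
struct PlainCounter {
    int count = 0;
};

// The "encapsulated" version: the same integer, now hidden behind a private
// member, a constructor, a getter and a mutator.
class EncapsulatedCounter {
public:
    EncapsulatedCounter() : count_(0) {}
    int  count() const     { return count_; }
    void increment(int by) { count_ += by; }
private:
    int count_;
};

int main() {
    PlainCounter p;
    p.count += 1;      // direct and obvious

    EncapsulatedCounter e;
    e.increment(1);    // same effect, more machinery behind it

    std::printf("%d %d\n", p.count, e.count());
    return 0;
}
```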
Conclusion
The starting years of computer software language invention were incredibly productive. The original high-level languages achieved the vast majority of the "height," i.e., the reduction in lines of code and effort to write a given program, that could possibly be achieved. The subsequent history of language invention includes a couple of minor improvements and the filling of a small but important gap in coverage (in systems software). But mostly it's a story of wandering around in the wilderness, spiced up by things being made worse, as I'll show in subsequent posts.