Programmers say language A is "better" than language B. Or, to avoid giving offense, they'll say they "like" language A. Sometimes they get passionate, and get their colleagues to program in their new favorite language.
This doesn't happen often; inertia usually wins. When a change is made, it's usually passion and energy that win the day. If one person cares a WHOLE LOT, and everyone else is, "whatever," then the new language happens.
Sometimes a know-nothing manager (sorry, I'm repeating myself) comes along and asks for the justification for the change. The leading argument is normally "the new language is better." In response to the obvious "how is it better?" people try to get away with "It's just better!" If the manager hangs tough and demands rationality, the passionate programmer may lose his cool and insist that the new language is "more productive." This of course is dangerous, because the rational manager (or have I just defined the empty set?) should reply, "OK, I'll measure the results and we'll see." Mr. Passion has now gotten his way, but has screwed over everyone. But it usually doesn't matter -- who measures "programmer productivity" anyway?
But seriously, how should we measure degrees of goodness in programming languages? If there's a common set of yardsticks, I haven't encountered them yet.
The Ability of the Programmer
First, let's handle the most obvious issue: the skill of the programmer. Edgar Allan Poe had a really primitive writing tool: pen (not even ball-point!) and paper. But he still managed to write circles around millions of would-be writers equipped with the best word processing programs that technology has to offer. I've dealt with this issue in detail before, so let's accept the qualification "assuming the programmer is equally skilled and experienced in all cases."
Dimensions of Goodness
Programmers confined to one region of programming (i.e., most programmers) don't often encounter this, but there are multiple dimensions of goodness, and they apply quite differently to different programming demands.
Suppose you're a big fan of object-orientation. Now it's time to write a device driver. Will you judge the goodness of the driver based on the extent to which it's object-oriented? Only if you're totally blindered and stupid. In a driver you want high performance and effective handling of exception conditions. Period. That is the most important dimension of goodness. Of all the programs that meet that condition, you could then rank them on other dimensions, for example readability of the code.
Given that, what's the best language for writing the driver? Our fan of object orientation may love the fact that his favorite language can't do pointer arithmetic and is grudgingly willing to admit that, yes, garbage collection does happen, but with today's fast processors, who cares?
Sorry, dude, you're importing application-centric thinking into the world of systems software. Doesn't work, terrible idea.
It isn't just drivers. There is a universe of programs for which the main dimension of goodness is efficient resource utilization. For example, sort algorithms are valued on both performance and space utilization (less is better). There is a whole wonderful universe of work devoted to this subject, and Knuth is near the center of that universe.
Consistent with that thinking, Knuth made up his own assembler language, and wrote programs in it to illustrate his algorithms. Knuth clearly felt that minimum space utilization and maximum performance were the primary dimensions of goodness.
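Just to make the space dimension concrete (in Python rather than Knuth's homegrown assembler, and with data invented for the illustration): here's the difference between a sort that allocates a whole new list and one that reorders in place.

```python
data = [5, 3, 8, 1, 9, 2]

# sorted() builds and returns a brand-new list: the original stays put,
# but you pay for a second copy of the data.
copy_sorted = sorted(data)

# list.sort() reorders the existing list in place rather than returning
# a new one -- same result, less space.
data.sort()

assert copy_sorted == data == [1, 2, 3, 5, 8, 9]
```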
The Largest Dimension of Goodness
While there are other important special cases, the dimension of goodness most frequently relevant is hard to state simply, but is simple common sense:
The easier it is to create and modify programs, quickly and accurately, the more goodness there is.
- Create. Doesn't happen much, but still important.
- Modify. The most frequent and important act by far.
- Quickly. All other things being equal, less effort and fewer steps are better.
- Accurately. Anything that tempts us into error is bad; anything that helps us toward accuracy is good.
That, I propose, is (in most cases) the most important measure of goodness we should use.
Theoretical Answer
Someday I'll get around to publishing my book on Occamality, which asks and answers the question "how good is a computer program?" Until then, here is a super-short summary: among all equally accurate expressions of a given operational requirement, the best one has exactly one place where any given semantic entity is expressed, so that for any given thing you want to change, you need only go to one place to accomplish the change. In other words, the least redundancy. What Shannon's Law is for communications channels, Occamality is for programs. Given the same logic expressed in different languages, the language that enables the least redundancy is the best, the most Occamal.
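A tiny (and entirely made-up) Python illustration of the idea: the tax rate below is one semantic entity, so an Occamal program expresses it in exactly one place.

```python
# Redundant: the 8.25% rate is written out twice, so changing it means
# hunting down every copy -- and missing one is a silent bug.
def invoice_total_redundant(subtotal):
    return subtotal + subtotal * 0.0825

def receipt_line_redundant(subtotal):
    return f"Total (incl. 8.25% tax): {subtotal + subtotal * 0.0825:.2f}"

# Occamal: the rate is expressed exactly once; any change is a one-place change.
TAX_RATE = 0.0825

def invoice_total(subtotal):
    return subtotal * (1 + TAX_RATE)

def receipt_line(subtotal):
    return f"Total (incl. {TAX_RATE:.2%} tax): {invoice_total(subtotal):.2f}"
```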
Historical Answer
By studying a wide variety of programs and programming languages in many fields over many years, it's possible to discern a trend. There is a slow but clear trend towards Occamality, and watching it unfold shows both what Occamality is and how it's best expressed. The trend is the result of simple pressures of time and money.
You write a program. Somebody comes along and wants something different, so you change the program. Other people come along and want changes. You get tired of modifying the program, and you see that the changes they want aren't too different, so you create parameters that people can change to their heart's content. The parameters grow and multiply, until you've got loads of them, but at least people feel like they're in control and aren't bugging you for changes. Parameters rule.
Then someone wants some real behavior, not just a setting, done just their way. You throw up your hands, give them the source code, and tell them to have fun. Maybe they succeed, maybe not. But eventually they want your enhancements in their special crappy version of your nice program, and it's a pain. It happens again. You get sick of it, analyze the places they wanted the changes, and make official "user exits." Now they can kill themselves customizing, and it all happens outside your code. Phew.
Things keep evolving, and the use of user exits explodes. Idiots keep writing them so that the whole darn system crashes, or screws up the data. At least with parameters, nothing really awful can happen. The light bulb comes on. What if I could have something like parameters (it's data, it can't crash) that could do anything anyone's wanted to do with a user exit? In other words, what if everything my application could do was expressed in really powerful, declarative parameters? Hmm. The users would be out of my hair for-like-ever.
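Here's that whole evolution compressed into a Python sketch, with all the names invented: plain parameters, then a user exit (a callback into somebody else's arbitrary code), then declarative rules that are pure data and get interpreted by the program itself.

```python
# Stage 1: parameters -- data only, nothing can crash.
def make_report(rows, currency="USD", show_totals=True):
    ...

# Stage 2: a user exit -- callers pass in arbitrary code, which can do
# anything, including crash the whole run or corrupt the data.
def make_report_with_exit(rows, row_exit=None):
    for row in rows:
        if row_exit is not None:
            row = row_exit(row)   # their code, running inside your program
        ...

# Stage 3: declarative "parameters" powerful enough to replace the exit.
# The rules are plain data; the program interprets them, so they can't crash it.
RULES = [
    {"field": "amount", "op": "multiply", "value": 1.0825},
    {"field": "region", "op": "drop_if_equals", "value": "INTERNAL"},
]

def apply_rules(row, rules):
    for rule in rules:
        if rule["op"] == "multiply":
            row[rule["field"]] *= rule["value"]
        elif rule["op"] == "drop_if_equals" and row[rule["field"]] == rule["value"]:
            return None
    return row
```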
What I've just described is how a new programming "level" emerges historically. This is what led to operating systems, except that applications are one giant user exit. UNIX is chock full of things like this. This history is the history of SQL in a nutshell -- a powerful, does-everything system at the level of declarative, user-exit-like parameters!
The Answer
In general, the more Occamal the language (and its use), the better it is. More specifically, given a set of languages, the best one has
- semantics that are close to the problem domain
- features that let you eliminate redundancy
- a declarative approach (rather than an imperative one)
Let's go through each of these.
Problem Domain Semantics
A great example of such a language is the UNIX utility AWK. It's a language whose purpose is to parse and process strings. Period. You want an accounting system, don't use AWK. You want to generate cute-looking web pages, don't use AWK. But if you've got a stream of text that needs processing, AWK is your friend.
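To see what "semantics close to the problem domain" buys you, here's roughly what every AWK program gets for free, sketched in Python (a toy rendition of the pattern/action model, not AWK's actual implementation; the usage at the bottom is hypothetical):

```python
import sys

# In AWK, a one-liner like
#   awk '$3 > 100 { print $1 }'
# states only the pattern and the action; the read-split-match loop below
# is built into the language, so nobody ever has to write it.
def awk_like(stream, rules):
    for line in stream:
        fields = line.split()
        for pattern, action in rules:
            if pattern(fields):
                action(fields)

# Hypothetical usage: print the first field of lines whose (numeric)
# third field exceeds 100.
if __name__ == "__main__":
    awk_like(sys.stdin, [
        (lambda f: len(f) > 2 and float(f[2]) > 100, lambda f: print(f[0])),
    ])
```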
From the enterprise space, ABAP is an interesting example. While it's now the prime language for writing SAP applications, ABAP was originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report creation processor." In other words, they saw the endless varieties of reporting and created a language for it; then having seen the vast utility of putting customization power in the hands of users, generalized it.
Features That Let You Eliminate Redundancy
This is what subroutines, classes and inheritance are supposed to be all about. And they can help. But more often, creating another program "level" is the most compelling solution, i.e., writing everything that the program might want to do in common, efficient ways, and having the application-specific "language" just select and arrange the base capabilities. This is old news. Most primitively, it's a subroutine library, something that's been around since FORTRAN days. But there's an important, incredibly powerful trick here. In FORTRAN (and in pretty much all classically-organized subroutine libraries), the library sits around passively waiting to be called by the statements in the language. In a domain-specific language, it's the other way round!
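Here's a toy Python version of that reversal (every name invented): the base capabilities are written once, the application-specific "program" is just data that selects and arranges them, and the little engine, not the application code, drives the calls.

```python
# Base capabilities: written once, in common, efficient ways.
CAPABILITIES = {
    "read":   lambda ctx, src:  ctx.setdefault("rows", ["a", "b", "c"]),
    "filter": lambda ctx, keep: ctx.update(rows=[r for r in ctx["rows"] if r in keep]),
    "emit":   lambda ctx, dst:  print(dst, ctx["rows"]),
}

# The application-specific "language": it only selects and arranges capabilities.
PROGRAM = [
    ("read",   "orders.csv"),    # the source name is decorative in this toy version
    ("filter", {"a", "c"}),
    ("emit",   "stdout"),
]

def run(program):
    # The engine owns the flow of control; the library no longer sits around
    # waiting to be called.
    ctx = {}
    for op, arg in program:
        CAPABILITIES[op](ctx, arg)

run(PROGRAM)   # prints: stdout ['a', 'c']
```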
A related approach, which has been implemented over and over, is based on noticing that languages are usually designed independently of databases, but frequently used together. The result is monster redundancy! Wouldn't it be a hoot if database access were somehow an integral part of the language! Well, that explains how and why ABAP evolved from a reporting language (necessarily intimate with the database) into "Advanced Business Application Programming." And it explains why Ruby, when combined with the database-leveraging Rails framework, is so popular and highly productive.
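This isn't how ABAP or Rails actually do it, but a quick Python/sqlite3 sketch (table and data invented) shows the flavor: the schema and the query live in one place, and the rows flow straight into the program, with no hand-maintained mapping layer repeating the same field names.

```python
import sqlite3

# The schema is stated once, where the data lives...
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
con.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 42.5)])

# ...and the language consumes it directly, instead of re-declaring the same
# fields in hand-written record classes and copy-in/copy-out code.
con.row_factory = sqlite3.Row
for row in con.execute("SELECT region, SUM(amount) AS total FROM orders GROUP BY region"):
    print(row["region"], row["total"])
```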
A Declarative Approach
In the world of programming, as in life, things divide between imperative (commands you, tells you how to do something, gives you directions for getting to point B) and declarative (tells you what should be done, identifies point B as the goal). In short, "what" is declarative and "how" is imperative. A core reason for the incredible success of SQL is that it is declarative, as Chris Date has described in minute detail. Declarative code also tends to be less redundant and more concise. And, being closer to data than to machinery, it doesn't crash.
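One hedged micro-example of the difference, with invented data: the imperative version gives directions to point B; the declarative version just names point B.

```python
orders = [("EMEA", 120.0), ("APAC", 80.0), ("EMEA", 42.5)]

# Imperative: how to get there, one step at a time.
emea_total = 0.0
for region, amount in orders:
    if region == "EMEA":
        emea_total += amount

# Declarative: what we want; the "how" is the engine's problem.
# The SQL equivalent: SELECT SUM(amount) FROM orders WHERE region = 'EMEA'
emea_total_declarative = sum(amount for region, amount in orders if region == "EMEA")

assert emea_total == emea_total_declarative == 162.5
```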
Conclusion
I'm sorry if you're disappointed I didn't name Erlang or whatever your favorite is as the "best programming language," but I insist that it's far more novel and useful to decide on what basis we are to judge "goodness" in programming languages, and in programs. In general and in most domains (with important exceptions, like inside operating systems), non-redundancy is the reigning virtue, and languages that enable it are superior to ones that don't. Non-redundancy is nearly always best achieved with a declarative, problem-domain-centric approach. And it further achieves the common-sense goal of fast, accurate creation and modification of programs.
Occamality is rarely explicitly valued by programmers, but the trend toward it is easy to see. There are widespread examples of building domain-specific languages, meta-data and other aspects of Occamality. Many programmers already act as though Occamality were the primary dimension of goodness -- may they increase in numbers and influence!