What is most interesting about functional languages is that they strive to be declarative, in contrast to the imperative orientation of most programming languages.
In a prior post I described the long-standing impulse towards creating functional languages in software. Functional languages have been near the leading edge of software fashion since the beginning, while perpetually failing to enter the mainstream. There is nonetheless an insight at the core of functional languages which is highly valuable and probably has played a role in their continuing attractiveness to leading thinkers. When that insight is applied in the right situations in a good way it leads to tremendous practical and business value, and in fact defines a path of advancement for bodies of software written in traditional ways.
This is one of those highly abstract, abstruse issues that seems far removed from practical concerns. While the subject is indeed abstract and abstruse, the practical implications are far-reaching and result in huge software and business value when intelligently applied.
Declarative and Imperative
A computer program is imperative. It consists of a series of machine language instructions that are executed, one after the other, by the computer's CPU (central processing unit). Each instruction performs some little action. It may move a piece of data, perform a calculation, compare two pieces of data, jump to another instruction, etc. You can easily imagine yourself doing it. Pick up what's in front of you, take a step forward, if the number over there is greater than zero, jump to this other location, etc. It's tedious! But every program you write in any language ends up being implemented by imperative instructions of this kind. It's how computers work. Period.
Computers operate on data. Data is passive. Data can be created by a program, after which it's put someplace. The locations where data is held/stored are themselves passive; those locations are declared (named and described) as part of creating the imperative part of a computer program.
Data is WHAT you've got; instructions are HOW you act. WHAT is declarative; HOW is imperative. WHAT is like a map; HOW is like an ordered set of directions for getting from point A to point B on a map.
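To make the contrast concrete, here is a minimal sketch in TypeScript (the language choice is mine, purely for illustration). The imperative version spells out HOW to walk the data, step by step; the declarative version states WHAT result is wanted and leaves the stepping to the library.

```typescript
// HOW: imperative, step-by-step instructions for summing the positive numbers.
function sumPositivesImperative(values: number[]): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    if (values[i] > 0) {
      total += values[i];
    }
  }
  return total;
}

// WHAT: declarative -- the sum of the values that are positive.
// The iteration and bookkeeping are left to filter/reduce.
function sumPositivesDeclarative(values: number[]): number {
  return values.filter(v => v > 0).reduce((sum, v) => sum + v, 0);
}
```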
The push towards declarative
From early in the use of computers, some people saw the incredible detail involved in spelling out the set of exacting instructions required to get the computer to do something. A single bit in the wrong position can cause a computer program to fail, or worse, arrive at the wrong results. This detailed-directions approach to programming is wired into how computers work. Is there a better way?
There is in fact no avoiding the imperative nature of the computer's CPU. But as high-level languages began to be invented that freed programmers from the tedious, error-prone detail of programming in machine language, some people began to wonder whether it was possible to write a low-level program, one that of necessity would be imperative, that would in turn enable programs to be written in some higher-level language that was declarative in nature.
Some people who were involved with early computers, nearly all with a strong background in math, proceeded to create the declarative class of programming languages, the most distinctive members of which are functional programming languages as I described in an earlier post.
The ongoing attempt to create functional languages and use them to solve the same problems for which imperative languages are used has proven to be a fruitless effort. But there are specific problem areas for which a declarative approach is well-suited and yields terrific, practical results -- so long as the declarative approach is implemented by a program written in an imperative language, creating a workbench style of system for the declarations. The current fashion of "low code" and "no code" environments is an attempt to move in that direction. But I'd like to note that there's nothing new in those movements; they're just new names for things that have been done for decades.
The Declarative approach wins: SQL
DBMSs are ubiquitous. By far the dominant language for them is SQL. In a relational DBMS, data is defined by a declarative schema, written in DDL (data definition language). Data is operated on by a few different kinds of statements such as SELECT and INSERT.
SELECT certainly sounds like a normal imperative keyword in any language, like COBOL's COMPUTE statement. But it's not. It's declarative. A SELECT statement defines WHAT data you want to select from a particular database, but says nothing about HOW to get it.
This is one of the cornerstones of value in a relational DBMS. A SELECT statement can be complicated, referencing columns in multiple rows of various tables joined in various ways. The process of getting the selected data from the database can be tricky. Without query optimization, a key aspect of a DBMS, a query could take thousands of times longer to run than it does on a modern DBMS that optimizes its queries. Furthermore, table definitions can be altered and augmented, and so long as the data referenced in the query still exists, the SELECT statement will continue to do its job.
If all you're doing is grabbing a row from a table (like a record from an indexed file), SQL is nothing but a bunch of overhead, and you'd be better off with simple 3-GL Read statements. But the second things get complicated, your program will require many fewer lines of complex code while being easier to write and maintain if you have SQL at your disposal. A win for the declarative approach to data access, which is why, decades after it was created, SQL is in the mainstream.
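As a sketch of the difference (the customers/orders schema here is invented for illustration): the SELECT states WHAT result is wanted, while the TypeScript function underneath spells out just one of the many possible HOWs that a query optimizer is free to choose among.

```typescript
// WHAT: a declarative query over a hypothetical customers/orders schema.
// Nothing here says how to find the rows; indexes, join order and scan
// strategy are the optimizer's business.
const totalsByCustomerSql = `
  SELECT c.name, SUM(o.amount) AS total
  FROM customers c
  JOIN orders o ON o.customer_id = c.id
  WHERE o.placed_at >= '2024-01-01'
  GROUP BY c.name
`;

// HOW: the same result computed imperatively over in-memory rows,
// with the filter, join and grouping strategy hard-coded by hand.
interface Customer { id: number; name: string }
interface Order { customerId: number; amount: number; placedAt: string }

function totalsByCustomer(customers: Customer[], orders: Order[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const o of orders) {
    if (o.placedAt < '2024-01-01') continue;                  // filter
    const c = customers.find(cu => cu.id === o.customerId);   // one naive join strategy
    if (!c) continue;
    totals.set(c.name, (totals.get(c.name) ?? 0) + o.amount); // group and sum
  }
  return totals;
}
```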
The Declarative approach wins: Excel
I don't know many programmers who use Excel. Too bad for them; it's a really useful tool for many purposes, as its widespread continuing use makes clear.
Excel is a normal executable program written in an imperative language, but it implements a declarative approach to working with data. Studying Excel is a good way to understand and appreciate the paradigm.
An Excel worksheet is a two-dimensional matrix of values (cells). A cell can be empty or hold a value of any kind (text, number, currency, etc.). What's important in this context is that you can put a formula into a cell that defines its value. The formula can reference other cells, individually or by range, absolutely or relatively. A simple formula could be the sum of the values in the cells above the cell with the formula, but a formula can also be arbitrarily complex, including conditionals (if-then-else). Going beyond formulas, you can turn ranges of cells into a wide variety of graphs.
If you're not familiar with them, you should look at Pivot tables, and when you've absorbed them, move on to the optimization libraries that are built in, with even better ones available from third parties. Pivot tables enable you to define complex summaries and extractions of ranges of cells. For example, I have an Excel worksheet in which I list each day of the year and the place where I am that day. A simple Pivot table gives me the total of days spent in each location for the year, something that would otherwise take a pile of hand-maintained formulas to compute.
The key thing here is that even though there are computations, tests and complex operations taking place, it's all done declaratively. There is no ordering or flow of control. If your spreadsheet is simple, Excel updates sums (for example) when you enter or change a value in a cell. For more complex ones, you just click re-calc, and Excel figures out all the formula dependencies and evaluates them in the right order. This makes Excel a quicker way to get results than programming in any imperative language, assuming the problem you have fits the Excel paradigm.
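That re-calc step is easy to picture as a dependency graph. Here is a toy sketch of the idea (greatly simplified; this is not Excel's actual engine): each cell declares a formula over other cells, and the engine evaluates them in dependency order regardless of the order in which they were entered.

```typescript
// A toy model of declarative cells: a cell is either a constant or a formula
// that reads other cells through `get`. Recalculation follows the dependencies,
// not the order in which the cells were entered.
type Formula = (get: (ref: string) => number) => number;
type Sheet = Map<string, Formula>;

function recalc(sheet: Sheet): Map<string, number> {
  const values = new Map<string, number>();
  const inProgress = new Set<string>();                 // for circular-reference detection

  const get = (ref: string): number => {
    if (values.has(ref)) return values.get(ref)!;       // already evaluated
    if (inProgress.has(ref)) throw new Error(`circular reference at ${ref}`);
    const formula = sheet.get(ref);
    if (!formula) throw new Error(`unknown cell ${ref}`);
    inProgress.add(ref);
    const value = formula(get);                         // dependencies evaluated on demand
    inProgress.delete(ref);
    values.set(ref, value);
    return value;
  };

  for (const ref of sheet.keys()) get(ref);
  return values;
}

// C1 = A1 + B1; the cells can be declared in any order.
const sheet: Sheet = new Map<string, Formula>([
  ['C1', get => get('A1') + get('B1')],
  ['A1', () => 2],
  ['B1', () => 40],
]);
console.log(recalc(sheet).get('C1'));                   // 42
```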
The Declarative approach wins: React.js
One of the most widely used frameworks for building UIs is React.js. Last time I looked, the home page of react.js listed "Declarative" as the first of its selling points.
It says right out that it's declarative! And that makes it easy! I have found a number of places (example, example) that have nice explanations of how it works and why it's good.
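A small example of what that means in practice (the Counter component below is my own, built on React's standard useState hook): you declare what the UI should look like for the current state, and React works out the DOM updates needed to get there.

```tsx
import { useState } from 'react';

// Declarative: describe WHAT the UI is for the current state. React figures out
// HOW to update the DOM when `count` changes; there is no hand-written
// "find the span and set its text" code here.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <span>You clicked {count} times</span>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
}

export default Counter;
```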
The Declarative approach wins: Compilers
The best computer-related course I took in college by far was a graduate course on the theory and construction of compilers. The approach I was taught all those decades ago remains the main method for compiler construction. First you have a lexical analyzer to turn the text of the program into a stream of tokens. Then you have a grammar-driven parser to turn the tokens into a structured graph of semantic objects. Then you have a code generator to turn the objects into an executable program.
The key insight is that each stage in this process is driven by a rich collection of meta-data. The meta-data contains all the details of the input language, its grammar and the output target. A good compiler is really a compiler-compiler, i.e., imperative code that reads and acts on the lexical definitions, the grammar and the code generation tables. The beauty is that you write the compiler-compiler just once. If you want it to work on a new language, you give it the grammar of the language without changing any of the imperative code in the compiler-compiler! If you want to generate code for a new computer for a language you've already got, all you do is update the code generation tables! Once you've got such a compiler, you can write the compiler in its own language, "boot-strapping" it to get started.
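Here is a small sketch of the table-driven idea (a toy lexer, nothing like a full compiler): the imperative engine never mentions any particular language, and pointing it at a different language means swapping the meta-data table, not editing the code.

```typescript
// The meta-data: token definitions for one particular (made-up) language.
// To lex a different language, change this table, not the engine below.
const tokenDefs: { kind: string; pattern: RegExp }[] = [
  { kind: 'whitespace', pattern: /^\s+/ },
  { kind: 'number',     pattern: /^\d+/ },
  { kind: 'identifier', pattern: /^[A-Za-z_][A-Za-z0-9_]*/ },
  { kind: 'operator',   pattern: /^[-+*/=]/ },
  { kind: 'paren',      pattern: /^[()]/ },
];

interface Token { kind: string; text: string }

// The engine: generic imperative code driven entirely by the table above.
function tokenize(source: string): Token[] {
  const tokens: Token[] = [];
  let rest = source;
  while (rest.length > 0) {
    const def = tokenDefs.find(d => d.pattern.test(rest));
    if (!def) throw new Error(`unexpected input near "${rest.slice(0, 10)}"`);
    const text = rest.match(def.pattern)![0];
    if (def.kind !== 'whitespace') tokens.push({ kind: def.kind, text });
    rest = rest.slice(text.length);
  }
  return tokens;
}

console.log(tokenize('total = price * 3'));
// identifier "total", operator "=", identifier "price", operator "*", number "3"
```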
While they didn't exist at the time I took the course, tools to perform these functions, YACC (Yet Another Compiler Compiler) and LEX, were built at Bell Labs by one of the groups of pioneers who created Unix and the C language. I didn't have those tools but used the concepts to build the FORTRAN compiler I wrote in 1973 while at my first post-college job. Since the only tool I had was assembler language, taking the meta-data, compiler-compiler approach saved me huge amounts of time and effort compared to hard-coding the compiler without meta-data.
This is meta-data at its best.
The Declarative spectrum
Excel and SQL are 100% all-in on the declarative approach. But it turns out that it doesn't have to be all one way or the other. If you're attacking an appropriate problem domain, you can start by programming it imperatively, and then, as you come to understand the domain, introduce more and more declaration into your program by defining and using increasing amounts of meta-data. This is exactly what I have described as climbing up the tree of abstraction. Each piece of declarative meta-data you introduce reduces the size of the imperative program, and moves information that is likely to be changed or customized into bug-free, easily-changed meta-data.
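A sketch of what that migration looks like (the shipping-fee domain here is invented for illustration): the first version hard-codes the rules in imperative branches; the second moves them into a declarative table, shrinking the engine to a lookup and turning the part that changes most often into data.

```typescript
// Step 1: the rules live in imperative code. Every change means editing
// and re-testing the program.
function shippingFeeV1(country: string, weightKg: number): number {
  if (country === 'US') return weightKg <= 1 ? 5 : 9;
  if (country === 'CA') return weightKg <= 1 ? 7 : 12;
  return weightKg <= 1 ? 15 : 25;
}

// Step 2: the same rules expressed as declarative meta-data. Specific countries
// are listed before the '*' fallback, so the first matching rule wins.
interface FeeRule { country: string; maxKg: number; fee: number }

const feeTable: FeeRule[] = [
  { country: 'US', maxKg: 1, fee: 5 },  { country: 'US', maxKg: Infinity, fee: 9 },
  { country: 'CA', maxKg: 1, fee: 7 },  { country: 'CA', maxKg: Infinity, fee: 12 },
  { country: '*',  maxKg: 1, fee: 15 }, { country: '*',  maxKg: Infinity, fee: 25 },
];

// The engine shrinks to a generic lookup that never changes when the rules do.
function shippingFeeV2(country: string, weightKg: number): number {
  const rule = feeTable.find(r =>
    (r.country === country || r.country === '*') && weightKg <= r.maxKg);
  if (!rule) throw new Error('no matching fee rule');
  return rule.fee;
}
```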
Conclusion
Functional languages as a total alternative to imperative languages will perpetually be attractive to some programmers, particularly those with a bent for math theory. Except in highly limited situations, it won't happen. No one is going to write an operating system in a functional language.
Nonetheless, the declarative approach with its emphasis on declaring facts, attributes and relationships is the powerful core of the functional approach, and can bring simplicity and power to otherwise impossibly complicated imperative programs.