I have described the concept of automation depth, which progresses through natural stages, starting with the computer playing a completely supportive role to the person (the recorder stage) and ending with the robot stage, in which the person plays a secondary role. I have illustrated these stages with a couple of examples that show the surprising pain and trouble of moving from one stage to the next.
Unlike the progression of software applications from custom through parameterized to workbench, the move to the next stage of automation tends to be resisted by customers, for reasons including the fear of losing control and power.
Automation depth in Information Access
Each of the patterns of software evolution I've described is general in nature. I've tried to give examples to show how the principles apply. In this section, I'll show how the entire pattern played out in “information access,” the set of facilities that enable people to find and use computer-based information for business decision making.
Built-in Reporting
“Recorder” is the first stage of the automation depth pattern of software evolution. In the case of information access, early programs were written to record the basic transactions that took place; as part of the recording operation, reports were typically produced, summarizing the operations just performed. For example, all the checks written and deposits made at a bank would be recorded during the day; then, at night, all the daily activity would be posted to the accounts. The posting program would perform all the updates and create reports. The reports would include the changes made, the new status of all the accounts, and whatever else was needed to run the bank.
At this initial stage, the program that does the recording also does the reporting. Reporting is usually thought to be an integral part of the recording process – you do it, and then report on what you did. Why would you have one program doing things, and a whole separate program figuring out and reporting on what the first program did? It makes no sense.
What if you need reports for different purposes? You enhance the core program and the associated reports. What if lots of people want the reports? You build (in the early days) or acquire (as the market matured) a report distribution system, to file the reports and provide them to authorized people as required.
Efficiency was a key consideration. The core transaction processing was already “touching” the transactions and the master files; while it was doing so, it could update counters and add lines to reports as it went, so the same data would not have to be re-processed multiple times.
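To make the single-pass idea concrete, here is a minimal sketch, in modern Python rather than the COBOL or assembler of the era, and with invented account and transaction structures, of a posting routine that updates balances and accumulates report totals while it touches each transaction, so the data is read only once.

```python
from collections import defaultdict

def post_and_report(accounts, transactions):
    """Apply the day's transactions and build the daily activity report in one pass."""
    report = {"lines": [], "total_deposits": 0.0, "total_checks": 0.0}
    for account_id, kind, amount in transactions:
        if kind == "deposit":
            accounts[account_id] += amount
            report["total_deposits"] += amount
        elif kind == "check":
            accounts[account_id] -= amount
            report["total_checks"] += amount
        # The report line is produced while the record is already in hand,
        # so no second pass over the transactions is needed.
        report["lines"].append((account_id, kind, amount, accounts[account_id]))
    return report

# Example with made-up accounts and daily activity
accounts = defaultdict(float, {"A1": 500.0, "A2": 120.0})
daily = post_and_report(accounts, [("A1", "check", 75.0), ("A2", "deposit", 40.0)])
print(daily["total_checks"], accounts["A1"])   # 75.0 425.0
```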
Report Writers
The “power tool” stage of automation depth had two major sub-stages. The first was the separation of reporting from transaction processing. Information access was now a key goal in itself, so important and done so frequently that specialized tools were built to make it easy, which is always the sign that you have entered the “power tool” stage.
This first generation of power tools consisted of specialized software packages generally called “report writers.” The power tool was directed at the programmer who had to create the report. Originally, the language used for transaction processing was also used for generating reports, most frequently COBOL. COBOL was cumbersome enough for this purpose that specialized syntax was added to the language to ease the task of writing reports. But various clever people saw that by creating a whole new language and software environment, the process of writing reports could be tremendously enhanced and simplified. These people began to think in terms of reporting itself, and so broke the problem into its natural pieces: accessing the data you want to report on, processing it (selecting, sorting, summing, etc.), and formatting it for output.
The result of this thinking was a whole industry, one that itself evolved over time, played out in multiple environments, and took multiple forms. The common denominator was that these were all software tools to enable programmers to produce reports more quickly and effectively than before, and they were completely separate from the recorder, or transaction processing, function.
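As an illustration of that three-piece breakdown, here is a minimal sketch in Python rather than any actual report-writer language; the file name and record layout are invented for the example.

```python
import csv

def access(path):
    """Step 1: access the data to be reported on."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def process(rows, region):
    """Step 2: select, sort, and sum."""
    selected = [r for r in rows if r["region"] == region]
    selected.sort(key=lambda r: float(r["amount"]), reverse=True)
    total = sum(float(r["amount"]) for r in selected)
    return selected, total

def format_report(rows, total):
    """Step 3: format the result for output."""
    lines = [f"{r['customer']:<20}{float(r['amount']):>10.2f}" for r in rows]
    lines.append(f"{'TOTAL':<20}{total:>10.2f}")
    return "\n".join(lines)

# Hypothetical usage: sales.csv has columns region, customer, amount
rows, total = process(access("sales.csv"), region="EAST")
print(format_report(rows, total))
```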
At the same time, data storage was evolving. The database management system emerged through several generations. This is not the place for that story, which is tangential to the automation depth of information access. What is relevant is that, as the industry generally recognized that information access had moved to the report writer stage of automation, effort was made to create a clean interface between data and the programs that accessed the data for various purposes.
Data Warehouse and OLAP
Report writers were (and are) important power tools, but they are basically directed at programmers. Programmers, however, are not the ultimate audience for most reports; most reports are for people charged with comprehending the business implications of what is on the report and taking appropriate action in response. And the business users proved to be perennially dissatisfied with the reports they were getting. There was too much information (making it hard to find the important things), not enough information, information organized in confusing ways (so that users had to walk through multiple reports side by side), or information presented in boring ways that made it difficult to grasp the significance of what was on the page. And anytime you wanted something different, it was a big megillah: you’d have to get resources authorized and a programmer assigned, suffer through the work eventually getting done, and by then you’d have twice as many new things that needed doing.
As a result of these problems, a second wave of power tools emerged, directed at this business user. These eventually were called OLAP tools. The business user (with varying levels of help from those annoying programmers) had his own power tool, giving him direct access to the information. Instead of static reports, you could click on something and find out more about it – right away! But with business users clicking away, the underlying data management systems were getting killed, so before long the business users got their own copy of the data: a data warehouse system.
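A rough sketch of what that click-to-drill interaction amounts to underneath, using Python over an invented in-memory copy of the data standing in for the warehouse:

```python
from collections import defaultdict

# Made-up fact rows, standing in for the warehouse copy of the data
FACTS = [
    {"region": "EAST", "product": "Widget", "amount": 1200.0},
    {"region": "EAST", "product": "Gadget", "amount":  800.0},
    {"region": "WEST", "product": "Widget", "amount":  500.0},
]

def rollup(facts, dimension):
    """Sum the amount measure along one dimension (e.g. 'region' or 'product')."""
    totals = defaultdict(float)
    for row in facts:
        totals[row[dimension]] += row["amount"]
    return dict(totals)

def drill_down(facts, region):
    """'Click' on a region: restrict to it and roll up by product instead."""
    return rollup([r for r in facts if r["region"] == region], "product")

print(rollup(FACTS, "region"))     # {'EAST': 2000.0, 'WEST': 500.0}
print(drill_down(FACTS, "EAST"))   # {'Widget': 1200.0, 'Gadget': 800.0}
```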
In a sign of things to come, the business users noticed that sometimes they were just scanning the reports for items of significance, and that it wasn’t hard to spell out exactly what they cared about. So OLAP tools were enhanced to find and highlight items of special significance, for example sales regions where the latest sales trends fell short of projections by a certain margin. This evolved into a whole system of alerts.
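Here is a minimal sketch of such an alert rule, assuming invented figures and a 10% shortfall margin rather than the behavior of any particular product:

```python
def sales_alerts(actuals, projections, margin=0.10):
    """Return regions whose latest sales fall short of projections by more than `margin`."""
    alerts = []
    for region, actual in actuals.items():
        projected = projections.get(region)
        if projected and actual < projected * (1 - margin):
            alerts.append((region, actual, projected))
    return alerts

# Hypothetical latest sales and projections by region
actuals = {"NORTH": 92_000, "SOUTH": 61_000, "WEST": 78_000}
projections = {"NORTH": 95_000, "SOUTH": 80_000, "WEST": 75_000}
for region, actual, projected in sales_alerts(actuals, projections):
    print(f"ALERT: {region} sales {actual} vs projection {projected}")
# Only SOUTH trips the alert: 61,000 is more than 10% below 80,000.
```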
Predictive Analytics
OLAP tools are certainly power tools, but the trouble with power tools is that you need power users – people who know the business, can learn to use a versatile tool like OLAP effectively, and can generate actions from the information that help the business. So information access advanced to the final stage of our general pattern, the “robot” stage, in which human decision making is replaced by an automated system. For information access, that stage is often called “predictive analytics,” a kind of mathematical modeling.
As areas of business management become better understood, it usually turns out that predictive analytics can do a better, quicker job of analyzing the data, finding the patterns, and generating actionable decisions than a person ever could. A good example is home mortgage lending, where the vast majority of decisions today are made using predictive analytics. Many years ago, a person who wanted a home mortgage would make an appointment with a loan officer at a local savings bank and request the loan. The officer would look at the applicant’s information and make a human judgment about his or her creditworthiness.
That “power user” system has long since been supplanted by the “robot” system of predictive analytics, where all the known data about any potential borrower is constantly tracked, and credit decisions about that person are made on the basis of the math whenever needed. No human judgment is involved, and in fact would only make the system worse.
Predictive analytics draws on the same information as the prior stages, but the emphasis on presenting a powerful, flexible user interface that lets a power user drive his way to information discovery is replaced by math models that are constantly tuned and updated as new information becomes available.
Sometimes the predictive analytics stage is held back by a lack of vision or initiative on the part of the relevant industry leaders. However, there is also a genuine pre-condition for this approach really working: the availability of all the relevant data in a suitable format. For example, while we tend to focus on the math in automated mortgage loan processing, the math only works because it has access to a nationwide database containing everyone’s financial transactions over a period of many years. A power user with lots of experience, data, and human judgment will beat any form of math with inadequate data; however, good math fueled by a comprehensive, relevant data set will beat the best human any time.
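To show the shape of the “robot” stage, here is a minimal sketch of automated credit decisioning, with a tiny hand-weighted logistic scoring function standing in for the real models. The features, weights, and cutoff are invented; in practice the weights would be re-estimated continuously from the comprehensive data set rather than set by hand.

```python
import math

# Illustrative model: weights, bias, and cutoff are invented for the example
WEIGHTS = {"payment_history": 2.5, "debt_to_income": -3.0, "years_of_credit": 0.15}
BIAS = -1.0
CUTOFF = 0.6   # approve if estimated repayment probability exceeds this

def repayment_probability(borrower):
    """Score the tracked borrower data and squash it to a probability."""
    score = BIAS + sum(WEIGHTS[k] * borrower[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))

def decide(borrower):
    """Make the credit decision with no human in the loop."""
    p = repayment_probability(borrower)
    return ("approve" if p > CUTOFF else "decline"), round(p, 3)

applicant = {"payment_history": 0.9, "debt_to_income": 0.35, "years_of_credit": 12}
print(decide(applicant))   # ('approve', 0.881)
```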
Conclusion
All these stages of automation co-exist today. One of the key rules of computing is that old programs rarely die; they just get layered on top of, given new names, and gradually fade into obscurity. There are still posting programs written in assembler language that have built-in reporting. In spite of years of market hype from the OLAP folks, report writing hasn’t gone away; in fact, some older report writers have gained interesting new interactive capabilities. OLAP and data warehouses are things that some organizations aspire to, while others couldn’t live without them. Finally, there are important and growing pockets of business where the decisions are made by predictive analytics, and where producing pretty reports for decision-making purposes (as opposed to bragging about how well the predictive analytics are doing) would be malpractice.
Even though all these stages of automation co-exist in society as a whole, they rarely co-exist within a single functional segment of business. Each stage of automation is much more powerful than the prior stage, and it provides tangible, overwhelming advantages to the groups that use it. Therefore, once a business function has advanced to a new stage of information access automation, a “tipping point” is reached, and that stage tends to become the new standard for doing things among organizations performing that function.