There is a general impression that software innovation, in one of its many forms, e.g. “Digital Transformation”, is marching ahead at full steam. There are courses, consultants, posters hanging in common spaces, and newly-created Chief Innovation Officer positions. What’s new? What’s the latest in software?
The reality is that there are large reservoirs of proven, tested and working software innovations ready to be rolled out, but these riches are kept behind the solid walls of dams, with armies of alert guardians ready to leap in and patch any holes through which these valuable innovations may start leaking into practice. Almost no one is aware of the treasure-trove of proven innovations kept dammed up from being piped to the many places that could benefit from them; even the guardians are rarely fully conscious of what they’re doing.
If anyone really wanted to know what was coming in software, all they would have to do is find the dams and peer into the waters they hold back. In spite of the mighty dams, it sometimes happens that the software finds its way into practice, normally in a flood that blankets a small neighborhood. Sometimes the flood has been held back for decades. There are cases I know of where an innovation was proven 50 years ago, and is still not close to being rolled out.
The dams are built in many ways with many materials. The raw materials appear to include aspects of human nature: ignorance, sloth, greed -- you know, the usual. The really high, solid dams have broad institutional support, in which “everyone” is fine with things as they are, and won’t so much as give the time of day to an amazing innovation that would change many things for the better – except of course for a key interest group.
Here is one of the examples I personally know about. It was one of my introductions to what innovation is all about, and the sad fact that creating a valuable innovation is generally the easy part – the hard part is usually overcoming the human and institutional barriers to deploying it.
Automating Medical Image Reading and Reporting
When a radiologist gets an X-ray, there are two phases of work. The first is to “read” the X-ray and observe anything non-typical that it shows, anything from a broken bone to a tumor. The second is to generate a report of the findings. Most radiologists, then and now, dictate their findings; then someone transcribes the dictated report and sends it as needed. The details of the report can vary depending on the purpose of the X-ray and the kind of person for whom it’s intended.
Technology first tested decades ago appears to show that software can “read” an X-ray at least as accurately as a human radiologist. I will ignore that work for now, and focus on what should be the less threatening technology: translating the doctor's observations into an appropriate report.
While I was in college, I worked on the early ARPAnet with an amazing group of people, one of whom was an MIT student from the San Francisco area who later went on to fame making major advances in integrated circuits, among other things. The summer after we did most of our ARPAnet work, he got involved with a new initiative to transform the way radiologist reports of X-rays were created. He knew that some of my skills in automated language parsing and generation were relevant, so he invited me out to pitch in. I went.
GE, then and now, was a major maker of medical imaging systems. They were seriously experimenting with ways of enhancing their systems to make it easier for radiologists to produce reports of their findings. They created a set of mark sense forms, of the kind widely used at the time for recording the answers to tests, to enable a radiologist to quickly mark his observations of the part of the body in question. Here is the form for a person's guts:
Here is part of the form for the hand, showing how you can mark your observations:
Here is part of the form for the spine, showing how you can customize the report output as needed:
My friends had gotten most of the system together -- all I had to do was build the software that would create the radiologist's report. Because of uncertainty about radiologists accepting the results, I had to make the report generator easily customizable, so that it could reproduce each radiologist's typical style of writing.
Leaving out the details, in a few weeks I created a domain-specific language resembling a generative grammar and rules engine to do the job -- along with the necessary interpreter, all written in PDP-8 assembler language, which was new to me. My friends wrote a clear and compelling report describing our work and included an example of our working software in it. Here was the sample filled-out form:
And here was part of the report that was generated by the software we wrote from the input of that form:
The software worked! And yes, the date on the report, 1971, is the date we did the work.
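The overall approach -- coded observations from a mark-sense form fed through grammar-like rules, with per-radiologist style customization layered on top -- can be sketched in a few lines of modern Python. This is only an illustrative sketch: the observation codes, rules, and style names below are all invented, and the original system was a domain-specific language and interpreter written in PDP-8 assembler, not anything like this.

```python
# Hypothetical sketch of a rules-driven report generator in the spirit of
# the 1971 system described above. All codes and phrasing are invented.

# Coded observations, as might be captured from a mark-sense form.
observations = {
    "region": "hand",
    "fracture": True,
    "fracture_site": "fifth metacarpal",
    "alignment": "satisfactory",
}

# Rules: each pairs a predicate over the coded input with a sentence template.
RULES = [
    (lambda o: o.get("fracture"),
     "There is a fracture of the {fracture_site}."),
    (lambda o: not o.get("fracture"),
     "No fracture is identified."),
    (lambda o: o.get("alignment") == "satisfactory",
     "Alignment is satisfactory."),
]

# Per-radiologist style customization: here, just the report header phrasing.
STYLES = {
    "dr_a": "FINDINGS ({region}):",
    "dr_b": "Radiographic examination of the {region} demonstrates:",
}

def generate_report(obs, style_name="dr_a"):
    """Emit report text for every rule whose predicate matches the input."""
    sentences = [tmpl.format(**obs) for pred, tmpl in RULES if pred(obs)]
    header = STYLES[style_name].format(region=obs["region"].upper())
    return header + "\n" + " ".join(sentences)

print(generate_report(observations))
```

The real system's grammar was generative rather than a flat rule list, but the essential idea is the same: because every finding enters the pipeline as coded data, the same input can be rendered in any radiologist's preferred style just by swapping templates.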
A major company, prominent in the field, had taken the initiative to design mark-sense forms, incorporating input from many radiologists. A few college kids, in contact with one of GE's leading partners, created a working prototype of customizable report-generating software, along with a proposal to bring the project to production.
Just as a side effect, this project would have done something transformative: capture the diagnostic observations of radiologists in fully coded semantic form. This is a form of electronic medical record (EMR) that still doesn't exist, even today! For all the billions of dollars that have been spent on EMRs, supposedly to capture the data that will fuel the insights that will improve medical care, a great deal of essential medical observation is still recorded only in un-coded, narrative form -- including medical imaging reports!
The bottom line is that this project never got off the ground. Not because the software couldn't be written, but because ... well, you tell me.
See the next post for the continuation of this sad but typical story.