While the success patterns laid out in the prior posts in this series may seem clear in the abstract, applying them in practice can be hard, because nearly everyone who thinks or talks about AI (these sets overlap very little, sadly) takes a different approach.
http://www.blackliszt.com/2018/03/getting-results-from-ml-and-ai-1.html
http://www.blackliszt.com/2018/04/getting-results-from-ml-and-ai-2.html
http://www.blackliszt.com/2018/04/getting-results-from-ml-and-ai-3-closed-loop.html
So here are a couple of examples in healthcare to illustrate the principles.
The spectrum of problems
One useful way of understanding the winning patterns in AI is to understand the range of problems to which it may be applied. It's not difficult to arrange the problems as a spectrum. While there are many ways to characterize it -- here's a prior attempt of mine in the context of healthcare -- perhaps it's easiest to understand it in terms of the typical salary of the person whose work is being replaced or augmented by the AI technology.
At one end of the spectrum are low-paid people performing relatively mundane, repetitive tasks. These people have relatively little education and minimal certifications compared to those higher on the spectrum. Think back-office clerical staff.
At the other end of the spectrum are highly paid, educated and certified people performing what are understood to be highly skilled and consequential tasks. Think doctors.
The very name "artificial intelligence" tells you at which end of the spectrum AI is normally applied. The popular image, supported by the marketing of the relevant vendors, is that AI is amazingly smart, smarter than the smartest person in the room, just like the way that IBM's Watson beat the human champions playing "Jeopardy," and its predecessor Deep Blue beat the then-reigning world champion playing chess.
To put it plainly, while these achievements of Deep Blue and Watson were amazing, they were victories playing games. They were not victories "playing" in the real world. Games are 100% artificial. The data is 100% clear and unambiguous. There are no giant seas of uncertainty, ignorance or unknowability -- unlike the real world, which is chock full of them. Nonetheless, IBM and whole piles of people who self-identify as being "smart," and are widely perceived as being smart, jumped on the "AI does what smart people do" bandwagon.
This was and is incredibly stupid and 100% bone-headed wrong. Not only is it bone-headed in terms of intelligent application of AI, it violates simple common sense. If you knew a talented high school kid who played a mean game of chess, would you drop them into a hospital and give them a white coat? Even after the kid claimed to have read and understood all the medical literature?
The smart thing to do is to apply AI to tasks that are relatively simple for humans, at the "low" end of the spectrum, and see if you can get a win. If you can make it work, by all means graduate to the next more complicated thing. It turns out that replacing/augmenting human tasks that are mundane, "simple" and repetitive is amazingly challenging! Yes, even for super-advanced AI!
IBM Watson in healthcare
I know I've made some strong statements here. It's little old me vs. a multi-billion dollar effort by that world-wide leader in AI technology, IBM. Who's going to win that one? Turns out, it's easy. See this, for example.
IBM claims to get many billions of dollars in revenue from Watson. But everything about getting it to do what doctors can do has proven to be vastly more challenging than anyone thought, and its advice rarely makes any difference, even when it's not wrong. And this, after years of work by top doctors at top institutions doing their best to help IBM "train" it!
Here is a summary of the situation:
Let's note: the Watson effort is built on the most famous "smart computer" technology ever, funded to the tune of billions of dollars, with technology acquisitions and expert help from all corners. The "disappointing" outcomes are not the result of having picked the wrong algorithm or something easily fixed. The failures are a direct result of not following the success patterns described in the earlier posts of this series, combined with applying AI to the wrong end of the job-complexity spectrum described earlier in this post.
Olive in healthcare
If IBM can't manage to pull off a win in healthcare, after years of applying the most advanced AI and spending billions of dollars with the best help that money can buy, I guess it's impossible, right?
Wrong. IBM made a fatal strategic mistake. They used AI to attack the hardest problem of all, at the wrong end of the complexity spectrum. Has anyone done this the right way? Applied modern AI and related automation technology to the right end of the complexity spectrum? Yes! Olive has!
Olive is making a positive difference today (please note the use of the present tense here) in many hospital systems by reducing costs, reducing error rates and getting patient information where it needs to go more quickly and efficiently, saving time and aggravation of medical workers along the way. The money and time it saves in the back office may not seem glamorous or "leading edge," but every minute and dollar it saves is time and money that can go to making patients healthier, instead of disappearing down the "overhead" sink-hole.
Getting a pre-auth for a key procedure so it can be performed. Submitting all the right information so a claim can be paid. Getting information to pharmacies so patients can get the life-saving drugs they need. Getting all the information from incompatible, hard-to-navigate EMRs so doctors have everything they need to give patients the most appropriate care. These tasks are largely performed in windowless rooms far removed from patient settings by people who work hard at largely thankless jobs that aren't well-paid -- but are absolutely essential to providing care to patients.

And the tasks are harder than they look! Anyone who's spent any time with a modern EMR can't help but imagine the endless meetings at the software vendor, attended by skilled professionals trying to find yet more new ways to confuse and confound the users. And anyone who has dealt with getting insurance companies the information they demand can't help but think of a cranky three-year-old who somehow never gained emotional maturity as he grew up. Bottom line: this stuff is hard!
Olive gets it done, using an exotic collection of works-today technology, silently learning from the people who do the work today. And gets it done without having to upgrade or replace existing computer systems. Amazing.
The founders at Olive are doing AI the right way, attacking the right end of the complexity spectrum. They follow most of the rest of the success patterns laid out in the prior posts of this series, above all attention to data and detail, working from the bottom up in terms of algorithmic complexity, and using closed-loop operation. It's a hard problem, and it was hard work to get it done. But they did it, without the massive armada IBM fruitlessly assembled.
Disclosure: Olive is an investment of Oak HC/FT, the venture firm at which I'm tech partner.