AI is on a hot streak. Businesses claim they’re applying the latest AI and/or ML to their operations. There are articles and books about the wonders of AI and how it threatens to replace humans in all sorts of jobs. Awwwwk! What can we do??
Dilbert to the rescue
When in doubt about any important technology issue, for example whether a programmer should become a manager, it often helps to turn to the wisdom of Dilbert. Here he illustrates how AI will transform HR and management in technology firms.
AI has built a perfect data-driven case for the uselessness of the perpetually lazy employee Wally. How can Wally possibly defend himself?
As you see in the cartoon, Wally pulls off the feat with aplomb. First he points out a typical flaw in logic (how is Wally different from everyone else?), and second he identifies what statisticians call a confounding factor to account for his perfect record of failure. The boss’s method of job assignment was not, of course, included in the data processed by the AI algorithm, which also committed the classic error of mistaking correlation for causation.
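For readers who want to see the statistics point concretely, here is a minimal sketch of how a confounding factor works. It is not from the post; the employee names, probabilities, and task counts are made up for illustration. The idea: if the boss routes the hopeless tasks to Wally, a naive per-employee failure rate makes Wally look uniquely useless, while controlling for task difficulty makes the effect vanish.

```python
# Toy illustration (hypothetical numbers) of a confounding factor:
# task difficulty drives both who gets the task and whether it fails.
import random

random.seed(0)

employees = ["Wally", "Alice", "Dilbert"]
tasks = []
for _ in range(300):
    doomed = random.random() < 0.5          # half the tasks are near-impossible
    # Confounder: the boss gives almost every doomed task to Wally.
    who = "Wally" if (doomed and random.random() < 0.9) else random.choice(employees)
    # Outcome depends on the task, not the person: doomed tasks almost always fail.
    failed = random.random() < (0.95 if doomed else 0.10)
    tasks.append((who, doomed, failed))

def failure_rate(rows):
    return sum(f for _, _, f in rows) / max(len(rows), 1)

print("Naive per-employee failure rates:")
for e in employees:
    print(f"  {e}: {failure_rate([t for t in tasks if t[0] == e]):.0%}")

print("Controlling for the confounder (task difficulty):")
for e in employees:
    doomed_rows = [t for t in tasks if t[0] == e and t[1]]
    easy_rows = [t for t in tasks if t[0] == e and not t[1]]
    print(f"  {e}: doomed {failure_rate(doomed_rows):.0%}, normal {failure_rate(easy_rows):.0%}")
```

Run as-is, the first set of numbers makes Wally look far worse than anyone else; the second shows that, within each difficulty class, all three employees fail at roughly the same rate. That is Wally’s defense in a nutshell.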
The main point of the Dilbert cartoon is that AI made things worse in this case. Without Wally being given the chance to defend himself to a human being, he would have been out the door.
A little history
AI has been on the verge of ousting human beings from important jobs since the 1950s. First it was checkers and chess. By the mid-1960s, ELIZA was impressing people with its conversational, interactive abilities in English, and by about 1970 SHRDLU could not only converse but answer questions and perform actions in its world of blocks. By the early 1970s many experts were talking about how AI would soon rule the world.
I won’t go through all the history, but all the talk about what would happen “soon” faded away and was forgotten. Then some event like IBM’s Deep Blue beating the human world chess champion would capture the news, which would again fill with experts opining about what would happen “soon.”
We’re in a hot cycle again. People in traditional roles are again being cajoled into “thinking outside the box” and changing what they do to incorporate the wonders of AI. If they do it right, maybe they can dramatically improve things and put boatloads of people out of work! Thinking outside the box is easy enough to say, but making practical results happen with it is another thing altogether. It happens less often than you might think.
Funerals for misbegotten AI projects are extremely rare, in spite of the astounding death rate.
There is real value in AI
My undergraduate thesis at Harvard was about AI: an early attempt to design the data structures that would represent real-world knowledge in an intelligent robot. I focused on “simple” things like walking and catching a ball. In contrast, the vast majority of AI efforts, then and since, have focused on the hardest and most uniquely human things people do.
Where AI and ML have made the most headway is in performing tasks that most humans find repetitive. Even before you get to that, there is the perpetually challenging job of gathering all the required data, and even then it is hard to get a real-world system working. These posts illustrate the main strategies for getting good results.
Meanwhile, the current furor about AI’s imminent takeover of the world is likely to keep making heavy use of the future tense, gradually fading as it has in the past, along with the broken promises and failed predictions.