This post dives more deeply into the issue of conceptual UI evolution introduced in this post. Understanding UI conceptual evolution, which in practice is a spectrum, enables you to build applications whose UIs produce dramatically better results than the competition's: getting more done, more quickly, and at lower cost.
Whose perspective?
The least evolved UI concept looks at things completely from the point of view of the computer – what do I (the computer, which is really the person writing the application) need to get my job done? In this concept, the UI job is conceived as the computer getting things from the user, and protecting the computer from the user’s mistakes. This was, at one time, the prevailing concept for user-machine interactions, and remains surprisingly widespread, although few people would admit to thinking this way today.
At the other end of the spectrum, the software designer looks at things completely from the point of view of the human user – what do I (the human user, as imagined by the person writing the application) need right now and what can I do? In this concept, the UI job is conceived as the human getting things from the computer, directing it to do things, and being presented with options, possibilities and help that are as close and immediate as possible to what the user is probably trying to do.
Obviously, the technical side of UIs has played a role in what's possible here. In the early days of computers, we were glad to have them, and decks of cards and batch processing were far better than the alternatives. Computer time was rare and valuable, and people were cheap by comparison, so it just made sense to look at things from the computer's point of view.
The equation reversed long ago. Most computers spend most of their time idling, waiting in anxious anticipation for a user to do something, anything, just GET ME OUT OF THIS IDLE LOOP!! Sorry. That was the computer in me breaking out. Normally under control. Sorry.
Now it's entirely feasible to construct user interfaces from the human's point of view.
Who’s in charge?
There are some cases where the purpose of a program (and its UI) is entirely to be at the service of the user, with essentially no external constraints or advice to be given. In a wide variety of practical cases, however, there are lots of people whose concerns need to be reflected in the way the computer is used. At one end of this spectrum, the user is in charge. If the user is in charge and we want to make sure the user does a certain thing under certain circumstances (think of a customer service call center environment), we give the user extensive training, and analysts of all sorts study the results, so that each kind of customer is responded to appropriately in each situation. We monitor the user's calls, look at what they entered into the computer, and work on changing what they do through group and individual meetings, training sessions, and so on. All our effort is focused on the user, who clearly controls the computer; if we want things to be different, we go to the center of power: the user.
At the other end of this spectrum, the computer is in charge, in the sense that all major decisions and initiatives originate in the software. Beyond basic how-to-use-a-computer training, the user needs no special preparation – everything you want the user to do is in the software, from what they should work on next to how they should respond to a particular request. Everyone who would have tried to influence the users directly now tries to put their knowledge into the software, which applies it and delivers instructions to users as appropriate. Taken to the extreme, this concept makes the human operator little more than a complex and expensive media translation device: getting information the computer can't get directly, and sending information to places the computer can't reach directly.
So what does this mean in reality? It varies from application to application, but the net effect is always the same – the computer operator needs little training in how to respond to customers under different circumstances, because that information is all in the software. The operator mostly needs to learn how to take his cues and direction from the software, which provides a constant stream of what you might think of as “just in time training.” The user has no way of knowing if what he’s being asked to do or say has been done by many people for years, or is a new instruction just for this unusual situation.
This approach enables a revolution in how organizations respond to their customers. It makes complete personalization possible. It enables you to respond one way to a high-value customer in a situation, and another way to a low-value customer in the identical situation. It also enables you to make nearly immediate, widespread changes to the way you respond to customers, because you have a central place to enter the new "just in time" instructions and don't have to go through the painful process of building customer service training materials, training the trainers, and getting everyone into classes, only to end up with inconsistent and incomplete execution of your intentions.
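To make this concrete, here is a minimal sketch in TypeScript of what that central place for instructions might look like. Every name in it (ResponseRule, instructionFor, the sample situations and tiers) is hypothetical, invented for illustration; the point is only that responses live in one editable table rather than in operators' heads.

```typescript
// Hypothetical sketch: responses live in a central, editable table keyed by
// situation and customer tier. Changing behavior means editing one record,
// not rebuilding training materials and re-training every operator.
type CustomerTier = "high-value" | "low-value";

interface ResponseRule {
  situation: string;    // e.g. "late-fee-dispute"
  tier: CustomerTier;
  instruction: string;  // the "just in time" instruction shown to the operator
}

// In practice this would be a database table maintained by the customer
// service group; a new row takes effect for every operator immediately.
const rules: ResponseRule[] = [
  { situation: "late-fee-dispute", tier: "high-value", instruction: "Waive the fee and apologize for the trouble." },
  { situation: "late-fee-dispute", tier: "low-value",  instruction: "Explain the fee policy and offer a payment plan." },
];

function instructionFor(situation: string, tier: CustomerTier): string {
  const rule = rules.find(r => r.situation === situation && r.tier === tier);
  return rule ? rule.instruction : "Escalate to a supervisor."; // safe default for unlisted cases
}
```

The identical customer event yields different instructions purely from data – instructionFor("late-fee-dispute", "high-value") versus instructionFor("late-fee-dispute", "low-value") – and adding or changing a row changes behavior everywhere, immediately.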
While I've discussed this concept in terms of a call center application, exactly the same idea applies when the system interacts with people directly, with no operator in between.
The system is really in charge
Let’s understand this way of building a UI with the “system in charge” a little better, since many people are unfamiliar with it.
The first step is to put into the system all the real knowledge about how an operator should respond in each situation, and to enable changes to be made at will. The next step is to change operator/user training so that they understand how to map from the unstructured interactions they have to the choices presented by the system; normally, you have to train them both to do this mapping and to know how to respond. Finally, you can provide a set of pre-recorded inputs to the operators and capture their responses, giving them practice in applying their training before they are inflicted on actual people.
Instead of thinking about the UI itself, think about the training that is normally required to get people to use an application, to monitor their use of it on an ongoing basis, and finally to make changes to the application and how people use it. You can start by thinking of the training as being like a wizard mode of the client, but with a training/case-based spin. Your trainers could build a big branching tree of what people on the other side of the phone can say, and how we should respond. All the content would be supplied by the training/customer service group. This would operate as the default mode of the application until an operator has “passed,” and optionally beyond.
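For concreteness, here is a sketch of what one entry in that branching tree might look like, again in TypeScript with made-up names; the actual representation would of course be whatever the client and its authoring tools use.

```typescript
// Hypothetical shape of one entry in the trainers' branching tree: each item
// pairs the things a customer can say with our suggested response, an optional
// field the operator should be directed to, and the branches that follow.
interface CustomerItem {
  label: string;            // the item shown on the operator's on-screen list
  recordings: string[];     // recorded variations of the item, not identical to the label
  suggestedReply: string;   // what the operator should say back
  highlightField?: string;  // UI field to highlight, if the reply involves data entry
  next: CustomerItem[];     // where the conversation can go from here
}

// All of this content would be supplied by the training/customer service group.
const openingItems: CustomerItem[] = [
  {
    label: "Customer asks about their balance",
    recordings: ["balance-ask-1.wav", "balance-ask-2.wav"],
    suggestedReply: "Happy to help -- may I have your account number?",
    highlightField: "accountNumber",
    next: [ /* ...further branches authored by the trainers... */ ],
  },
];
```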
On one part of the screen could be a list of things customers can say to us. For each item, there would be one or more variations, not identical to the text, that would be recorded. In “pure” training mode, the application would randomly pick one, and the PC would play the recording. The operator would pick the item on the list of potential customer sayings that he felt was closest, and the system would then provide a suggested reply for the operator to give, and (if appropriate) highlight a field and give the operator a directive to interact with that field. This would continue cycling until the transaction was completed, abandoned or otherwise ended.
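Under the same assumptions, that cycle could be sketched like this; playRecording, promptChoice, and showDirective stand in for client facilities I'm positing, not describing.

```typescript
// Sketch of the "pure" training-mode cycle, assuming the CustomerItem tree
// above plus three hypothetical client facilities.
declare function playRecording(clip: string): void;                  // plays a stored audio clip on the PC
declare function promptChoice(labels: string[]): Promise<number>;    // operator picks the closest list item
declare function showDirective(reply: string, field?: string): void; // suggested reply + field highlight

async function runPureTraining(items: CustomerItem[]): Promise<void> {
  while (items.length > 0) {
    // The application randomly picks an item and one of its recorded variations.
    const target = items[Math.floor(Math.random() * items.length)];
    const clip = target.recordings[Math.floor(Math.random() * target.recordings.length)];
    playRecording(clip);

    // The operator picks the item he feels is closest to what he heard...
    const chosenIndex = await promptChoice(items.map(i => i.label));
    const chosen = items[chosenIndex];

    // ...and the system answers with a suggested reply and, if appropriate,
    // a highlighted field for the operator to interact with.
    showDirective(chosen.suggestedReply, chosen.highlightField);

    // Cycle until the transaction is completed, abandoned, or otherwise ended.
    items = chosen.next;
  }
}
```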
In “assisted” training mode, the list of customer requests, suggested replies and field highlights would remain, with the customer role being provided by a trainer or by real customers doing real transactions. In this case, a recording could be made of the conversation between operator and customer for additional checking or potentially for dispute resolution.
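Assisted mode is then the same cycle minus the playback, since the customer side is live; the recordCall function below is equally hypothetical.

```typescript
// Sketch of "assisted" training mode under the same assumptions: the customer
// role is played by a trainer or a real caller, so nothing is played back.
declare function recordCall(sessionId: string): void; // hypothetical: records the live conversation

async function runAssistedTraining(sessionId: string, items: CustomerItem[]): Promise<void> {
  recordCall(sessionId); // kept for additional checking or dispute resolution
  while (items.length > 0) {
    // The operator maps what he actually heard onto the item list...
    const chosenIndex = await promptChoice(items.map(i => i.label));
    const chosen = items[chosenIndex];

    // ...and still gets the suggested reply and field highlight as before.
    showDirective(chosen.suggestedReply, chosen.highlightField);
    items = chosen.next;
  }
}
```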
Obviously, the application needs to be extended to provide this framework, and then to download the content. But the advantage is completely realistic, integrated training. If we make changes to the application, we can automatically throw clients into this training mode to give them “just in time” training on the new features.
For what it's worth, this is not a new idea. For example, operator training is a big issue in large-scale call center environments. Over the years, the best of those environments have evolved from classroom training to videos, to on-screen help, to something like what I've described, which is now the state of the art in large-scale call centers and is supported by major vendors of call center software. It's the best because it's completely grounded in reality and is an extension of the actual software operators have to use. The power of the approach shows most clearly in post-training changes and updates. Obviously, with training this integrated, it's pretty easy to direct a client in training to a training server and check the data he's entering.
Now that voice bots are becoming available, this approach to building UIs is all the more important and valuable. In any case, optimizing the work of the human is always in order, as spelled out in detail in this post. This post gives a detailed description of the huge project at Sallie Mae in which I played a part in the 1990's; it describes the 10X gains that can be achieved by taking UI optimization seriously. The main principles of human optimization in the UI are largely ignored by UI designers, making things many whole-number factors less efficient than they could be. Amazing but typical. I've gone into just how and why this "I'd rather be stupid and get crappy results" approach to building software in general and UIs in particular persists in this post, in which I also describe the personal evolution that led me to these thoughts.