HUMAN-COMPUTER INTERACTION
SECOND EDITION
Card, Moran and Newell empirically validated KLM against a range of systems, both keyboard- and mouse-based, and a wide selection of tasks. The predictions were found to be remarkably accurate, with an error of about 20%. KLM is thus one of the few models capable of giving accurate quantitative predictions of performance. However, its range of application is correspondingly narrow: it tells us a lot about the micro-interaction, but not about the larger-scale dialog.
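To give a concrete flavour of such a prediction, the sketch below sums standard operator times for a task encoded as a string of KLM operators. The operator times are the commonly quoted approximate values; the task encoding itself is our own illustration, not one of Card, Moran and Newell's examples.

    # A minimal sketch of a Keystroke-Level Model prediction in Python.
    # The operator times are approximate textbook values: treat them as
    # illustrative rather than definitive.
    OPERATOR_TIMES = {
        'K': 0.28,   # press a key (average skilled typist)
        'B': 0.10,   # press or release a mouse button
        'P': 1.10,   # point with the mouse at a target on screen
        'H': 0.40,   # home hands between keyboard and mouse
        'M': 1.35,   # mental preparation
    }

    def klm_predict(operators):
        """Sum operator times for an encoding such as 'HMPBB'."""
        return sum(OPERATOR_TIMES[op] for op in operators)

    # Hypothetical encoding of selecting a menu item: home on the mouse,
    # mentally prepare, point at the item, then click (press + release).
    print(f"{klm_predict('HMPBB'):.2f} s")   # -> 3.05 s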
In Chapter 2, we saw that a range of pointing devices exists in addition to the mouse. These devices are often considered logically equivalent if they make the same inputs available to the application: so long as you can select a point on the screen, they are all the same. However, these different devices -- mouse, trackball, light pen -- feel very different. Although the devices are similar from the application's viewpoint, they have very different sensory--motor characteristics.
Buxton has developed a simple model of input devices [33], the three-state model, which captures some of these crucial distinctions. He begins by looking at a mouse. If you move it with no buttons pushed, it normally moves the mouse cursor about. This tracking behaviour is termed state 1. Depressing a button over an icon and then moving the mouse will often result in an object being dragged about. This he calls state 2 (see Figure 6.1).
If instead we consider a light pen with a button, it behaves just like a mouse when it is touching the screen. When its button is not depressed, it is in state 1, and when its button is down, state 2. However, the light pen has a third state, when the light pen is not touching the screen. In this state the system cannot track the light pen's position. This state is called state 0 (see Figure 6.2).
A touchscreen is like the light pen with no button. While the user is not touching the screen, the system cannot track the finger -- that is, state 0 again. When the user touches the screen, the system can begin to track -- state 1. So a touchscreen is a state 0--1 device whereas a mouse is a state 1--2 device. As there is no difference between a state 0--2 and a state 0--1 device, there are only the three possibilities we have seen. The only additional complexity is if the device has several buttons, in which case we would have one state 2 for each button: 2(left), 2(middle) and 2(right).
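The model can be written down directly as a small state machine. The Python sketch below encodes the transitions for the three devices just described; the event names are our own illustration, not part of Buxton's notation.

    # Buxton's three-state model as transition tables.
    # States: 0 = out of range, 1 = tracking, 2 = dragging.
    MOUSE = {                      # a state 1--2 device: never out of range
        (1, 'button press'):   2,
        (2, 'button release'): 1,
    }
    TOUCHSCREEN = {                # a state 0--1 device: no button
        (0, 'touch'): 1,
        (1, 'lift'):  0,
    }
    LIGHT_PEN = {                  # all three states
        (0, 'touch screen'):   1,
        (1, 'leave screen'):   0,
        (1, 'button press'):   2,
        (2, 'button release'): 1,
    }

    def step(device, state, event):
        """Return the new state; events with no transition leave it alone."""
        return device.get((state, event), state)

    s = step(LIGHT_PEN, 0, 'touch screen')    # -> 1 (tracking)
    s = step(LIGHT_PEN, s, 'button press')    # -> 2 (dragging)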
One use of this classification is to look at different pointing tasks, such as icon selection or line drawing, and see what state 0--1--2 behaviour they require. We can then see whether a particular device can support the required task. If we have to use an inadequate device, we can use keyboard keys to add device states. For example, with a touchscreen, we may nominate the escape key to be the 'virtual' mouse button whilst the user's finger is on the screen. Although mixing keyboard and mouse actions in this way is normally bad practice, it is obviously necessary on occasions.
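In the same spirit, checking device adequacy reduces to comparing the set of states a device can reach with the set a task requires. A minimal sketch, in which the particular device and task state sets are again only illustrative:

    # Each device is summarized by the states it can reach, and each
    # task by the states it needs (both sets are illustrative).
    DEVICE_STATES = {
        'mouse':       {1, 2},
        'trackball':   {1, 2},
        'touchscreen': {0, 1},
        'light pen':   {0, 1, 2},
    }
    TASK_STATES = {
        'icon selection': {1, 2},   # track to the icon, then click and drag
        'line drawing':   {1, 2},   # position the pen, then draw button-down
    }

    def adequate(device, task):
        """A device supports a task if it can reach every required state."""
        return TASK_STATES[task] <= DEVICE_STATES[device]

    print(adequate('mouse', 'icon selection'))        # True
    print(adequate('touchscreen', 'icon selection'))  # False: no state 2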
At first, the model appears to characterize the states of the device by the inputs available to the system. From this perspective, state 0 is clearly different from states 1 and 2. However, if we look at the state 1--2 transition, we see that it is symmetric with respect to the two states. In principle there is no reason why a program should not do simple mouse tracking whilst in state 2 and drag things about in state 1 -- that is, no reason until you want to type something! The way we can tell state 1 from state 2 is by the activity of the user: state 2 requires a button to be held down, whereas state 1 is one of relative relaxation (whilst still requiring hand--eye coordination for mouse movement). There is a similar difference in tension between state 0 and state 1.
This difference in tension is not just a matter of feel. Because the user is holding a button down in state 2, the hand is tense, and pointing accuracy and speed may therefore differ between the two states. Experiments to calculate Fitts' law constants separately in states 1 and 2 have shown that these differences do exist [146]. Table 6.2 shows the results obtained for a mouse and a trackball,
giving a further revised time for the CLOSE-METHOD of 2.93 seconds using a mouse and 3.91 seconds using a trackball.
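The movement times behind such figures come from Fitts' law. The sketch below uses the common formulation MT = a + b log2(distance/size + 1); the constants are placeholders only, not the measured values from Table 6.2, which should be substituted for a real calculation.

    import math

    def fitts_time(a, b, distance, size):
        """Movement time = a + b * log2(distance/size + 1),
        in the same time units as the constants a and b."""
        return a + b * math.log2(distance / size + 1)

    # Placeholder constants in milliseconds -- NOT the values from
    # Table 6.2. Dragging (state 2) constants typically differ from
    # pointing (state 1) constants for the same device.
    A_POINT, B_POINT = 100, 200   # hypothetical state-1 constants
    A_DRAG,  B_DRAG  = 150, 250   # hypothetical state-2 constants

    d, s = 80, 8   # move 80 mm to a target 8 mm across
    print(f"point: {fitts_time(A_POINT, B_POINT, d, s):.0f} ms")  # ~792 ms
    print(f"drag:  {fitts_time(A_DRAG,  B_DRAG,  d, s):.0f} ms")  # ~1015 ms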
Another obvious stopping point is where the task contains complex motor responses (like mouse movement) or where it involves internal decision making. In the first case, decomposition would not be productive; explaining how such actions are performed is unlikely to be either accurate or useful. In the second case, we would expand if the decision making were related to external actions, such as looking up documentation or reading instruments, but not where the activity is purely cognitive. A possible exception to this would be if we were planning to build a decision support system, in which case we may want to understand the way someone thought about a problem in order to build tools to help. However, it is debatable whether HTA is the appropriate technique in this case.