HUMAN-COMPUTER INTERACTION
SECOND EDITION
TEXT refers to the text of the manuscript that is being edited and CURSOR refers to the insertion cursor on the screen. Of course, these items are not actually located in working memory -- they are external to the user -- but we assume that knowledge from observing them is stored in the user's working memory.
The location (5,23) is the line and column of the typing mistake where the space is required, whereas the current cursor position is at line 8 and column 7. This information is, of course, acquired into the user's working memory by looking at the screen. Of the four rules above (SELECT-INSERT-SPACE, INSERT-SPACE-DONE, INSERT-SPACE-1 and INSERT-SPACE-2), only the first can fire. The condition for SELECT-INSERT-SPACE is:
    (AND (TEST-GOAL perform unit task)              true because (GOAL perform unit task) is in w.m.
         (TEST-TEXT task is insert space)           true because (TEXT task is insert space) is in w.m.
         (NOT (TEST-GOAL insert space))             true because (GOAL insert space) is not in w.m.
         (NOT (TEST-NOTE executing insert space)))  true because (NOTE executing insert space) is not in w.m.
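To see how this matching works mechanically, here is a minimal Python sketch of a production system testing the SELECT-INSERT-SPACE condition against working memory. The tuple encoding of working-memory items, and the translation of the rule into boolean tests, are our own illustrative assumptions, not CCT's actual machinery.

    # Minimal sketch of production-rule matching against working memory.
    # The tuple representation of w.m. items is an illustrative assumption.

    working_memory = {
        ("GOAL", "perform unit task"),
        ("TEXT", "task is insert space"),
        ("TEXT", "task is at 5 23"),
        ("CURSOR", "8 7"),
    }

    def test(item):
        """A TEST-* condition holds when the item is in working memory."""
        return item in working_memory

    def select_insert_space_fires():
        return (test(("GOAL", "perform unit task"))
                and test(("TEXT", "task is insert space"))
                and not test(("GOAL", "insert space"))
                and not test(("NOTE", "executing insert space")))

    print(select_insert_space_fires())   # True: the rule can fire
    # Firing adds items to working memory, so on the next recognize-act
    # cycle the NOT conditions block this rule and the INSERT-SPACE-*
    # rules match instead.
    working_memory.add(("GOAL", "insert space"))
    working_memory.add(("NOTE", "executing insert space"))
    print(select_insert_space_fires())   # False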
The rules in CCT need not represent error-free performance. They can be used to explain error phenomena, though they cannot predict them. For instance, the rules above for inserting a space are 'buggy' -- they do not check the editor's mode. Imagine you have just typed the 'cognitive' in 'cognitivecomplexity theory' (with the space missing). You think for a few minutes and then look again at the
On the positive side, the conceptual framework of goal hierarchies and user goal stacks can be used to express interface issues not directly addressed by the notations above. For instance, early automated teller machines gave customers their money before returning their cards, and many customers consequently left their cards behind, despite on-screen messages telling them to wait. This is referred to as a problem of closure. The user's principal goal is to get money; when that goal is satisfied, the user does not complete or close the various subtasks which still remain open:
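A toy simulation makes the closure problem concrete. In the sketch below (our own illustrative framing, not a published model), the principal goal closes as soon as the cash is taken, and any subtask the machine poses after that point is at risk of being forgotten.

    # Toy goal-closure simulation of the ATM example. The task
    # decomposition and the 'forgetting' rule are illustrative assumptions.

    def forgotten_subtasks(subtask_order):
        """Return the subtasks posed after the principal goal has closed."""
        goal_open = True
        forgotten = []
        for subtask in subtask_order:
            if not goal_open:
                forgotten.append(subtask)   # goal already closed: step at risk
            if subtask == "take cash":
                goal_open = False           # principal goal satisfied here
        return forgotten

    # Early ATMs: cash first, card second -- the card is at risk.
    print(forgotten_subtasks(["take cash", "take card"]))   # ['take card']
    # Returning the card first leaves nothing open after closure.
    print(forgotten_subtasks(["take card", "take cash"]))   # []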
Another problem for grammars is the lowest-level lexical structure. Pressing a cursor key is a reasonable lexeme, but moving a mouse one pixel is less sensible. In addition, pointer-based dialogs are more display oriented: clicking at a particular point on the screen has a meaning that depends on the current screen contents. This problem can be partially resolved by regarding operations such as 'select region of text' or 'click on quit button' as the terminals of the grammar. If this approach is taken, the detailed mouse movements and the parsing of mouse events in the context of display information (menus etc.) are abstracted away.
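To illustrate, here is a small Python sketch of a dialog grammar whose terminals are abstract actions such as 'select region of text' rather than raw mouse events. The grammar rules, action names and recognizer are all our own illustrative assumptions.

    # Sketch of a dialog grammar whose terminals are abstract actions,
    # not raw mouse movements. Grammar and action names are illustrative.

    GRAMMAR = {
        "edit-session": [["edit-action", "edit-session"], ["quit"]],
        "edit-action":  [["select-region", "apply-command"]],
        "quit":         [["click-quit-button"]],
    }

    TERMINALS = {"select-region", "apply-command", "click-quit-button"}

    def parses(symbol, tokens):
        """Return the remaining tokens if `symbol` matches a prefix, else None."""
        if symbol in TERMINALS:
            return tokens[1:] if tokens and tokens[0] == symbol else None
        for production in GRAMMAR[symbol]:
            rest = tokens
            for part in production:
                rest = parses(part, rest)
                if rest is None:
                    break
            else:
                return rest
        return None

    dialog = ["select-region", "apply-command", "click-quit-button"]
    print(parses("edit-session", dialog) == [])   # True: a legal dialog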
The times for the other operators are obtained from empirical data. The keying time obviously depends on the typing skill of the user, and different times are thus used for different users. Pressing a mouse button is usually quicker than typing (especially for two-finger typists), and a more accurate time prediction can be made by separating out the button presses B from the rest of the keystrokes K. The pointing
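To illustrate how these operator times combine, here is a small Python sketch of a keystroke-level calculation. The operator values are the commonly cited empirical averages (K = 0.2 s for an average skilled typist, B = 0.1 s, P = 1.1 s, H = 0.4 s, M = 1.35 s), and the example task breakdown is our own assumption.

    # Keystroke-level model sketch: predicted execution time is the sum
    # of the operator times. Values are commonly cited averages; the
    # task below (point, click, then type five characters) is illustrative.

    OPERATOR_TIME = {
        "K": 0.2,   # keystroke, average skilled typist (seconds)
        "B": 0.1,   # mouse-button press or release
        "P": 1.1,   # point with the mouse
        "H": 0.4,   # home hands between mouse and keyboard
        "M": 1.35,  # mental preparation
    }

    def klm_time(operators):
        return sum(OPERATOR_TIME[op] for op in operators)

    # Point at the insertion position, click, home to the keyboard,
    # then type five characters:
    task = ["M", "P", "B", "B", "H"] + ["K"] * 5
    print(f"predicted time: {klm_time(task):.2f} s")   # 4.05 s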
In Chapter 2, we saw that a range of pointing devices exists in addition to the mouse. Often these devices are considered logically equivalent if the same inputs are available to the application: so long as you can select a point on the screen, they are all the same. However, these different devices -- mouse, trackball, light pen -- feel very different. Although the devices are similar from the application's viewpoint, they have very different sensory-motor characteristics.
If instead we consider a light pen with a button, it behaves just like a mouse when it is touching the screen. When its button is not depressed, it is in state 1, and when its button is down, state 2. However, the light pen has a third state, when the light pen is not touching the screen. In this state the system cannot track the light pen's position. This state is called state 0 (see Figure 6.2).
A touchscreen is like the light pen with no button. While the user is not touching the screen, the system cannot track the finger -- that is, state 0 again. When the user touches the screen, the system can begin to track -- state 1. So a touchscreen is a state 0-1 device whereas a mouse is a state 1-2 device. As there is no difference between a state 0-2 and a state 0-1 device, there are only the three possibilities we have seen. The only additional complexity is if the device has several buttons, in which case we would have one state for each button: 2_left, 2_middle and 2_right.
One use of this classification is to look at different pointing tasks, such as icon selection or line drawing, and see what state 0-1-2 behaviour they require. We can then see whether a particular device can support the required task. If we have to use an inadequate device, keyboard keys can be used to add device states. For example, with a touchscreen, we may nominate the Escape key to be the 'virtual' mouse button whilst the user's finger is on the screen. Although mixing keyboard and mouse keys in this way is normally a bad habit, it is obviously necessary on occasion.
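We can make this compatibility check concrete by representing each device as the set of state transitions it supports and each task as the set it requires. The following Python sketch does this; the particular transition sets, and the 'virtual button' augmentation via the Escape key, are our own illustrative encoding of the classification.

    # Sketch: a device supports a task if it can make every state
    # transition the task requires. Transition sets are illustrative.

    DEVICE_TRANSITIONS = {
        "mouse":       {(1, 2), (2, 1)},                  # state 1-2 device
        "touchscreen": {(0, 1), (1, 0)},                  # state 0-1 device
        "light pen":   {(0, 1), (1, 0), (1, 2), (2, 1)},  # states 0, 1 and 2
    }

    TASK_TRANSITIONS = {
        "icon selection": {(1, 2), (2, 1)},   # press and release over the icon
        "line drawing":   {(1, 2), (2, 1)},   # drag while in state 2
    }

    def supports(device, task, extra=frozenset()):
        """True if the device, plus any keyboard-added transitions, can do the task."""
        return TASK_TRANSITIONS[task] <= DEVICE_TRANSITIONS[device] | extra

    print(supports("touchscreen", "icon selection"))        # False
    # Nominate Escape as a 'virtual' button while the finger is down:
    virtual_button = {(1, 2), (2, 1)}
    print(supports("touchscreen", "icon selection", virtual_button))  # True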