HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search results for screens
Showing 121 to 130 of 281


Chapter 6 Models of the user in design 6.7.2 Cognitive complexity theory Page 237

TEXT refers to the text of the manuscript that is being edited and CURSOR refers to the insertion cursor on the screen. Of course, these items are not actually located in working memory -- they are external to the user -- but we assume that knowledge from observing them is stored in the user's working memory.


Chapter 6 Models of the user in design 6.7.2 Cognitive complexity theory Page 237

The location (5,23) is the line and column of the typing mistake where the space is required. However, the current cursor position is at line 8 and column 7. This is of course acquired into the user's working memory by looking at the screen. Looking at the four rules above (SELECT-INSERT-SPACE, INSERT-SPACE-DONE, INSERT-SPACE-1 and INSERT-SPACE-2), only the first can fire. The condition for SELECT-INSERT-SPACE is:

(AND (TEST-GOAL perform unit task)
         true because (GOAL perform unit task) is in w.m.
     (TEST-TEXT task is insert space)
         true because (TEXT task is insert space) is in w.m.
     (NOT (TEST-GOAL insert space))
         true because (GOAL insert space) is not in w.m.
     (NOT (TEST-NOTE executing insert space)))
         true because (NOTE executing insert space) is not in w.m.
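The condition check above can be sketched in code. This is an illustrative Python sketch, not part of CCT itself: working memory is modelled as a set of tuples, and a rule can fire when all its positive conditions are present and all its negated conditions are absent.

```python
# Illustrative sketch of CCT-style rule matching (not the authors' code).
# Working memory is a set of (type, content) tuples.

working_memory = {
    ("GOAL", "perform unit task"),
    ("TEXT", "task is insert space"),
    ("TEXT", "task is at 5 23"),
    ("CURSOR", "8 7"),
}

# The condition of SELECT-INSERT-SPACE, as given in the text.
select_insert_space = {
    "present": [("GOAL", "perform unit task"),
                ("TEXT", "task is insert space")],
    "absent":  [("GOAL", "insert space"),
                ("NOTE", "executing insert space")],
}

def can_fire(rule, wm):
    """A rule fires when every positive condition is in working memory
    and every negated condition is not."""
    return (all(item in wm for item in rule["present"])
            and all(item not in wm for item in rule["absent"]))

print(can_fire(select_insert_space, working_memory))  # True
```

Once the rule fires and (GOAL insert space) is added to working memory, the same check fails, which is what stops SELECT-INSERT-SPACE firing repeatedly.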

Chapter 6 Models of the user in design 6.7.2 Cognitive complexity theory Page 238

The rules in CCT need not represent error-free performance. They can be used to explain error phenomena, though they cannot predict them. For instance, the rules above for inserting a space are 'buggy' -- they do not check the editor's mode. Imagine you had just typed the 'cognitive' in 'cognitivecomplexity theory' (with the space missing); you think for a few minutes, then look again at the screen and notice that the space is missing. The cursor is at the correct position for the space, so rule INSERT-SPACE-1 never gets fired and we go directly through the sequence: SELECT-INSERT-SPACE, INSERT-SPACE-2 then INSERT-SPACE-DONE. You type 'i', a space and then escape. However, the 'i' assumes that you are in vi's command mode, and is the command to move the editor into insert mode. If, however, after typing 'cognitive' you had not typed escape to get you back into command mode, the whole sequence would be done in insert mode. The text would read: 'cognitivei complexity theory'.
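The mode error can be reproduced with a toy model of vi's two modes. This is a minimal sketch (vi's real behaviour is far richer): in command mode, 'i' switches to insert mode; in insert mode, printable keys are inserted at the cursor and escape returns to command mode.

```python
def type_keys(text, cursor, keys, mode="command"):
    """Toy two-mode editor: in command mode 'i' enters insert mode;
    in insert mode keys are inserted and 'ESC' returns to command mode."""
    for key in keys:
        if mode == "command":
            if key == "i":
                mode = "insert"
        else:  # insert mode
            if key == "ESC":
                mode = "command"
            else:
                text = text[:cursor] + key + text[cursor:]
                cursor += 1
    return text

before = "cognitivecomplexity theory"
# The cursor sits between 'cognitive' and 'complexity' (position 9).
print(type_keys(before, 9, ["i", " ", "ESC"]))                 # cognitive complexity theory
print(type_keys(before, 9, ["i", " ", "ESC"], mode="insert"))  # cognitivei complexity theory
```

The same keystroke sequence produces the intended edit in command mode and the buggy text in insert mode, exactly the mode dependence the CCT rules fail to capture.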


Chapter 6 Models of the user in design 6.7.3 Problems and extensions of goal hierarchies Page 240

On the positive side, the conceptual framework of goal hierarchies and user goal stacks can be used to express interface issues not directly addressed by the notations above. For instance, early automated teller machines gave the customers the money before returning their cards. Unfortunately, this led to many customers leaving their cards behind, despite on-screen messages telling them to wait. This is referred to as a problem of closure. The user's principal goal is to get money; when that goal is satisfied, the user does not complete or close the various subtasks which still remain open:
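The closure problem can be pictured with a simple goal stack. This is a hypothetical sketch (the task names are invented for illustration): once the principal goal is satisfied, any subtasks still pending on the stack risk being abandoned.

```python
# Hypothetical goal-stack sketch of the ATM closure problem.
# The design flaw in early ATMs: the principal goal (take cash) is
# satisfied while 'retrieve card' is still pending on the stack.

def atm_session(order):
    """Run the subtasks in order; stop when the principal goal is met,
    returning the tasks done and any subtasks left open."""
    stack = list(reversed(order))   # top of stack = next subtask
    done = []
    while stack:
        task = stack.pop()
        done.append(task)
        if task == "take cash":
            # Closure: the user's goal is satisfied and they may walk
            # away, abandoning whatever subtasks remain.
            return done, stack
    return done, stack

# Early ATMs: cash before card -- 'retrieve card' is left open.
print(atm_session(["insert card", "enter PIN", "take cash", "retrieve card"]))
# Returning the card first leaves nothing open at the point of closure.
print(atm_session(["insert card", "enter PIN", "retrieve card", "take cash"]))
```

Reordering the subtasks so that nothing remains open when the principal goal completes is exactly the fix modern ATMs adopted.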


Chapter 6 Models of the user in design 6.9 The challenge of display-based systems Page 245

Another problem for grammars is the lowest-level lexical structure. Pressing a cursor key is a reasonable lexeme, but moving a mouse one pixel is less sensible. In addition, pointer-based dialogs are more display oriented. Clicking a cursor at a particular point on the screen has a meaning dependent on the current screen contents. This problem can be partially resolved by regarding operations such as 'select region of text' or 'click on quit button' as the terminals of the grammar. If this approach is taken, the detailed mouse movements and parsing of mouse events in the context of display information (menus etc.) are abstracted away.
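Treating operations such as 'click on quit button' as terminals amounts to an event-abstraction layer: raw pointer events are interpreted against the current display contents to yield grammar terminals. A sketch, with region names and coordinates invented for illustration:

```python
# Hypothetical abstraction of raw mouse clicks into grammar terminals.
# The regions come from the current display contents, which is exactly
# why plain grammars struggle with display-based dialogs.

display_regions = {
    "quit_button": (0, 0, 40, 20),     # (x, y, width, height)
    "text_area":   (0, 30, 400, 300),
}

def to_terminal(click_x, click_y):
    """Map a raw click to a grammar terminal, given the display state."""
    for name, (x, y, w, h) in display_regions.items():
        if x <= click_x < x + w and y <= click_y < y + h:
            return f"CLICK-{name.upper()}"
    return "CLICK-BACKGROUND"

print(to_terminal(10, 10))    # CLICK-QUIT_BUTTON
print(to_terminal(50, 100))   # CLICK-TEXT_AREA
```

Everything below this layer -- individual mouse movements, menu tracking and so on -- is abstracted away, as the text suggests.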


Chapter 6 Models of the user in design 6.10.1 Keystroke-level model Page 247

The times for the other operators are obtained from empirical data. The keying time obviously depends on the typing skill of the user and different times are thus used for different users. Pressing a mouse button is usually quicker than typing (especially for two-finger typists), and a more accurate time prediction can be made by separating out the button presses B from the rest of the keystrokes K. The pointing time can be calculated using Fitts' law (see Chapter 1), and thus depends on the size and position of the target. Alternatively, a fixed time based on average within-screen pointing can be used. Drawing time depends on the number and length of the lines drawn, and is fairly domain specific, but one can easily use empirical data for more general drawing tasks. Finally, homing time and mental preparation time are assumed constant. Typical times are summarized in Table 6.1.
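A keystroke-level prediction is just a sum of operator times. The sketch below uses commonly quoted illustrative values, which are assumptions here and not necessarily the entries of Table 6.1; the Fitts' law coefficients are placeholders, since they are device-dependent.

```python
# Keystroke-level model sketch. Operator times (seconds) are commonly
# quoted illustrative values, NOT necessarily those of Table 6.1.
import math

TIMES = {
    "K": 0.28,   # keystroke, average skilled typist (assumed value)
    "B": 0.1,    # mouse-button press
    "H": 0.4,    # homing between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def fitts_pointing(distance, size, a=0.1, b=0.1):
    """Pointing time from Fitts' law, T = a + b*log2(D/S + 1).
    The coefficients a and b are device-dependent placeholders."""
    return a + b * math.log2(distance / size + 1)

def klm_time(operators, pointing=None):
    """Sum the operator times; 'P' uses the supplied Fitts' law
    estimate, or an average within-screen pointing time if none."""
    default_p = 1.1
    return sum(pointing if op == "P" and pointing is not None
               else TIMES.get(op, default_p)
               for op in operators)

# e.g. home to mouse, think, point at a target, click,
# home back to keyboard, think, type four characters:
task = ["H", "M", "P", "B", "H", "M", "K", "K", "K", "K"]
print(round(klm_time(task, pointing=fitts_pointing(160, 16)), 2))
```

Separating B from K, and substituting a Fitts' law estimate for the fixed pointing time, are exactly the refinements the paragraph describes.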


Chapter 6 Models of the user in design 6.10.1 Keystroke-level model Page 250

In Chapter 2, we saw that a range of pointing devices exists in addition to the mouse. Often these devices are considered logically equivalent if the same inputs are available to the application. That is, so long as you can select a point on the screen, they are all the same. However, these different devices -- mouse, trackball, light pen -- feel very different. Although the devices are similar from the application's viewpoint, they have very different sensory--motor characteristics.


Chapter 6 Models of the user in design 6.10.1 Keystroke-level model Page 250

If instead we consider a light pen with a button, it behaves just like a mouse when it is touching the screen. When its button is not depressed, it is in state 1, and when its button is down, state 2. However, the light pen has a third state, when the light pen is not touching the screen. In this state the system cannot track the light pen's position. This state is called state 0 (see Figure 6.2).


Chapter 6 Models of the user in design 6.10.1 Keystroke-level model Page 252

A touchscreen is like the light pen with no button. While the user is not touching the screen, the system cannot track the finger -- that is, state 0 again. When the user touches the screen, the system can begin to track -- state 1. So a touchscreen is a state 0--1 device whereas a mouse is a state 1--2 device. As there is no difference between a state 0--2 and a state 0--1 device, there are only the three possibilities we have seen. The only additional complexity is if the device has several buttons, in which case we would have one state for each button: 2left, 2middle, 2right.


Chapter 6 Models of the user in design 6.10.1 Keystroke-level model Page 252

One use of this classification is to look at different pointing tasks, such as icon selection or line drawing, and see what state 0--1--2 behaviour they require. We can then see whether a particular device can support the required task. If we have to use an inadequate device, it is possible to use keyboard keys to add device states. For example, with a touchscreen, we may nominate the escape key to be the 'virtual' mouse button whilst the user's finger is on the screen. Although the mixing of keyboard and mouse keys is normally a bad habit, it is obviously necessary on occasions.
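The classification can be written down as a small transition table. This is a simplified sketch (real devices have richer behaviour): a mouse moves between states 1 and 2, a touchscreen between 0 and 1, and nominating the escape key as a 'virtual' mouse button lends the touchscreen a state 2.

```python
# Sketch of the three-state device model described in the text.
# States: 0 = out of range, 1 = tracking, 2 = button down / dragging.

TRANSITIONS = {
    "mouse": {                       # a state 1--2 device
        (1, "button down"): 2,
        (2, "button up"): 1,
    },
    "touchscreen": {                 # a state 0--1 device
        (0, "touch"): 1,
        (1, "release"): 0,
    },
    "touchscreen+esc": {             # escape key as a virtual button
        (0, "touch"): 1,
        (1, "release"): 0,
        (1, "esc down"): 2,
        (2, "esc up"): 1,
    },
}

def run(device, start, events):
    """Apply a sequence of events; unknown events leave the state as is."""
    state = start
    for ev in events:
        state = TRANSITIONS[device].get((state, ev), state)
    return state

print(run("mouse", 1, ["button down"]))                  # 2: dragging
print(run("touchscreen", 0, ["touch"]))                  # 1: tracking only
print(run("touchscreen+esc", 0, ["touch", "esc down"]))  # 2: virtual drag
```

Checking whether a task's required state sequence appears in a device's table is the device-adequacy test the paragraph describes; the extra rows in the third table are the keyboard-added states.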

