HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search Results


Search results for mouse
Showing 50 to 59 of 176


Chapter 4 Usability paradigms and principles Task conformance Page 176

Replacement of complex command languages with actions to manipulate directly the visible objects.

The case for word processors is similar to that described above for syntactic correctness. In addition, operations on portions of text are often achieved by allowing the user to highlight the text directly with a mouse (or arrow keys). Subsequent action on that text, such as moving it or copying it somewhere else, can then be performed more directly by allowing the user to 'drag' the selected text via the mouse to its new location.
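
As a rough illustration of this style of direct manipulation (an assumed sketch, not an example from the book), the following Python/tkinter fragment lets the user pick up a piece of text with the mouse and drag it to a new position; the widget names and layout are invented for the sketch.

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=200, bg="white")
canvas.pack()

item = canvas.create_text(100, 100, text="drag me", font=("Helvetica", 16))
drag = {"x": 0, "y": 0}

def on_press(event):
    # remember where the drag started
    drag["x"], drag["y"] = event.x, event.y

def on_drag(event):
    # move the text item by the distance the mouse has moved since the last event
    canvas.move(item, event.x - drag["x"], event.y - drag["y"])
    drag["x"], drag["y"] = event.x, event.y

canvas.tag_bind(item, "<ButtonPress-1>", on_press)
canvas.tag_bind(item, "<B1-Motion>", on_drag)
root.mainloop()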


Chapter 5 The design process 5.3 Using design rules Page 191

This chapter is about the process of design, and so we are concerned with when in that process design rules can be of use. Design rules are mechanisms for restricting the space of design options, preventing a designer from pursuing design options which would be likely to lead to an unusable system. Thus, design rules would be most effective if they could be adopted in the earliest stages of the life cycle, such as in requirements specification and architectural design, when the space of possible designs is still very large. However, if the assumptions underlying a design rule are not understood by the designer, it is quite possible that early application can prevent the best design choice. For example, a set of design rules might be specific to a particular hardware platform and inappropriate for other platforms (for example, colour vs. monochrome screens, one- vs. two- or three-button mouse). Such bias in design rules causes them to be applicable only in later stages of the life cycle.


Chapter 5 The design process 5.4.1 Problems with usability engineering Page 205

What are the assumptions we have to make in order to arrive at such a usability specification? One of the problems with usability specifications, as we have stated in the chapter, is that they sometimes require quite specific information about the design in order to be expressed. For example, had we set one of our measuring methods to count keystrokes or mouse clicks, we would have had to start making assumptions about the method of interaction that the system would allow. Had we tried to set a usability specification concerning the browsing of the diary, we would have had to start making assumptions about the layout of the calendar (monthly, weekly, daily) in order to make our estimates specific enough to measure. In the examples we have provided above, we have tried to stay as abstract as possible, so that the usability specifications could be of use as early in the design life cycle as possible. A consequence of this abstractness, particularly evident in the second example, is that we run the risk in the usability specification of setting goals that may be completely unrealistic, though well intentioned. If the usability specification were to be used as a contract with the customer, such speculation could spell real trouble for the designer.
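
To make the keystroke and mouse-click measure concrete, here is a minimal sketch of such a measuring instrument, assuming (purely for illustration) that the prototype were built with Python/tkinter; the widget and counter names are invented, and choosing them commits us to exactly the kind of interaction-method assumptions the paragraph warns about.

import tkinter as tk

counts = {"keystrokes": 0, "mouse clicks": 0}

def count_key(event):
    counts["keystrokes"] += 1

def count_click(event):
    counts["mouse clicks"] += 1

root = tk.Tk()
tk.Entry(root, width=40).pack(padx=10, pady=10)

# bind_all sees key and mouse events anywhere in the prototype's window
root.bind_all("<Key>", count_key)
root.bind_all("<Button-1>", count_click)

def finish():
    print("measured during the task:", counts)
    root.destroy()

tk.Button(root, text="task finished", command=finish).pack(pady=10)
root.mainloop()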


Chapter 5 The design process Limited functionality simulations Page 208

Programming support for simulations means a designer can rapidly build graphical and textual interaction objects and attach some behaviour to those objects which mimics the system's functionality. Once this simulation is built, it can be evaluated and changed rapidly to reflect the results of the evaluation study with various users. For example, we might want to build a prototype for the VCR with undo described earlier using only a workstation display, keyboard and mouse. We could draw a picture of the VCR with its control panel using a graphics drawing package, but then we would want to allow a subject to use the mouse to position a finger cursor over one of the buttons to 'press' it and actuate some behaviour of the VCR. In this way, we could simulate the programming task and experiment with different options for undoing.
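
A minimal version of such a simulation might look like the following Python/tkinter sketch (an assumed illustration, not the prototype described in the text): the control panel is drawn on a canvas, and clicking a drawn 'button' with the mouse triggers just enough behaviour to support an evaluation session.

import tkinter as tk

root = tk.Tk()
display = tk.Label(root, text="--:--", font=("Courier", 24))
display.pack()
panel = tk.Canvas(root, width=300, height=80, bg="grey80")
panel.pack()

def press(name):
    # mimic just enough behaviour of the VCR for evaluation purposes
    display.config(text=name + " pressed")

for i, name in enumerate(["PLAY", "STOP", "REC", "UNDO"]):
    x = 10 + i * 72
    button = panel.create_rectangle(x, 20, x + 62, 60, fill="grey60")
    label = panel.create_text(x + 31, 40, text=name)
    for item in (button, label):
        panel.tag_bind(item, "<Button-1>", lambda e, n=name: press(n))

root.mainloop()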


Chapter 5 The design process Limited functionality simulations Page 209

There are now plenty of prototyping tools available which allow the rapid development of such simulation prototypes. These simulation tools are meant to provide a quick development process for a very wide range of small but highly interactive applications. One of the best-known and most successful prototyping tools is HyperCard, a simulation environment for the Macintosh line of Apple computers. HyperCard is similar to the animation tools described in the previous section in that the user can create a graphical depiction of some system, say the VCR, with common graphical tools. The graphical images are placed on cards and links between cards can be created which control the sequencing from one card to the next for animation effects. What HyperCard provides beyond this type of animation is the ability to describe more sophisticated interactive behaviour by attaching a script, written in the HyperTalk programming language, to any object. So for the VCR, we could attach a script to any control panel button to highlight it or make an audible noise when the user clicks the mouse cursor over it. Then some functionality could be associated with that button by reflecting some change in the VCR display window.
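
HyperTalk itself is not reproduced here; the following is an assumed, analogous sketch in Python/tkinter of the same idea, attaching a small 'script' to a drawn control panel button so that a mouse click highlights it, makes an audible noise and changes the display window. The object names are invented for the sketch.

import tkinter as tk

root = tk.Tk()
display = tk.Label(root, text="VCR: idle", font=("Courier", 18))
display.pack()
panel = tk.Canvas(root, width=160, height=70, bg="grey80")
panel.pack()

rec_button = panel.create_rectangle(40, 15, 120, 55, fill="grey60")
panel.create_text(80, 35, text="REC")

def rec_script(event):
    # roughly what a mouse-click handler attached to this button would express
    panel.itemconfig(rec_button, fill="red")   # highlight the button
    root.bell()                                # audible feedback
    display.config(text="VCR: recording")      # change in the display window

panel.tag_bind(rec_button, "<Button-1>", rec_script)
root.mainloop()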


Chapter 5 The design process High-level programming support Page 211

HyperTalk is an example of a special-purpose high-level programming language which makes it easy for the designer to program certain features of an interactive system at the expense of other system features, such as speed of response or space efficiency. HyperTalk and many languages similar to it allow the programmer to attach functional behaviour to the specific interactions that the user will be able to perform, such as positioning and clicking the mouse over a button on the screen. Previously, the difficulty of interactive programming was that it was so implementation dependent that the programmer would have to know quite a bit of intimate detail of the hardware system in order to control even the simplest of interactive behaviour. These high-level programming languages allow the programmer to abstract away from the hardware specifics and think in terms that are closer to the way the input and output devices are perceived as interaction devices.
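
For comparison, a present-day high-level toolkit expresses this abstraction very directly. In the assumed Python/tkinter sketch below (the names are invented for illustration), behaviour is attached to 'the user clicks this button' with no reference to the underlying hardware.

import tkinter as tk

root = tk.Tk()

def save_pressed():
    # behaviour stated in interaction terms, with no hardware detail in sight
    print("document saved")

tk.Button(root, text="Save", command=save_pressed).pack(padx=20, pady=20)
root.mainloop()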


Chapter 5 The design process 5.6 Design rationale Page 213

There is usually no single best design alternative. More often, the designer is faced with a set of trade-offs between different alternatives. For example, a graphical interface may involve a set of actions that the user can invoke by use of the mouse, and the designer must decide whether to present each action as a 'button' on the screen, which is always visible, or hide all of the actions in a menu which must be explicitly invoked before an action can be chosen. The former option maximizes the operation visibility (as discussed in Chapter 4) but the latter option takes up less screen space. It would be up to the designer to determine which criterion for evaluating the options was more important and then to communicate that information in a design rationale.
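
One hypothetical way to record such a trade-off is as a small design rationale entry listing the options, the criteria and the decision. The structure and scores below are assumptions for illustration, not the book's notation.

rationale = {
    "question": "how should the mouse-invokable actions be presented?",
    "options": {
        "on-screen buttons": {"operation visibility": "+", "screen space used": "-"},
        "pop-up menu":       {"operation visibility": "-", "screen space used": "+"},
    },
    "decision": "on-screen buttons",
    "because": "operation visibility was judged the more important criterion",
}

for option, assessment in rationale["options"].items():
    print(option, assessment)
print("chosen:", rationale["decision"], "-", rationale["because"])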


Chapter 6 Models of the user in design 6.6 Cognitive models Page 230

The remaining techniques and models in this chapter all claim to have some representation of users as they interact with an interface; that is, they model some aspect of the user's understanding, knowledge, intentions or processing. The level of representation differs from technique to technique -- from models of high-level goals and the results of problem-solving activities, to descriptions of motor-level activity, such as keystrokes and mouse clicks. The formalisms have largely been developed by psychologists, or computer scientists whose interest is in understanding user behaviour.


Chapter 6 Models of the user in design 6.7.1 GOMS Page 233

Operators These are the lowest level of analysis. They are the basic actions that the user must perform in order to use the system. They may affect the system (for example, press the 'X' key) or only the user's mental state (for example, read the dialog box). There is still a degree of flexibility about the granularity of operators; we may take the command level 'issue the SELECT command' or be more primitive: 'move mouse to menu bar, press centre mouse button ...'.


Chapter 6 Models of the user in design 6.7.1 GOMS Page 233

Methods As we have already noted, there are typically several ways in which a goal can be split into subgoals. For instance, in a certain window manager a currently selected window can be closed to an icon either by selecting the 'CLOSE' option from a pop-up menu, or by hitting the 'L7' function key. In GOMS these two goal decompositions are referred to as methods, so we have the CLOSE-METHOD and the L7-METHOD:

GOAL: ICONIZE-WINDOW
.   [select GOAL: USE-CLOSE-METHOD
.       .   MOVE-MOUSE-TO-WINDOW-HEADER
.       .   POP-UP-MENU
.       .   CLICK-OVER-CLOSE-OPTION
.   GOAL: USE-L7-METHOD
.       .   PRESS-L7-KEY]
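
As a rough illustration (an assumed representation, not a standard GOMS tool), the same decomposition can be written down as data in Python, with each method expanded into its sequence of primitive operators:

GOMS = {
    "ICONIZE-WINDOW": {
        "CLOSE-METHOD": ["MOVE-MOUSE-TO-WINDOW-HEADER",
                         "POP-UP-MENU",
                         "CLICK-OVER-CLOSE-OPTION"],
        "L7-METHOD": ["PRESS-L7-KEY"],
    },
}

def operators_for(goal, method):
    # expand a chosen method into its sequence of primitive operators
    return GOMS[goal][method]

print(operators_for("ICONIZE-WINDOW", "CLOSE-METHOD"))
print(operators_for("ICONIZE-WINDOW", "L7-METHOD"))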


