HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search Results


Search results for cognitive
Showing 60 to 69 of 79


Chapter 9 Models of the system 9.3.2 Predictability and observability Page 358

When faced with a document on a word processor, the user can simply scroll the display up and down to find out what is there. You cannot see from the current display everything about the system, but you can find out. The process by which the user explores the current state of the system is called a strategy. The formalization of a strategy is quite complex, even ignoring cognitive limitations. These strategies will differ from user to user, but the documentation of a system should tell the user how to get at pertinent information: for example, how to tell which objects in a drawing tool are grouped. This will map out a set of effective strategies with which the user can work.


Chapter 9 Models of the system Recommended reading Page 375

A collected works with chapters covering a range of approaches: formal cognitive modelling, PIE-like models of interaction and formal aspects of dialog description. Various notations are used: TAG (see Chapter 6), Z, functional programming and standard mathematical set theory. The chapters on dialog description include both eventCSP and generative transition networks, a cross between a production system and a state transition network (see Chapter 8). There are also chapters concerning the software engineering issues of moving from formal descriptions through to running programs.


Chapter 9 Models of the system Recommended reading Page 376

Report of a workshop of the same name held in 1996, principally containing articles on formal specification and modelling, but also includes some cognitive modelling.


Chapter 11 Evaluation techniques 11.4 Evaluating the design Page 408

As we have noted, evaluation should occur throughout the design process. In particular, the first evaluation of a system should ideally be performed before any implementation work has started. If the design itself can be evaluated, expensive mistakes can be avoided, since the design can be altered prior to any major resource commitments. Typically, the later in the design process that an error is discovered, the more costly it is to put right. Consequently, a number of methods have been proposed to evaluate the design prior to implementation. Most of these do not involve the user directly (although there are exceptions, for example the paper and pencil walkthrough described in Section 6.5). Instead they depend upon the designer, or a human factors expert, taking the design and assessing the impact that it will have upon a typical user. The basic intent is to identify any areas which are likely to cause difficulties because they violate known cognitive principles, or ignore accepted empirical results. The methods are therefore largely analytic. Although these methods do not rely on the availability of an implementation, they can be used later in the development process on prototyped or full versions of the system, making them flexible evaluation approaches.


Chapter 11 Evaluation techniques 11.4 Evaluating the design Page 408

We will consider four possible approaches to evaluating design: the cognitive walkthrough, heuristic evaluation, review-based evaluation and the use of models. Again, these are not mutually exclusive methods.


Chapter 11 Evaluation techniques 11.4.1 Cognitive walkthrough Page 409

11.4.1 Cognitive walkthrough


Chapter 11 Evaluation techniques 11.4.1 Cognitive walkthrough Page 409

Cognitive walkthrough was originally proposed by Polson and colleagues [200] as an attempt to introduce psychological theory into the informal and subjective walkthrough technique. It has more recently been developed and revised, making it more accessible to system designers [258]. The revised version of the walkthrough is discussed here.


Chapter 11 Evaluation techniques 11.4.1 Cognitive walkthrough Page 409

The origin of the cognitive walkthrough approach to evaluation is the code walkthrough familiar in software engineering. Walkthroughs require a detailed review of a sequence of actions. In the code walkthrough, the sequence represents a segment of the program code that is stepped through by the reviewers to check certain characteristics (for example, that coding style is adhered to, that naming conventions distinguish variables from procedure calls, and that system-wide invariants are not violated). In the cognitive walkthrough, the sequence of actions refers to the steps that an interface will require a user to perform in order to accomplish some task. The evaluators then step through that action sequence to check it for potential usability problems. Usually, the main focus of the cognitive walkthrough is to establish how easy a system is to learn. More specifically, the focus is on learning through exploration. Experience shows that many users prefer to learn how to use a system by exploring its functionality hands on, rather than after formal training or study of a user's manual. So the checks that are made during the walkthrough ask questions that address this exploratory learning. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user. To do a walkthrough (from now on, walkthrough refers to the cognitive walkthrough and not to other kinds of walkthrough), you need four things:
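The per-action checks described above can be pictured as a small loop over the action sequence. The following is a minimal, purely illustrative Python sketch, assuming the four standard questions of the revised walkthrough; the function and data names are hypothetical, not part of the method itself:

```python
# Illustrative sketch of stepping through an action sequence in a
# cognitive walkthrough. The four questions below are the commonly
# cited ones from the revised method; all identifiers are assumptions.

WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the desired effect?",
    "If the correct action is performed, will the user see progress?",
]

def walk_through(actions, answers):
    """Step through each action, asking each question in turn.

    `answers` maps (step number, question index) to a bool; any
    question not answered True becomes a usability problem entry,
    to be written up on a problem report sheet.
    """
    problems = []
    for step, action in enumerate(actions, start=1):
        for q_idx, question in enumerate(WALKTHROUGH_QUESTIONS):
            if not answers.get((step, q_idx), True):
                problems.append(
                    {"step": step, "action": action, "question": question}
                )
    return problems
```

In practice the "answers" would come from the evaluators' discussion of each step, with a short story justifying each yes or no.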


Chapter 11 Evaluation techniques 11.4.1 Cognitive walkthrough Page 410

It is vital to document the cognitive walkthrough to keep a record of what is good and what needs improvement in the design. It is therefore good to produce some standard evaluation forms for the walkthrough. The cover form would list the information in items 1--4 above, as well as identify the date and time of the walkthrough and the names of the evaluators. Then for each action (from item 3 on the cover form), a separate standard form is filled out that answers each of the questions above. Any negative answer for any of the questions for any particular action should be documented on a separate usability problem report sheet. This problem report sheet should indicate the system being built (the version, if necessary), the date, the evaluators and a detailed description of the usability problem. It is also useful to record the severity of the problem: how often the evaluators think it will occur, and how serious it will be for the users. This information will help the designers to decide priorities for correcting the design.
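The problem report sheet described above can be modelled as a simple record. This Python sketch is one possible structure under the assumptions stated in its comments; the field names and the severity heuristic are illustrative, not the book's standard form:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for a usability problem report sheet.
# Field names and the severity rule are assumptions for illustration.

@dataclass
class ProblemReport:
    system: str          # system being built (include version if needed)
    report_date: date    # when the walkthrough took place
    evaluators: list     # names of the evaluators
    description: str     # detailed description of the usability problem
    frequency: str       # e.g. "often" vs "rarely"
    impact: str          # e.g. "serious" vs "minor"

    def severity(self):
        """Combine expected frequency and impact into a rough
        priority label to help designers order their fixes."""
        high = {"often", "serious"}
        if self.frequency in high and self.impact in high:
            return "high"
        if self.frequency in high or self.impact in high:
            return "medium"
        return "low"
```

A team would typically fill in one such record per negative answer, then sort the collected reports by severity when planning design changes.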


Chapter 11 Evaluation techniques 11.4.4 Model-based evaluation Page 415

The final approach to evaluating the design that we will note is the use of models. Certain cognitive and design models provide a means of combining design specification and evaluation into the same framework. For example, the GOMS model (see Chapter 6) predicts user performance with a particular interface and can be used to filter particular design options. Similarly, lower-level modelling techniques such as the keystroke-level model (Chapter 6) provide predictions of the time users will take to perform low-level physical tasks.
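A keystroke-level model prediction can be shown as a short worked example. The operator times below are the commonly cited Card, Moran and Newell estimates; treat them as rough averages rather than exact figures, and note the encoding of a task as an operator string is an illustrative convention:

```python
# Worked sketch of a keystroke-level model (KLM) time prediction.
# Operator times are the widely quoted averages (in seconds); they
# vary with user skill and device, so treat them as rough estimates.

OPERATOR_TIMES = {
    "K": 0.20,   # press a key (average skilled typist)
    "P": 1.10,   # point with a mouse to a target on screen
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation for a subtask
    "B": 0.10,   # press or release a mouse button
}

def klm_time(sequence):
    """Total predicted execution time in seconds for a task encoded
    as a string of KLM operators, e.g. 'MHPBB' for think, move hand
    to mouse, point at a target, then click (press and release)."""
    return sum(OPERATOR_TIMES[op] for op in sequence)
```

For instance, `klm_time("MHPBB")` sums 1.35 + 0.40 + 1.10 + 0.10 + 0.10, predicting about 3.05 seconds to think, reach for the mouse, point and click; such estimates let a designer compare interface options before anything is built.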

