HUMAN-COMPUTER INTERACTION
SECOND EDITION
In addition to evaluating the system design in terms of its functional capabilities, it is important to be able to measure the impact of the design on the user. This includes considering aspects such as how easy the system is to learn, its usability and the user's satisfaction with it.
The final goal of evaluation is to identify specific problems with the design. These may be aspects of the design which, when used in their intended context, cause unexpected results or confusion amongst users. This is of course related to both the functionality and usability of the design (depending on the cause of the problem). However, it is specifically concerned with the negative aspects of the design.
Before we consider some of the techniques that are available for evaluation, we will distinguish between two main styles of evaluation: those performed under laboratory conditions and those conducted in the work environment, or 'in the field'.
The first style of evaluation studies the use of the system within the laboratory. In some cases (particularly in evaluating the design) this involves the designer performing some assessment of the design without the involvement of users. However, users may also be brought into the laboratory to take part in evaluation studies. This approach has a number of benefits and disadvantages.
The second style of evaluation takes the designer or evaluator out into the user's work environment in order to observe the system in action. Again this approach has its pros and cons.
This is of course a generalization: there are circumstances, as we have noted, in which laboratory testing is necessary. In particular, controlled experiments can be useful for evaluation of specific interface features, and must normally be conducted under laboratory conditions. From an economic angle, we need to weigh the costs of establishing recording equipment in the field, and possibly disrupting the actual work situation, against the costs of taking one or more subjects away from their jobs into the laboratory. This balance is not at all obvious.
As we have noted, evaluation should occur throughout the design process. In particular, the first evaluation of a system should ideally be performed before any implementation work has started. If the design itself can be evaluated, expensive mistakes can be avoided, since the design can be altered prior to any major resource commitments. Typically, the later in the design process that an error is discovered, the more costly it is to put right. Consequently, a number of methods have been proposed to evaluate the design prior to implementation. Most of these do not involve the user directly (although there are exceptions, for example the paper and pencil walkthrough described in Section 6.5). Instead they depend upon the designer, or a human factors expert, taking the design and assessing the impact that it will have upon a typical user. The basic intent is to identify any areas which are likely to cause difficulties because they violate known cognitive principles, or ignore accepted empirical results. The methods are therefore largely analytic. Although these methods do not rely on the availability of an implementation, they can be used later in the development process on prototyped or full versions of the system, making them flexible evaluation approaches.
We will consider four possible approaches to evaluating design: the cognitive walkthrough, heuristic evaluation, review-based evaluation and the use of models. Again, these are not mutually exclusive methods.
The origin of the cognitive walkthrough approach to evaluation is the code walkthrough familiar in software engineering. Walkthroughs require a detailed review of a sequence of actions. In the code walkthrough, the sequence represents a segment of the program code that is stepped through by the reviewers to check certain characteristics: for example, that coding style is adhered to, that conventions for naming variables and procedure calls are followed, and that system-wide invariants are not violated. In the cognitive walkthrough, the sequence of actions refers to the steps that an interface will require a user to perform in order to accomplish some task. The evaluators then step through that action sequence to check it for potential usability problems.

Usually, the main focus of the cognitive walkthrough is to establish how easy a system is to learn. More specifically, the focus is on learning through exploration. Experience shows that many users prefer to learn how to use a system by exploring its functionality hands on, rather than after sufficient training or examination of a user's manual. So the checks that are made during the walkthrough ask questions that address this exploratory learning. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user.

To do a walkthrough (the term walkthrough from now on refers to the cognitive walkthrough, and not any other kinds of walkthroughs), you need four things: