HUMAN-COMPUTER INTERACTION
SECOND EDITION
A third example is DRUM [148], which also provides video annotation and tagging facilities. DRUM is part of the MUSiC (Measuring the Usability of Systems in Context/Metrics for Usability Standards in Computing) toolkit which supports
Roughly speaking, evaluation at the design stage tends to involve design experts only and be analytic, whereas evaluation of the implementation brings in users as subjects and is experimental. There are of course exceptions to this: participatory design (see Chapter 6) involves users throughout the design process, and techniques such as cognitive walkthrough are expert based and analytic but can be used to evaluate implementations as well as designs.
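The revised cognitive walkthrough steps through each action in a task, asking a fixed set of questions at every step; any question answered 'no' flags a potential usability problem. The Python sketch below shows one possible way of recording an evaluator's answers. The Step class, the example action and the notes are illustrative assumptions, not part of the method itself; only the four questions come from the revised walkthrough.

    # A minimal record-keeping sketch for a cognitive walkthrough session.
    from dataclasses import dataclass

    # The four questions asked at each step in the revised cognitive walkthrough.
    QUESTIONS = (
        "Will the user try to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the correct action with the desired effect?",
        "If the correct action is performed, will the user see that progress is being made?",
    )

    @dataclass
    class Step:
        action: str          # the action the user must perform at this step
        answers: list[bool]  # one yes/no answer per question above
        notes: str = ""      # the evaluator's reasoning for any 'no' answer

        def problems(self) -> list[str]:
            # Each question answered 'no' marks a potential usability problem.
            return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

    # Hypothetical step: saving a document in a word processor.
    step = Step(
        action="Select 'Save' from the File menu",
        answers=[True, True, True, False],
        notes="No visible confirmation that the file was saved.",
    )
    for problem in step.problems():
        print("Potential problem:", problem)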
Evaluation techniques also vary in their objectivity: some rely heavily on the interpretation of the evaluator, while others provide much the same information regardless of who performs the evaluation. The more subjective techniques, such as cognitive walkthrough or think aloud, rely to a large extent on the knowledge and expertise of the evaluator, who must recognize problems and understand what the user is doing. They can be powerful if used correctly and will provide information that may not be available from more objective methods. However, the problem of evaluator bias should be recognized and avoided; one way to reduce the possibility of bias is to use more than one evaluator. Objective techniques, on the other hand, should produce repeatable results that do not depend on the judgment of the particular evaluator. Controlled experiments are an example of an objective measure. These avoid bias and provide comparable results, but may not reveal unexpected problems or give detailed feedback on the user's experience. Ideally, both objective and subjective measures should be used.
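One way to make the multiple-evaluator suggestion concrete is to quantify how far the evaluators agree. The sketch below is a minimal illustration, assuming two evaluators have independently judged the same ten candidate issues as problems (True) or non-problems (False); Cohen's kappa then expresses their agreement corrected for chance. The judgments are invented for the example.

    # Agreement between two evaluators, corrected for chance (Cohen's kappa).
    def cohens_kappa(rater_a: list[bool], rater_b: list[bool]) -> float:
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: both say True, or both say False, by coincidence.
        p_a = sum(rater_a) / n
        p_b = sum(rater_b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    # Hypothetical judgments on ten candidate problems.
    evaluator_1 = [True, True, False, True, False, True, True, False, True, True]
    evaluator_2 = [True, False, False, True, False, True, True, True, True, True]
    print(f"kappa = {cohens_kappa(evaluator_1, evaluator_2):.2f}")  # kappa = 0.52

Values near 1 indicate strong agreement; values near 0 suggest the evaluators' 'problems' owe more to individual judgment than to the interface itself.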
A design can be evaluated before any implementation work has started, to minimize the cost of early design errors. Most techniques for evaluation at this stage are analytic and involve using an expert to assess the design against cognitive and usability principles. Previous experimental results and modelling approaches can also provide insight at this stage. Once an artefact has been developed (whether a prototype or full system), experimental and observational techniques can be used to get both quantitative and qualitative results. Query techniques provide subjective information from the user.
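To illustrate the quantitative side, the sketch below analyses a hypothetical controlled experiment comparing task completion times for the same task on two design variants. The times are invented for the example, and scipy's standard independent two-sample t-test is used to ask whether the difference in means is likely to be due to chance.

    # Comparing completion times (seconds) for two designs in a controlled experiment.
    from scipy import stats

    times_design_a = [41.2, 38.5, 45.0, 39.8, 43.3, 40.1, 44.7, 37.9]
    times_design_b = [35.0, 33.4, 38.2, 31.9, 36.5, 34.8, 37.1, 32.6]

    # Independent two-sample t-test on the mean completion times.
    t_stat, p_value = stats.ttest_ind(times_design_a, times_design_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A small p value (conventionally below 0.05) would suggest a real difference between the designs, though, as noted above, such a result says nothing about why users found one design faster.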
11.1 In groups or pairs, use the cognitive walkthrough example, and what you know about user psychology (see Chapter 1), to discuss the design of a computer application of your choice (for example, a word processor or a drawing package). (Hint: Focus your discussion on one or two specific tasks within the application.)
11.5 Complete the cognitive walkthrough example for the video remote control design.
Describes the revised version of the cognitive walkthrough method of evaluation.
Includes material on both cognitive walkthrough and heuristic evaluation.
The emphasis on external forms is encouraging for a designer. It is not necessary to completely understand an individual's cognitive processing in order to design effective groupware; indeed, that would be an impossible task. Instead, we can focus both our analysis of existing group situations and our design of groupware on the external representations used by the participants.