A variation on think aloud is known as cooperative evaluation [162], in which the user is encouraged to see himself as a collaborator in the evaluation and not simply as an experimental subject. As well as asking the user to think aloud at the beginning of the session, the evaluator can ask the user questions (typically of the 'why?' or 'what if?' type) if the user's behaviour is unclear, and the user can ask the evaluator for clarification if a problem arises. This more relaxed view of the think aloud process has a number of advantages: the process is less constrained and so easier for the evaluator to learn, the user is encouraged to criticize the system rather than simply describe it, and points of confusion can be clarified as they occur rather than reconstructed afterwards.
The usefulness of think aloud and general observation is largely dependent on the effectiveness of the recording method and subsequent analysis. The record of an evaluation session of this type is known as a protocol, and there are a number of methods from which to choose.
A third example is DRUM [148], which also provides video annotation and tagging facilities. DRUM is part of the MUSiC (Measuring the Usability of Systems in Context/Metrics for Usability Standards in Computing) toolkit, which supports the measurement of usability.
Systems such as these are extremely important as evaluation tools, since they offer a means of handling the data collected in observational studies and allow a more systematic approach to the analysis. The evaluator's task is made easier, and more valuable observations are likely to emerge as a result.
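To make this concrete, the sketch below shows one way such observational data might be represented in software: timestamped annotations tagged with an incident type, which can then be collected and summed during analysis. It is a minimal illustration only; the class and method names are invented for this example and do not describe DRUM's or MUSiC's actual data model or interface.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal representation of the kind of
# timestamped, tagged annotations that video-annotation tools manage.
# All names here are invented for illustration.

@dataclass
class Annotation:
    start: float          # seconds from the start of the recording
    end: float            # seconds from the start of the recording
    tag: str              # e.g. "error", "hesitation", "help-request"
    note: str = ""        # free-text comment from the evaluator

@dataclass
class SessionLog:
    participant: str
    annotations: list[Annotation] = field(default_factory=list)

    def add(self, start: float, end: float, tag: str, note: str = "") -> None:
        self.annotations.append(Annotation(start, end, tag, note))

    def by_tag(self, tag: str) -> list[Annotation]:
        """Collect all annotated incidents of a given type for analysis."""
        return [a for a in self.annotations if a.tag == tag]

    def total_time(self, tag: str) -> float:
        """Total time spent in incidents of a given type."""
        return sum(a.end - a.start for a in self.by_tag(tag))

# Example use during analysis of a recorded session
log = SessionLog(participant="P01")
log.add(12.0, 18.5, "hesitation", "paused over the save dialog")
log.add(40.2, 55.0, "error", "selected wrong menu item")
print(len(log.by_tag("error")), log.total_time("hesitation"))
```

Even this simple structure shows why such tools help: once incidents are tagged consistently, counts and durations can be extracted automatically rather than by repeatedly replaying the recording.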
Query techniques are less formal than controlled experimentation but can be useful in eliciting detail of the user's view of a system. They embody the philosophy that the best way to find out how a system meets user requirements is to 'ask the user'. They can be used in evaluation and, more widely, to collect information about user requirements and tasks. The advantage of such methods is that they tap the user's viewpoint directly and may reveal issues that have not been considered by the designer. In addition, they are relatively simple and cheap to administer. However, the information gained is necessarily subjective, and may be a 'rationalized' account of events rather than a wholly accurate one. It may also be difficult to get accurate feedback about alternative designs if the user has not experienced them.
Interviews can be effective for high-level evaluation, particularly in eliciting information about user preferences, impressions and attitudes. They may also reveal problems which have not been anticipated by the designer or which have not occurred under observation. When used in conjunction with observation they are a useful means of clarifying an event (compare the post-task walkthrough).
An alternative method of querying the user is to administer a questionnaire. This is clearly less flexible than the interview technique, since questions are fixed in advance, and it is likely that the questions will be less probing. However, it can be used to reach a wider subject group, it takes less time to administer, and it can be analyzed more rigorously. It can also be administered at various points in the design process, including during requirements capture, task analysis and evaluation, in order to get information on the user's needs, preferences and experience.
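Because the questions and response categories are fixed in advance, the answers can be tabulated and summarized directly, which is what makes the more rigorous analysis possible. The short sketch below illustrates this with invented Likert-style (1-5 agreement) data; the questions shown are hypothetical and are not drawn from any standard usability questionnaire.

```python
import statistics

# Illustrative sketch only: summarizing fixed-choice (Likert-style)
# questionnaire responses. The questions and scores are invented.

responses = {
    "The system was easy to learn":   [4, 5, 3, 4, 5, 2],
    "I always knew what to do next":  [3, 4, 2, 3, 4, 3],
    "Error messages were helpful":    [2, 3, 1, 2, 3, 2],
}

for question, scores in responses.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    print(f"{question}: mean={mean:.2f}, sd={spread:.2f}, n={len(scores)}")
```

The same tabulation can be repeated at different points in the design process, allowing responses gathered during requirements capture, task analysis and evaluation to be compared on a common footing.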
As we have seen in this chapter, a range of techniques is available for evaluating an interactive system, at all stages in its development. So how do we decide which methods are most appropriate for our needs? There are no hard and fast rules in this -- each method has its particular strengths and weaknesses and each is useful if applied appropriately. However, there are a number of factors which should be taken into account when selecting evaluation techniques. These also provide a way of categorizing the different methods so that we can compare and choose between them. In this final section we will consider these factors.