HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search results for evaluation


Chapter 11 Evaluation techniques Think aloud and cooperative evaluation Page 427

A variation on think aloud is known as cooperative evaluation [162], in which the user is encouraged to see himself as a collaborator in the evaluation rather than simply as an experimental subject. As well as asking the user to think aloud at the beginning of the session, the evaluator can ask questions (typically of the 'why?' or 'what-if?' type) if the user's behaviour is unclear, and the user can ask the evaluator for clarification if a problem arises. This more relaxed view of the think aloud process has a number of advantages.


Chapter 11 Evaluation techniques Think aloud and cooperative evaluation Page 428

The usefulness of think aloud and general observation is largely dependent on the effectiveness of the recording method and subsequent analysis. The record of an evaluation session of this type is known as a protocol, and there are a number of methods from which to choose.
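
The value of such a record depends on capturing events with enough structure to analyze later. As a minimal sketch of what an automatically logged protocol might look like (the logger class, event names and session data below are invented for illustration, and are not taken from the book or from any particular tool):

    import time

    class ProtocolLogger:
        """Record timestamped user events during an evaluation session."""

        def __init__(self):
            self.start = time.time()
            self.events = []   # the protocol: (elapsed seconds, kind, detail)

        def log(self, kind, detail):
            self.events.append((time.time() - self.start, kind, detail))

    # Hypothetical fragment of a session record
    session = ProtocolLogger()
    session.log("keypress", "Ctrl+S")
    session.log("error", "dialogue dismissed without reading")
    session.log("comment", "user: 'I expected a print option in this menu'")

    for t, kind, detail in session.events:
        print(f"{t:7.2f}s  {kind:<10s} {detail}")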


Chapter 11 Evaluation techniques Automatic protocol analysis tools Page 430

A third example is DRUM [148], which also provides video annotation and tagging facilities. DRUM is part of the MUSiC (Metrics for Usability Standards in Computing) toolkit, which supports a complete methodology for evaluation based upon the application of usability metrics: analytic metrics, cognitive workload, performance and user satisfaction. DRUM is concerned particularly with measuring performance. The methodology provides a range of tools as well as DRUM, including manuals, questionnaires, analysis software and databases.
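
To give a flavour of the kind of measure such tools derive from a tagged record (a hand-rolled sketch using invented tags and data, not DRUM's actual interface), task time and error counts can be read straight off a timestamped log:

    # Illustrative tagged log: (elapsed seconds, tag, detail)
    events = [
        (0.0,  "task_start", "book a flight"),
        (12.4, "error",      "wrong date format entered"),
        (47.9, "task_end",   "book a flight"),
    ]

    start = task_time = None
    errors = 0
    for t, tag, detail in events:
        if tag == "task_start":
            start = t
        elif tag == "error":
            errors += 1
        elif tag == "task_end" and start is not None:
            task_time = t - start

    print(f"task time: {task_time:.1f}s, errors: {errors}")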


Chapter 11 Evaluation techniques Automatic protocol analysis tools Page 431

Systems such as these are extremely important evaluation tools, since they offer a means of handling the data collected in observational studies and allow a more systematic approach to the analysis. The evaluator's task is eased, and more valuable observations are likely to emerge as a result.


Chapter 11 Evaluation techniques 11.5.3 Query techniques Page 431

Query techniques are less formal than controlled experimentation, but can be useful in eliciting detail of the user's view of a system. They embody the philosophy that the best way to find out how a system meets user requirements is to 'ask the user'. They can be used in evaluation and, more widely, to collect information about user requirements and tasks. The advantage of such methods is that they get the user's viewpoint directly and may reveal issues that have not been considered by the designer. In addition, they are relatively simple and cheap to administer. However, the information gained is necessarily subjective, and may be a 'rationalized' account of events rather than a wholly accurate one. It may also be difficult to get accurate feedback about alternative designs if the user has not experienced them, which limits the scope of the information that can be gleaned. Nevertheless, query techniques provide useful supplementary material to other methods. There are two main types: interviews and questionnaires.


Chapter 11 Evaluation techniques Interviews Page 432

Interviews can be effective for high-level evaluation, particularly in eliciting information about user preferences, impressions and attitudes. They may also reveal problems which have not been anticipated by the designer or which have not occurred under observation. When used in conjunction with observation they are a useful means of clarifying an event (compare the post-task walkthrough).


Chapter 11 Evaluation techniques Questionnaires Page 432

An alternative method of querying the user is to administer a questionnaire. This is clearly less flexible than the interview technique, since questions are fixed in advance, and it is likely that the questions will be less probing. However, it can be used to reach a wider subject group, it takes less time to administer, and it can be analyzed more rigorously. It can also be administered at various points in the design process, including during requirements capture, task analysis and evaluation, in order to get information on the user's needs, preferences and experience.
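
One reason questionnaire responses lend themselves to rigorous analysis is that fixed-choice answers are easy to code numerically. As a minimal sketch (the questions, rating scale and scores are invented for illustration):

    # Summarize ratings on a 1-5 scale (1 = strongly disagree,
    # 5 = strongly agree); questions and data are hypothetical.
    from statistics import mean, median

    responses = {
        "The system was easy to learn": [4, 5, 3, 4, 2, 5],
        "Error messages were helpful":  [2, 1, 3, 2, 2, 4],
    }

    for question, scores in responses.items():
        print(f"{question}: mean {mean(scores):.2f}, median {median(scores)}")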


Chapter 11 Evaluation techniques 11.6 Choosing an evaluation method Page 436

11.6 Choosing an evaluation method


As we have seen in this chapter, a range of techniques is available for evaluating an interactive system at all stages in its development. So how do we decide which methods are most appropriate for our needs? There are no hard and fast rules here: each method has its particular strengths and weaknesses, and each is useful if applied appropriately. However, there are a number of factors that should be taken into account when selecting evaluation techniques. These also provide a way of categorizing the different methods, so that we can compare and choose between them. In this final section we consider these factors.


Chapter 11 Evaluation techniques 11.6.1 Factors distinguishing evaluation techniques Page 436

11.6.1 Factors distinguishing evaluation techniques

