HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search results for evaluation


Chapter 11 Evaluation techniques 11.4.1 Cognitive walkthrough Page 410
  • After the action is taken, will users understand the feedback they get? Assuming the users did the correct action, will they know that? This is the completion of the execution/evaluation interaction cycle. In order to determine if they have accomplished their goal, the users need appropriate feedback.

It is vital to document the cognitive walkthrough to keep a record of what is good and what needs improvement in the design. It is therefore useful to produce some standard evaluation forms for the walkthrough. The cover form would list the information in items 1-4 above, as well as identify the date and time of the walkthrough and the names of the evaluators. Then for each action (from item 3 on the cover form), a separate standard form is filled out that answers each of the questions above. Any negative answer for any of the questions for any particular action should be documented on a separate usability problem report sheet. This problem report sheet should indicate the system being built (the version, if necessary), the date, the evaluators and a detailed description of the usability problem. It would also be useful to record the severity of the problem: that is, how often the evaluators think the problem will occur and how serious it will be for users. This information will help the designers to decide priorities for correcting the design.
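
Such a problem report sheet maps naturally onto a simple record type. A minimal sketch in Python, assuming the records are kept electronically; the field names and the 1-5 rating scales are illustrative assumptions rather than anything the book prescribes:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ProblemReport:
        """One usability problem found during a cognitive walkthrough."""
        system: str              # system being built
        version: str             # version, if necessary
        walkthrough_date: date   # date of the walkthrough
        evaluators: list[str]    # names of the evaluators
        description: str         # detailed description of the problem
        frequency: int           # assumed 1-5 scale: how often it will occur
        impact: int              # assumed 1-5 scale: how serious for users

        @property
        def severity(self) -> int:
            # Combine the two ratings into a single score the design
            # team can use when deciding priorities.
            return self.frequency * self.impact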


Chapter 11 Evaluation techniques 11.4.2 Heuristic evaluation Page 412

11.4.2 Heuristic evaluation


A heuristic is a guideline, general principle or rule of thumb that can guide a design decision or be used to critique a decision that has already been made. Heuristic evaluation, developed by Jakob Nielsen and Rolf Molich, is a method for structuring the critique of a system using a set of relatively simple and general heuristics.


Chapter 11 Evaluation techniques 11.4.2 Heuristic evaluation Page 413

The general idea behind heuristic evaluation is that several evaluators independently critique a system to come up with potential usability problems. It is important that there be several of these evaluators and that the evaluations be done independently. Nielsen's experience indicates that around five evaluators usually results in about 75% of the overall usability problems being discovered.
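
The "around five evaluators find about 75%" figure follows from a simple model due to Nielsen and Landauer: if each evaluator independently finds a fixed proportion λ of the problems, then i evaluators together find 1 - (1 - λ)^i of them. A minimal sketch, assuming λ = 0.25 (the actual value varies with the system and the evaluators' experience):

    # Expected proportion of usability problems found by i evaluators,
    # assuming each independently finds a fixed proportion lam of them.
    def proportion_found(i: int, lam: float = 0.25) -> float:
        return 1 - (1 - lam) ** i

    for i in (1, 3, 5, 10):
        print(f"{i:2d} evaluators: {proportion_found(i):.0%}")
    # prints roughly 25%, 58%, 76% and 94%

The diminishing returns visible in the output are why a small handful of evaluators, rather than dozens, is usually recommended.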


Chapter 11 Evaluation techniques 11.4.2 Heuristic evaluation Page 413

What is evaluated? Heuristic evaluation is best used for evaluating early designs, because usability problems found early are easier to fix. But all that is required to do the evaluation is some kind of artefact that describes the system, which can range from a set of storyboards giving an overview of the system to a fully functioning system in use in the field.


Chapter 11 Evaluation techniques 11.4.2 Heuristic evaluation Page 414

Remember: the purpose of the evaluation is to uncover usability problems. Any problem you as an evaluator think is a potential problem IS a usability problem. Don't worry too much about which heuristic justifies the problem. The heuristics are there to guide you in finding the problems. Once all of the problems are collected, the design team can determine which ones are the most important and will receive attention.
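
A minimal sketch of how the collected problems might be ranked for that discussion, using an assumed frequency x impact score; the example problems and ratings are invented for illustration:

    # Rank collected problems by an assumed frequency x impact score,
    # highest first. Problems and ratings here are invented examples.
    problems = [
        ("Feedback after saving is easy to miss", 4, 2),
        ("Delete command has no confirmation step", 2, 5),
        ("Menu label does not match users' vocabulary", 3, 3),
    ]
    for desc, freq, impact in sorted(problems, key=lambda p: p[1] * p[2], reverse=True):
        print(f"severity {freq * impact:2d}: {desc}")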


Chapter 11 Evaluation techniques 11.4.3 Review-based evaluation Page 415

11.4.3 Review-based evaluation


However, it should be noted that experimental results cannot be expected to hold arbitrarily across contexts. The reviewer must therefore select evidence carefully, noting the experimental design chosen, the population of subjects used, the analyses performed and the assumptions made. For example, an experiment testing the usability of a particular style of help system using novice subjects may not provide accurate evaluation of a help system designed for expert users. The review should therefore take account of both the similarities and the differences between the experimental context and the design under consideration.


Chapter 11 Evaluation techniques 11.4.4 Model-based evaluation Page 415

11.4.4 Model-based evaluation



