HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search Results


Search results for interfaces
Showing 260 to 269 of 331


Chapter 11 Evaluation techniques 11.4.4 Model-based evaluation Page 415

The final approach to evaluating the design that we will note is the use of models. Certain cognitive and design models provide a means of combining design specification and evaluation into the same framework. For example, the GOMS model (see Chapter 6) predicts user performance with a particular interface and can be used to filter particular design options. Similarly, lower-level modelling techniques such as the keystroke-level model (Chapter 6) provide predictions of the time users will take to perform low-level physical tasks.
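A keystroke-level model prediction of this kind can be sketched very simply: sum the standard operator times over the operators a task requires. The operator values below are the commonly cited averages; a real analysis would calibrate them to the users and devices being studied, and the example task sequence is hypothetical.

```python
# Sketch of a keystroke-level model (KLM) time prediction.
# Operator times are the commonly cited averages, in seconds.
KLM_OPERATORS = {
    "K": 0.2,   # press a key or button (skilled typist)
    "P": 1.1,   # point with a mouse to a target on screen
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a subtask
}

def klm_estimate(sequence):
    """Predict execution time (seconds) for a string of KLM operators."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: mentally prepare, point to a file icon, click,
# point to a delete menu entry, click.
print(klm_estimate("MPKPK"))  # about 3.95 seconds
```

Comparing such estimates across candidate designs is what lets the model "filter" design options before any user testing takes place.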


Chapter 11 Evaluation techniques Subjects Page 416

The choice of subjects is vital to the success of any experiment. In evaluation experiments subjects should be chosen to match the expected user population as closely as possible. Ideally this will involve experimental testing with the actual users but this is not always possible. If subjects are not actual users they should be chosen to be of a similar age and level of education as the intended user group. Their experience with computers in general, and with systems related to that being tested, should be similar, as should their experience or knowledge of the task domain. It is no good testing an interface designed to be used by the general public on a subject set made up of computer science undergraduates: they are simply not representative of the intended user population.


Chapter 11 Evaluation techniques Variables Page 417

Independent variables are those characteristics of the experiment which are manipulated to produce different conditions for comparison. Examples of independent variables in evaluation experiments are interface style, level of help, number of menu items and icon design. Each of these variables can be given a number of different values; each value that is used in an experiment is known as a level of the variable. So, for example, an experiment designed to test whether search speed improves as the number of menu items decreases may compare menus with five, seven and ten items. Here the independent variable, number of menu items, has three levels.
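Enumerating the conditions of an experiment follows directly from the levels of its independent variables. As an illustrative sketch (the second variable and its level names are invented for the example), a fully crossed design has one condition per combination of levels:

```python
from itertools import product

# Levels of "number of menu items" (from the text) and of a second,
# hypothetical independent variable, "menu organisation".
menu_items = [5, 7, 10]
organisation = ["alphabetic", "functional"]

# Fully crossed design: one condition per combination of levels.
conditions = list(product(menu_items, organisation))
print(len(conditions))  # 3 levels x 2 levels = 6 conditions
```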


Chapter 11 Evaluation techniques Statistical measures Page 419

Variables can be classified as either discrete variables or continuous variables. A discrete variable can only take a finite number of values or levels, for example a screen colour which can be red, green or blue. A continuous variable can take any value (although it may have an upper or lower limit), for example a person's height or the time taken to complete a task. A special case of continuous data is when they are positive, for example a response time cannot be negative. A continuous variable can be rendered discrete by clumping it into classes, for example we could divide heights into short (<5 ft (1.5 m)), medium (5 ft--6 ft (1.5 m--1.8 m)) and tall (>6 ft (1.8 m)). In many interface experiments we will be testing one design against another. In these cases the independent variable is usually discrete.
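Clumping a continuous variable into classes amounts to choosing cut-off points and assigning each value to a band. A minimal sketch using the height cut-offs from the text:

```python
def height_class(height_m):
    """Clump a continuous height (metres) into the discrete classes
    from the text: <1.5 m short, 1.5-1.8 m medium, >1.8 m tall."""
    if height_m < 1.5:
        return "short"
    elif height_m <= 1.8:
        return "medium"
    return "tall"

print([height_class(h) for h in (1.4, 1.7, 1.9)])
# ['short', 'medium', 'tall']
```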


Chapter 11 Evaluation techniques Statistical measures Page 419

There are ways of checking whether data are really normal, but for these the reader should consult a statistics book, or a professional statistician. However, as a general rule, if data can be seen as the sum or average of many small independent effects they are likely to be normal. For example, the time taken to complete a complex task is the sum of the times of all the minor tasks of which it is composed. On the other hand, a subjective rating of the usability of an interface will not be normal. Occasionally data can be transformed to become approximately normal. The most common is the log-transformation, which is used for positive data with near-zero values. As a log-transformation has little effect when the data are clustered well away from zero, many experimenters habitually log-transform. However, this practice makes the results difficult to interpret and is not recommended.
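The log-transformation itself is a one-line operation, and the point about data far from zero can be seen numerically: the logarithm is nearly linear there, so the relative spacing of values barely changes. The response times below are invented for illustration:

```python
import math

# Hypothetical positive response times (seconds), bunched near zero
# with a long right tail -- a typical candidate for log-transformation.
times = [0.4, 0.5, 0.6, 0.8, 1.1, 2.3, 4.9]
log_times = [round(math.log(t), 3) for t in times]
print(log_times)

# Away from zero the logarithm is nearly linear, so the transform
# barely changes the shape of the data: the gaps between these values
# keep almost the same ratio before and after transforming.
a, b, c = 100.1, 100.4, 100.9
raw_ratio = (b - a) / (c - b)
log_ratio = (math.log(b) - math.log(a)) / (math.log(c) - math.log(b))
print(round(raw_ratio, 3), round(log_ratio, 3))
```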


Chapter 11 Evaluation techniques Statistical measures Page 422

Design an experiment to test whether adding colour coding to an interface will improve accuracy.


Chapter 11 Evaluation techniques Statistical measures Page 422

Task The interfaces are identical in each of the conditions, except that, in the second, colour is added to indicate related menu items. Subjects are presented with a screen of menu choices (ordered randomly) and verbally told what they have to select. Selection must be made within a strict time limit, after which the screen clears. Failure to select the correct item is deemed an error. Each presentation places items in new positions. Subjects perform in one of the two conditions.


Chapter 11 Evaluation techniques An example: evaluating icon designs Page 424

Of course we need to control the experiment so that any differences we observe are clearly attributable to the independent variable, and so that our measurements of the dependent variables are comparable. To do this we provide an interface which is identical in every way except for the icon design, and a selection task which can be repeated for each condition. The latter could be either a naturalistic task (such as producing a document) or a more artificial task in which the user has to select the appropriate icon in response to a given prompt. The second task has the advantage that it is more controlled (there is little variation between users as to how they will perform the task) and it can be varied to avoid transfer of learning. Before performing the selection task the users will be allowed to learn the icons in controlled conditions: for example, they may be given a fixed amount of time to learn the icon meanings.


Chapter 11 Evaluation techniques An example: evaluating icon designs Page 424

So all that remains is to finalize the details of our experiment, given the constraints imposed by these choices. We devise two interfaces composed of blocks of icons, one for each condition. The user is presented with a task (say 'delete a document') and is required to select the appropriate icon. The selection task comprises a set of such presentations. In order to avoid learning effects from icon position, the placing of icons in the block can be randomly varied on each presentation. Each user performs the selection task under each condition. In order to avoid transfer of learning, the users are divided into two groups with each group taking a different starting condition. For each user we measure the time taken to complete the task and the number of errors made.
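The counterbalancing described above can be sketched as a simple assignment of condition orders. The condition names below are made up for illustration; the point is that the two groups take opposite orders so that transfer-of-learning effects cancel out across the design:

```python
# Sketch of the counterbalanced within-subjects design described above.
# Each subject performs the selection task under both icon conditions;
# alternate subjects take the two possible orders.
conditions = ["abstract icons", "natural icons"]  # hypothetical names

def assign_orders(subjects):
    """Alternate subjects between the two possible condition orders."""
    orders = {}
    for i, subject in enumerate(subjects):
        orders[subject] = conditions if i % 2 == 0 else conditions[::-1]
    return orders

orders = assign_orders(["s1", "s2", "s3", "s4"])
print(orders["s1"])  # ['abstract icons', 'natural icons']
print(orders["s2"])  # ['natural icons', 'abstract icons']
```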


Chapter 11 Evaluation techniques Think aloud and cooperative evaluation Page 427

Think aloud has the advantage of simplicity; it requires little expertise to perform and can provide useful insight into problems with an interface. Also it may be used to observe how the system is actually used. It can be used for evaluation throughout the design process using paper or simulated mock-ups for the earlier stages. However, the information provided is often subjective and may be selective, depending on the tasks provided. The process of observation can alter the way that people perform tasks and so provide a biased view. The very act of describing what you are doing often changes the way you do it -- like the joke about the centipede who was asked how he walked ...

