HUMAN-COMPUTER INTERACTION
SECOND EDITION
Cognitive complexity theory (CCT), introduced by Kieras and Polson [128], begins with the basic premises of goal decomposition from GOMS and enriches the model to provide more predictive power. CCT has two parallel descriptions: one of the user's goals and the other of the computer system (called the device in CCT). The description of the user's goals is based on a GOMS-like goal hierarchy, but is expressed primarily using production rules. We introduced production rules in Chapter 1 and we further describe their use in CCT below. For the system grammar, CCT uses generalized transition networks, a form of state transition network. This
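The flavour of a CCT production system can be sketched in code. The following is an illustrative sketch only, not the actual Kieras and Polson rules: the rule names (INSERT-SPACE, INSERT-SPACE-DONE), the keystroke labels, and the `run` interpreter are all hypothetical stand-ins. Working memory is a set of string items; on each cycle the first rule whose condition matches fires, updating memory and emitting actions.

```python
# Illustrative sketch of a CCT-style production system (hypothetical rules,
# not the published Kieras/Polson description). Working memory is a set of
# string items such as goals and notes.

def run(rules, memory, max_cycles=10):
    """Fire the first rule whose condition holds, until no rule matches."""
    actions = []
    for _ in range(max_cycles):
        for name, condition, action in rules:
            if condition(memory):
                action(memory, actions)
                break       # one rule firing per cycle
        else:
            break           # no rule matched: stop
    return actions

# Hypothetical rules for an 'insert space' unit task. Note that, like the
# 'buggy' rules discussed in the text, these never check the editor's mode.
rules = [
    ("INSERT-SPACE",
     lambda m: "GOAL insert space" in m
               and "NOTE executing insert space" not in m,
     lambda m, a: (m.add("NOTE executing insert space"),
                   a.extend(["press-I", "press-SPACE", "press-ESC"]))),
    ("INSERT-SPACE-DONE",
     lambda m: "NOTE executing insert space" in m,
     lambda m, a: (m.discard("NOTE executing insert space"),
                   m.discard("GOAL insert space"))),
]

memory = {"GOAL insert space"}
print(run(rules, memory))   # ['press-I', 'press-SPACE', 'press-ESC']
```

The (NOTE executing insert space) item exists only to let the second rule fire at the right moment, which is exactly the kind of bookkeeping entry whose cognitive plausibility is questioned later in this section.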
The rules in CCT need not represent error-free performance. They can be used to explain error phenomena, though they cannot predict them. For instance, the rules above for inserting a space are 'buggy' -- they do not check the editor's mode. Imagine you have just been typing the 'cognitive' in 'cognitivecomplexity theory' (with the space missing); you think for a few minutes and then look again at the
We have only discussed the user side of CCT here. If the cognitive user description is complemented by a description of the system, it is claimed that one can predict the difficulty of the mapping between the user's goals and the system model. The generalized transition networks which describe the system grammar themselves have a hierarchical structure. Thus both the description of the user and that of the system can be represented as hierarchies. These can then be compared to find mismatches and to produce a measure of dissonance.
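CCT does not prescribe a single algorithm for this comparison, but the idea can be sketched. In the hypothetical code below, both the user's goal hierarchy and the system's generalized transition network hierarchy are represented as nested dicts, and the dissonance measure is simply a count of nodes present in one hierarchy but not the other; the node names and the counting scheme are illustrative assumptions, not part of CCT itself.

```python
# Illustrative sketch: a crude 'dissonance' count between two hierarchies
# represented as nested dicts (node name -> dict of children). CCT claims
# such mismatches predict mapping difficulty; the measure below is a
# simplified stand-in, not a published metric.

def dissonance(user, system):
    """Count nodes present in one hierarchy but missing from the other."""
    score = 0
    for key in set(user) | set(system):
        if key not in user or key not in system:
            score += 1                        # structural mismatch
        else:
            score += dissonance(user[key], system[key])
    return score

user_goals    = {"edit": {"insert space": {}, "delete char": {}}}
system_states = {"edit": {"insert mode": {}, "delete char": {}}}
print(dissonance(user_goals, system_states))  # 2
```

Here the user thinks in terms of 'insert space' while the system offers an 'insert mode', and the measure flags both unmatched nodes, giving a dissonance of 2.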
Another problem is the particular choice of notations. Production rules are often suggested as a good model of the way people remember procedural knowledge, but there are obvious 'kludges' in the CCT description given above. In particular, the working memory entry (NOTE executing insert space) is there purely to allow the INSERT-SPACE-DONE rule to fire at the appropriate time. It is not at all clear that it has any real cognitive significance. One may also question whether the particular notation chosen for the system is critical to the method. One might choose to represent the system using any one of the dialog description notations in Chapter 8. Different notations would probably yield slightly different measures of dissonance.
The user's interaction with a computer is often viewed in terms of a language, so it is not surprising that several modelling formalisms have developed centred around this concept. Several of the dialog notations described in Chapter 8 are also based on linguistic ideas. Indeed, BNF grammars are frequently used to specify dialogs. The models here, although similar in form to dialog design notations, have been proposed with the intention of understanding the user's behaviour and analyzing the cognitive difficulty of the interface.
The BNF description above only represented the user's actions, not the user's perception of the system's responses. This input bias is surprisingly common amongst cognitive models, as we will discuss in Section 6.9. Reisner has developed extensions to the basic BNF descriptions which attempt to deal with this by adding 'information-seeking actions' to the grammar.
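To make the idea of BNF-based measures concrete, here is a hedged sketch: a toy dialog grammar (a hypothetical 'draw line' dialog, not necessarily Reisner's original example) stored as a Python dict mapping each non-terminal to its alternative right-hand sides, together with two simple counts of the kind such measures use -- the total number of rules and the length of the longest right-hand side.

```python
# Toy BNF dialog grammar (hypothetical example). Non-terminals are lower
# case; terminal user actions are upper case. Each non-terminal maps to a
# list of alternatives, each alternative a list of symbols.
grammar = {
    "draw-line":      [["select-line", "choose-points", "last-point"]],
    "choose-points":  [["choose-one"], ["choose-one", "choose-points"]],
    "choose-one":     [["position-mouse", "CLICK-MOUSE"]],
    "last-point":     [["position-mouse", "DOUBLE-CLICK-MOUSE"]],
    "select-line":    [["CLICK-LINE-BUTTON"]],
    "position-mouse": [["MOVE-MOUSE"]],
}

# Two crude complexity counts of the kind BNF-based measures employ.
num_rules   = sum(len(alts) for alts in grammar.values())
longest_rhs = max(len(rhs) for alts in grammar.values() for rhs in alts)
print(num_rules, longest_rhs)   # 7 3
```

Counting rules in this way captures only the surface structure of the dialog, which is precisely the limitation raised next: two grammars can have identical counts yet differ greatly in how consistently their rules hang together.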
Measures based upon BNF have been criticized as not 'cognitive' enough. They ignore the advantages of consistency both in the language's structure and in its use
Thus goal hierarchies can partially cope with display-oriented systems by an appropriate choice of level, but the problems do emphasize the rather prescriptive nature of the cognitive models underlying them.
These problems have been one of the factors behind the growing popularity of situated action [230] and distributed cognition [135, 119] in HCI (see also Chapter 14). Both approaches emphasize the way in which actions are contingent upon events and determined by context, rather than being preplanned. At one extreme, protagonists of these approaches seem to deny any planned actions or long-term goals. On the other side, traditional cognitive modellers are modelling display-based cognition using production rules and similar methods, which include sensory data within the models.
At a low level, chunked expert behaviour is modelled effectively using hierarchical or linguistic models, and this is where the keystroke-level model (discussed later in this chapter) has proved effective. In contrast, it is clear that no amount of cognitive modelling can capture the activity during the writing of a poem. Between these two, cognitive models will have differing levels of success and utility. Certainly models at all but the lowest levels must take into account the user's reactions to feedback from the system, otherwise they cannot address the fundamental issue of interactivity at all.