Furthermore, in face-to-face conversation, the manager can easily exert influence over a subordinate: both know their relative positions and this is reflected in the patterns of conversation and in other non-verbal cues. Email messages lose much of this sense of presence and it is more difficult for a manager to exercise authority. The 'levelling' effect even makes it possible for a subordinate to direct messages 'diagonally' across the hierarchy, to their manager's peers, or, even worse, to their manager's manager!
Again we see that social and managerial relationships can dominate technological considerations. Many video-based groupware systems are intended to create a sense of engagement, of active participation and social presence. Whether this will be sufficient to overcome ingrained attitudes remains to be seen.
As we saw in Chapter 1, there are five senses: sight, sound, touch, taste and smell. Of these, sight is used predominantly, but in everyday life it is backed up by the others. Some of us lack one or more of these senses and are considered disabled, less able to perform well in some activities. Often, though, the social attitudes and effects faced by those who lack a sense are more disabling than any actual reduction in ability. However, many things are inherently harder if we have only limited sensory input.
Consider an onion and an apple, very different foods. They have different uses in cooking because of their different natures, and most people would think they look, smell, taste and feel very different. However, when eaten, their texture is remarkably similar. Moreover, if you are blindfolded and cannot use sight to disambiguate them, you are left with taste and smell to tell them apart. The interesting thing is that if you have a cold, and thus your sense of smell is also removed, there is no way of distinguishing between them. It is only the smell of an onion that is different from that of an apple, not its taste, and so people who are blindfolded, with a cold, will happily eat an onion in just the same way as an apple. Thus our senses cannot always be relied upon on their own; indeed, we have seen in Chapter 1 that the visual system is easily fooled by optical illusions. However, together the senses represent a much more potent force.

We utilize sound to keep us aware of our surroundings, subconsciously monitoring the movement of people around us, the conversations going on that we are not consciously listening to, reacting to sudden noises, providing clues and cues that switch our attention from one thing to another. Such reactions were probably honed over thousands of years as being useful for survival, but our existence in the world of today is just as active and reactive as it ever was in the Stone Age; only the problems have altered and we have swapped predators - cars for sabre-toothed tigers, gunfire for stealthy assault with clubs, mugging for tribal warfare.
Smell provides us with other useful information in daily life: checking if food is bad, detecting early signs of fire, noticing that manure has been spread in a field. Touch too is a vital sense for us: tactile feedback forms an intrinsic part of the operation of many common tools - cars, typewriters, pens, anything that requires holding or moving. It can form a sensuous bond between individuals, communicating a wealth of non-verbal information. Examples of the use of sensory information are easy to come by (we looked at some in Chapter 1), but a vital feature is that our everyday interaction with each other and the world around us is a multi-sensory one, each sense providing different information that is built up into a whole. Since our interaction with the world is improved by multi-sensory input, it makes sense to ask whether multi-sensory information would benefit human-computer interaction. Just as we consider ourselves to be disabled if we are without one or more of our senses, a system that communicates through only one channel might be regarded as similarly impoverished.
In computing, the visual channel is the predominant channel of communication, but if we are to use the other senses we have to consider their suitability and the nature of the information that they can convey.
The use of sound is an obvious area for further exploitation. There is little doubt that we use hearing a great deal in daily life, and so extending its application to the interface may be beneficial. Sound is already used, in a limited manner, in some interfaces: beeps are used as warnings and synthesized speech is also used. Tactile feedback, as we have already seen, is also important in improving interactivity and so this represents another sense that we can utilize more effectively. However, taste and smell pose more serious problems for us. They are the least used of our senses, and are used more for receiving information than for communicating it. There are currently very few ways of implementing devices that can generate tastes and smells, and so these two areas are not supported. Whether this is a serious omission remains to be seen, but the tertiary nature of those senses tends to suggest that their incorporation, if it were possible, would lead to only a marginal improvement.
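To make the auditory options mentioned above a little more concrete, the following sketch shows the two uses of sound already found in some interfaces - a warning beep and a synthesized spoken message. It assumes a web browser environment with the standard Web Audio and Web Speech APIs; the particular frequency, duration and wording are illustrative choices only, not taken from the text.

// Sketch: a warning beep via the Web Audio API and a spoken message via the
// Web Speech API. Assumes a browser; most browsers only start audio after a
// user gesture such as a click.
const audioCtx = new AudioContext();

function beep(frequencyHz = 880, durationMs = 150): void {
  const osc = audioCtx.createOscillator();   // simple tone generator
  const gain = audioCtx.createGain();        // keep the volume modest
  gain.gain.value = 0.2;
  osc.frequency.value = frequencyHz;
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + durationMs / 1000);
}

function speak(message: string): void {
  // Synthesized speech, where the browser supports it.
  if ("speechSynthesis" in window) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(message));
  }
}

// Example: alert the user without requiring them to look at the screen.
beep();
speak("Warning: the document could not be saved.");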
Even if we do not use other senses in our systems, it is certainly worth thinking about the nature of these senses and what we gain from them as this will improve our understanding of the strengths and weaknesses of visual communication [70].
We have to distinguish what we mean when we talk about multi-modal and multi-media systems. Multi-modal systems have been developed to take advantage of the multi-sensory nature of humans. Utilizing more than one sense, or mode of communication, these systems make much fuller use of the auditory channel and, to a lesser extent, touch. Multi-media systems, by contrast, use a number of different media - text, graphics, sound, video - to present information, but these may all be channelled through just one or two of the senses.
However, in spite of this, the auditory channel is comparatively little used in standard interfaces. Information is provided almost entirely visually. There is a danger that this will overload the visual channel, demanding that the user attend to too many things at once and select appropriate information from a mass of detail in the display. Reliance on visual information forces attention to remain focused on the screen, and the persistence of visual information means that even detail that is quickly out of date may remain on display after it is required, cluttering the screen further. Careful use of sound in the interface would alleviate these problems. Hearing is our second most used sense and provides us with a range of information in everyday life, as we saw in Chapter 1. Humans can differentiate a wide range of sounds and can react faster to auditory than to visual stimuli. So how can we exploit this capability in interface design?
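One illustrative answer is to let sound carry status information that would otherwise clutter the screen. The sketch below, again assuming a browser with the Web Audio API, maps a few hypothetical background events to short, distinctive tones so the user's eyes can stay on the main task; the event names, pitches and durations are invented for illustration only.

// Sketch: short distinctive tones for background events, built on the
// Web Audio API. Event names, pitches and durations are illustrative only.
const ctx = new AudioContext();

const earcons = {
  mailArrived:   { freqHz: 660, durationMs: 120 },  // brief, unobtrusive
  taskFinished:  { freqHz: 440, durationMs: 200 },
  errorOccurred: { freqHz: 220, durationMs: 400 },  // low and long: demands attention
} as const;

function playEarcon(event: keyof typeof earcons): void {
  const { freqHz, durationMs } = earcons[event];
  const osc = ctx.createOscillator();
  osc.frequency.value = freqHz;
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationMs / 1000);
}

// The user hears that a background task has finished without glancing away.
playEarcon("taskFinished");

In practice, varying timbre and rhythm as well as pitch makes such cues easier to tell apart.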