HUMAN-COMPUTER INTERACTION
SECOND EDITION
Understanding the basic construction of the eye goes some way to explaining the physical mechanisms of vision but visual perception is more than this. The information received by the visual apparatus must be filtered and passed to processing elements which allow us to recognize coherent scenes, disambiguate relative distances and differentiate colour. We will consider some of the capabilities and limitations of visual processing later, but first we will look a little more closely at how we perceive size and depth, brightness and colour, each of which is crucial to the design of effective visual interfaces.
As we have seen, sound can convey a remarkable amount of information. It is rarely used to its potential in interface design, usually being confined to warning sounds and notifications.
Optical character recognition (OCR) is the process whereby the computer can 'read' the characters on the page. It is only comparatively recently that print could be reliably read, since the wide variety of typefaces and print sizes makes this more difficult than one would imagine -- it is not simply a matter of matching a character shape to the image on the page. In fact, OCR is rather a misnomer nowadays as, although the document is optically scanned, the OCR software itself operates on the bitmap image. Current software can recognize 'unseen' fonts and can even produce output in word-processing formats preserving super- and subscripts, centring, italics and so on.
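As a concrete illustration, the short sketch below runs a scanned page through the open-source Tesseract engine via its Python wrapper, pytesseract. This is our choice of tool for illustration, not one discussed in the text, and the file name is a placeholder.

from PIL import Image
import pytesseract

# The scanner delivers a bitmap image; the OCR software operates on that bitmap.
page = Image.open("scanned_page.png")  # placeholder file name

# Recognize the characters on the page; modern engines cope with many
# typefaces and sizes without prior training on the particular font.
text = pytesseract.image_to_string(page)
print(text)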
Workers at Xerox Palo Alto Research Center (also known as Xerox PARC) capitalized on this by using paper as a medium of interaction with computer systems [125]. A special identifying mark is printed onto forms and similar output. The printed forms may have check boxes or areas for writing numbers or (in block capitals!) words. The form can then be scanned back in. The system reads the identifying mark and therefore knows what sort of paper form it is dealing with. Because the system printed the form in the first place, it does not need to apply OCR to the form's printed text; instead it can detect the check boxes that have been ticked and the areas that have been filled in.
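The sketch below illustrates this kind of check-box detection. It assumes the system, having printed the form, knows where each box lies on the page; the coordinates, file name and box names are hypothetical. A box counts as ticked when enough of its area is dark ink.

from PIL import Image

# Hypothetical box positions (left, upper, right, lower), known because
# the system printed the form and identified it from its mark.
CHECKBOXES = {
    "express_delivery": (120, 340, 140, 360),
    "gift_wrap": (120, 380, 140, 400),
}

def is_ticked(image, box, threshold=0.15):
    """Return True if the fraction of dark pixels in the box exceeds threshold."""
    region = image.convert("L").crop(box)      # greyscale region of interest
    pixels = list(region.getdata())
    dark = sum(1 for p in pixels if p < 128)   # pixels darker than mid-grey
    return dark / len(pixels) > threshold

form = Image.open("scanned_form.png")          # placeholder file name
for name, box in CHECKBOXES.items():
    print(name, "ticked" if is_ticked(form, box) else "blank")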
One application of this technology is mail order catalogues. The order form is printed with a glyph. When completed, forms can simply be collected into bundles and scanned in batches, generating orders automatically. If the customer faxes an order the fax-receiving software recognizes the glyph and the order is processed without ever being handled at the company end. Such a paper user interface may involve no screens or keyboards whatsoever. It is paradoxical that Xerox PARC, where much of the driving work behind the WIMP interface began, have also been the developers of this totally non-screen and non-mouse paradigm. However, the common principle behind each is the novel and appropriate use of different media for graceful interaction.
The last main feature of windowing systems is the menu, an interaction technique that is common across many non-windowing systems as well. A menu presents a choice of operations or services that can be performed by the system at a given time. In Chapter 1, we pointed out that our ability to recall information is inferior to our ability to recognize it from some visual cue. Menus provide information cues in the form of an ordered list of operations that can be scanned. This implies that the names used for the commands in the menu should be meaningful and informative.
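To make this concrete, the fragment below builds a small menu in Tkinter (our choice of toolkit for illustration). The point is not the toolkit but the labels: each one names the operation it performs, so the user can scan the list and recognize the right command rather than recall it.

import tkinter as tk

root = tk.Tk()
menubar = tk.Menu(root)

file_menu = tk.Menu(menubar, tearoff=0)
# Meaningful, informative command names support recognition over recall.
file_menu.add_command(label="Open Document...", command=lambda: print("open"))
file_menu.add_command(label="Save Document", command=lambda: print("save"))
file_menu.add_separator()
file_menu.add_command(label="Quit", command=root.destroy)

menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)
root.mainloop()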
This is one of the reasons for platform and company style guides. If everyone designs buttons the same and menus the same, then users will be able to recognize them when they see them. However, this is not sufficient in itself.
It is worth remembering that interactivity is the defining feature of an interactive system. This can be seen in many areas of HCI. For example, the recognition rate for speech recognition is too low to allow transcription from tape, but in an airline reservation system, so long as the system can reliably recognize yes and no, it can reflect back its understanding of what you said and seek confirmation. Speech-based input is difficult; speech-based interaction is easier. Also, in the area of information visualization the most exciting developments are those where users can interact with a visualization in real time, changing parameters and seeing the effect.
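Returning to the reservation example, a minimal sketch of such a confirmation loop follows. The recognizer only needs to distinguish yes from no reliably; recognize_yes_no() below is a hypothetical stand-in for a real speech recognizer, here simulated with keyboard input.

def recognize_yes_no():
    # Placeholder: a real system would recognize "yes" or "no" from audio.
    return input("(say yes/no) ").strip().lower()

def confirm(understanding):
    """Reflect back what the system thinks it heard and seek confirmation."""
    while True:
        print(f"I understood: {understanding}. Is that correct?")
        answer = recognize_yes_no()
        if answer in ("yes", "no"):
            return answer == "yes"
        # Unrecognized input: re-prompt rather than guess.
        print("Sorry, please answer yes or no.")

if confirm("a flight from London to Paris on Friday"):
    print("Booking confirmed.")
else:
    print("Let's start again.")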
Adaptivity is automatic customization of the user interface by the system. Decisions for adaptation can be based on user expertise or observed repetition of certain task sequences. The distinction between adaptivity and adaptability is that the user plays an explicit role in adaptability, whereas his role in an adaptive interface is more implicit. A system can be trained to recognize the behaviour of an expert or novice and adjust its dialog control or help system automatically to match the needs of the current user. This is in contrast with a system which requires the user to classify himself as novice or expert at the beginning of a session. We discuss adaptive systems further in Chapter 12. Automatic macro construction is a form of programming by example, combining adaptability with adaptivity in a simple and useful way. Repetitive tasks can be detected by observing user behaviour, and macros can then be constructed automatically (or with user consent) to perform those tasks.
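The first step of such automatic macro construction is spotting the repetition. The sketch below counts repeated action subsequences in an observed event log; the log and the action names are hypothetical, and a real system would go on to offer a macro for each candidate, with user consent.

from collections import Counter

def repeated_sequences(log, length=3, min_count=2):
    """Return action subsequences of the given length seen at least min_count times."""
    windows = [tuple(log[i:i + length]) for i in range(len(log) - length + 1)]
    return [seq for seq, n in Counter(windows).items() if n >= min_count]

# Hypothetical observed event log of user actions.
log = ["open", "copy", "paste", "save",
       "open", "copy", "paste", "save"]

for seq in repeated_sequences(log):
    print("Candidate macro:", " -> ".join(seq))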