HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search Results


Search results for screens
Showing 101 to 110 of 281


Chapter 3 The interaction 3.10 Summary Page 138

Screen design and layout are also very important, requiring both psychological understanding and graphic design skill. However, one must be careful to distinguish a good-looking layout from an effective one.


Chapter 3 The interaction 3.10 Summary Page 139

Interactivity is the heart of all modern interfaces and is important at many levels from the ordering of screens to the clicking of a button.


Chapter 4 Usability paradigms and principles 4.2.2 Video display units Page 145

As early as the mid-1950s researchers were experimenting with the possibility of presenting and manipulating information from a computer in the form of images on a video display unit (VDU). These display screens could provide a more suitable medium than a paper printout for presenting vast quantities of strategic information for rapid assimilation. The earliest display screen images were developed for military applications, most notably the Semi-Automatic Ground Environment (SAGE) project of the US Air Force. It was not until 1962, however, when a young graduate student at the Massachusetts Institute of Technology, Ivan Sutherland, astonished the established computer science community with his Sketchpad program, that the capabilities of visual images were realized. As described in Howard Rheingold's history of computing book Tools for Thought [207]:


Chapter 4 Usability paradigms and principles 4.2.2 Video display units Page 146

Sketchpad allowed a computer operator to use the computer to create, very rapidly, sophisticated visual models on a display screen that resembled a television set. The visual patterns could be stored in the computer's memory like any other data, and could be manipulated by the computer's processor… But Sketchpad was much more than a tool for creating visual displays. It was a kind of simulation language that enabled computers to translate abstractions into perceptually concrete forms. And it was a model for totally new ways of operating computers; by changing something on the display screen, it was possible, via Sketchpad, to change something in the computer's memory.


Chapter 4 Usability paradigms and principles 4.2.6 The metaphor Page 149

A more extreme example of metaphor occurs with virtual reality systems. In a virtual reality system, the metaphor is not simply captured on a display screen. Rather, the user is also portrayed within the metaphor, literally creating an alternative, or virtual, reality. Any actions that the user performs are supposed to become more natural and so more movements of the user are interpreted, instead of just keypresses, button clicks and movements of an external pointing device. A virtual reality system also needs to know the location and orientation of the user. Consequently, the user is often 'rigged' with special tracking devices so that the system can locate them and interpret their motion correctly.


Chapter 4 Usability paradigms and principles 4.2.7 Direct manipulation Page 150

In the early 1980s as the price of fast and high-quality graphics hardware was steadily decreasing, designers were beginning to see that their products were gaining popularity as their visual content increased. As long as the user--system dialog remained largely unidirectional -- from user command to system command line prompt -- computing was going to stay within the minority population of the hackers who revelled in the challenge of complexity. In a standard command line interface, the only way to get any feedback on the results of previous interaction is to know that you have to ask for it and to know how to ask for it. In terms of the interaction framework discussed in Chapter 3, not every articulated input expression from the user is accompanied by some output expression which reveals an underlying change in the internal state of the system. Rapid visual and audio feedback on a high-resolution display screen or through a high-quality sound system makes it possible to provide evaluative information for every executed user action.


Chapter 4 Usability paradigms and principles 4.2.7 Direct manipulation Page 151

Somewhat related to the visualization provided by direct manipulation is the WYSIWYG paradigm, which stands for 'What you see is what you get'. What you see on a display screen, for example when you are using a word processor, is not the actual document that you will be producing in the end. Rather, it is a representation or rendering of what that final document will look like. The implication with a WYSIWYG interface is that the difference between the representation and the final product is minimal, and the user is easily able to visualize the final product from the computer's representation. So, in the word-processing example, you would be able to see what the overall layout of your document would be from its image on screen, minimizing any guesswork on your part to format the final printed copy.


Chapter 4 Usability paradigms and principles 4.2.7 Direct manipulation Page 152

With WYSIWYG interfaces, it is the simplicity and immediacy of the mapping between representation and final product that matters. In terms of the interaction framework, the observation of an output expression is made simple so that assessment of goal achievement is straightforward. But WYSIWYG is not a panacea for usability. What you see is all you get! In the case of a word processor, it is difficult to achieve more sophisticated page design if you must always see the results of the layout on screen. For example, suppose you want to include a picture in a document you are writing. You design the picture and then place it in the current draft of your document, positioning it at the top of the page on which it is first referenced. As you make changes to the paper, the position of the picture will change. If you still want it to appear at the top of a page, you will no doubt have to make adjustments to the document. It would be easier if you only had to include the picture once, with a directive that it should be positioned at the top of the printed page, whether or not it appears that way on screen. You might sacrifice the WYSIWYG principle in order to make it easier to incorporate such floatable objects in your documents.
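The floatable-object directive described in this passage is exactly the approach taken by batch-formatting systems such as LaTeX, where the source markup deliberately does not match the printed page. A minimal sketch of such a directive (the filename and caption are invented for illustration):

```latex
\documentclass{article}
\usepackage{graphicx}

\begin{document}

% The [t] placement option asks for the figure to float to the top
% of a page; where it appears in the source need not match where it
% is printed, so the on-screen source is not WYSIWYG.
\begin{figure}[t]
  \centering
  \includegraphics[width=0.6\textwidth]{diagram} % hypothetical image file
  \caption{A picture positioned by directive rather than by hand.}
\end{figure}

The picture is referenced here, but the formatter decides its
final placement when the document is typeset.

\end{document}
```

Here the WYSIWYG principle is traded for a declarative placement rule, sparing the user from repositioning the picture after every edit.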


Chapter 4 Usability paradigms and principles 4.2.10 Multi-modality Page 154

The vast majority of interactive systems use the traditional keyboard and possibly a pointing device such as a mouse for input and are restricted to one (possibly colour) display screen with limited sound capabilities for output. Each of these input and output devices can be considered as communication channels for the system and they correspond to certain human communication channels, as we saw in Chapter 1. A multi-modal interactive system is a system that relies on the use of multiple human communication channels. Each different channel for the user is referred to as a modality of interaction. In this sense, all interactive systems can be considered multi-modal, for humans have always used their visual and haptic (touch) channels in manipulating a computer. In fact, we often use our audio channel to hear whether the computer is actually running properly.


Chapter 4 Usability paradigms and principles Predictability Page 163

Except when interacting with some video games, a user does not take very well to surprises. Predictability of an interactive system means that the user's knowledge of the interaction history is sufficient to determine the result of his future interaction with it. There are many degrees to which predictability can be satisfied. The knowledge can be restricted to the presently perceivable information, so that the user need not remember anything other than what is currently observable. The knowledge requirement can be increased to the limit where the user is actually forced to remember what every previous keystroke was and what every previous screen display contained (and the order of each!) in order to determine the consequences of the next input action.
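The two degrees of predictability described above can be sketched in formal notation (ours, not the book's). Writing $H$ for the interaction history, $i$ for the next input, and $\mathit{obs}(H)$ for what is currently observable on screen:

```latex
% Weak predictability: the effect of the next input i is determined
% by the complete interaction history H -- the user may need to
% remember every previous keystroke and screen.
\mathit{result}(H, i) = f(H, i)

% Strong, observation-based predictability: the effect is determined
% by the currently observable state alone, so nothing beyond the
% present display need be remembered.
\mathit{result}(H, i) = g(\mathit{obs}(H), i)
```

Intermediate degrees of predictability correspond to knowledge functions between these two extremes, requiring the user to remember some, but not all, of the history.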

