HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search Results

Search results for mouse
Showing 40 to 49 of 176


Chapter 3 The interaction 3.6.4 Menus Page 127

Pull-down menus are dragged down from the title at the top of the screen, by moving the mouse pointer into the title bar area and pressing the button. Fall-down menus are similar, except that the menu automatically appears when the mouse pointer enters the title bar, without the user having to press the button. Some menus are pin-up menus, in that they can be 'pinned' to the screen, staying in place until explicitly asked to go away. Pop-up menus appear when a particular region of the screen, maybe designated by an icon, is selected, but they only stay as long as the mouse button is depressed.
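The four menu styles differ only in which input events open and close them. The sketch below makes that explicit; the event names are illustrative abstractions, not the API of any real windowing toolkit.

```python
# Sketch of the event conditions that open and close each menu style.
# Event names ("button-press", "pointer-enter", etc.) are illustrative,
# not taken from any real windowing toolkit.

def menu_behaviour(style, event, pointer_in_title):
    """Return 'open', 'close', or None for a given input event."""
    if style == "pull-down":
        # opens only when the button is pressed over the title
        if event == "button-press" and pointer_in_title:
            return "open"
    elif style == "fall-down":
        # opens as soon as the pointer enters the title bar, no press needed
        if event == "pointer-enter" and pointer_in_title:
            return "open"
    elif style == "pop-up":
        # tied to the mouse button: disappears when it is released
        if event == "button-press":
            return "open"
        if event == "button-release":
            return "close"
    elif style == "pin-up":
        # stays on screen until explicitly dismissed
        if event == "dismiss":
            return "close"
    return None
```

Comparing `menu_behaviour("pull-down", "pointer-enter", True)` (which returns `None`) with the same event for a fall-down menu (which returns `"open"`) captures the single-event difference between the two styles.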


Chapter 3 The interaction 3.6.7 Palettes Page 130

In many application programs, interaction can enter one of several modes. The defining characteristic of modes is that the interpretation of actions, such as keystrokes or gestures with the mouse, changes as the mode changes. For example, using the standard UNIX text editor vi, keystrokes can be interpreted either as operations to insert characters in the document (insert mode) or as operations to perform file manipulation (command mode). Problems occur if the user is not aware of the current mode. Palettes are a mechanism for making the set of possible modes and the active mode visible to the user. A palette is usually a collection of icons that are reminiscent of the purpose of the various modes. An example in a drawing package would be a collection of icons to indicate the pixel colour or pattern that is used to fill in objects, much like an artist's palette for paint.
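The defining property of a mode, that the same keystroke is interpreted differently depending on hidden state, can be sketched in a few lines. The class below is a toy editor loosely in the spirit of vi, not a faithful implementation of it; only two command keys are modelled.

```python
# Toy moded editor, loosely in the spirit of vi: the same keystroke means
# different things depending on the current mode. Only 'i', 'x' and ESC
# are modelled; this is an illustration, not a vi implementation.

class ModedEditor:
    def __init__(self):
        self.mode = "command"   # vi starts in command mode
        self.text = ""

    def keystroke(self, key):
        if self.mode == "command":
            if key == "i":
                self.mode = "insert"        # 'i' switches to insert mode
            elif key == "x":
                self.text = self.text[:-1]  # 'x' deletes (here: last char)
        elif self.mode == "insert":
            if key == "ESC":
                self.mode = "command"       # ESC returns to command mode
            else:
                self.text += key            # any other key inserts itself
```

Note that the keystroke `i` inserts a letter in one mode and changes mode in the other: exactly the situation where a user unaware of the current mode gets a surprising result, and why a visible mode indicator such as a palette helps.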


Chapter 3 The interaction 3.8 Interactivity Page 136

Interactivity is also crucial in determining the 'feel' of a WIMP environment. On the surface, all WIMP systems have virtually the same elements: windows, icons, menus, pointers, dialog boxes, buttons, etc. However, the precise behaviour of these elements differs both within a single environment and between environments. For example, we have already discussed the different behaviour of pull-down and fall-down menus. These look the same, but fall-down menus are more easily invoked by accident (and not surprisingly the windowing environments that use them have largely fallen into disuse!). In fact, menus are a major difference between the MacOS and Microsoft Windows environments: in MacOS you have to keep the mouse button depressed throughout menu selection; in Windows you can click on the menu bar and a pull-down menu appears and remains there until an item is selected or it is cancelled. Similarly the detailed behaviour of buttons is quite complex, as we shall see in Chapter 9.
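The MacOS/Windows contrast above is a difference in when the menu persists. The sketch below models each dialog as a small state machine over a sequence of abstract events; the event names are illustrative, not a real toolkit API.

```python
# Sketch contrasting the two menu-selection dialogs described above.
# Takes a sequence of (event, item-under-pointer) pairs and returns the
# selected item, or None. Event names are illustrative, not a real API.

def select_from_menu(style, events):
    menu_open = False
    for event, item in events:
        if style == "macos-classic":
            # the menu stays open only while the mouse button is held down
            if event == "press-on-menu-bar":
                menu_open = True
            elif event == "release" and menu_open:
                return item     # whatever is under the pointer at release
        elif style == "windows":
            # a click opens the menu; it stays until a later click selects
            if event == "click-on-menu-bar":
                menu_open = True
            elif event == "click" and menu_open:
                return item
    return None
```

The same logical task, "pick Save from the File menu", is thus driven by a press-drag-release gesture in one environment and by two separate clicks in the other, which is precisely what gives the two systems a different 'feel'.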


Chapter 4 Usability paradigms and principles 4.2.3 Programming toolkits Page 147

Many of the ideas that Engelbart's team developed at the Augmentation Research Center -- such as word processing and the mouse -- only attained mass commercial success decades after their invention. A live demonstration of his oNLine System (NLS, also later known as NLS/Augment) was given in the autumn of 1968 at the Fall Joint Computer Conference in San Francisco before a captivated audience of computer sceptics. We are not so concerned here with the interaction techniques which were present in NLS, as many of those will be discussed later. What is important here is the method that Engelbart's team adopted in creating their very innovative and powerful interactive systems with the relatively impoverished technology of the 1960s.


Chapter 4 Usability paradigms and principles 4.2.10 Multi-modality Page 154

The vast majority of interactive systems use the traditional keyboard and possibly a pointing device such as a mouse for input and are restricted to one (possibly colour) display screen with limited sound capabilities for output. Each of these input and output devices can be considered as communication channels for the system and they correspond to certain human communication channels, as we saw in Chapter 1. A multi-modal interactive system is a system that relies on the use of multiple human communication channels. Each different channel for the user is referred to as a modality of interaction. In this sense, all interactive systems can be considered multi-modal, for humans have always used their visual and haptic (touch) channels in manipulating a computer. In fact, we often use our audio channel to hear whether the computer is actually running properly.


Chapter 4 Usability paradigms and principles 4.2.12 The World Wide Web Page 156

Whilst the Internet has been around since 1969, it did not become a major paradigm for interaction until the advent and ease of availability of well-designed graphical interfaces (browsers) for the Web. These browsers allow users to access multimedia information easily, using only a mouse to point and click. This shift towards the integration of computation and communication is transparent to users; all they realize is that they can get the current version of published information practically instantly. In addition, the language used to create these multimedia documents is relatively simple, opening the opportunity of publishing information to any literate, and connected, person. However, there are important limitations of the Web as a hypertext medium and in Chapter 16 we discuss some of the special design issues for the Web. Interestingly, the Web did not provide any technological breakthroughs; all the required functionality previously existed, such as transmission protocols, distributed file systems, hypertext and so on. The impact has been due to the ease of use of both the browsers and HTML, and the fact that critical mass (see Chapter 14) was established, at first in academic circles, and then rapidly expanded into the leisure and business domains. The burgeoning interest led service providers, those providing connections to the Internet, to make it cheap to connect, and a whole new subculture was born.


Chapter 4 Usability paradigms and principles Predictability Page 164

As another, possibly more pertinent example, imagine you have created a complex picture using a mouse-driven graphical drawing package. You leave the picture for a few days and then go back to change it around a bit. You are allowed to select certain objects for editing by positioning the mouse over the object and clicking a mouse button to highlight it. Can you tell what the set of selectable objects is? Can you determine which area of the screen belongs to which of these objects, especially if some objects overlap? Does the visual image on the screen indicate what objects form a compound object which can only be selected as a group? Predictability of selection in this example depends on how much of the history of the creation of the visual image is necessary in order for you to determine what happens when you click on the mouse button.
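The hidden history here is typically the drawing order: with overlapping objects, a click usually selects the topmost one, and "topmost" depends on the order in which the objects were created, which the final image does not show. The sketch below illustrates this; the shape names and rectangular hit regions are a simplifying assumption for the example.

```python
# Sketch of why selection can be unpredictable: with overlapping objects a
# click selects the topmost one, and "topmost" is determined by creation
# order -- history invisible in the final image. Hit regions are
# simplified to bounding rectangles for illustration.

def hit_test(shapes, x, y):
    """shapes: list of (name, x0, y0, x1, y1) in creation order.
    Returns the most recently created shape containing the point."""
    for name, x0, y0, x1, y1 in reversed(shapes):  # last drawn is on top
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

shapes = [("square", 0, 0, 10, 10), ("circle", 5, 5, 15, 15)]
# the identical click selects different objects under different histories:
hit_test(shapes, 7, 7)                   # -> "circle" (drawn last)
hit_test(list(reversed(shapes)), 7, 7)   # -> "square" (drawn last)
```

Two documents can render to pixel-identical images yet respond differently to the same click, which is exactly the failure of predictability the text describes.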


Chapter 4 Usability paradigms and principles Familiarity Page 166

Some psychologists argue that there are intrinsic properties, or affordances, of any visual object that suggest to us how it can be manipulated. The appearance of the object stimulates a familiarity with its behaviour. For example, the shape of a door handle can suggest how it should be manipulated to open a door, and a key on a keyboard suggests to us that it can be pushed. In the design of a graphical user interface, the appearance of a soft button in a form-based interface suggests that it should be pushed (though it does not suggest how it is to be pushed via the mouse). Effective use of the affordances which exist for interface objects can enhance the familiarity of the interactive system.


Chapter 4 Usability paradigms and principles Multi-threading Page 169

Multi-modality of a dialog is related to multi-threading. Coutaz has characterized two dimensions of multi-modal systems [56]. First, we can consider how the separate modalities (or channels of communication) are combined to form a single input or output expression. Multiple channels may be available, but any one expression may be restricted to just one channel (keyboard or audio, for example). As an example, to open a window the user can choose between a double-click on an icon, a keyboard shortcut, or saying 'open window'. Alternatively, a single expression can be formed by a mixing of channels. Examples of such fused modality are error warnings which usually contain a textual message accompanied by an audible beep. On the input side, we could consider chord sequences of input with a keyboard and mouse (pressing the shift key while a mouse button is pressed, or saying 'drop' as you drag a file over the trash icon). We can also characterize a multi-modal dialog depending on whether it allows concurrent or interleaved use of multiple modes.
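The distinction between a single-channel expression and a fused one can be sketched as an input interpreter that combines channels into one command. The event and command names below are illustrative assumptions, not drawn from any real system.

```python
# Sketch of "fused" multi-modal input: keyboard, mouse and speech channels
# combine into a single input expression. Event and command names are
# illustrative, not from any real system.

def interpret(mouse_action, modifier=None, speech=None):
    # speech + mouse fused into one expression ('drop' while dragging)
    if mouse_action == "drag-over-trash" and speech == "drop":
        return "delete-file"
    if mouse_action == "click":
        # chord of keyboard and mouse channels
        if modifier == "shift":
            return "extend-selection"
        # single-channel expression: the click alone
        return "select"
    return None
```

A plain `interpret("click")` is a one-channel expression, while `interpret("click", modifier="shift")` and `interpret("drag-over-trash", speech="drop")` each fuse two channels into a single command, Coutaz's second case.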


Chapter 4 Usability paradigms and principles Responsiveness Page 174

As significant as absolute response time is response time stability. Response time stability covers the invariance of response duration for identical or similar computational demands. For example, pull-down menus are expected to pop up instantaneously as soon as a mouse button is pressed. Variations in response time will impede the anticipation exploited by motor skill.

