HUMAN-COMPUTER INTERACTION
SECOND EDITION
The power of the PIE model is that it can be applied at many levels of abstraction. Some properties may only be valid at one level, but many should be true at all levels of system description. It is even possible to apply the PIE model just within the user, in the sense that the commands are the user's intended actions and the display, the perceived response.
A related issue is predictability. Imagine you have been using a drawing package and in the middle you get thirsty and go to get a cup of tea. On returning, you are faced with the screen -- do you know what to do next? If there are two shapes, one on top of the other, the drawing package may interpret mouse clicks as operating on the 'top' shape. However, there may be no visual indication of which is topmost. The screen image does not tell you what the effect of your actions will be; you need to remember how you got there: your command history. This has been called the 'gone away for a cup of tea' problem. In fact, the state of the system determines the effects of any future commands, so if we have a system which is observable, in the sense that the display determines the state, it is also predictable. Predictability is a special case of observability.
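The link between observability and predictability can be illustrated with a toy model of the drawing-package example (a sketch only; the state and function names below are invented for illustration, not the formal PIE notation):

```python
# Toy model of the 'cup of tea' problem. A state records which shape
# is on top; the display function renders what the user sees on screen.

def display(state):
    # Both stacking orders render identically: the screen gives no
    # visual indication of which shape is topmost.
    return "circle and square overlapping"

def click_effect(state):
    # But the effect of a click depends on the hidden stacking order.
    return "selected " + state["top"]

states = [{"top": "circle"}, {"top": "square"}]

def is_observable(states, display):
    """Observable: the display determines the state, so no two
    distinct states may render to the same screen image."""
    images = {}
    for s in states:
        d = display(s)
        if d in images and images[d] != s:
            return False
        images[d] = s
    return True
```

Here two distinct states share one display, so `is_observable` fails, and -- exactly as in the text -- the effect of the next click cannot be predicted from the screen alone, since `click_effect` differs between the two identically displayed states.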
What would it mean for a system to be transparent in one of these senses? If the system were result transparent, then, when we came back from our cup of tea, we could look at the display and work out in our heads (using transparent_R) exactly what the printed drawing would look like. Whether we could actually do this in our heads is another matter. For most drawing packages the function would simply be to ignore the menus and 'photocopy' the rest of the screen.
We now concentrate our attention on programming the actual interactive application, which would correspond to a client in the client--server architecture of Figure 10.2. Interactive applications are generally user driven in the sense that the action the application takes is determined by the input received from the user. We describe two programming paradigms which can be used to organize the flow of control within the application. The windowing system does not necessarily determine which of these two paradigms is to be followed.
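The contrast between two such control-flow paradigms can be sketched as follows (a sketch under the common assumption that the pair in question is a read-evaluation loop versus notification-based callbacks; the event names and the `Toolkit` class are invented for illustration, not any real windowing API):

```python
# Sketch of two control-flow paradigms for a user-driven application.
# All names here are illustrative; no real toolkit API is assumed.

# 1. Read-evaluation loop: the application owns the main loop, reads
#    each input event itself and decides what to do with it.
def read_eval_loop(events, handlers):
    log = []
    for event in events:            # stand-in for blocking event reads
        if event == "quit":
            break
        log.append(handlers.get(event, lambda: "ignored")())
    return log

# 2. Notification-based: the application only registers callbacks;
#    a (simulated) toolkit-owned loop dispatches events to them.
class Toolkit:
    def __init__(self):
        self.callbacks = {}

    def register(self, event, handler):
        self.callbacks[event] = handler

    def run(self, events):          # the toolkit, not the app, loops
        log = []
        for event in events:
            if event == "quit":
                break
            if event in self.callbacks:
                log.append(self.callbacks[event]())
        return log
```

In the first style the application's structure mirrors the dialog; in the second, control is inverted and the toolkit decides when application code runs -- which is why the windowing system need not dictate the choice between them.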
Implicit in the term 'cooperative work' is that there are two or more participants. These are denoted by the circles labelled 'P'. They are engaged in some common work, and to do so interact with various tools and products. Some of these are
One set of problems is connected with the small field of view of a television camera, and with the size and quality of the resulting images. Even in a one-to-one video conversation, we need to decide whether to take a simple head-and-shoulders shot, the whole torso, or head to foot. If there is a group at either end, even just two or three people, the problems magnify enormously. If you view everyone at once, the image of the speaker may become so small that it is hard to see the body gestures. Seeing these gestures is one of the big advantages of video conferences over the much cheaper telephone conference. However, you need a skilled camera technician to follow the speaker, zooming in and out as necessary. Furthermore, zooming in on the speaker runs the risk of losing the sense of presence: the participants at the far end do not know whether the speaker's colleagues are nodding in agreement or falling asleep!
Video conferences support specific planned meetings. However, one of the losses of working in a different site from a colleague is the chance meetings whilst walking down a corridor or drinking tea. Several experimental systems aim to counter this, giving a sense of social presence at a distance. One solution is the video window or video wall, a very large television screen set into the wall of common rooms at different sites [90]. The idea is that as people wander about the common room at one site they can see and talk to people at the other site -- the video wall is almost like a window or doorway between the sites.
Given that the virtual environment is within a computer, it makes sense to allow participants to bring other computer-based artefacts into it. However, many of these are not themselves 3D objects, but simple text or diagrams. It is possible to map these onto flat virtual surfaces within the 3D virtual world (or even onto virtual computer screens!). However, text is especially difficult to read when rendered in perspective, and so some environments take advantage of the fact that this is a virtual world and present such surfaces face on to all participants. But now we have a world whose appearance is participant dependent.
Imagine you and a colleague are facing each other and looking at the same text object. It will be rather like having a piece of paper held up between you with the same text printed on both sides. Your colleague points at the left-hand side of the text and refers to it. What should you see -- a virtual finger pointing at the wrong side of the text, or a disembodied hand, torn from your colleague's arm, pointing at the correct place on your side of the paper? This is similar to the problems of shared focus which we shall discuss in the next section, but it is perhaps worse in this context, as the users are lulled into a false sense that the world they are dealing with is truly like the normal real world.
The video wall allows people from remote locations to meet. In a sense it extends the normal physical space of the participants as they can see the remote room, but the image rendered is a real physical space, albeit on a video screen.