3.3.5    Manipulation of objects and conventional education theory

(This section completes the literature review).

The aim in this section was to explore in some detail elements in the wider readings of the literature review related to the quality of computer interface engagement that may have resonance in the conventional education theory of writers such as Piaget and Vygotsky. Specifically, relationships were inferred between the value of children’s manipulation of objects in a natural environment as part of the learning process and manipulation on the computer, which may inform the significance of the physical child-computer relationship for a child involved in activity at a computer.

There was evidence for children having a close emotional relationship – attributing qualities of ‘life’ to objects they interact with or manipulate – as explored by Piaget and Vygotsky. Manipulation was seen by Piaget in the context of ‘building block’ theory, a process by which the concept of causation was learnt by children manipulating objects. Lakoff and Johnson (1980) considered manipulation as a ‘gestalt consisting of properties that naturally occur together in our daily experience of performing direct manipulations’ (p. 75). The significance of the role of manipulation was explored in more detail.

Piaget and Vygotsky were writing at a time when computers were in their infancy and the graphical user interface had not been invented. Their findings were compared with those of Turkle (1984), who was writing at a time when children were primarily writing code on computers. ‘Killing’ was the term used by children to express what happened when they ‘crashed’ the old machines, and it generated feelings of endings and beginnings, of life and death. Turkle specifically referred back to Piaget’s theories in relation to children and the computer: ‘computers offer an experience of restoring life as well as ending it’ (p. 22). Turkle reported that strong emotional feelings were associated with children experiencing computers. They were smart machines. They were alive. There was excitement at responding to a living machine. She also reported the particular view children have of the concept of the aliveness of objects. In Piaget’s terms, at six a child might see a rolling stone, a river and a bicycle as alive because they move; the notion of life was built on the concept of motion. For an eight-year-old child, the river may still be alive because the child cannot yet account for its motion as coming from outside itself, but the stone and the bicycle are not alive. At different stages in a child’s development trees could be alive because their branches wave, not alive because they stay in the same place, and alive again because they grow or because sap flows in them. Objects around children were things to think with: for example, the notion of motion as life developed as children ‘played’ with stories such as ‘the man in the moon’, and only later understood the scientific explanation.

However, having pointed out the similarities between a child’s relationship to things and a child’s relationship to a computer, Turkle considered whether computers had muddied this ordered and evolutionary process. A computer was now highly interactive and contained movement, and this conferred life. A child would say ‘No, the computer is not alive’, but a second later would say ‘Yea, it can think!’ Turkle also referred to Piaget describing how children occasionally use talking as a reason for believing something to be alive.

In so far as traditional educational sources inferred that a child related to the computer in similar ways to ordinary activities – even to the extent of accounting for why children were observed talking to computers – Vygotsky (1978) also provided a traditional educational explanation of the way children joyously tried all the keys, buttons and icons on a computer. He referred to the natural ‘try it and see’ process of learning through activity. He reported how children did their selecting while carrying out whatever movements the choice required. His experiment used a keyboard: children were required to identify each one of a series of picture stimuli assigned to each key. The adult process of internal decision-making was present, but appeared as a series of tentative interrupted activities. The tentative movements were part of the selective process.

The value of Vygotsky’s observations has significance to the argument if the traditional keyboard and visual-stimulus experiments of the time were assumed to be transferable to current computer interaction situations. There was a precedent for the assumption. Leventhal et al., (1994) reported just such exploratory behaviour in hypertext operations with a child at a computer, though the suggested reason was that ‘children did not have as complete understanding of card catalogue concepts as adults’ (p. 27). They also reported a preference of children for clicking on an animal icon to navigate instead of following the logical structure defined by the designers. They suggested a tree hierarchy to solve the problem, though their experiments showed how children dropped this search strategy as they became experienced with the program.

In the contextual research children were observed talking to the computer as they carried out activities. Vygotsky described an experiment to show how the transfer of attention from one location to another involves hand and eye moving in unison, demonstrating a synchronicity between movement and perception. His experiments showed ‘the fundamental tie between speech and child’s activity’ and echoed Piaget’s observations. When confronted by a complicated problem, children tried to attain their goal by talking to the object itself, talking about the problem, or talking to the person in charge.

Interpreting Vygotsky’s findings in a modern context: children using a new CD-ROM for the first time, or coming to understand interfaces and icons (signs), should be expected to ‘fiddle around’; but the ‘fiddling’ had meaning, making some mistakes was part of the process, and it did not show a lack of intelligence. Trial and experimentation was an inherent process of learning for children in this age group. That children should read the documentation and follow teachers’ instructions was desirable, but it was an adult expectation.

In terms of physical activity, Vygotsky also considered children’s development to be related to control through the use of signs and tools. Gordon (1978) discussed Vygotsky’s view of the ability of children to memorise and employ signs as shortcuts in memory. He reported that sign-using activity in children was neither simply invented nor passed down by adults, but was the result of small stages of mind and body activities. Children (between four and six) turned a symbolic object (a bench) upside down until it physically resembled the seat, whilst repeatedly speaking the word. Memory processes appeared to operate like a knot in a handkerchief, making signs an integral part of recalling events and external objects. O’Hagan and Anderson (1989) defined different kinds of software in terms of, first, closed systems that required more explaining and, second, open systems that created opportunities for more discussion, assessment and taking stock of progress. Improvements in design were proposed with future use of manipulation techniques (as yet unspecified) so children could examine the effect of their behaviour.

The evidence in this section proposed that activity at a computer was not just similar to everyday learning opportunities, but that manipulation of objects was an integral process of learning. Learning activities on a computer could be more meaningful if manipulation was employed. A computer mouse had been used as a point-and-click device, manipulation being confined to system operations, such as moving windows around, and to drawing activities in particular graphics software. These findings informed the Research Tool, which makes extensive use of manipulation techniques using the mouse for learning activities. The educational value of manipulation seemed clear, but there were problems.

There were those who were highly critical of the way a computer forces a child to relate to it. For Setzer (1989), the physical involvement in using joystick and mouse was considered to be ‘exactly defined computation steps’ (p. 10), the devices themselves being defined as ‘input devices’. Setzer viewed the mechanical, mathematical and abstract operation of the CAL computer GUI designs of the time as the antithesis of the Steiner approach to education. The Steiner approach offers an evolutionary and holistic learning process, beginning with the child participating with the whole body and developing, through interaction with nature, into a being with ‘conscious introspection capabilities’. Yet even here the Steiner concern with the whole body and natural activity has a sympathetic resonance with the manipulation techniques in the Research Tool.

Crook (1992) reported that pre-school children and children in the first three school years had no problem in co-ordinating hand-eye activities to control a computer mouse, though the study was limited to specially designed activities not used in commercial products. There was a conceptual dimension to children using the mouse. However, Crook observed ‘children insisting on pointing the mouse in the direction of the arrow on the screen despite repeated guidance and assistance from adults’ (p. 206), a feature also observed by the researcher during contextual research. During early trials of the Research Tool children found difficulty moving objects long distances across the screen, an operation that required the mouse to be lifted up to maintain control of the screen object. Small hands had difficulty grasping the mouse. To solve the problem, informed by the literature review, the researcher redesigned the screen’s artefacts so children had only to manipulate objects over smaller distances.

Finally, in relation to Figure 3.9 showing three children at a computer, what did educational writers have to say about group activities at a computer? Anderson et al., (1992, p. 235) saw Piaget and Vygotsky as providing clear empirical evidence for the importance of peer interaction in learning as a stimulus to cognitive growth, which might appear to support the value of two or three children working at a computer. However, evidence about two or three children manipulating objects together at the computer was not available, nor was evidence found concerning issues of angle of view for a group of three children at the computer. Sewell (1990) only referred to the benefits of computers in promoting cognitive development in software classified in terms of user control – user control meaning instructional programs using drill-and-practice techniques. Direct manipulation with a mouse was considered at the time to involve prohibitively complicated and expensive programming. The features of the Research Tool using mouse manipulation were informed by the idea of ‘manipulation to enhance learning’.

The section of the literature review on mainstream writings on learning theory was relevant to the study in two aspects:

1.  Research pointed to the significance of manipulation and physical relationship with objects in learning processes and informed the Research Tool design.

2.  There were no direct references to the importance of optimum viewing conditions, or head-down conditions, in learning. However, it was confidently construed that the research experiments by Piaget and Vygotsky were exclusively carried out in a head-down state of body posture, whereas in many instances in school, activity at a computer was carried out with children looking head-up at computer screens. The element of optimum viewing conditions was outside the scope of the main study, but observations inform the discussion in chapter 6 and the conclusions in chapter 7.

The previous sections concluded a series of detailed investigations into visual, emotional and physiological aspects that might improve the quality of engagement in the Research Tool design. Each of these sections provided challenging evidence for re-evaluating the nature of child-computer relationships. The wider context of metaphors used in interface design was considered in the next section.

3.3.6    The role of metaphor in interface design

In the contextual research the following observations were made:

1.  Pupils appeared to have difficulty navigating or finding their way around the software, instead becoming ‘lost’ in the multimedia activities, and expressing their displeasure by going off-task and showing anxiety.

2.  Children had difficulty recognising the meaning of icons and the roles of lettering and colour.

Metaphors had a particularly significant role in an educational context. This was because, as indicated in 3.2.2, CD-ROMs in the contextual research were, and in many cases still are, based on the metaphor ‘book’ (p. 65). The previous sections of the review questioned the visual relationship of the reader to the metaphor ‘book’ in the physiological context of a computer user and a computer. The argument moved to challenging the understanding of metaphor, particularly the ‘book’ metaphor for the reasons above, but also the ‘navigation’ metaphor, which was very frequently used by interface designers of multimedia products and was an issue for children using CD-ROMs. The challenge to the conventional use of metaphor in interface design was set in a cultural context – the Anglo-American philosophic and linguistic tradition of metaphor. The review focused on the experiential metaphorical concept theory described by Lakoff and Johnson (1980). The theory has since been developed further (Lakoff and Johnson, 1999) to include cognitive science, neuroscience and philosophical views of metaphor. A deeper dimension was proposed for the role of metaphor in human-computer relationships because the early evidence from Lakoff and Johnson indicated the involvement of the physiological elements reviewed in earlier sections of the literature.

Before the cultural aspects were considered, the next section examined the tendency for interface metaphors to have been applied superficially – as linguistic concepts. First, sources of metaphors and their application to computer interface design were described in a broadly chronological and evolutionary structure. Second, the process was traced by which others, including psychologists who specialised in creating rules for interface design, may have applied metaphors in a superficial manner.

The evolution of screen design metaphors

The role of metaphor in screen design had its origins in the traditional sense of something which was noticed by the reader as a metaphor, for example, ‘his mouth felt like old socks’. Metaphor was defined in this context as a literary construct, a figure of speech. The success of a metaphor lay in the extent to which the comparison helped understanding, often through humour; metaphors sometimes failed when they were mixed.

The principles of metaphor had been applied to interface design following its application in other media, notably television and graphic media. For example, the application of metaphor in conventional (literary) media was well established in advertising. Fiske and Hartley (1990) analysed the relationship of metaphor to visual media in terms of icon and metonym to create a system of logical and aesthetic codes. They provided the example of the mother in a TV advert as a metaphor of love and security and a metonym of maternal activities. Fiske and Hartley’s hypothesis of the ‘bardic television’ story metaphor for the structure of programmes also did not transfer easily to new media. This was because conventional story structure was broken up; the elements were available at the whim of users and not under the guidance of the storyteller. However, Mountford and Laurel (1990, p. 104), in a survey of the implications of computer games for computer interface design, suggested that children developed a new literacy as a result of interactivity.

In the field of graphic design, according to Johnson et al., (1989), Tufte (1985), though strictly concerned with graphical information design, was significant because his understanding of metaphor was often referred to by new media interface designers, particularly the first Xerox interface designers. Tufte described what might happen in conventional media if there was poor design in a visual display: users were caused to engage in a verbal exercise, in effect an internal dialogue. ‘The visual image flowed through the verbal decoder initially necessary to understand the graphic’ (p. 153). Tufte provided evidence of the continued literary tradition even in the area of visual design. However, he gave a prophetic warning when he suggested that multifunctional graphics create ‘graphical puzzles with encoding that could only be broken by their inventor’ (p. 139). No explicit references by new media designers to Tufte’s internal dialogue issues were found.

Metaphor was represented visually in physical form – by diagrams, shapes and colours – by semiologists such as Bertin (1983), who considered visual order to come from value perception: there was an order to perception based on a listing of codes – shape, orientation, colour, value and size. Boston (1935) suggested a source of rules of legibility for representation from a visual viewpoint: for example, a tree was a structure in which there was only one possible path from one point to another. Later Naylor (1966) employed the tree metaphor in the context of early computer development: ‘There should be a list of actions possible, with added conditions and corresponding results and tree diagram structures illustrate the flow of events.’ He proposed that knowledge of structural design was essential to multimedia design.

The large-scale computer systems, with their knowledge engineering arising from the scientific tradition, preceded the popular PC multimedia developments in interface design and appeared to have had no place for metaphors; they were not needed. It was only later that less intellectually minded users of the popular PCs needed metaphors to reduce cognitive overload. Browne (1994) used a task-analysis approach for the early large-scale systems. Johnson (1992) wrote before the interactive graphical user interfaces in current CD-ROMs were created. Metaphor did not appear in his approach either; he defined interface design in terms of system operations.

It was Gardiner and Christie (1987) who illustrated the increasing role of cognitive psychologists in interface design, proposing that effective human-computer interaction relied on users being able to develop an accurate mental model of the way a system functions. Eberts (1994) used metaphors in conjunction with analogy:

Metaphors and analogies are an important kind of learning used quite often in teaching. In teaching the instructor chooses some concrete situation with which the student is familiar and presents new information in terms of how it relates to the old, familiar information.

and

To incorporate a metaphor the user must be able to apply the old, familiar metaphor. (p. 208)

The role of metaphor took on a different aspect when graphical interfaces appeared. Analogies arose of ‘clearly defined routes’ and of how users should be able to ‘move around easily’. Information should be ‘oriented’, with information at the ‘top’ and ‘bottom’ of the screen. Eberts (1994) reported the origin of the ‘window’ metaphor to have been first proposed by Mayer in 1975:

Mayer told the subjects that: computer input was similar to a ticket window, output was similar to a message pad, control systems were similar to a shopping list with a pointer, and computer memory was similar to an erasable blackboard. (p. 218)

Theoretical justification for using metaphors in abstract models was suggested by Carroll et al., (1988), who reported that spatial metaphors worked very well. However, Tognazzini (1992) observed that people’s mental models were incomplete and that their ability to use them was limited by forgetfulness and a lack of firm boundaries. He challenged the cognitive psychologists’ abstract model approach, suggesting that conceptual models of computer systems should be kept as simple as possible, and instead proposed a kinaesthetic model in which users’ physical actions directly manipulated the interface objects.

The nature of the limitations in the use of metaphor in an abstract scientific model of cognitive psychology identified by Tognazzini was explored further and the possible reasons for the limitation explained in detail in the next section by considering the experiential metaphorical concept theory of Lakoff and Johnson (1980).

The Experiential Metaphorical Concept

Lakoff and Johnson questioned the Anglo-American philosophic and linguistic tradition of metaphor. Lakoff and Johnson’s (1980) definition of metaphor as ‘understanding and experiencing one kind of thing in terms of another’ appears simple. However, they stressed ‘In actuality we feel that no metaphor can ever be comprehended or even adequately represented independently of its experiential basis’ (1980, p. 19). They considered metaphor as primarily a matter of thought and action and only derivatively a matter of language. The primary claim of their position was that metaphors were not arbitrary, but instead were a natural outgrowth of the manner in which our minds are constituted. They stressed that the concepts that occur in metaphorical definitions were those that corresponded to natural experience. They proposed that human concern was primarily with physical orientations, objects, substances and seeing. Furthermore, these concerns sat within overriding ‘container’ metaphors of the natural world:

We are physical beings bounded and set off from the rest of the world by the surface of our skins, and we experience the rest of the world as outside us. Each of us is a container, with bounding surface and an in-out orientation. (p. 29)

‘Container’ metaphors were ontological metaphors that helped with orientation – up-down, front-back. The container metaphor did more than that: it also defined our ways of viewing events, activities and emotions as entities and substances (p. 25). Ideas became entities – objects. So it was possible to say ‘I pressed the computer key and got the solution to my problem’, where the solution was an object. Using the Mind as a Container (a machine) metaphor, for example, enabled us to say ‘I can’t get my mind round this computer program today.’

The significance of container metaphors to the thesis was that Lakoff and Johnson identified the physical relationship between the user and the computer in one inclusive concept – the container and the visual field – in effect subsuming the visual field container within the mind as container:

We conceptualise our visual field as a container and conceptualise what we see as being inside it. Even the term ‘visual field’ suggests this. The metaphor is a natural one that emerges from the fact that when you look at some territory (land, floor space, etc.), your field of vision defines a boundary of the territory, namely, the part that you can see. Given that a bounded physical space is a container and that our field of vision correlates with that bounded physical space, the metaphorical concept, visual fields are containers emerges naturally. Thus we can say, ‘The ship is coming into view.’ ‘I have him in sight.’ ‘I can’t see him the tree is in the way.’ ‘He’s out of sight now’. ‘That’s in the centre of my field of vision’. ‘There’s nothing in sight.’ ‘I can’t get all of the ships in sight at once.’ (p. 30)

Lakoff and Johnson acknowledged the ecological psychology of Gibson (1986) and the tradition of research in human development of Piaget (1952), drawing together in their theory an integral relationship of the ‘visual field’ as a container within which objects can be directly manipulated (p. 70) as part of the learning process.

The Macintosh Human Interface Guidelines (Apple Computer Inc., 1992), acknowledged the significance of Lakoff and Johnson, defining metaphors in the context of:

You can take advantage of people’s knowledge of the world around them by using metaphors to convey concepts and features of your application. (p. 4)

However, at issue here was that Apple system interface designers might not have comprehended the wide and pervasive effect of the experiential metaphorical concept. The computer’s internal metaphors were still conceived as relating to external ‘intellectual’ knowledge. Apple metaphors employed ‘concrete familiar ideas’ (p. 4). The example was used of hard disk files and folders, which were ‘analogous to the way people organise their filing cabinets’ – note the use of analogy, again a literary form. An important point was that these were guidelines to the operating system only, not to the later multimedia products, as these were not yet in production. The Guidelines considered Lakoff and Johnson’s work as:

A delightful book that discusses the ubiquity of metaphors in language. The book makes the point that metaphors are not so much picturesque uses of words, as systems of concepts that affect how we describe, think about and experience the world. (p. 4)

The use of ‘analogous’, ‘system of concepts’, ‘describe’, ‘think’ and ‘experience’ also focussed on the superficial qualities of the Macintosh operating system rather than the deeper, interactional advantages of manipulation which the operating system incorporated. The difference was subtle but vital. Lakoff and Johnson themselves may have distracted new interface designers:

We feel that objectivism and subjectivism both provide impoverished views of all these areas because each misses the motivating concerns of the other. What they both miss in all of these areas is an interactionally based and creative understanding. (p. 231)

Lakoff and Johnson’s view was that the origin of the limited, linguistic interpretation of metaphor lies in the expression of the Anglo-American philosophic and linguistic tradition that developed as an adjunct to scientific thought and particularly the need for a descriptive language. In the process of creating a separate literate description of events, the human involvement with the environment has been divorced from the physical experience:

The traditional view of metaphor has been treated as a matter of language rather than a means of structuring our conceptual system and the everyday activities we perform. The idea that metaphor is just a matter of language stems from the view that what is real is wholly external and independent of humans – this is objective reality but leaves out human aspects of reality that matter to us. There is no such thing as metaphorical thought or action. (p. 153)

That the new multimedia groups of experts coming together to work on the new multimedia software challenged these original science and art cultural boundaries was conjecture. However, Mountford and Laurel (1990) reported that there were language problems and conflicts between the ‘new’ and ‘old’ elements of the design team, that is, the graphic designers and the programmers. Both had expectations of users’ familiarity with their own areas of knowledge and were exasperated by users’ ignorance. In other words, metaphor was being used in the context of a new, complex production process combining science and art skills. The result – the user interface – was to be judged by the user; according to Mountford and Laurel, ‘If an interface does not meet the user’s need then it doesn’t matter about the design’ (p. 54). They proposed the user interface should assist the user by employing the metaphor of an ambassador, pen pal, or tour guide. However, Mountford and Laurel appeared to use the metaphor ‘guide’ incorrectly in Lakoff and Johnson’s terms. For Mountford and Laurel a ‘guide’ was a literary metaphor, an abstract concept; it was not an experiential metaphorical concept. The ‘guide’ might be an experiential metaphorical concept in Lakoff and Johnson’s sense only if it could be physically manipulated by the user – moved round the screen by the mouse – and might even physically ‘guide’ the user by ‘taking’ him or her into folders by clicking and dragging, featuring the manipulation and interactive physical experience of a guide. Mountford and Laurel’s pen pal only sat at the edge of the screen and gave verbal instructions, such was the constraint of programming at the time.

A final example was provided by Erikson (1990) of the limited and superficial cultural interpretation of interface design metaphors in his account of the construction of the HyperCard software. HyperCard used the ‘card’ metaphor but effectively the result was a ‘book’ with ‘pages’. Erikson criticised the physical element of the page changes, which disappeared instead of using a visual form of page turning. However, Erikson, in common with the cognitive psychologists described above, perceived metaphors functioning as ‘…natural models that allow us to take our knowledge of a familiar object and event and use it to give structure to an abstract, less well-understood concept’ (p. 73).

One of the major multimedia interface design metaphors employed has been the metaphor ‘navigation’. Because the contextual research showed children having particular problems with ‘finding their way round’ CD-ROMs the issue of navigation was investigated in greater detail.


The Navigation Metaphor

This section was a case study of the metaphor ‘navigation’, re-evaluated in the light of Lakoff and Johnson’s theory, and illuminated further the general issues outlined in the previous section. It was a chronological description of the evolution of the navigation metaphor, culminating in the suggestion of an improved metaphorical model: a three-dimensional construct within a higher-order 3-D navigation metaphor – ‘container’.

The navigation metaphor was arguably the most quoted and discussed interface metaphor, and was important in terms of computer operations. It was used in several contexts. For example, navigation formed a key element in the design of a multimedia product, as authors structured a user’s experience of the product. Users also described the process of navigating their way round the product, and writers described multimedia product operations in terms of the difficulty or ease of the process of navigation.

The navigation metaphor was not a new concept. For example, it had a literary tradition in 2-D text, as in Figure 3.12. Chapman (1987) used chaining, register, cohesion, ellipsis, conjunction and collocation.

Figure 3.12: Trails of connectivity between key word themes. (Chapman, 1987, p.93)

Chapman measured the quality of reading texts by physically drawing ‘routes’ through text. Relational lines between key words in a story were ‘mapped’. Text with continuous lines of communication through a paragraph was easier to read – more easily physically navigated by the human eye. Eco (1994) showed the physical relationships between elements of a story represented in 2-D ‘container’ diagrams, as in Figure 3.13 below.

Figure 3.13: From ‘Entering the woods’. (Eco, 1994, p. 21)

Campbell (1988) identified how in any story there was a journey, with navigation as a process and a universal metaphorical element. His work, which was first published in 1948, drew on historical texts, but more recently his analysis has been popularised and applied to films such as Star Wars. He identified the elements of a quest – a hero, a mentor or guide, a journey, a goal, death and defining moments – which were shown to be consistently evident in a comprehensive view of learning in life. Campbell’s argument was more comprehensive than that proposed by Laurillard (1993) in an educational environment in terms of a cyclical teacher and learner relationship, or in a computer context by Mountford and Laurel (1990), who defined the metaphor ‘life as a stage’.

‘Routes’ and ‘journey’ navigation metaphors exist in audio terms too. An example from the Starcatcher script in Figure 3.14, based on the researcher’s own experience as a radio producer, was used to illustrate the radio equivalent. The figure shows the journey structure of a typical radio script: the Narrator (Robert) introduces the subject and provides factual information; the sound effects (FX/Music) provide a transition or bridge, giving an audio clue carrying the listener to a fictional ‘place’ emphasised with a magical sound effect (FX: shooting star); the narrator then intervenes again with factual material, in this case ideas for how children can sing a song. The listener is guided on a journey and navigates the story with audio signposts that children recognise through experience.

Figure 3.14: Diagram: A visual representation of how the metaphor ‘navigation’ operates in a conventional children’s drama script. (Howarth, 1995b, p. 5)

In a radio production there were also navigation metaphors of a good ‘beginning’, two or three ‘high points’, at most five key teaching features, and a good ‘ending’. The features of all effective education broadcasts were: a narrator who told users what they were about to learn; a ‘transition’ sound effect (FX) or music to set up the example – typically a drama fantasy illustrating the point; and a return, via a transition, to the real world – the narrator – and a repetition of the teaching point. The switch from real world to fantasy world placed in a radio script was ‘navigation’ – an audio ‘journey in 2-D’. The journey did not work with quick jumps (except in moderation, to attract the listener’s attention); it was the slow musical bridges or sound effects which helped users navigate their way along the journey. The navigation metaphor in education radio deserves a much deeper study than was possible here.

The argument looked next at the evolution of the metaphor ‘navigation’ in interface design itself. It has already been argued that the navigation metaphor had a tradition in earlier forms of media. In the evolution of new media, the first developments of the metaphor ‘navigation’ were in the hands of the software programmers. For example, Johnson et al. (1989) reported how the 8010 STAR was first conceived in 1975, Xerox PARC having been established in 1970. But the story really started in 1945, when Bush envisioned a desktop device called MEMEX. Sutherland built Sketchpad in the 1960s. The first system to organise navigation of textual information in trees and networks (which developed separately as hypertext) was built by Engelbart in 1968, who also invented the mouse, in a system called NLS. Later the techniques were incorporated in a reactive engine created by Kay, containing the seeds of many ideas that were picked up and used in STAR. Kay later developed Smalltalk, a language for object-orientated programming. Navigation using a hierarchical visualisation of content was in commercial development by 1989 with a system called Treemap using the tree metaphor. It was originally a game, later made available in ViewPoint hardware by Xerox and released as PC compatible (Canfield Smith, 1982). It was absorbed into Windows software in the file manager system. Hypertext emerged as a commercial product called HyperCard and exists today in a much-evolved form as HyperStudio.

Navigation using physical manipulation techniques was the subject of further experimentation. Taylor et al. (1991), using a co-operative manipulation metaphor, described the use of a computer glove as ‘exotic’ (p. 6), preferring that a conversational metaphor of ‘dialogue’ be used. Clarkson (1991) described how the Xerox PARC Information Visualizer ‘is going to appear in the next 5 years’, using metaphors such as ‘browse files’, ‘browser’ and ‘people browser’. The Visualizer also used 3-D rooms with perspective walls and multiple workspaces; the tree was a cone-shaped hierarchy of files. Navigation included the manipulation of information with the mouse, which took place while walking, touching walls, changing rooms and picking up objects. The wall slid round like a sheet of music on a ‘player piano’. Despite featuring many of the ideal elements of a modern graphical user interface, the 3-D Information Visualizer was not developed as a commercial product because of its high computer memory requirements. Instead the less memory-hungry two-dimensional hypertext software was chosen, based on the property sheets or pages of the STAR system. Smeaton (1991) suggested a lack of consensus between old and new media solutions to the navigation metaphor:

Many solutions to the problem of navigation have been proposed, some of which exploit human spatial processing abilities by representing the hypertext in a 2 or 3 dimensional space with maps, landmarks, compasses, while other methods employ the navigation tools used in traditional printed media like bookmarks, annotations and thumb tabs. (p. 173)

And there was always an awareness of the limitations of navigation in the two-dimensional features of hypertext, as Smeaton stated:

However, such a structure can present problems when navigated by users who can easily become lost as the topology of hypertext is monotonous and lacks guiding features. (p. 173)

Lipner (1994) indicated possible implications for gender differences in human-computer interaction strategy preferences, with reference to computer display navigation:

Differences in the way in which males and females navigated were also suggested, but these gender differences were found only in the complex conditions. Males used constructive or global strategies whereas females used analytic or sequential strategies to navigate. In contrast to males, females deviated more from the direct path, were more disoriented and did not internalise the information space. Collectively, these findings demonstrate that strategy differences play an important role in determining users’ spatial behaviour in electronic information space. (p. 214)

Leventhal et al., (1994) reported on age-related differences in the use of hypertext:

While adults were superior to children in speed and accuracy, there were no indications that children were qualitatively different from adults in navigating patterns or perceptions of the system. Some children exhibited more exploratory problem solving behaviours than adults did. (p. 19)

There were other navigation metaphor issues reported. Hypertext was known to cause disorientation, as discovered early on by Conklin (1987). Norman (1994) referred to the term hypertext, first coined by Ted Nelson in the 1960s, as meaning ‘vast and far-reaching’ (p. 36); but hypertext had since come to mean ‘fast and frenzied’ and was not conducive to education, because ‘It is capable of creating erratic jumps both within and across vast domains of knowledge’ (p. 37).

One solution was a program called EntryWay, as illustrated in Figure 3.15, which maintained a history of visits to each hypertext node with ‘trails’, ‘nodes’, ‘links’ and map ‘threads’ to allow users to ‘track’ where they had been. Horney (1993) expressed reservations about the diversity of navigation in hypertext and called for a coherent study of what readers do and when they do it, using EntryWay’s visualised results. The visualisation was a pseudo 3-D perspective image (Figure 3.15). The researcher proposed that the pseudo 3-D visualisation gives a far clearer indication of the overall route to be ‘navigated’, the ‘places visited’ and the relationship between the two than the usual menu page, as illustrated by the drop-down menu list of pages and subjects.

Figure 3.15: A map of threads showing the connections between postings of different file cards. Created by the EntryWay software. (Horney, 1993, p. 259)
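The kind of visit history EntryWay maintained can be sketched in a few lines of code. The class and method names below are illustrative assumptions, not the actual EntryWay design: each move between hypertext nodes is recorded as a link in a trail, from which a map of threads like Figure 3.15 could be reconstructed.

```python
class TrailRecorder:
    """Record a user's route through hypertext nodes (illustrative sketch)."""

    def __init__(self, start_node):
        self.current = start_node
        self.trail = [start_node]          # ordered list of nodes visited
        self.links = []                    # (from_node, to_node) threads
        self.visit_counts = {start_node: 1}

    def follow_link(self, node):
        """Move to a new node, recording the thread just traversed."""
        self.links.append((self.current, node))
        self.trail.append(node)
        self.visit_counts[node] = self.visit_counts.get(node, 0) + 1
        self.current = node

    def been_here_before(self, node):
        """True if the node has been visited more than once."""
        return self.visit_counts.get(node, 0) > 1

# A reader wanders off and returns; the trail and threads record it:
r = TrailRecorder("contents")
r.follow_link("chapter-1")
r.follow_link("contents")
# r.trail is ["contents", "chapter-1", "contents"]
```

Re-visits show up immediately in the counts, which is precisely the ‘where have I been’ information a disoriented hypertext reader lacks.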

Lakoff and Johnson’s (1980) ‘navigation as container’ informed the value of the pseudo 3-D visualisation in the Research Tool as indicated above, because they defined the relationship between routes, journeys, information and argument with container terminology: more ‘surface’ was created, as in Figure 3.16.

Figure 3.16: The relationship between routes, journeys, information and argument ‘surface’ and the metaphor ‘container’. (Lakoff and Johnson, 1980, p. 94)

It was proposed that the EntryWay illustration with its pseudo 3-D perspective properties in Figure 3.15 was an example of the visualisation of more ‘surface’ being conceptually experienced by the ‘container’ metaphor mapped out in Figure 3.16. As a result, an argument has more content as in pseudo 3-D visualisation and, in an educational context, potentially but not necessarily, a richer learning experience.

The significance drawn between Horney and Lakoff and Johnson for the argument in this literature review rested on the following relationships: evidence of the value of greater depth and quality of engagement when children use computers came from the visual search and ergonomics elements of the literature review; interface designers have applied the metaphor ‘navigation’ only superficially; Lakoff and Johnson have demonstrated the deeper, physical, experiential aspects of the metaphor ‘navigation’ in 3-D; the value of manipulation during learning has been demonstrated in the section on educational theory; and manipulation happens naturally in a 3-D environment. The conclusion was that there was a logical argument for children, and indeed adults, finding ‘navigation’ and learning processes more comprehensive if physically involved in some form of 3-D interface design – even in pseudo 3-D perspective – as illustrated in Figure 3.15. Many researchers have confirmed the value of a 3-D environment, but the argument proposed here was for a logical relationship between existing software and hardware configuration problems in an educational context.

Finally, educationalist writings did not consider the issues described by Held (1974), who demonstrated that the visual field of the human is a spherical 3-D field of view, as illustrated in Figure 3.17.

Figure 3.17: Diagram: The directions of deformation in the Visual Field during forward locomotion as projected on a spherical surface round the head (Held, 1974, p.123).

Held drew on Gibson’s (1986) ecological interpretation of the flow of visual information in 3-D, represented by the arrows across the surface of the eye radiating out from a central position. The flow causes distortion of the visual image. Also, only a small area at any one point on the sphere was in focus at any one time. It can be inferred that busy children moving their head and body while using the computer in less than optimum viewing conditions would be subject to these distortions of view.
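Gibson’s radial flow pattern can be illustrated numerically. Under a simple pinhole model with unit focal length (an assumption for illustration – Held’s Figure 3.17 projects the flow on a sphere, not a flat image plane), forward locomotion at speed t past a point at image position (x, y) and scene depth Z produces an image flow of (x·t/Z, y·t/Z): zero at the centre of expansion and growing towards the periphery, where the distortions of view are greatest.

```python
def flow_vector(x, y, depth, speed):
    """Image-plane flow for forward locomotion (pinhole model, focal length 1).

    A point at image coordinates (x, y), lying at scene depth `depth`,
    streams radially away from the centre of expansion as the viewer
    moves forward at `speed`.
    """
    return (x * speed / depth, y * speed / depth)

# The centre of the field does not move; peripheral points stream outward:
centre = flow_vector(0.0, 0.0, depth=2.0, speed=1.0)   # (0.0, 0.0)
edge = flow_vector(0.8, 0.6, depth=2.0, speed=1.0)     # (0.4, 0.3)
```

Each flow vector points directly away from the centre, reproducing in miniature the pattern of arrows in Held’s diagram.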

However, Held’s physiological findings, considered in conjunction with Lakoff and Johnson’s linguistic research, combined to conclude the argument: a proposal that the physical manipulation of navigation in a spherical-shaped container of three dimensions was physiologically and conceptually more efficient. By inference, pseudo 3-D interfaces could contribute to a more effective educational computer environment, especially when the navigation ‘tool’ was the mouse physically manipulated by the user.

Restating the main argument: if engagement was physiologically deeper and visual clarity more efficient when users looked down; if learning benefited from manipulation of objects; if pleasure included manipulation and concentration; and if metaphorical understanding included three-dimensional concepts; then the creation of human–computer configurations and interface designs that took these factors into account was likely to enhance the quality of learning at the computer.

Earlier in the literature review problems with interfaces which used the metaphor ‘book’ were discussed. In the next section a proposal that a screen ‘book’ metaphor using a pseudo 3-D perspective interface may be conceptually more effective was investigated.

3.3.7    Improving learning using pseudo 3-D perspective interfaces

If the value of a pseudo 3-D perspective interface using the metaphor ‘book’ – one which gave users a physical and visual sensation of manipulation even in a pseudo 3-D environment – could be shown as logically sound, based on the evidence found in the literature review, then the main argument would have its validity confirmed.

Briefly summarising the evidence already presented: a real book was shown to be held naturally in the orthogonal plane to the reader, as in the Mach drawing (Held and Durlach, 1991), Figure 3.7 above. In the natural optimum viewing conditions of a head-down book reader, a reader was aware of where he or she was in the book because of physical clues, such as the tactile responses of the relative thickness of the pages in each hand, as well as visual clues, including perspective on a page.

The situation of a reader of a flat 2-D computer-screen representation of a ‘book’ was quite different. The reader was in the head-up position on most school computers. The interface using the metaphor ‘book’ or ‘page’ presented in the vertical plane was physiologically inefficient and had specific problems associated with it: the computer page was nearly vertical, so a reader’s head was held at a near-horizontal viewing angle. The literature reviewed described the stress associated with the horizontal viewing angle of the head, and the lack of agreement among ergonomists as to the optimum position of head tilt and the optimum angle of the screen sloping away from the reader. As already reviewed, Intriligator and Cavanagh (2001) and Sheedy (1990), in their studies of features of the visual system, informed the reasons for the advantages of Cochrane’s observations. Also, in terms of manipulation in the learning process, these single and especially multiple viewers had no tactile clues as to the depth and structure of the book, and may hardly have been aware of their hands and the rest of their body at the edge of their peripheral vision because of the head-up position. The result was similar to the problems of driving a robot vehicle (Held and Durlach, 1991), when the controller reported slow reactions because a narrow camera view did not show the foreground. In addition, the relationship of a child and the screen may vary considerably (Figure 3.9), and not only one child but three may all be looking at the computer with varying vertical and horizontal viewing angles.

Lakoff and Johnson (1980) informed the argument for why the existing 2-D screen HyperCard ‘book’ metaphor was much less efficient, and informed the navigation issues described by Erikson (1990) and the problems of children being ‘lost in hypertext’. Interpreting the value of a pseudo 3-D metaphor ‘book’ using Lakoff and Johnson’s evidence, in which they described the conceptual process of greater ‘surface’ enhancing argument: the 3-D metaphor ‘book’ was only a conceptually whole experience when an individual related to the book through a ‘container’ metaphor, and the book was easier to navigate when the reader was physically involved in the reading activity. A book in the hand – in Lakoff and Johnson’s terms of the metaphor ‘book’ – implied a rich cognitive experience because the book had more ‘surface’.

How the pseudo 3-D perspective interface with its greater ‘surface’ could be created was well known and relatively easy to achieve. Ellis et al. (1991) proposed that the best ways to provide in 2-D the visual clues to depth created in a pseudo 3-D environment were: overlap, linear perspective, manipulating the intervals of grey or colour between objects, lighting, and providing shadow, especially from the side. Shadows provided information about the refractive properties and density of an object. The default lighting on objects in a simulated 3-D environment was from the upper left. All these features increased the ability of users to acquire information more efficiently, in terms of faster recognition times.
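The linear perspective clue, for example, reduces to a one-line projection: with an assumed focal length f, an object of real size s at depth z projects to a screen size of f·s/(f+z), so nearer objects draw larger; offsetting a darker copy down and to the right supplies the shadow clue consistent with default upper-left lighting. The functions below are a minimal sketch under those assumptions, not the method of Ellis et al.

```python
def projected_size(real_size, depth, focal_length=1.0):
    """Linear perspective clue: apparent size shrinks with distance."""
    return focal_length * real_size / (focal_length + depth)

def shadow_offset(x, y, drop=3):
    """Shadow clue for upper-left lighting: a dark copy offset down-right."""
    return (x + drop, y + drop)

# The nearer of two equally sized objects projects larger on screen:
near = projected_size(10.0, depth=1.0)   # 5.0
far = projected_size(10.0, depth=4.0)    # 2.0
```

Overlap follows for free: the larger, nearer shape is drawn last and occludes the farther one, reinforcing the depth ordering.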

The application of a pseudo 3-D perspective interface to the metaphor ‘book’ already exists in CD-ROM publications by the British Library (2002). The evidence from the literature review suggests that the 3-D perspective interface has concrete, significant value in an educational context, beyond being an interesting visual technique.

Finally, of especial interest to the education process, Lakoff and Johnson reported that when people could not ‘grasp’ a situation, things were ‘up in the air’: in terms of orientation within the ‘container’, unknown was ‘up’ and known was ‘down’. Here, the experiential basis was ‘understanding is grasping’:

With physical objects, if you can grasp it and hold it in your hands, you can look it over carefully and get a reasonable understanding of it. It’s easier to grasp something and look at it if it’s on the ground in a fixed location than if it’s floating through the air. (p. 20)

Lakoff and Johnson were clear about the physical need for grasping knowledge. The inference was that the ‘book’ held in the hand would be better understood. They evidenced the advantageous physical relationship between the natural head-down position and the mentally optimal position for ‘understanding’ through the manipulation of information. The language used has much in common with the language used by children describing the advantages of the portable computer (Bowell et al., 1994).

This section demonstrated how coherence existed between physiological vision processes and the metaphor ‘book’, when metaphor was interpreted as having a physical experiential component. It was proposed that the example of using the metaphor ‘book’ confirmed the argument that a metaphor using a pseudo 3-D interface could contribute to a more effective educational computer environment especially if the navigation ‘tool’ was the mouse physically manipulated by the user. The coherence was formalised in the next section.

3.4       A new holistic paradigm

Arising from the evidence presented in the literature review was a proposal for a new coherent holistic paradigm that may enhance conditions for greater depth of engagement between child and computer interface in a classroom. The evidence is summarised in Table 3.2 as follows:

1.    Children’s eye function, field of view and vision issues

An understanding of children’s eye search patterns:

  • How the eye tended to focus on small areas due to the physical structure of the eye.
  • Preference for shapes of text and for edges.
  • Attention was more focussed when objects were moved with the mouse.
  • Looking down and awareness of lower body were natural states for deeper engagement.
  • Children often did not view screens in these optimum conditions.
2.    The role of ergonomics and workstation design

  • Human–computer ergonomic standards originated in VDU control-room workstation specifications.
  • The standards for an optimum display angle were a compromise between optimum reading and optimum writing positions.
  • There were unresolved controversies about agreed standards for writing and reading positions at the computer.
3.  Pleasure in learning
    • The quality of learning was enhanced by encouraging learning events through 8 criteria (see p. 93), particularly manipulation of objects, which combines enjoyment with activity and results in a deeper form of engagement.

4.     Manipulation of objects and conventional education theory                  

  • The benefit of physical activity in traditional child development was clearly evidenced. The case for a similar relationship in the context of new media interface design was not apparent in educational multimedia writings.
  • No references were found to the importance of optimum viewing conditions, or head-down conditions, in conventional learning – presumably because this was assumed to be a natural posture.

5.    The role of metaphor in interface design

  • The navigation metaphor existed in pre-computer media.
  • Metaphor was employed early on as a relatively narrow abstract model for popularising understanding of interface operations.
  • The metaphor ‘container’ was a physical 3-D experienced construct.
  • Conventional use of navigation metaphors could be more effective if interpreted as a pseudo 3-D container metaphor.
  • The efficiency of 3-D and manipulated computer environments was acknowledged but the application was limited by commercial considerations.

6.    The potential for improving learning using 3-D perspective interface designs

There were no existing guidelines for incorporating 3-D imagery in educational new media:

  • Pseudo 3-D methods made screen interfaces easier to use.
  • Errors occurred when looking at objects in 2-D.
  • Features in 3-D may provide faster visual search.
  • British Standards of viewing angles may be questioned.

Table 3.2: List of features encouraging greater depth of engagement and coherence in interface design for educational purposes.

The evidence summarised in Table 3.2 described a set of features which might enhance engagement: visually clear, easy-to-follow interfaces; simple navigation; audio instructions; manipulation of objects; humour and enjoyment; small changes in tasks; an increased complexity of tasks; and discussion. Essential, but once removed from these features due to technical aspects and external factors, were the value of a pseudo 3-D perspective interface and the optimum desk and screen configuration. The new holistic paradigm, as a framework describing the conditions for a greater depth of engagement, was aided by the synchronicity of the container metaphor and its spherical human field-of-view boundary.

The potential value of pseudo 3-D perspective interfaces in the new holistic paradigm arose through showing the integration of the visual field of the child within a physical 3-D container of body awareness, which was itself a conflation of the physical and linguistic elements of the container metaphor.

The redefinition of the height and viewing angles of a computer screen might enable the full advantages of the 3-D container to be optimised. These optimum viewing conditions cannot be achieved by interface design alone and, though they can be applied within current classroom conditions, require a programme of training for staff and students.

The evidence of the literature review informed ten features of an improved interface design that formed the new holistic paradigm. These ten features guided the design of the Research Tool interfaces. Some of the features meet currently accepted standards, such as clarity of instructions and tasks, and ease of navigation, but are qualified in the light of evidence in the literature review. The Research Tool also included potential ‘higher’ specifications – qualities that arose from drawing together the range of disciplines studied in the literature review. The choice of these features was subject to the limitations imposed by the production schedule, costs and the software, as for example with the pseudo 3-D element. Table 3.3 is a list of the ten improved interface design features.


Currently accepted standards with improvements

1.  Clearly defined tasks, but taking advantage of vision issues.

2.  An easy-to-use interface, but including one interface – the story – using a looking-down viewpoint to explore unresolved ergonomics controversies.

3.  Clear feedback from interface actions, but with actions using physical manipulation so that feedback involves a wider range of senses and a greater depth of engagement.

4.  Easy navigation, but taking into account the reported benefits of pseudo 3-D features.

Higher specification arising from the literature review findings

5.  The value of enjoyable and absorbing educational activities was formally recognised.

6.  An interface activity that engages the user in concentrated activity through manipulation of screen objects.

7.  Manipulation of a screen object must be easily achieved by a small child’s hands.

8.  Interface activities that make small changes in demands on the user.

9.  Activities should have elements of multi-functionality that are absorbing but do not cause confusion.

10. A teacher should be able to control the organisation of children’s use of the software.

Table 3.3: List of ten improved interface design features for investigation in the main study.

The ten features will become the subject of the main study and in chapters 4 and 5 will be formulated as ten criteria used to evaluate four core components of interface design: the design and screen layout, the audio instructions, the actions involving the movement of objects with the mouse, and the control panel. The analysis of the results will then inform the research question: What are the design features required to improve the quality of computer interface interaction for 5 to 7-year-old children?


The relationship between the wide range of subjects reviewed and the ten features that enhance the depth of engagement and form a new holistic paradigm is summarised in Table 3.4, to help clarify the structure of the argument.

Ref
The areas of study in the literature review
The ten features of the Research Tool
Currently accepted standards with improvements

3.3.2

Children’s eye function, field of view and vision issues: Visual search patterns are a valuable source that can inform multimedia graphical interface user design guidelines. Children are not viewing the computer screen in optimum conditions.

1) Clearly defined tasks, but take advantage of vision issues.

3.3.3

The role of ergonomic and human factors: Unresolved controversies of human computer design issues. Standards are a compromise to meet known discrepancies between optimum reading and optimum writing positions.

2) An easy to use interface, but in a child/computer configuration that resolves ergonomics controversies of desk angle.

3.3.2

3.3.5

Children’s eye function, field of view and vision issues:

Manipulation and conventional education theory: The significance of manipulation and physical relationship with objects in the learning process was well recognised, if not applied in educational multimedia.

3) Clear feedback from interface actions, but actions should have a physical/ manipulation component so feedback involves a wider range of senses.

3.3.7

Improving learning using pseudo 3-D perspective: The potential role of pseudo 3-D perspective interface designs in improving learning. An understanding of how pseudo 3-D methods can aid screen design. 3-D increases the speed of recognition. Navigation should use a container metaphor.

4) Easy navigation, but taking into account advantages of a downward perspective viewpoint.

Higher specification arising from the literature review

3.3.4

Pleasure in learning: Interfaces which combine enjoyment with activity and a deeper form of engagement (Flow Theory)

5) The value of enjoyable and absorbing educational activities was formally recognised.

3.3.4

3.3.3      

Pleasure in learning: Pleasure in using one’s body.

The role of ergonomic and human factors: manipulation ‘giving the qualitative feeling that one is directly engaged with the control of objects’.

6) Interface activity through manipulation of screen objects that engages users in more concentrated activity.

3.2

IT in primary education: The negative effect of ‘hardware-first’ approach.

7) Child friendly computer desks and mouse activities easily achieved by a small child’s hand.

3.3.4

Pleasure in learning: Environments that vary in difficulty level, ‘increase both challenge and potential for learning’.

8) Interface activities that make small changes in demands on the user.

3.3.4

Pleasure in learning: Varying levels of difficulty enhance concentration and enjoyment.

9) Activities should have elements of multi-functionality that are absorbing but do not cause confusion.

3.2

IT in primary education: Teachers need training in IT across all subjects in their classrooms.

10) A teacher should be able to control the organisation of children’s use of the software.

Table 3.4: Relationship of the literature review to the ten features of the Research Tool.

3.5    Summary

During the period when the literature review was undertaken (1995–1999) there were exciting developments in interface design, marked by parallel developments in multimedia programming and technology. Lines of code on a black-and-white screen have evolved into multi-functional graphical user interfaces on full-colour, high-powered computers. The literature review took place in the context of this period of rapid experimentation and growth. That rapid growth left areas of concern with significance for the relationship between children and computers in a classroom. The evidence from the literature review suggested a strong case for a new coherence that required a revision of the conventional holistic human–computer perspective. The literature review also revealed how these areas of concern may be addressed. The proposed ten features may enhance the depth and quality of engagement in a new holistic paradigm, and these features were formulated into criteria that informed the research question that the Research Tool is to answer. The methodology design for the Research Tool is now the subject of chapter 4, and the main study is described in chapter 5.