Chapter three is a study of the literature that informs the design of the Research Tool and defines the areas of enquiry for the main study. The literature review and the production of the Research Tool took place in parallel because the BBC required that production be completed by January 1995. The chapter comprises two sections. The first section, 3.2 IT in primary education, reviews texts concerning historical events in the development of Information Technology in primary education. The aim of this section is to place the Research Tool in the context of these developments and to reveal the lack of research that has resulted in some issues being overlooked. For example, the section indicates the absence of close observational research into what children are doing when they use computers in the classroom environment. The Research Tool was designed to rectify this situation. For example, the difficulty children were observed to have with the unstructured organisation of the early multimedia software investigated led to the incorporation of a teachers’ control panel menu system, allowing teachers close control of the Research Tool in the classroom.
The second section of the literature review, 3.3 Interface design, redresses the lack of research into what children do when they use the computer, evident from the review in 3.2 IT in primary education. The areas of research are summarised at the beginning of that section, and the issues that the researcher first raised in his close observations of children using computers form the structure for the review. The contextual research described in chapter 1 (p. 9) – for example, that children appeared to ignore icons in the corners of the screen, and the physical problems of children managing the mouse – is included for clarity and then investigated in detail in the review that follows. Aspects covered in section one of the literature review are now outlined in more detail.
The preliminary observations of the research in progress were published in the British Journal of Educational Technology in April 1997 (see Appendix 2).
The section IT in primary education is a critical analysis of the significant historical events in the introduction and growth of information technology in British schools. A definition of information technology (IT) precedes a discussion of early approaches to software and is followed by an overview of IT education research during this period.
Information technology, in an educational context, emerged from a general exploration of using computers to aid learning in the 1980s. By the 1990s IT was clearly defined to be, ‘The development of pupils’ capability to use information sources and IT tools such as computer systems and software to analyse, process and present information, and to model measure and control external events.’ (DfEE, 1995, p. 1)
The IT activities at the beginning of the research period (the 1980s) involved children in individual, group or class experiences. However, the consensus on what can or ought to be taught as IT has changed over time. For example, Nicholls and Richardson (1995) suggest IT was associated with the use of computers for instruction, and expressed reservations about their application in an educational context. They hoped the aim of IT was to aid investigation and discovery through practical experience. They stressed the need for developing essential skills such as observing, checking, applying ideas, classifying or evaluating with second-hand experiences. Attention then shifted to a discussion concerning the relative values of the role of information technology as a source of skill development and the application of the technology in discrete subject areas (Andrews, 1996).
Information and Communications Technology (ICT) is a more recent term that has replaced IT. The focus of ICT includes information technology that supports teachers not only in their everyday classroom work across every subject, but also in their training, development and administration. The technology includes not just the Internet, CD-ROM and other software but also television and radio, video, cameras and other equipment (DfEE, 1998).
In the next section three early trends in the development of IT are identified and discussed to show how styles of Information Technology development evolved.
This section investigated Logo, Computer Assisted Learning and simulations, three particular aspects of early innovation in computer software in schools. The introduction of Logo in schools illustrated the pattern of educational research in the 1980s before the centralisation of the National Curriculum – exploratory, practical classroom experience of IT – valued by teachers when the software could be adapted to the reality of the classroom, a ‘bottom-up’ approach to the introduction of IT. However, it was suggested that Logo had not achieved its original ‘top-down’ goal – to teach mathematical concepts through programming – certainly not in the primary school.
A survey of the introduction of Logo into schools was important primarily because in the 1980s, when IT was in its infancy, there was generally little software created and few programming languages in use. Logo stands out in the history of IT because there was no other programming language at the time appropriate for the purpose. The Beginner’s All-purpose Symbolic Instruction Code (BASIC) programming language was considered too complicated for children to program by Hoyles et al., (1985), who also identified a significant strand of educational thought in the 1980s – that children should be taught programming. Learning about computers would provide another educational benefit: programs could be written by committed teachers. Indeed Logo was a common topic written about in popular magazines, a pattern only recently revived with the simplicity of HTML web coding.
In Britain in the early 1980s attempts were made to use Logo as a support tool to teach mathematics. There is a case that the enthusiasm for Logo among British educationalists was misplaced and unfortunately influenced by the US example. British research (Hoyles et al., 1985) showed that Logo could not teach mathematics in its broadest sense, as seemed to have been shown in the US.
Papert, the developer of Logo, focused on the use of programming languages to teach mathematical concepts (Feurzig, 1969). Papert (1980) then developed this theme into an influential book called Mindstorms. He invoked connections with Piaget and stressed a significant role for computers in learning for a changing society. Whilst the opportunity to introduce children to computing skills became a popular theme in magazines, Solomon (1982) and Ross and Howe (1981) criticised the closely controlled conditions of the Logo testing in American schools as artificial. The longest running research into Logo in the United Kingdom was at Edinburgh University in the Department of Artificial Intelligence (1976 onwards). This involved children over eleven. The concerns were wide ranging, but purported to examine the value of Logo for teaching mathematical concepts. Many of the studies involved children learning to program using worksheets, and studies also covered metaphors for the computer process. One of the results was that programming became an end in itself, not a means to an end (Goodyear, 1984, p. 166). Conclusive proof that mathematical ability had improved was not found, but improved articulacy in mathematical discussion was noted, as was the pupils’ ability to communicate sensibly and clearly about mathematics.
At the time, teachers looking at these reports might assume that using Logo could give their pupils these benefits and feel justified in trying it out. Logo also had the support of the London University Institute of Education, where a study had been carried out looking at what the Americans had done and researching its value in the context of British schools. It was a small-scale study by Hoyles et al., (1985) that examined the impact of Logo on mathematics teaching with 7 pairs of children, mainly secondary with some primary children. The study revealed that:
Pupils don’t follow stages of planning hands on and debugging in sequential order when working towards well-defined goals (Hoyles et al., 1985, p. 272).
The report found no evidence of the value of Logo for developing problem solving strategies, or concerning the nature and extent of the collaboration between students. Yet programming activity was claimed to be a powerful aid to decentration (the reflection on one’s own thought processes). There was no evidence for the claim that Logo gave teachers a transparent indication of how children perceive a mathematical problem and formulate its solution. The report mentioned the long sessions required to get the most out of Logo. ‘The product design was not changed to take into account this real classroom limitation.’ (Hoyles et al., 1985, p. 1)
One of several studies looking at the use of Logo in primary schools was The Chiltern Logo Project, which began in 1982 (Noss and MEP, 1984b). It highlighted the unreality of comparisons with the American experience where there were large numbers of computers available in specialised rooms. The study focused more on the problems of time and the requirement for young children to immerse themselves to the detriment of other areas of the curriculum. The original aims of Logo for mathematics became more loosely interpreted as skills in planning, hypothesis testing and problem solving.
An indication of the fervour for programming at the time was provided by Goodyear (1984), who rejected ‘as nonsense’ the testing of Logo by measuring its capacity to be cosily integrated into the established classroom. He wanted to establish whether Logo could embrace exploratory child-centred conjectural learning. Noss (1984c) suggested Logo was more acceptable in primary schools because of the flexibility of the timetable, even when the project schools had one dedicated computer per class. Noss found children’s problem solving skills developed considerably as their knowledge of Logo increased.
The Chiltern Project explored the value of Logo as a generic program. Its aim was to reinforce the concept that children should be in control of machines rather than being ‘programmed’ by them. It claimed that Logo provided advantages for education compared to using the computer for word processing, environmental and topic work. Yet Taylor (1980) suggested Logo had a tutee role and that the Logo program was not open ended. Wellington (1985) suggested that Papert’s claims that Logo changes the nature of the learning environment were largely unjustified and that children found programming difficult. Adams (1987, p. 1) saw the main issue to be the ‘little chance of widespread acceptance against the prevailing conservatism of computer science educators’.
However, the MEP strategy encouraged building on existing curriculum projects such as The Chiltern Logo Project, and the endorsement may have given Logo legitimacy at a time when there was a mushrooming of interest in computers, especially as there was a lack of commercial education software available (DES, 1987). It was also a time when programming was relatively simple and many individuals were programming in Logo themselves.
Goodyear (1984) confirmed this picture of Logo’s development. It was a period clearly described in his introductory chapter. It was a time of exploration, of lack of software, of concern to make the best use of the new technology to provide support for uncertain and worried teachers. However, only the most enthusiastic and competent programmer or teacher could understand any of the programming necessary to carry out the classroom activities in the rest of the publication. Noss (1984b) admitted that Logo provided no evidence of a positive effect on children’s learning of programming skills. He suggested that programming gave children power over the machine and there was value in the incidental learning that took place while learning to program in Logo. Hoyles et al., (1988) also confirmed that ‘despite the many years of research in this area there is as yet no firm evidence that there is any transference of skills from programming in Logo to problem solving in other domains’ (p. 108). The difficulty of defining what exactly was of value in Logo may have been due to the poor quality of research. Maddux (1993) asserted that the popular view of research into Logo is contradictory because researchers ignored learning and teaching variables and interactions.
However, today Logo is still being learnt as a valuable experience in problem solving (Kapa, 1999). Logo is also being used for simple exercises in control, but only two of the simplest ideas in Kapa’s book have endured. These are, first, control programming for the on-screen and floor turtle and, a very important second, the development of mathematical concepts through the creation of patterns. These are seen as simple, enjoyable and educationally justifiable projects. This is how teachers are using Logo in many primary schools today, and this use is included in the National Curriculum. It is important to note that children achieve these tasks by writing the simplest code using Logo programs that other people have written in the early stages of development (Forster, 1986). However, the enduring quality of Logo is that teachers can identify its value to children. For children, the readily experienced ‘sense of owning’ was powerful and more important than the idea of discovery as originally reported by Noss (1984a).
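The two enduring classroom uses of Logo mentioned above both rest on the same idea: a turtle that moves and turns in response to simple commands, so that geometric patterns emerge from short programs. A minimal sketch of this idea, written here in Python rather than Logo (the interpreter and command names are illustrative, not taken from any school software), shows the geometric insight behind the classic first exercise of drawing a square:

```python
import math

def run_turtle(commands):
    """Interpret a minimal Logo-style command list.

    Supported commands (an illustrative subset): ("FORWARD", distance)
    and ("RIGHT", degrees). Returns the turtle's final (x, y) position
    and heading in degrees, starting at the origin facing east.
    """
    x, y, heading = 0.0, 0.0, 0.0
    for op, arg in commands:
        if op == "FORWARD":
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
        elif op == "RIGHT":
            heading = (heading - arg) % 360  # clockwise turn
        else:
            raise ValueError(f"unknown command: {op}")
    # Round away floating-point residue and normalise -0.0 to 0.0.
    return round(x, 6) + 0.0, round(y, 6) + 0.0, heading

# Repeating FORWARD 50 / RIGHT 90 four times traces a square and brings
# the turtle back to its starting point and heading -- the mathematical
# pattern children discover when programming the turtle.
square = [("FORWARD", 50), ("RIGHT", 90)] * 4
print(run_turtle(square))  # → (0.0, 0.0, 0.0)
```

The same loop-and-turn structure generalises to the pattern work noted above: varying the angle or repeat count yields triangles, stars and spirals.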
The researcher considered that the story of the introduction of Logo in schools illustrated the pattern of educational research during the period. This was the ‘bottom-up’ evidence that teachers’ experience of Logo – their research over the years – recognised that Logo had some place in the classroom. It could be of value if teachers adapted the software to the reality of the classroom. However, the evidence was that Logo had not achieved its original ‘top-down’ goal – to teach mathematical concepts through programming – certainly not in the primary school.
In this section the example of computer-assisted learning further demonstrated the changing ground of IT during the period of this study. Computer-assisted learning (CAL) in the United Kingdom was mainly a term discussed in a university environment. This section reviewed how software development grew in complexity but was overtaken by commercially available multimedia software tools. Primary schools were creating teaching material with different features from CAL designs, which were predominantly drill and practice routines exploring the potential for independent or guided learning. The story of CAL development was the creation of home-made materials specifically designed by lecturers. It was a university-based phenomenon, with staff creating their own programs and using them for internal teaching purposes; a main theme of papers at the Conference on Computer-Assisted Learning, Bristol, 1983 (CAL 83).
The defining feature of the development of CAL methods was the attempt to computerise learning as an interactive process, as opposed to typical early programs which involved users in making responses to lists and test questions of the drill and practice type. The model was the passive style of interaction of the university lecture. There was no definition of the quality of interface interaction. The model focused on the need to track effectively what students had completed and provide feedback. To support this view Tait (1984) described a typical example of the early stages of learning software design in the General Author Language Teaching System (GALTS) used by lecturers at Leeds University (p. 16). Authoring languages were first designed to make materials for learning using multiple choice questions. Learner Controlled Modules (LCM) were also developed to help students solve problems through a process of step by step solutions using a review and summary approach. Leeds University developed its own Frame Orientated Author Language (FOAL), creating frames – rectangular areas of the screen – in which text was displayed and linked to interaction during the final compilation process. The system allowed users to phrase questions. The aim was to create an adaptive system to measure and anticipate the learner’s needs and deliver sequences accordingly.
The narrow view of the early university-developed learning materials was identified by Elsom-Cook and O’Malley (1989) who defined traditional CAL systems as author generated materials presented by the computer which ‘simply follows explicit instructions of the author in interacting with the student’ (p. 69). But by 1990 an Open University team had developed a more flexible system called Enhanced Computer Assisted Learning (ECAL) which allowed an author (lecturer) to track student progress, and modify teaching materials.
The key element in the evolution and redefining of CAL that has endured was described by White (1994), who suggested that the source of the pressure for hypermedia was in higher education: ‘We are faced with increasing student numbers, worsening student to staff ratio, and a widening of the ability range’ (p. 64). Significantly, White mentioned the first commercially available bundling of tools to create learning materials, HyperCard, using the HyperTalk programming language, which was developed by Apple for creating what became known as hypermedia. White considered it a strength of an open hypermedia system such as Microcosm – compared with a conventional authoring system – that there was no idea of developing a finite piece of courseware. The advantage of Microcosm was that the links were held in a linked database, the ‘Linkbase’, that sat behind all the documents. It could be rearranged so that novice students created their own hypermedia links and even their own dissertations. The significance of HyperTalk was that while the discrete elements of authoring packages were expensively developed by universities for their own needs, cheap, easier to use, multi-functional authoring packages such as HyperCard were becoming commercially available on the open market to everyone wanting to create educational materials.
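The open hypermedia idea described above – links held in a separate ‘Linkbase’ rather than embedded in the documents themselves – can be sketched very simply. The following Python sketch is illustrative only, not Microcosm’s actual data model; all names and the link representation are hypothetical:

```python
# Sketch of the open-hypermedia design: links live in a separate
# "linkbase" rather than inside the documents, so the documents stay
# unmodified and a student can add or rearrange links freely.

linkbase = {}  # maps (document, phrase) -> target document

def add_link(doc, phrase, target):
    """Record a link without touching the source document."""
    linkbase[(doc, phrase)] = target

def follow(doc, phrase):
    """Resolve a phrase in a document to its linked target, if any."""
    return linkbase.get((doc, phrase))

# A student building their own web of links over fixed course documents:
add_link("notes.txt", "turtle graphics", "logo_intro.txt")
add_link("logo_intro.txt", "Papert", "mindstorms_review.txt")

print(follow("notes.txt", "turtle graphics"))  # → logo_intro.txt
```

The design choice this illustrates is the one White valued: because the link store sits beside the documents rather than within them, there is no finished, finite piece of courseware – the web of links can grow and be rearranged indefinitely.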
Allinson and Hammond (1990) identified the programmed learning, intelligent tutoring system and learner support environment styles of CAL, but pointed to the limitations of university CAL programmed learning, which ignored the involvement of the learners being taught. Allinson and Hammond questioned the application in CAL programs of Skinner’s (1968) theory of learning, which posits that learning is reinforced through the process of operating a machine in such a way as to get the answers correct in a mechanised drill and practice context. They suggested Skinner’s theory of learning informed the approach to computer-based package design ‘but to limit the learner merely, say, to browsing an information database, or to directed step-by-step tutorial, hardly matches the richness of everyday learning’ (p. 137). The latter part of the paper was a consideration of the problems of disorientation with hypertext. This paper was significant because it reflects how commercial software forces overtook CAL development in Higher Education.
CAL had limited impact in the primary and secondary education sector. Freeman (1989), in perhaps one of the first CAL papers with ‘multimedia’ in its title, told the higher education dominated CAL conference that multimedia developments in primary and secondary education were:
…a growth industry in production of multimedia materials for training, information and to a lesser extent education. The fruits of this industry have now impinged on schools, in the same way as other business software, such as spreadsheets, is making its way into schools. This is not surprising, as the major producers of software have had funding curtailed by the government since the demise of the Microelectronics Education Programme. (p. 189)
Freeman described navigating, and discussed spatial awareness of maps and the interrogation of charts, data, pictures and text, in the BBC’s Domesday Project computer controlled laser disc system. Freeman observed how users reacted to the system by understanding how to use it and finding their way round. But she criticised the quality of the interactivity: interactivity becomes consumed by dealing with problems created by the interface, not with the information, because of interface design difficulties (p. 192).
University staff looked to the growth in children’s commercial software games to resolve the problems of poor quality presentation and the boredom of university students using CAL. Wishart (1990) studied primary school users’ involvement with computers and its effect upon learning. The study involved an educational computing game called VESTA, designed to teach children how to avoid being trapped in a fire. The conclusion was that learning increased significantly when the learner was given control of the action, the complexity of the game increased progressively, and challenge was provided through a scoring system (p. 149).
Watson delivered an early paper on primary school software at CAL 83 and suggested simulations improved group dynamics, discussion, and the interplay of ideas.
...an increasing number of CAL units in the humanities are not asking questions on the screen, but creating an environment in which the pupils ask the questions for themselves deciding a path to follow and seeing the impact of their decision on the screen.
(Watson, 1984, p. 13)
Why was the university CAL experience different from developments in schools? First, primary software programs were written by educationalists with practical experience of the primary classroom. Second, the software was free or very cheap and ran on the BBC Micro; the ethos of providing shareware and grass-roots development of programs, as in the Microelectronics Education Programme, was the norm. Third, the early work was carried out by teachers who created programs like Granny’s Garden, which were then prepared for the market by commercial companies. CAL had little impact on primary schools.
However, while universities were embracing open and distance learning using the Internet at lightning speed, only recently has computer-assisted learning re-emerged in the primary sector, as Integrated Learning Systems (ILS) in a DfEE funded project. In a report (NCET, 1994c, p. 14) on two US English and Maths products, the evaluation appeared to mirror the research evidence into the value of Microsoft’s Windows software criticised by Maddux (1993). There was a list of concerns such as the quality of content, rate of progress and progression of ILS material. Also significant were concerns over comfort, headphones, blinds and the height of the computer. There were comments about a fall-off of concentration after 17 minutes, 15 minutes being the average. Primary students seemed to show a deterioration of behaviour from their normal school level in the session immediately following an ILS period. One school solved this with a 5-minute session in the playground afterwards. Findings suggested it was beneficial to have a non-book-based activity after the period in the ILS classroom.
The long ‘list of concerns’ hindered the introduction of further computer software research. Education research was at an early stage and priorities were different anyway. Indeed NCET’s priority was to manage and develop materials, research came later (Brown and Howlett, 1994) and the report expressed doubt that there would ever be the funds to provide the hugely expensive servers, systems manager, and dedicated rooms that ILS required in the UK.
The next section looks at the special role that simulations occupied in the 1980s and their disappearance as an educational tool in the 1990s. The researcher was involved in the early development of simulations, reflects on their value despite a recent decline in educational use, and considers principles that might be applied in the Research Tool.
The popular definition of computer games covers a range of types, of which simulation is only one; race games and arcade games are others. In an education context the role of computer games was specialised, with specific differentiations concerned with their educational value. Tagg (1985), in a publication for educationalists and parents, identified a list of different categories of simulations in schools, which demonstrated how early the genre had taken hold and how extensively it had been explored. The list was as follows:
Bradbeer (1982) provided evidence that children liked using simulations and that simulations encouraged learning without the mediation of a teacher. A simulation such as Mary Rose included one or more of the categories above by allowing children not just to search for a wreck, navigate a boat around the Solent and send down divers, but also to develop decision making and problem solving skills. Ballooning simulated flight over various kinds of terrain with choices made in relation to height and wind strength and direction. However, children discovered that the questions were not random. The result was that they could learn the way through and then tell others in the class.
Simulations such as Granny’s Garden were successful because they were exciting and stimulating for children. They gave opportunities for talk, decision making and predictive writing (Watson, 1986), and the depth of engagement through pleasurable involvement in the game overcame the issues of scarcity of machines, background noise and poor preparation for the task. The activities were easier and more familiar, and thus more comfortable for teachers to handle at a time when they were so uncertain about what they could or should do. The activities were flexible and could fit into the time constraints of a classroom day.
The attractiveness of simulation was often because the teachers’ notes accompanying the early simulation software set up the rest of the project. Children had to use the reference library, work in groups on one aspect, or read and listen to a story about going on an expedition. Children drew and made models and devised plays – away from the computer. It was seen as a good thing that children were not at the computer all the time. Indeed there were not enough computers for children to use anyway.
Some simulations also had the benefits of simplicity not present in today’s multimedia structures in the sense that they had only function keys to operate a limited range of options. The limits of the software and memory defined the way Granny’s Garden and Mary Rose worked in contrast to the complex, time-consuming activities of CD-ROM projects such as MYST. It was only possible to create simple graphics on the screen, with few words. The result almost by default was a simple structure that children could easily follow. The possibilities of becoming lost or not understanding the cluttered screens did not often arise.
The BBC explored the growing potential for selling simulations. The researcher was personally involved in discussions on ways of applying traditional high quality production values to the new media. Those values included activities which could be contained within the time scale and organisational limitations of a classroom timetable. These were the values underlying Climbing Everest, Flight and BirdSpy, created by the researcher for the BBC Micro. The subjects also had radio programmes that set the emotive and imaginative scene for the measured activities in the software. Attractive subjects that had general as well as educational sales potential were deliberately chosen.
According to McFarlane et al. (2002) there has been very little change in the specific educational approach to games and simulations.
It seems that the final obstacle to games’ use in schools is the mismatch between games and curriculum content and the lack of opportunity to gain recognition for skill development. This problem is present in primary school, but significantly more acute in secondary (p. 4).
Simulations invite learning but do not guarantee it, and learning is not so easy to quantify. They are hard, and therefore expensive, to create, and each child needs to spend a lot of time at a computer. Perhaps the decline is because multimedia simulations such as Sim City and MYST, though beautifully created, are extremely complex and require a powerful computer, skilled teachers and curriculum time that is not available. There are examples of their successful use in Australia, including Exploring the Nardoo, but significant teacher input and commitment is required.
The Research Tool presented an opportunity to rekindle interest in simulations using BBC resources. Instead of considering simulations in terms of skill development and learning outcomes (McFarlane et al., 2002, p. 11), the Research Tool explored the pleasure element of ‘gaming’ in the context of improving the quality of interaction through depth of engagement, a theme pursued in the second section of the literature review.
Carr and England (1995) considered simulation as the basis for developments in virtual environments. A natural successor of simulations in the adult environment was virtual reality, which according to Carr and England ‘can provide high levels of engagement depth across a wide range of abstractions’ (p. 211) in terms of knowledge gained, length of time used and user satisfaction. Virtual reality technology was outside the technical possibilities of the research at the time. The Research Tool instead explored the simulated 3-D perspective, which was achievable with the software available.
The discussion of different software types provides a background to the increase in pace of developments (prior to the period of the Research Tool’s development) that are the concern of the thesis, and in particular to the nature of technology research, which is the focus of the next section.
What was the pace and extent of research into the use of computers in schools? The events before the ‘watershed’ of the National Curriculum – before the 1990s – can best be described as a fitful set of research initiatives. In the early 1980s computers were first introduced into primary schools, with financial incentives used to encourage schools to do so. But there was no focus and no national curriculum. The computer was such a new tool that the context was to see what educational potential could be achieved by writing code. The official approach to the developmental process was entirely in keeping with current educational policy – the practice of funding specific developments with short-term strict deadlines and a gradual permeation of ideas using the traditional cascade approach. There is little change today – a primary school in 2001 might have 39 such ‘pots’ of funding, each with a report to be written and an inspection to confirm its accepted application.
The process in the new area of computers was achieved through the Microelectronics Education Programme (MEP), which began work in 1981. There were primary focus groups operating as centres of excellence and good practice, such as King Alfred’s College in Winchester. Individuals, including teachers, designed and created educational software specifically for the classroom. In addition, the BBC took a leading role by marketing the BBC Micro as a nationally available educational computer. It was not until 1987 that the Macintosh II and Archimedes computers were introduced, shortly followed by the publication of the Microsoft Windows graphical user interface, which included software for word processing and database creation.
Did the speed and extent of the impact of research change from 1981 onwards? According to McFarlane (1997) the answer was a confident yes. In 1989 the National Curriculum was introduced, which established that information technology had to be taught; IT became an Attainment Target in the Technology Orders. However, by 1995 IT was not being taught in school to any extent. The reasons were considered to be lack of resources, cost and lack of staff training (p. 5). Various MEP pump-priming initiatives were introduced to ensure hardware and software, including the new multimedia products, got into schools. The extent of the impact was limited by the learning model, which was still the 1980s ‘cascade’ approach, and ineffective because it lacked a focus on training for teachers (INSET) (Cox, 1999). Table 3.1 is a brief chronology of government policies and significant events in IT showing the increase in pace of policy implementation during the period.
Microelectronics Education Programme (MEP); Micros in Schools Scheme with teacher training (cascade model) and subsidised computers
MEP extended; free modems for schools (1986)
Microelectronics Education Support Unit (MESU) established; National Curriculum begins
CD-ROM in Schools Scheme; interactive video, edutainment and business software
Grants for Education Support (GEST) from the DES
NCET Looking at Laptops project
CD-ROM in Primary School Initiative
Superhighways Initiative; Office for Standards in Education (OFSTED) inspections begin; GEST funding for IT in subjects
Initial teacher training in IT begins
NCET produces a series of television programmes about using IT; rapid growth of schools on-line; Integrated Learning Systems (ILS) in schools
Teacher training in IT
Table 3.1: Chronology of government educational Information Technology policies.
It can be argued that the really significant change in educational IT came about not through planned development and the role of the MEP, but because of a group of factors surrounding popular demand, fashion and parental expectations in the early 1990s. The case for a significant role for popular demand is put forward by Young (1988). Up to this time schools used software delivered on the BBC Micro and similar computers that children and teachers could not use at home in the manner of the later PCs and the Windows environment. The newly mass-marketed IBM and Mac II personal business computers were also advertised in the domestic market as having edutainment value in the home. These computers used the new graphics-based user interface, were fitted with CD-ROM drives and were sold with ‘bundled’ edutainment CD-ROM software, often based on existing books and encyclopaedias. At this same time there was a move to organise effective research in IT (Cox, 1999), for reasons described by Waxman and Walberg (1986) in more detail below.
The requirement for more organised research was part of a political movement for improvements in education, and by 1995 IT in the National Curriculum was established as a subject in its own right and had to be taught (DfEE, 1995). Schools were now required to improve IT standards. Research into how IT should be introduced and what educational value could be achieved developed in parallel. The decision that IT should be taught through subjects in reality caused staffing and cost problems for schools; the result was that IT tended to be taught as a separate subject in dedicated computer suites for staffing and resource reasons. Schools that did teach IT as a separate subject emerged well from OFSTED inspections because they met the key inspection targets.
Since 1998 there has been a further development – the extension of IT to include communication technology – and the acronym ICT is now in common usage. The initial teacher training curriculum for the use of ICT set new statutory standards for equipping trainee teachers with the knowledge, skills and understanding to make sound decisions about when, when not, and how to use ICT effectively in teaching particular subjects. The result was that trainees were taught to use ICT within the relevant subject and level in the National Curriculum, rather than being taught how to use ICT generically or as an end in itself. There has also been the establishment of examinations (GCSE) in ICT. The focus has moved on to the real-world use of computers, reflected in the similarity of academic (GCSE) and vocational (GNVQ) course content.
Having summarised the key features of the 1980s and 1990s, the next stage of the argument was to consider in more detail the quality of developments in research before and after the watershed of the introduction of the National Curriculum in 1989. Trends in IT teaching during the early period of this study were illustrated by comparing the keynote speeches at the computer-assisted learning (CAL) conferences in the UK in 1983 and 1989. The comparison provided an insight into the changes in approaches to IT. By 1995 the theme of the CAL conference (Kibby and Hartley, 1995) was ‘Learning to Succeed’, which ‘draws upon the experience of the widespread use of the microcomputer in education over the past decade in order to assess the use in the future millennium. It attempted to answer such questions as what will the future look like and what lessons could we draw upon from the past to guide us?’ (p. 1). Bork (1983), at the 1983 CAL conference, saw the future of computers in education as depending on solving issues of the widespread future use of computers, confirmation that computers would lead to a better not worse education system, an effective production system for education programs, and institutional change (p. 4). He suggested that much of the software material, including commercially published materials, was of very poor quality (p. 1). Enough experience had been gained by 1989, relating to Bork’s view, to categorise the use of computers and express doubts about the rationale for introducing them in schools.
For example, at the 1989 CAL conference Hawkridge (1989) asked ‘Who needs computers in schools and why?’ Analysing four rationales for using IT in schools – increasing awareness; learning computer programming; learning word processing, spreadsheets and information retrieval; and learning selected topics – he suggested it was not teachers who wanted computers in schools but the policy makers and the multinational companies who wished to sell generic software and hardware (p. 2). In the context of the speed of the development of computers adapted to specific educational use, Russell (1996) argued that IT in education in the 1980s might have evolved much more slowly were it not for the improvisation of primary school teachers.
By the 1990s schools had gained enough experience to express doubts about the rationale for introducing computers in schools.
For example, Hawkridge referred to the awareness argument: ‘let children use computers and they will learn about computers and any subject’. He considered it flawed and a waste of resources. He also considered that the ‘preparation for the workplace’ rationale was not backed up by the resources needed to use computers to best advantage. Most importantly, Hawkridge identified that there had been a lack of good educational research carried out up to that time.
Hawkridge was echoing a long-held doubt concerning educational policy. Shaw (1982) suggested that educational policy was usually an attempt by society to get the education system to solve problems that society cannot solve itself. Shaw noted that such objectives could not be achieved without in-service training, finance, sufficient staff, and teachers whose dominant role was to create software.
The concern for good research in education was a relatively recent attitude. Evidence to support this argument was provided by the history of the main national body promoting the use of IT in Britain, the National Council for Educational Technology (NCET). NCET was established in 1988 as an executive non-departmental public body with charitable status; the members of its Council were appointed by the Secretary of State for Education and Employment in consultation with the Secretaries of State for Wales, Scotland and Northern Ireland. It replaced the SCAA, formerly the Schools Council. NCET then became the British Educational Communications and Technology agency (BECTa) in April 1998. Only since that date has there been a requirement to provide research references to support evidence for good IT practice in schools. In fact the first major publication providing research information of this kind appeared in 1994 (Brown and Howlett, 1994). This brief pamphlet contains references to 122 papers, of which only 38 date from 1990 or before.
One of the reasons why educational research was limited before the watershed of the National Curriculum was put forward by Bork (1983), who asserted that programming languages were useless in generating educationally useful computer-based learning materials (p. 1): the software for creating the learning materials had to be written by the designers first. Generic authoring programs such as Director – which once required the power of a mainframe computer but later ran on a desktop PC, enabling an educational software company to avoid writing program code for each new multimedia classroom resource – did not arrive until the 1990s. However, Hawkridge could not predict the speed of commercial authoring software development or the greater range of integrated interactive techniques incorporated in any one package. By 1994 authoring programs had speeded up the creation of new products, cut costs and allowed developers to concentrate on designing educationally useful content. By then British schools were even able to purchase software called HyperStudio to allow children to make interactive material themselves.
Young (1988) found that it was not government schemes that had the greatest impact on the role of IT in education: schools became part of a trend. Young points out that change frequently occurs in education due to fashion rather than research, and that computers reached schools as a result of trends in society. Prophetically anticipating the huge growth in PC ownership, he identified reductions in price and the availability of software as the dominating factors of the early 1990s. Just as significantly, Young also suggests that technological innovation was different from other innovations because of the enormous range of tasks a computer can undertake. He considers that studying the link between innovation and research will illustrate the reasons why IT was introduced in schools.
Sage and Smith (1983) offer a simple reason why there was a lack of research.
The educational worth of computers was relatively unknown in primary schools by teachers, heads, advisory bodies and researchers. Such knowledge has developed after acquisition (p. 40).
Self (1985) also identified the pervading priority that hardware should be introduced first, criticised the available software and called for more research:
While the United Kingdom government can spend £30-£50 million on putting computer hardware into schools the Research Councils are struggling to raise £500,000 to support related research (p.168).
Waxman and Walberg (1986) suggested that educational IT practices in the 1980s were based on unexamined assumptions, with little empirical data, referring to the work of Howey (1977), Tabachnick et al. (1980) and Waxman and Bright (1986). Waxman and Walberg also suggested that one of the reasons for the slow growth of research was the poor research methodology of the period: researchers tended to formulate problems that they had the skills to solve, and studies used different methods and reached contradictory conclusions. However, the political mood changed. By the end of the decade the increasing imposition of education policy from central government included a general concern about IT in education and resulted in more research being carried out; methodology also improved and there was an active argument about what constituted good research. Teachers and educators had become both producers and consumers of educational research (Doyle, 1990). The methodology of this thesis was informed by these developments and great attention has been paid to the thoroughness of the techniques employed.
During the period (1980-90) the long tradition of teacher involvement in ‘grass roots’ curriculum development changed, with the imposition of the National Curriculum, towards a more centralised ‘top-down’ model. Early in the period Advise (1982) argued that development occurs within the existing curriculum. The view that teachers should be researchers – the teacher-centric model – had been a central theme from the 1970s, notably in the work of Stenhouse (1975). In the 1970s Bruner and Stenhouse were the sources of inspiration. An example of their model was the researcher’s anecdotal experience at the Regional Resource Centre at Exeter Institute of Education at that time: teachers were encouraged to get together and analyse, discuss and develop curriculum materials to meet the learning objectives that they defined for themselves. The role of the Resource Centre staff was to create educational resources using the miniature cameras, tape recorders and photocopiers that had just arrived on the market. However, this early form of pragmatic educational research using teachers to create learning resources required a high level of training and practice. It was also a decentralised model, and fell out of favour as the centralised National Curriculum was introduced. It is arguable that, however valuable the teacher-centric model is, teacher-based IT research became swamped by problems of technical knowledge and time-consuming programming. This was the experience of the MEP project (Noss, 1984d).
Watson (1989) found that the debate about IT research had changed between the 1980s and 1990s. In the 1980s the focus had been the relative merits of different programming languages and the structure of computer awareness courses; in the 1990s the discussion shifted to a consideration of the relationship between computers and the curriculum.
Concerning the quality of educational research in the latter period of the National Curriculum watershed, Maddux (1993) was still critical.
…nothing miraculous happens automatically as a result of putting a computer and a child in the same room and that research studying the technology and its infusion into the classroom is extremely limited (p. 14).
Maddux made a series of observations about general attitudes to the need for children to program computers to get good jobs and the need to boost the nation’s faltering economic health, which illuminate the limitations of IT research in the latter period. They confirmed, in an American context, the argument expressed by Hawkridge (1989) in regard to the industrial interest in education in the UK.
Maddux (1993) also reported that more research was done in the 1990s, but that it was simplistic – the skills studied were not defined – with conclusions couched in terms such as:
If learners are taught to use computers they will improve more in....(some skills) than an experimental group who are traditionally taught (p. 7).
Research shifted to teaching children to program using Logo rather than merely demonstrating its use. The major problem of this period, Maddux concluded, was that researchers ignored both teaching and learning variables. His observation was grounded in a recognition of the practical realities of a busy classroom and has special significance for this thesis. The Research Tool for this thesis was software especially designed to be effective within the operational limitations of a real classroom environment.
The third of Maddux’s research stages was predicted from 1993 onwards: the stage which he described as concentrating on learner/teacher interactions. However, the only examples he cited of this valuable Stage Three research were the use of word processing, Logo meta-research, and Special Educational Needs (SEN) (Guddemi and Mills, 1989) – research undertaken before the introduction of interactive CD-ROM products, an important omission which this thesis attempts to rectify. Even so, most significantly, Maddux (1993) commented on the new graphical user interfaces being introduced:
It seems incredible, but I have never seen any serious theoretical discussion of whether graphical interfaces are consistent with what we know about the way children think or learn, or whether clicking a mouse on icons has any concrete advantage over typed word commands (p.12).
As an example, Maddux (1993) also reported that Windows 3, with its graphical picture-based interface, was being sold commercially as an educationally valuable product with no evidence that it had educational value: ‘No one has studied it in an educational context.’ (p. 13). Benzie (1988), a year after the introduction of the Windows environment, reviewed the potential of the Window-Icon-Menu-Pointer (WIMP) environment and expressed concern about the cost, skills and time required for creating education-specific software (p. 212). Similarly, the first Mac graphical user interface, on the market in 1984, was tested extensively but not with children or in an educational context. These observations evidence the lack of classroom-level testing of the new graphical user interface based software at the time. This is not to ignore the considerable testing of the Windows environment in an adult context (Billingsley, 1988), concerning technical problems of computing power and adult user issues such as helping people understand windowing operations (p. 433). Testing of children’s reactions to multimedia graphical user interface software only happened later (Brown and Howlett, 1994).
Maddux also describes in his paper the features of the successful international marketing operation by Apple, which completely bypassed educational considerations. Maddux considered that Apple took advantage of a western cultural consensus that typing was regarded as a low-level (e.g. secretarial) skill and that managers therefore refused to use the computer, seeing typing as a demeaning activity. The Macintosh interface was sold to managers on the basis that there was no typing involved and that the mouse could be used to make more and better decisions more quickly. There was no reference to education in the Macintosh human interface guidelines (Apple Computer Inc., 1992), which supports the view of Maddux that computers as an educational tool specifically for school use by children were an afterthought.
The evidence of the literature review at the time the Research Tool was designed revealed the lack of research into educational software, and particularly into the role of the graphical user interface in an educational context, which this thesis is intended to correct. The evidence for the hardware-based priorities of computer introduction into schools – informing the problems children experienced with managing a mouse and the CD-ROM caddy, and with viewing the adult-oriented computer box and screen – also suggested the need to look further into the physical aspects of the child-computer relationship. Sage and Smith’s earlier observations about the policy priority of introducing hardware, set in the context of Maddux’s references to the lack of commercial marketing to the education sector, began to build a pattern of government-industry ‘hardware first’ involvement; a pattern that has become increasingly transparent with the introduction of the Internet into schools.
This critical analysis of the introduction of information technology in British schools has charted the significant historical events in IT and ICT development in the 1980s and 1990s. First, it has shown the influence of the National Curriculum on information technology in the context of the shifting ground of what constituted IT learning in schools throughout the period. Second, the evidence suggested there was a range of factors, including fashion and commercial pressures, that influenced the introduction of hardware into schools prior to the creation of specifically designed educational multimedia software. Third, inadequate staff development and a lack of consideration of classroom practicalities have been revealed. Finally, the review of the significant historical events in the development of information technology highlights the poor quality of research into IT, particularly in the early part of the period studied. Overall, these factors illustrate the lack of attention to children’s use of the computer in a classroom environment. In the next section a reminder of these contextual observations introduces each section of the detailed review of research papers in the related areas of study.