Chapter 4:    The methodology

4.1    Introduction

Chapter 4 sets out the methodology for effectively and thoroughly testing the Research Tool. In the process the research question and subsidiary questions are finalised. Chapter 2 described the underlying principles and the process of the creation of the Research Tool. Chapter 3 investigated textual sources to inform the design of the graphical user interfaces of the Research Tool and focussed the investigation on ten features that may improve interface design. The thesis can now progress to refining the methodology for the main study. The subject of the main study was an enquiry that sought to understand the way children used the Research Tool in the classroom; the research was therefore qualitative in nature. Several methodology pilots took place, and this chapter demonstrates how research design issues were addressed through an iterative process of refining the methods, the way they were combined and the way the data was used, in order to ensure validity. The result was a robust research methodology for assessing the Research Tool in the field. There were 3 stages in the process of refining the qualitative methodology for the main study:

Stage 1: The first methodology pilot began very early during the Research Tool testing period. The timing of the first methodology test was determined by the commercial production schedule.

Stage 2: The second methodology pilot dealt with the issues raised by the nature of the Research Tool as discovered during the early software pilot.

Stage 3: The final methodology design was completed using the experience gained during the two previous pilots.

In the context of this particular thesis the refining of the research methodology included the testing of the Research Tool in Stage 1. However, once the Research Tool was completed the research methods were the only aspect changed during the later pilots. The opportunity taken to test the methodology at Stage 1 both improved the functionality of the Research Tool and gave rigour to the research methodology itself.


4.2    The first methodology pilot

The first pilot took place at a Junior, Mixed and Infants (JMI) school during the production process of the Research Tool in December 1995. This pilot was designed to achieve two aims: to make an initial, informal and general exploration of the form the methodology should take, and to test the initial operations of the software program. Software testing was required by the BBC. The researcher was a parent governor at the school and knew most children by sight. It was the day before the end of term. The computer brought by the researcher, with the Research Tool software already installed, was set up in the library so as not to disturb classes. The program was used for the whole school day. Events were recorded using the Sony Minidisc digital audio recorder rather than video recording, for reasons indicated in section 4.6, Use of the digital audio recorder. Children had some prior knowledge of the project, but only in terms of their involvement in creating the audio files for the first draft of the program. Twenty-one children took part; each group of 3 children used the software in turn, observed by the researcher, and at the end of the day the same children, now in 4 groups of about 5 children each, were interviewed in turn by the researcher about their experience of the software. During observations the researcher sat on a low stool to one side of the group. In the group interviews the researcher sat opposite the children, who were arranged in a semi-circle, pointing the microphone towards each child as they spoke. Groups of three children were chosen because this was the conventional method generally used by teachers in schools: it ensured the maximum number of children could experience the one or two computers typically available in a classroom at the time.

The observation methodology was changed as the events of the day progressed. The software worked without any technical errors, so the researcher could concentrate on the methodology. The first three observation groups, ‘A’, ‘B’ and ‘C’, were asked simply to explore the product. Group ‘A’ was not given any instruction. Group ‘B’ was told who Starcatcher was and asked to find out more. Group ‘C’ was told about the music and aims of the program. The approaches used with Groups ‘A’, ‘B’ and ‘C’ elicited very little response from children. However, Group ‘D’ was shown a series of work cards that said ‘Find out all you can about Starcatcher and tell me about him.’ Group ‘E’ was asked to take part in all the activities and sing along.

It was observed that there was a marked increase in excitement and interest expressed by Groups ‘D’ and ‘E’. The researcher observed that the children in Groups ‘D’ and ‘E’, who used the activity cards, displayed greater involvement in terms of exploration, discussion and time spent on activities. However, children had some difficulty reading the cards. The researcher therefore modified his approach during the sessions with Groups ‘D’ and ‘E’. Realising the potential of audio instructions appearing on the activity pages within the software, the researcher began to test this by rephrasing the Group ‘D’ instruction cards and reading them to Groups ‘F’ and ‘G’ as clear verbal instructions. The researcher asked all children in the new Groups ‘F’ and ‘G’ to ‘sing along and take part’ in the activities. The observation was that children’s involvement was more immediate, more interactive and more focussed. Including this, there were four important observations:

  • The researcher could see that the Research Tool was too open-ended and that children needed more guidance, but that this guidance could be made independent of an adult by using the audio instructions.
  • At this stage of the development of the methodology the results of the observations were tentative. On listening to the audio playback, the physical activity of moving icons, rather than simply clicking, enhanced by the sounds and symbols, seemed to engage children in a much more creative and enjoyable way. The Research Tool activities required children to manipulate icons around the screen: the icons were sometimes letters, sometimes symbols and sometimes tools such as an instrument beater. Once encouraged by the verbal instructions of the researcher to take part, children could be heard spending less time clicking everywhere and more time moving the icons around. The effect appeared to be to slow the excitement to a less frenetic pace. It was quite clear that moving icons around was a new task, quite different from clicking. The activity was exciting to children, but it also caused some basic practical problems in making the icons move around; the observation was made that this might be a physical skill that detracted from the task.
  • Children found it difficult to talk about the computer some time after using it; they would need to be interviewed next to the computer and while using the software.
  • Recording a group of children was technically difficult even with a professional microphone. Interviews should be on an individual basis.

The experience of the first pilot suggested that, as well as the four observations described above, the following general requirements should be incorporated in the methodology design:

  • The researcher’s observations at the computer would need to be verified from other sources.
  • Methods of recording evidence of the research, particularly children’s activity at the computer, would be required.
  • Methods of recording the effects of the soon-to-be-incorporated sound instructions and the movement of icons should be sought.
  • A method of standardising the introduction of the program to children and staff would be required.

Changes were made to the technical operation of the software as a result of the first pilot, to include the audio instructions. At this point the production prototype was delivered to the BBC, so completing the commercial production schedule. The second and third methodology pilots and the technical changes indicated in chapter 2 (see p. 36, p. 38, p. 39, p. 41) were carried out while the researcher was ‘producer in residence’ at the Centre for Electronic Arts at Middlesex University, from January 1996 to March 1997.

4.3   The second methodology pilot

The pilot was undertaken at the same JMI school. This section describes the detailed testing of the research methodology, the results that were recorded, and the changes made in preparation for the main study.

The qualitative research methodology chosen was within the context of descriptive research – in which events that occur are observed and accounted for – rather than experimental research, in which events are arranged to happen (Cohen and Manion, 1980, p. 67). Rigour was provided by the collection of qualitative data from 3 sources – the Triangulation Method (Robson, 1993, p. 383). According to Cohen and Manion (1980, p. 233) triangulation gives the opportunity to ‘map out or explain more fully the richness and complexity of human behaviour by studying it from more than one standpoint and, in so doing, by making more use of both qualitative and quantitative data’. The multi-method approach of observation and questionnaires using 3 different sources of data enabled the cross-referencing of records. Importantly, it could reduce the potential for error and researcher bias and ensure validity.

The structure of the methodology is described here in a Conceptual Framework – the Why, When, How, Who, What and Where of the study – which is then presented in an Overview. This is followed by a precise description of the Procedures – the data collection method. The observation strategies employed are a study through time using the Time Series Analysis method (Robson, 1993, p. 382), with Key Events collected in a session summary. In the Report, the reporting method takes the form of a log using the narrative method. Finally, the Evaluation of qualitative data uses the quasi-judicial method, i.e. the research questions: What is at issue? What other relevant evidence might there be? How else might one make sense of the data? How was the data obtained? The results of the second pilot are now described.

The Overview

The headteacher of the JMI school was approached for written permission to carry out the second methodology pilot, recognising the need for a formal arrangement for an authoritative research exercise. A class teacher would now be involved as one source of the triangulation method. The research design recognised the extra work required of teachers during their busy classroom schedule. The researcher was also aware of the importance of assessing the impact of working with a teacher new to the school who was required to acquire IT skills with minimum opportunity for training (Fisher, 1996). A music specialist was chosen. Impartiality was also an issue with regard to the researcher’s involvement in the system’s development; he had an interest in discovering that the system worked. The researcher was a parent governor and chair of the Curriculum Committee. Many children at the school also knew the researcher, due to frequent visits to the school and the proximity of his home to the school. Some children at the school had helped in the recording of the sound instructions for the Research Tool CD-ROM. The researcher had already visited the school during the first methodology pilot. Impartiality was managed by the triangulation of sources.

The issues raised by the overview were dealt with, first, by preparing a letter to the school stating clearly that it was the software under observation, not a teacher’s use of the software, so clarifying the teacher’s role. Second, by ensuring that the researcher’s manner and approach were quietly formal. Third, by keeping preparation time to a minimum, asking the teacher only to look briefly at associated material prior to the research visit. Fourth, by making it clear that the teacher should only watch over the researcher’s observation session while carrying out her ordinary class supervision. Finally, the teacher would only be interviewed after the class had finished at the end of the morning or afternoon period.

Care was also taken to formalise the physical setting for the research. The computer was set up in the library next to the classroom. 3 children (all seven-year-olds) took part. There were other children using the library for study. The researcher sat to one side and slightly behind the children arranged round the computer.

Procedures

The second methodology pilot took place on a Friday morning from 9.30am to 1pm. The events took place in the following order:

  • The observation session began at 9.30 lasting for 40 minutes.
  • The children’s interviews lasted for 1 hour 15 minutes with a break for lunch.
  • The teacher’s interview lasted for 1 hour (lunch from 12-1pm).

The Data Collection Method

The 3 sources of information for the triangulation process were provided in the following way. A group of children was first asked to explore the software, watched by the researcher in the observer role using the Descriptive Record Method. Then each child in the group was interviewed about the use of the software. Finally, the researcher interviewed the teacher. The original plan was to interview the teacher during morning break. However, the teacher had to cover for an absent member of staff, so it was agreed that she would make frequent informal visits to observe, without involvement, children using the software during the morning and would be free to talk at lunchtime.

Record sheets were designed for each of the 3 sources of information and detailed the following content:

1.   Observation Schedule

      The observation form simply provided spaces for 3 areas of observation:

      What were children doing?

      Why were they doing it?

      What were children learning?

      These questions were considered essential at this stage. There were eight of these observation sheets, one for each of the interactive pages.

2.   Pupils’ Interview Schedule

The second type of sheet was an interview schedule, which provided a list of key questions, prompts and summary questions.

3.   Teacher’s Interview Schedule

A similar questions list was used to guide a ‘conversation’ with the teacher.

4.4   Refining the final methodology design

This section describes the changes made to the final methodology as a result of the second methodology pilot. The main changes are documented here, beginning with introducing the research to children, then changes to the observations, the design of the questionnaires and a refining of the interviews: conversations with children and interviews with the teacher. The aim of the changes was to create a more focussed approach and to organise more accurate cross-referencing of the qualitative data, to further reduce the potential for error and ensure validity. The final form of the interview schedules for the 3 sources of data is given in detail (see Appendix 1). The final methodology was to include the Report, forming a log of results (see chapter 5), with an Evaluation of the data.

4.4.1    Introducing the research to children

It was observed that, following the initial introduction, children spent time off-task asking questions about what the researcher was doing and what was expected of them, and only then appeared to feel confident enough to try using the mouse and make mistakes. It was therefore proposed that in the final version of the methodology for the main study, key statements should be made by the researcher during his introduction to the whole class that, as far as possible, addressed these concerns. It was also recognised to be essential to place the researcher as a passive observer in the mind of the class. However, interventions requested by children were to be allowed and recorded.

The Introduction Script

‘Now I have brought along a computer program that I’d like you all to try. It is not a test for each of you. It is not a test and you don’t have to do well or get the right answer. The reason I am here is to find out how you use the new software program on the computer. I want you to try the computer program out. Just try it out. Say what you like and what you don’t like about it and what I’d like you to do is to give it a go and explore it and find out and see what you can do. I’m not going to say anything else other than it is about music; it is learning about music. My job is just to watch you. You can use the software on your own without me and I have asked—[the teacher] to have a watch what you are doing so I can ask [her] afterwards what [she] thinks about the software program, not about you.’

4.4.2    Refinement of the classroom observations

The Descriptive Record Method (DRM) employed to gain data from observations of children using the software was simplified. Attempting to identify each category – what pupils are doing, what they are learning and why they are doing it – proved too complicated during the observations. The problem was solved by leaving space in the form for note-taking and transferring the task of analysis until later, aided by the audio recording.

The reason for delaying analysis until after the event was that, during the pilot, too wide a range of different events took place while children were working on the computer to attempt any categorisation during the observation period. However, the grouping of observation categories while collecting data during the pilot suggested to the researcher 3 areas for detailed study:

  • The quality of the instructions.
  • The quality of the screen design.
  • The user’s experience of using the mouse to manipulate objects.

As a result the focus of the study was defined as the quality of interface interaction. Therefore the ‘What is being learnt?’ category was removed from the observation sheet.

In an attempt to overcome the observation issues that had been identified, consideration was given to recording events related to the quality of interface interaction in detail using a systematic recording method, as used, for example, by McDevitt (1994). However, the systematic recording method was rejected as inappropriate to the study because it required too much ‘head-down’ attention to the sheet tick boxes. Instead, the observer needed to be ‘head-up’, watching children in a range of new forms of interaction created by the ability to manipulate elements on the screen using a mouse. A more open-ended observation method was therefore needed, allowing the observer to be receptive to as yet undefined interactive processes and patterns that might be taking place. For all these reasons the Descriptive Record Method was finally chosen, because it allowed the observer to be open to, and cope with, the range of undefined new interactions between children and computers. Instead of tick boxes identifying a range of predefined activities, spaces were provided for recording events taking place when children used each interface, including the opening screen, the six activities, the song, and the story.

The Descriptive Record Method gave the researcher flexibility to consider, after the event, the context, the sequences and the meanings of naturally occurring events, such as the starting and finishing points of recording sequences. This was carried out by counting events, noting patterns and themes, clustering, dividing events into their smaller components, subsuming particulars into abstract categories of generalisation, establishing conceptual or theoretical coherence and making general observations. Guidance for these techniques was provided by Simpson and Tusin (1995) and Munn and Drever (1995). The Descriptive Record Method was also the closest formal methodology to the reporting approach used by the researcher during his professional classroom experience evaluating radio programmes, referred to in chapter 1. The researcher was well-practised in lengthy concentration and in focussed, detailed observation of events with children.

During the second pilot children were observed to run through the software once and then return to work through it again. The reason for their behaviour appeared to be that the first run through consisted of children finding their way about. It was at this point that the significance of a key feature of children’s general learning was also observed: children need to test things out and talk while engaged in manipulative tasks, in this case solving mouse manipulation problems. The second run through involved more thorough listening to the instructions and a focus on the activities. The final improved observation sheet allowed for the recording of these events.

All the changes to the observation schedule focussed the researcher’s role as an efficient, impartial and detached observer, not distracted by complex counting of events, and therefore enhanced the researcher’s ability to concentrate on events as they occurred. The quality of the observation method was also significantly improved, not only by the revised observation form, but also by the new digital audio technology and database software described in more detail below.

4.4.3    Question schedule for individual interviews with children

The schedule of questions for children was refined in the light of the experience gained. First, questions were organised in a similar sequence and focus to the observation schedule: opening screen, activities, story and song. The questions in each sequence were then organised to obtain data on the 3 themes of the quality of the instructions, the quality of the screen design, and the user’s experience of using a mouse to manipulate objects. The interview was organised so that questioning was not confused or compromised by interventions; this was achieved by giving prompts and supplementary questions a formalised place and a clear role in the questionnaire. The appropriateness of the language for the age group was tested, requiring a rephrasing of some questions. The finalised list of questions, their grouping and focus of interest was mirrored in an adult form in the teacher interview schedule for effective cross-referencing.

Children’s answers could be recorded clearly on the minidisc. Interviews were given in front of the computer, allowing the pupil to refer to, or even demonstrate, issues. Also, as children used the computer, the audio instructions and sound effects punctuated the audio recording as useful reference points for the researcher. Video recordings would have been complex and time-consuming to carry out and analyse by comparison.

4.4.4    Interview schedule with the teacher

The role of the teacher in the study was reviewed. The original plan had been for the teacher to listen to the radio programme to gain familiarity with the material. The teacher was now only asked to observe pupils using the software during the day, without intervention. This was intended to replicate the minimum involvement of a teacher dealing with computers in a busy classroom. The change focussed the study on the interaction of children around the computer, not the teacher’s role. The structure and content of the question schedule were based on similar questions to those asked of the children. There was an additional section seeking data on the use of the teachers’ control panel (see Appendix 1).

4.4.5    Final adjustments to the research methodology for the main study

There were a number of further adjustments made to the data collection schedules for the main study. They are listed here:

  • A section initially designed for teachers was extended to the children’s schedule using appropriate language so more information could be cross-referenced.
  • The interview schedules were further piloted with the assistance of a trainee teacher to ascertain the clarity of the questions.
  • The phrasing of the questions was refined to avoid potential for ambiguity.
  • The teacher’s interview schedule was sent to teachers beforehand so that they had time to assimilate their role and the information required prior to the day of the visit.
  • The children’s interview schedule was reordered to make a more logical sequence, i.e. asking children to describe what was happening before asking qualitative questions about feelings and preferences, and complicated phrasing was amended using simpler constructions. Only a few changes had to be made as the questions had already been substantially tested.
  • Questions about the opening screen were added – the result of an oversight in the original scheme.
  • The researcher undertook a final check of the language in the cross-referencing of questions. Questions in the open-ended format were rephrased and key words given prominence in the sentence structure to ensure accuracy and clarity.

4.5    Technical issues remaining

There were no changes made to the software after the second pilot. The possibility of further software changes was identified, but these could not be made because of programming costs and time constraints. These were: to stop the moons disappearing momentarily in Activity 6, to make the listeners appear to zoom into the story as it starts (past Granny and through her window), and to allow children to leave the story at any time. Activity 6 might have been better broken up into separate sections to make it more focussed. Activity 3 could have been provided with a repeat button so children could reset the activity. The star words could have been revealed as letters as well as audio in Activity 3 to help recognition of terms. Revealing the beater as Activity 4 began, so that children had a clearer idea of what could be accomplished, was too technically complicated to achieve, and the limitation was recognised.

The next section describes the new digital techniques used to enhance the quality of the research data and the methodology by improving the technical aspects and dependability of data collection and data analysis.

4.6    Use of the digital audio recorder

The new Sony Minidisc digital audio recorder had just been released during the period of the methodology study. The researcher’s experience in radio production led to an interest in exploring the potential of digital audio in assessing the Research Tool. The audio solution addressed an issue discovered in the first methodology pilot: the quantity of information was too great to record on paper by hand as it happened. The use of a video recorder was rejected because of the complexity and expense of recording the screen and children at the same time, and because of the sound quality.

The digital audio recording experiments in the first pilot were first intended to act as an aide-mémoire to the observations using the ability of the digital technology to record date, start and finish times on the minidisc. The original plan was for the recordings not to be transcribed in their entirety but just used to check detail and times and to guide a full writing up of the research session afterwards.

It was noted that Tizard and Hughes (1985) found taped conversations, as a record for analysis of interactions, inadequate without notes. The researcher found that digital technology could assist in improving this process. The second methodology pilot firmly established that the new digital audio recorder allowed questions and answers in the child and teacher interview schedules to be recorded clearly and effectively. Also, recordings to support the observation schedule could be made effectively by using stereo imaging to identify individual children speaking in a group of 3, with the aid of a simple diagram added to the observation schedule. This was possible because the digital recorder worked close to the computer without the interference to which analogue machines are prone. The excellent quality of the digital recordings also captured the mouse clicks and, more importantly, the audio instructions from the software. The effectiveness of the combined results gave the researcher a clear recall reference with which to analyse the interactions, track where children were in the software, and establish quickly and easily what children were doing while they were making comments. The researcher could concentrate on observing the issues arising without being distracted by making detailed notes about every aspect of the interactions.

The researcher also discovered that the digital recorder allowed much faster and more efficient replay of the recordings. The digital technology made it possible for the researcher not just to mark relevant passages digitally without stopping the playback, but also to ‘rewind’ almost instantly, far quicker and easier than was previously possible with the tape recorders of Tizard and Hughes’s experience. The digital recorder was small, so children were not distracted by its use. Finally, the 6 hours of continuous, uninterrupted recording time, without fear of the minidisc or battery running out, also allowed the researcher to focus on the observations without distraction.

4.7    Use of a computer database

The researcher tested the use of a computer database in combination with the digital recorder, rather than using a conventional file card reference method to analyse the findings. The new digital audio recorder proved to be just as effective when used in conjunction with the database. The quality of sound and the speed and accuracy of the digital rewind function facilitated easy transcription of answers to questions directly into the computer. It therefore became a realistic option to transcribe answers to all questions in detail straight into the FileMaker Pro database software. Examples of the electronic file card are indicated below (Figures 4.1, 4.2 and 4.3).

Figure 4.1: Example: Children’s Interview Schedule FileMaker Database. (Howarth, 1997)

Figure 4.2: Example: Teacher’s Interview Schedule FileMaker Database, showing expanding pop-up text boxes for automatically accommodating lengthy answers. (Howarth, 1997)

Figure 4.3: Example: Observation Schedule FileMaker Database. (Howarth, 1997)

FileMaker was used to create 3 separate databases, one for each of the 3 sources of the triangulation research method: observations, children’s interviews, and teacher’s interviews. Each question, with its completely transcribed answer, formed one separate file card (a sketch of this record structure follows the list below). Each file card also contained separate searchable fields for:

  • Unique reference number
  • Type of schedule (observation, teacher interview, child interview)
  • Category of questions (screen design, instructions and mouse movements)
  • Category of interface (eight activity titles)
  • Notes under each of the 3 categories in the observation schedule
  • School name
  • Interviewee’s name
  • Date
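
As an illustration only, one such file card might be modelled as the following minimal sketch; the field names, types and example content are hypothetical, since the study used FileMaker Pro record cards rather than program code.

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative model of one FileMaker 'file card': a single
# question with its complete transcribed answer, plus the separate
# searchable fields listed above.
@dataclass
class FileCard:
    ref: int                # unique reference number
    schedule: str           # 'observation', 'teacher interview' or 'child interview'
    question_category: str  # 'screen design', 'instructions' or 'mouse movements'
    interface: str          # one of the eight activity titles
    school: str
    interviewee: str
    date: str
    question: str
    answer: str             # the complete transcription
    observation_notes: dict = field(default_factory=dict)  # notes under the 3 observation categories

# An invented example card, for illustration only.
example = FileCard(
    ref=1,
    schedule="child interview",
    question_category="instructions",
    interface="Activity 3",
    school="JMI school",
    interviewee="Child A",
    date="1996-06-14",
    question="Did the voice tell you what to do?",
    answer="Yes, it said to move the star onto the moon.",
)
```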

The database was tested as to whether relevant questions and categories from the observations, teacher interviews and child interviews could be manually combined, cross-referenced, grouped, compared and analysed using printed-out data from the second methodology pilot. The test exceeded expectations in terms of speed and accuracy of data collation and the ability to make comparisons of data. For example, the electronic search facility in the software was used to print out collations for manual comparison, counting and interpretation in the following kinds of data print-outs (illustrated in the sketch after the list):

  • One question asked about one interface from one schedule from one school only.
  • One question asked about one interface from one schedule from several schools.
  • One question asked about one interface gathered from the two interview schedules, and matched observations (all data sources).
  • Individual word search of positive and negative words relating to each interface, per school, per schedule, and across all schedules.
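
The kinds of collation listed above can be expressed, purely for illustration, as simple filters over a collection of such cards. The sketch below assumes hypothetical field names, data and word lists; in practice these searches were carried out with FileMaker Pro’s own search facility and the results printed out for manual comparison.

```python
# Illustrative sketch only: the collations described above expressed as
# simple filters over a list of card dictionaries.

def collate(cards, **criteria):
    """All cards whose fields equal the requested values, e.g. one question
    about one interface from one schedule at one school."""
    return [c for c in cards if all(c.get(k) == v for k, v in criteria.items())]

def word_search(cards, words):
    """Cards whose transcribed answer contains any of the given positive or
    negative words (case-insensitive)."""
    lowered = [w.lower() for w in words]
    return [c for c in cards if any(w in c["answer"].lower() for w in lowered)]

# Invented example data, for illustration only.
cards = [
    {"schedule": "child interview", "interface": "Activity 3", "school": "School 1",
     "question": "Did the voice tell you what to do?", "answer": "Yes, it was easy and fun."},
    {"schedule": "teacher interview", "interface": "Activity 3", "school": "School 1",
     "question": "Did the voice tell them what to do?", "answer": "The instructions were clear."},
]

# One question about one interface from one schedule from one school only.
print(collate(cards, schedule="child interview", interface="Activity 3", school="School 1"))

# Word search of positive words across all schedules.
print(word_search(cards, ["easy", "fun", "clear"]))
```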

The text of collated responses could be selected by the researcher and easily transferred from FileMaker Pro into Microsoft Word. Once in a Word text file, the copy could be available for closer analysis, reflection and selection for verbatim quotation in the thesis document.

A newer version of the software was released during this period and allowed for individual words to be searched within each field. For example, it was possible to collate in one new print-out file, all responses using the same positive or negative word in answers about one interface, from all children, a teacher’s response and the researcher’s observations of the same interface.

Comparing and contrasting, grouping and counting events using the print-outs made the process of analysing the responses more accurate and efficient, with a smaller potential for human error, and was achieved more quickly and with less fatigue than with conventional methods. For example, there was no need to search manually through pages of transcriptions to find the required data or to manage a card index system. Exact transcriptions could be compared next to each other on the print-outs. The process affirmed the validation requirements of the triangulation method, so the FileMaker database software was chosen for the same techniques of compilation and analysis of data for the main study.

4.8   Summary of changes for the final methodology design

The iterative refinements made to the methodology of the qualitative research through the 3 pilot stages are summarised as follows:

1.      Selecting the Descriptive Record Method in preference to the Systematic Recording Method.

2.      Refocussing the areas of research to four main components for study:

  • The design and screen layout.
  • The instructions.
  • Actions involving the movement of objects with the mouse.
  • A teachers’ control panel.

3.      Refining the methods for recording and interpreting data accurately:

         1.  Reorganising and rephrasing questions.

         2.  Redesigning questionnaires.

         3. Introducing digital audio technology and a computer database system.

4.      Ensuring validity of the triangulation process:

         1.  Confirming the methods chosen.

         2.  Confirming the coherence of methods chosen. 

         3.  Showing how the data is to be used.

4.9   Defining the research question

The process of the development of the Research Tool, the literature review and the refining of the methodology led to a first draft of the research question, reflecting the emerging importance of exploring a range of methods for achieving greater depth of engagement, including the physical manipulation of objects using the mouse:

What are the elements of movement in the Research Tool interface design that improve the quality of engagement?

The final version became:

What are the design features required to improve the quality of computer interface interaction for 5 to 7-year-old children?

A more general brief, which substituted ‘design features’ for ‘elements of movement’ and ‘interaction’ for ‘engagement’, was a device to allow a more precise study of the nature of the engagement, using the data from the 3 components of interaction – instructions, screen design, and mouse manipulation – together with the value of a teachers’ control panel. In particular, the decision to replace ‘engagement’ with ‘interaction’ was based on a view that engagement had a range of undefined features with a qualitative value. Interaction, on the other hand, is a descriptive noun in common usage in the field of study and its meaning could be illuminated precisely. The change allowed a more precise understanding of interaction by subjecting the evidence arising from the four main components to the ten criteria.

4.10    The criteria informing the main research question

The criteria on which to base the evaluation of the four components comprising the quality of interaction were refined by the methodology piloting. The criteria are indicated in Table 4.1 below:

Section A: Standard Interface Design Practice

1.      Are users clear about what task they can do with the interface?

2.      Are users clear how to make the interface work?

3.      Are users clear about what is happening when they use the interface?

4.      Do users find it easy to navigate around the software product?

Section B: Innovative Design Features of the Pilot Software

5.    Is the interface activity an enjoyable and absorbing educational experience?

6.    Does the interface activity engage the user in concentrated activity through movement?

7.    Is the control of movement of an object easy for small hands to achieve?

8.    Do small changes in the design of screen activities stimulate involvement of the user?

9.    Does the interface have multi-functionality within an activity creating flexibility that enhances the quality of engagement, but does not cause confusion?

10.  Is the totality of interface activities in a product capable of flexible organisation by the teacher to facilitate learning?

Table 4.1: Criteria informing the main research question.

4.11    Summary

The piloting process resulted in a main study with a tried and tested methodology. The methodology produced data to inform a set of clearly defined criteria to answer the refined main research question.