A Portfolio for Human Computer Interaction Design

 

True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.
Winston Churchill (1874 - 1965)

Evaluation
 
 

Evaluation of Human Computer Interaction Design (HCID)


Nowadays, in Higher Education everything is evaluated, including the institution itself.  This is quite right and proper, and this section looks at evaluation on three levels: at an institutional or course level (both academically and professionally), at a staff or personal level in relation to the HCID module, and at a student level in relation to the module.

Institutional Course Evaluation

The institutional or course level of evaluation is mainly concerned with approval for validation of a complete degree or programme.  The degrees/programmes in which the HCID module is located have already been described fully in the Context section.  These programmes were originally a BSc Business Information Technology with a placement element and a BSc Business Information Systems (non-placement), which were validated in 2001 together with the BSc Computer Studies.  I was involved with this revalidation process and devised the HCID module.  It was thought at the time that the module, although shared across these degrees, should be taught separately: to the Computer Studies degree in the first semester and to the BIT degrees in the second semester.  This was mainly because of timetabling constraints rather than for any pedagogic concern.

The degrees were then revalidated in 2004 as programmes rather than degrees, and additional degrees were added to each programme.  From this point, the HCID module was taught uniformly to all students on these programmes in the first semester rather than separately to different programmes in separate semesters.  At this revalidation, the old-style sandwich element of the placement BSc BIT degree (where students are allowed to do a year's industrial placement) was modified and approval was given for a placement credit.  This means that instead of receiving a Certificate of Industrial Placement the student receives credit points for his/her placement.  From a professional standpoint, the two programmes were approved for British Computer Society accreditation in 2004.  In addition to this, the programmes have had an Annual Programme Review, which has now been replaced by a Periodic Academic Review (PAR).  The purpose of the PAR is to evaluate, in a self-critical and developmental manner, the continuing validity, relevance and operational effectiveness of an area of provision such as taught award-bearing programmes.

Staff or Personal Evaluation in relation to the module

Quite rightly, evaluation is considered an important aspect of teaching and learning, and in order to meet the aims of the module I expect students to evaluate me and the module, and I expect to evaluate the module itself.  In order to meet my objectives within HCID, I wish to see whether students are engaging with the various HCID activities, and one way of gauging this is for them to provide feedback both formatively throughout the module (either verbally in small group sessions or more formally via a staff-student forum led by their year tutor) and more summatively towards the end of the module.  The summative feedback is gathered via an on-line Likert-style module questionnaire, provided through the University portal, which has the benefit of offering an effective statistical evaluation.  It also allows students to add their own comments on the module they are evaluating.  This was previously done on paper but has been done electronically for the last two years.  The paper-based version meant that students could be encouraged to complete the questionnaire immediately following distribution, whereas it is less easy to encourage students to complete the electronic version unless a lecturer is in a computer lab with them.  However, the electronic version allows for more rapid collection of data and statistical analysis, leading to more rapid feedback.  An example of the statistical evaluation, including an analysis of student performance, is shown in Figure 16.

Fig. 16 Artefact. Statistical analysis of evaluation of HCID

This is an example of a statistical breakdown of the module in July 2007, because this year's evaluation feedback has not yet been completed.  The analysis reveals that only a small number of students completed the questionnaire, but this was the first year of on-line operation and these level 2 students were used to completing a paper questionnaire.  This year I have made sure that all students complete the on-line version while they are in the lab.  The Likert scale on this questionnaire runs from 1 (bad) to 5 (good) so, although it is difficult to formulate any conclusions based on such a low completion rate, I was happy with a mean score of 3.88 (approx. 78%) for students being clear what they were meant to be learning in this unit, and 68% found the teaching effective (similarly, the overall satisfaction level was 68%).  A significant number (76%) also indicated that the assessment allowed them to demonstrate what they had understood.  The overall mean student performance for the unit stood at 59%, with significant numbers in the equivalent 2:1 range and five in the equivalent of the first class honours category.  The number of positive student comments was also encouraging.  If I compare these statistics with the work which I marked, then I feel that a significant number of students are reaching the goal of a good understanding of usability which will empower them in this area.  There are still areas of the module which could be modified, in particular the amount of time spent on user-centric analysis, which could be increased.  However, I am generally happy with this evaluation.  The results of student work are discussed below but, based on these statistics and the work that I have formatively assessed, I see no reason why a large proportion of students cannot build on this area in future modules and their final year.
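For readers interested in how the percentages above relate to the Likert means: the figures quoted are simply the 1-to-5 means rescaled against the top of the scale.  The short Python sketch below is purely illustrative; the likert_to_percent helper is hypothetical rather than part of the portal's reporting tools, and it assumes the straightforward mean/5 x 100 conversion.

def likert_to_percent(mean_score, scale_max=5):
    # Rescale a 1..scale_max Likert mean to a whole-number percentage,
    # assuming the quoted percentage is simply mean / scale_max * 100.
    return round(mean_score / scale_max * 100)

# Example: the 'clear what they were meant to be learning' item.
print(likert_to_percent(3.88))  # 78, i.e. the approx. 78% quoted above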

Finally, as part of my evaluation of the module itself and in response to the above statistics, a module report was completed.  This is reproduced in its entirety as Figure 17 Artefact on the Module Report page.  In addition to the report and the university-wide questionnaire described above, I produced my own questionnaire, which is another source of evaluation and which requests feedback on the module.  I have conducted this survey for over five years now as part of on-going research into the amount of cognitive psychology (and human factors analysis) which computing students consider appropriate and effective in a module on a computing/IT related course.  This is reproduced in its entirety as Figure 18 on the Psychology Research Questionnaire page.

Student Evaluation in Relation to the Module

In the HCID module my overriding aim is for students to have a full understanding of usability in order to empower them within this area (cf. Aims & Philosophy).  In order to see whether they have engaged well enough to meet the aims previously described, the students are assessed both formatively and summatively.  The important aspects of this are described in the Assessment section, but I will now focus on the actual grading or evaluation of students and, in a sense, how well the module worked for them.  This has already been partly done in the description of the statistical analysis artefact shown in Figure 16.  My own statistics for this year's unit suggest that this year's results are similar to last year's, with a mean overall average for the module of 59% and a similar number of students (7%) scoring over 70.

Students on the HCID module complete an in-course assignment which is assessed and graded.  It is a team-based piece of work and has been described fully in the Assessment section.  Figure 19 shows the evaluation of a team's work for the summative assignment.

Fig. 19 Artefact. Evaluation of Student Summative Assignment (part of)

Both of the above artefacts (Fig. 19) show parts of an evaluation sheet for a team of four students who scored 63 and did reasonably well in most areas of this assignment.  The upper image shows part of the final feedback sheet which a student would receive, and the lower image is an initial rough draft set of marks.  Both of these sheets are derived from a similar marking sheet for a demonstration, which is based on a team's presentation/demonstration of their work and designed system.  All students should be present at the demonstration and would expect to be asked questions on what they have done in order to ascertain their individual contribution (and therefore whether they merit a lower or higher mark than the other members of the team).  This particular team did not recognise some of the main user tasks and constructed an HTA chart which was confusing and inaccurate, though other areas of their work had major strengths.

I have found that this type of evaluation, with pre-printed feedback and comments, is a positive aspect which students welcome.  In my view it saves time and acts as a checklist for the marker while still allowing brief comments, which the student appreciates.  The demonstration aspect of the assignment is also a positive feature, but it has some weaknesses in that it does not allow enough time to elicit points from each member of the team.  In future, this assignment will be modified to allow assessment and further individual evaluation of team members via an individual critical review or reflection on the work the individual has contributed.  Alternatively (or possibly in addition), I would like to experiment with an extension of the demonstration with a view to formalising it into a longer presentation in which team members present key aspects of their work.

 

Quote attributed to Churchill during the period of the Second World War.

       
David Cox