Nerilee has a background as a teacher who later moved into the university sector. She used "grounded theory methodology" for her work. After extensive research she concluded that the central issue in students' views of assessment was frustration. This frustration arises from differing views of assessment between students and teachers, and between different students.
Several examples of areas of misunderstanding and frustration were given. The first was assignment deadlines which have no rationale and are not consistently enforced. Another example was vague definitions of word counts for work. These seemed to me not to be fundamental issues with assessment, but simply examples of poor assessment design. Two ways to overcome this were better training for individual teachers and the use of standards for assessment within and between institutions.
Nerilee presented a theoretical framework showing what the student takes into account in their response to assessment. One comment was that some students will check what the assessment is before enrolling and use this to help decide which course to do. To me this seems a statement of the obvious. As a student I avoid courses which have an end-of-semester examination as the primary form of assessment (and as a teacher I think this is not good for learning). But apparently many course designers do not realise that assessment is very important to students when choosing courses.
Factors in students' views of assessment which Nerilee found included the teacher's perceived skill in teaching and the extent to which they are seen to care about students. One example was a tutor who does not moderate the mature-age students' contributions, so the younger ones do not get a chance to speak. This is not an issue for my tutorials, which are online and so everyone can talk (and behind the scenes I provide the hesitant students with individual encouragement to post). Also I provide an individual response to every student every week, which shows the tutor is taking an interest in the student as an individual.
One useful reminder was that not all students understand what the assessment process is. In my view the use of templates, and particularly of a Learning Management System (such as Moodle), can help with this. With an ad hoc, paper-based approach, each teacher prepares their own assessment items and so it is easy to have inconsistency across an institution. If templates with standard wording are used, this is much more difficult. It also helps if an LMS is used which links to the same assessment guidelines from every course.
While Nerilee concentrated on the students' view of assessment, the suggestions presented could equally help make assessment more palatable to teachers. Assessment is mostly seen by teachers as a drain on time and resources, keeping them from the real teaching. In my view this is a misguided approach. Assessment is an essential part of learning (in one case I enrolled in a course with no assessment and found it frustrating). By integrating assessment into the way they teach, staff can reduce their workload and also remove many of the day-to-day annoyances of students grumbling about assessment problems.
Before Nerilee's presentation, Professor John Dearn talked about the new policy on assessment he is developing for the ANU. It is unfortunate that this did not follow Nerilee's talk, as her findings would have informed the issues.
Professor Dearn mentioned the university had dozens of policies mentioning "assessment" (I found 254). He discussed the difficulties of formulating a policy for an institution, given differences between disciplines on what assessment is, and the link between policy and practice. In my view, one way a university could better link policy and practice would be by integrating the teaching of teaching and learning. The university currently has separate units dealing with teaching teachers, helping students with learning, and provision of learning technology. Institutions such as the University of South Australia (which I visited last week) have a more integrated approach.
Professor Dearn then listed some issues:
- Hard to write learning outcomes and align them with teaching strategies: Students need to see a link between the assessment and what the course is about. Professor Dearn mentioned that students get cynical where the assessment is an examination at the end. To me the solution to this is obvious: stop putting so much assessment at the end of courses and use forms of assessment based on simulations of real-world experience, not paper-based tests. I now use assessment every week in my courses (starting from week one) and have mostly given up using end-of-semester examinations. Instead I get the students to do what they are being trained to do and assess that.
- Feedback for formative assessment: Students complain about the lack of feedback. Professor Dearn said that staff are concerned that early feedback in a course increases the workload. To me the answer is obvious: include regular small assessment items which are easy to mark but do contribute to the final result. The staff time needed to do this can be provided by eliminating traditional lectures, which have been shown not to be useful for learning, are not popular with students and waste resources.
- Blind marking: It is suggested marking should be blind, so that staff do not know which student's work is being marked. This should be reasonably easy to do using a Learning Management System, where submissions can be made electronically and the system can keep track of which assignment is from whom, without showing that information to the assessor. An interesting issue not raised by Professor Dearn is whether assessment should be double-blind: that is, should the student not know which staff member did the marking, so that staff can give full and frank comments (as is done in the reviewing of academic papers for publication).
- Marks for attendance: The university currently allows marks for attendance. Professor Dearn appeared to have concerns about this practice, which in my view is not a good idea. There is a very simple alternative: provide a small mark for a small assessment item carried out after a session. The student is then marked not for attendance, but on what they learned from attending. This encourages the students to attend, and also helps them identify what they need to work on more. This can be easily done with the LMS and I use it routinely.
- Normative marking: Should students be assessed on objective criteria, rather than according to a marking distribution for the class? In my view a bit of both can be used. Assessment can be done objectively and then double-checked to see how the student's result compares with others in that class, with the same student's results in other classes, and with students in the past. The University's Research School of Computer Science has a sophisticated locally developed computer system, which is used by examination committees to compare courses and students, to ensure consistency of marking. This is a process I was skeptical of (and a little afraid of) until I participated in it and saw the system in use in a collegiate environment. This is something which could be added to the LMS to improve results.
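The blind-marking arrangement described above, where the LMS tracks which submission is from whom without showing that to the assessor, could be sketched roughly as follows. This is a minimal illustration only, with invented class and method names; it is not how Moodle or any particular LMS actually implements the feature.

```python
# Minimal sketch of blind marking: each submission gets an opaque token,
# the marker sees only tokens, and only the system can map marks back to
# students. Double-blind marking would similarly hide the marker's
# identity from the student. All names here are hypothetical.
import secrets

class BlindMarkingQueue:
    def __init__(self):
        self._student_by_token = {}   # token -> student id (hidden from markers)
        self._marks = {}              # token -> mark

    def submit(self, student_id, work):
        """Accept a submission and return the opaque token the marker sees."""
        token = secrets.token_hex(4)
        self._student_by_token[token] = student_id
        return token, work

    def record_mark(self, token, mark):
        """The marker records a mark against the token, never the student."""
        self._marks[token] = mark

    def results(self):
        """Only the system re-identifies students when releasing results."""
        return {self._student_by_token[t]: m for t, m in self._marks.items()}
```

For example, a submission from student "u1234567" would be shown to the assessor only as its random token, and the mark recorded against that token is reconnected to the student when results are released.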
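The "double check" step in the normative-marking point, where objectively awarded marks are then compared across courses, could be approximated like this. The examination committee's actual system is far more sophisticated; this sketch, with invented names and an arbitrary threshold, only shows the basic idea of flagging courses whose average mark drifts unusually far from the rest.

```python
# Hypothetical sketch: marks are awarded on objective criteria first,
# then course averages are compared so a committee can spot courses
# whose marking has drifted. The threshold of one standard deviation
# is an arbitrary choice for illustration.
from statistics import mean, stdev

def flag_outlier_courses(course_marks, threshold=1.0):
    """Return courses whose mean mark is unusually far from the
    mean of the course means."""
    means = {course: mean(marks) for course, marks in course_marks.items()}
    overall = mean(means.values())
    spread = stdev(means.values())
    return [course for course, m in means.items()
            if abs(m - overall) > threshold * spread]
```

A flagged course is not necessarily wrongly marked; as in the collegiate process described above, it is simply a prompt for the committee to look more closely before results are finalised.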