
Tuesday, July 09, 2013

Higher Education Whisperer Helping Improve University Course Design

For the second year running I received high student feedback scores for the course "ICT Sustainability" I designed and run at the Australian National University. Other lecturers have asked how I did this and for tips on improving their courses. I didn't think I was doing anything exceptional with course design or assessment. But then I realized I had spent a year studying for a Graduate Certificate in Higher Education and four years refining on-line course design and delivery. In that time I guess I learned a thing or two. So I thought I would offer some tips on how to improve courses.

The title "Higher Education Whisperer" is inspired by Dr Inger Mewburn's "Thesis Whisperer" blog. The term whispering is from horse whispering: training based on a rapport with the animal, observing behavior and having respect. I have found the same applies to students: you need to see things from the student's point of view, look at what they actually do and treat them as people. A teacher who assumes their students are stupid, lazy and dishonest should not be surprised if they do not inspire effort from the students and get poor feedback scores.

The best way I have found to see things from the student's perspective is to enroll in a course. The obvious course to enroll in is one on teaching, where you can look at the research available on student behavior. It can be difficult for a teacher to come to terms with student behavior, even when confronted by the evidence. The most obvious case is with assessment. Assessment is very important to students, which should be obvious to teachers, but many design courses with assessment as an afterthought.

The first thing to look at to improve a course is the assessment. In a recent case I looked at the assessment for a course which was not popular with students. The first obvious problem was that there was a 60% final examination. Such examinations are very stressful for students (and for the staff setting them), not a useful aid to learning and not an effective way to assess what students have learned. So I suggested reducing the exam to 30%.

Also I suggested moving more of the assessment to the first half of the course (up from 20% to 44%). Staff complain that students don't study until just before the final exam, but if you design the assessment that way, what can you expect? Increasing an early assignment from 7% to 20% provided more reward for the student's effort and the opportunity for this to be a learning exercise, not just a final assessment. For simplicity I suggested also increasing the second assignment from 13% to 20% to match the first.

The course already had 15% allocated for small weekly assessment items. This is a good way to keep students working and to have them pay attention to the feedback provided (as it has a mark attached). But the assessment scheme confusingly counted only the best 10 of 12 items. I suggested a simple sum of all 12 weeks, increased to 24% (2% per week).

Also it is important that students get their mark and feedback promptly each week for the previous week. This is particularly important early in a course so that students who are not doing well (or not doing anything) get the message: "Shape up or ship out".

The final suggested scheme was:
  1. Weekly Work: 24% (+9%) 
  2. Assignment 1: 20% (+13%)
  3. Mid Semester Exam: 10% (+5%)
  4. Assignment 2: 20% (+7%)
  5. Final Exam: 26% (-34%)
Obviously the assessment scheme could be revised further, but these changes should greatly reduce the stress on students (and on staff).
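
As a quick check on a scheme like this, the weights can be added up in a few lines of code. The sketch below (in Python) uses the percentages quoted above; the split between halves of the semester is my own assumption about which items fall before the mid-point, so the figures will not exactly match the 20% and 44% quoted earlier.

    # Sketch: sanity-check the old and new assessment weightings.
    # Percentages are those quoted above; which items count as "first half
    # of semester" is my assumption (weekly work pro-rated).
    old = {"Weekly Work": 15, "Assignment 1": 7, "Mid Semester Exam": 5,
           "Assignment 2": 13, "Final Exam": 60}
    new = {"Weekly Work": 24, "Assignment 1": 20, "Mid Semester Exam": 10,
           "Assignment 2": 20, "Final Exam": 26}

    for name, scheme in (("old", old), ("new", new)):
        assert sum(scheme.values()) == 100, f"{name} scheme does not total 100%"
        # Count half the weekly work, Assignment 1 and the mid semester exam
        # as falling in the first half of the course.
        first_half = (scheme["Weekly Work"] / 2 + scheme["Assignment 1"]
                      + scheme["Mid Semester Exam"])
        print(f"{name} scheme: about {first_half:.0f}% assessed in the first half")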

Another change is to make deadlines firm. Teachers make the mistake of thinking that if they provide "flexibility" with assessment it will be appreciated by students. But rubbery deadlines cause confusion, stress and a perception of unfairness. If there is a deadline for an assignment, then make it firm: students who do not submit on time get zero marks. Obviously there needs to be provision for special circumstances, such as illness, but this should be exceptional, not routine.

Particularly when training professionals, on whom the lives of the community depend (such as doctors, engineers and computer programmers), deadlines matter. A professional who does not learn to deliver work on time is a danger to the community.

Saturday, April 13, 2013

MOOCs with Books Talk in Singapore 24 April

I will be giving a free talk on "MOOCs with Books" in Singapore on 24 April 2013, hosted by Pearson. Anyone interested in attending, please contact Teck Chuan, Ang at Pearson Singapore. This is a short version of my talk "Synchronisation of Large Scale Asynchronous e-Learning" for the 8th International Conference on Computer Science & Education (ICCSE 2013), 28 April 2013, in Colombo.

MOOCs with Books

Tom Worthington will discuss the implications of Massive Open Online Courses (MOOCs) for traditional university education. He will outline why he believes textbooks, in the form of eBooks, will still be key to education, be it on-line or in the classroom. Tom will illustrate this with his own globally accredited on-line course run by Australia's leading university.
  1. What is a MOOC?
  2. Some MOOC Examples
  3. Software and Training for MOOCs
  4. Implications for Universities
  5. Adapting Traditional Courses for Online Use with books
Tom Worthington is an independent certified computer consultant, and an Adjunct Lecturer in the Research School of Computer Science, College of Engineering and Computer Science at the Australian National University. He teaches ICT sustainability, the design of web sites and use of e-commerce systems. In 1999 Tom was elected a Fellow of the Australian Computer Society (ACS) for his contribution to the development of public Internet policy and he was Canberra ICT Educator of the Year for 2010. Tom is an ACS Past President, Honorary Life Member, Certified Professional and a Certified Computer Professional of the society, as well as a voting member of the Association for Computing Machinery and a member of the Institute of Electrical and Electronics Engineers.

Friday, April 12, 2013

Scalable learning with massive open online courses

Greetings from the Australian National University in Canberra, where a panel, including a Nobel Laureate, is discussing "Scalable learning: the beautiful paradox of massive open online courses (MOOCs)". The ANU joined the edX consortium a few weeks ago to develop Massive Open Online Courses (MOOCs). The first two MOOCs from ANU will be "Engaging India" and "Astrophysics", taught by Nobel Laureate Professor Schmidt. On the way to the event I picked up a copy of "The Idea of the Digital University: Ancient Traditions, Disruptive Technologies and the Battle for the Soul of Higher Education", which I had asked the ANU Library to purchase and which is very relevant to the panel's deliberations:
ANU has become the only Australian member of edX.org, the online learning enterprise founded by Harvard University and the Massachusetts Institute of Technology with the aim of providing free education to one billion people worldwide within 10 years.
In this seminar, we will hear from the two pairs of academics behind the first ANUx courses:  Professor Brian Schmidt and  Dr Paul Francis, who will be teaching Astrophysics; and Dr McComas Taylor and Dr Peter Friedlander, whose MOOC is called Engaging India.
We will also hear from Dr Lyndsay Agans, Convenor of the CAP Digital Learning Project, who will discuss some of the organisational thinking and approach behind the development of Engaging India. Lyndsay will open a dialogue on the 'paradox' of distance learning and mass scale as one that actually allows for a more personalised design of education for individual needs, and in particular, the implications of the evolving educational theory of teaching in an open and massive online environment.
One analogy used by the panel for the early days of MOOCs is that it is like "flying a plane while still designing it". As someone with a professional technical background, this analogy is troubling: if I subject my clients to an untested system I can expect to appear before an ethics committee, or a court. Similar ethical principles should apply to educators. When designing courses, educators need to confirm they are competent to do so and are using proven techniques: to do otherwise is unethical and may be unlawful. What is reasonable to do is create MOOCs using approaches from decades of previous work on e-learning. The way the MOOC runs can be recorded for research purposes, provided the participants are informed and give permission.

The issue of the purpose of MOOCs was also raised in the seminar. The point was made that at present most of those undertaking MOOCs are already at least undergraduate students at a conventional university. So MOOCs are not bringing university education to a new group. The MOOCs may be a marketing exercise for universities to attract students away from other institutions, or to attract high school students to a particular discipline.

Those promoting MOOCs may have the idea of opening up education and improving the world. However, there is little evidence to suggest this will work in practice, or that MOOCs have a sustainable business model. While many claims are made for MOOCs and new technology, the reality at ANU is likely to be more conservative: based on existing courses, using proven technology and practices, not just made up on the spot.

The MOOCs are obviously not conventional courses. MOOCs will be a blend of course, research project and marketing campaign. But policies and procedures will be needed to see that, whatever they are, MOOCs are well designed, tested and run. Institutional policies and procedures may need to be adapted to apply to MOOCs. As an example, a MOOC may use the institution's standard new course proposal form, where the learning objectives, teaching techniques and assessment plan are outlined, but to this may be added a focus group, as is usually used for a marketing campaign.

It should be kept in mind that while institutions are not charging for MOOCs, they are still required to conform to various consumer protection, educational and other laws. As an example, all video must have closed captions for the deaf and the web interface must meet accessibility requirements. Also student information is subject to privacy principles, which will limit its use and may prevent the MOOC being hosted in a country which does not have comparable privacy protection.

The format of the MOOC will need to suit its purpose. One model which might suit a broad audience is that of a TV documentary series. This could start with a shot of the professor in the classroom with students, to establish their credentials as an academic. But within a few seconds we would see them in the outside world, demonstrating the topic. This does not require the professor to go to India or outer space, they could be in the library showing a  manuscript, or standing next to the university telescope.

This was an interesting event, at which the nature of education was being redefined. The audience was made up of a who's-who of educational designers and researchers in Canberra, not just from ANU. I will be discussing some of these topics in a free talk on "MOOCs with Books" in Singapore on 24 April 2013. This is a short version of my talk "Synchronisation of Large Scale Asynchronous e-Learning" for the 8th International Conference on Computer Science & Education (ICCSE 2013), 28 April 2013, in Colombo.

Wednesday, January 30, 2013

Students Cheat Less in Online Courses

In "Students Cheat More in Online Courses?", George Watson and James Sottile (Online Journal of Distance Learning Administration, 2010) report that students admit to cheating more in face to face courses than on-line ones. They suggest that this is because the students work together in the classroom and so are included to share answers. To stop the sharing of answers in on-line tests, the authors suggest having the test supervised, or better still change to other forms of assessment, such as assignments. One approach which they surprisingly do not suggest is group work. Students in the face-to-face class are working together should be seen a a positive outcome and could be encouraged by group work. I use online discussion forums as part of assessment, along with assignments.

Friday, November 16, 2012

Assessing for quality decision making at university

ANU Sciences Teaching Building
Greetings from the Australian National University Science Teaching and Learning Colloquium on Research Led Education, where Professor Geoffrey Crisp is talking on "Assessing for quality decision making at university". He began by pointing out that he had presented on the same spot in 2007, in a demountable building.

As with the 2007 presentation, Professor Crisp started with an image from a medieval painting of a lecture in a monastery. He then showed an image of a modern lecture theatre and suggested not much had changed. He argues that assessment needs to be authentic, being related to real world tasks. He pointed out that a three hour paper based examination is unlikely to relate to what the student needs to know.

Professor Crisp's 2007 talk and his book "e-Assessment Handbook" gave me the courage to decide to give up setting conventional examinations for the university courses I design. What I found surprising is that five years later there would be a need to make this case again. By now the idea that assessment needs to be aligned with the teaching and based on real world tasks should be conventional wisdom and three hour paper examinations relegated to history.

One theme emerging from the event today from several speakers is that learning is partly a socialisation process. University lecturers need to help students learn how to learn, so as to be academics and professionals.

Geoff showed his own Transforming Assessment website (which I noticed is implemented with Moodle).

Also he recommended the Sbl Interactive scenario software. This was developed at the University of Queensland for presenting a scenario to a student, who then makes a decision and the scenario displays the consequences. The software can be used for free if the developed scenarios are shared. I could not find any listed scenarios on ICT. The "Living Without Water" scenario from Engineers Without Borders shows the potential.

Tuesday, January 24, 2012

Phasing Assessment to Help the Student Learn

In "Phases of assessment" educational designer discusses dividing a course up into three phases with difference assessment to suit the development of the student's understanding. For a typical 13 week university course, the phases are:
  1. Assessment for transition: weeks 1-4, to get the students ready to study with tasks that do not contribute to the final grade (or not much).
  2. Assessment for development: from week 5, more practice assessment exercises which can contribute to final marks.
  3. Assessment for achievement: weeks 7 to 13, where the bulk of the summative assessment takes place, contributing most of the final grade.
This is a useful analysis, but spending the first four weeks of the course in transition seems a long time. In my ICT Sustainability course I do the transition in the first two weeks, mostly to see if the students can write and can cope with e-learning. Then the assessment is for development each week, to ease them into a mid semester assignment (worth 38%).

Thirteen weeks is a very long time and if the assessment for achievement was left until the end, a student would have difficulty seeing the connection with the development. Also it would make me very nervous, if most of the assessment was at the end of the course. Software developers and other project managers are trained to place "milestones" so that there are no unpleasant surprises at the end of a project. As a student I avoid courses which have large end of semester assessments (and I would not design a course that way).

Deborah refers us to "Assessment in First Year University: A model to manage transition" (Taylor, J. A. 2008).

Tuesday, January 03, 2012

Evidence Based Evaluation of University Courses

Groundwater-Smith and Cusworth (1998) point out that opinions about education courses are formed by individuals. Given the large amount of time and money invested in education by the state, parents, teachers and students, something more than opinion is needed to decide how good a course, program or university is. The educational buzzword for this is evaluation, with evidence gathering and collegiate interpretation. But in the end it comes down to opinion: admittedly the opinion of many students, teachers, practitioners, employers and others combined, but still subjective opinion. There is no objective way to evaluate a course: in the end someone has to decide what matters, based on their opinion of what is important.

Groundwater-Smith and Cusworth describe evaluation as: "... a process that allows school professionals to gather evidence in an orderly manner..." and "... well-informed judgment ...". So this is not just a matter of collecting results and adding them up.

Real Time Course Evaluation

Ideally courses are designed and tested before they are run with students. Teachers should not be using their students as involuntary experimental subjects to try out new educational material and techniques (that would be unethical). The course should first have been designed and run through some form of evaluation, such as peers reading through the material and trying out the assessment items. But some course evaluation can take place in real time as a course is run, checking if the course meets the needs of the cohort of students and if changes are needed due to circumstances.

The teacher will be looking at how students are progressing, by seeing what activities they are undertaking. For a face-to-face class this would involve looking out at the class to see how many are actively engaged. With a computer assisted class I have found I can use the Learning Management System to help with this, as I can see what students are doing. The same can be done with distance on-line students: using the system to see who is up to where. However, the temptation to check every little thing the students are doing needs to be resisted: if students get the feeling they are being continually watched, they will start to act in a way they think will keep the teacher happy (or they may rebel against the intrusion).
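
As a sketch of how this monitoring can be kept light-weight, the Python fragment below scans an activity log exported from the LMS and lists students with no activity in the past week. The file name and column names are placeholders (a real Moodle log export uses different headings), so treat this as an illustration of the idea rather than a ready-made script.

    # Sketch: list students with no LMS activity in the last seven days.
    # "course_log.csv", "user" and "time" are placeholder names; adjust to
    # whatever the LMS actually exports.
    import csv
    from datetime import datetime, timedelta

    cutoff = datetime.now() - timedelta(days=7)
    last_seen = {}

    with open("course_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            when = datetime.fromisoformat(row["time"])
            user = row["user"]
            if user not in last_seen or when > last_seen[user]:
                last_seen[user] = when

    # Note: students with no log entries at all will not appear here,
    # so cross-check against the class list.
    for user, when in sorted(last_seen.items()):
        if when < cutoff:
            print(f"{user}: no activity since {when:%d %b}")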

Assessment can also be used for evaluation of the course: if students have not done some of the formative assessment, or are having problems with it, then the fault may be with the teaching, not the students. In my e-learning courses I have weekly exercises for students and can see if there is a problem.

Link Between Assessment and Evaluation

It may seem obvious, but there is a link between assessment and evaluation in education. Some of the same theory and techniques can be applied to both and in some cases the same data. If an external form of assessment is used for several different courses then this can be used as a form of evaluation of the courses (allowing for differences in the students). If the same students do different courses, then their results can be compared for consistency. Also the result from the same course in previous years can be compared with the present (this is done routinely by university course examiner's meetings). If a course produces consistently lower results than other courses on the same topic or with the same students, then there may be a problem.
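
A minimal version of this year-on-year comparison is sketched below in Python, with made-up marks. The point is simply to compare the mean and spread of final marks between offerings before drawing any conclusions.

    # Sketch: compare final mark distributions for two offerings of a course.
    # The mark lists are illustrative only, not real data.
    from statistics import mean, stdev

    marks_by_year = {
        "2010": [55, 62, 68, 71, 74, 78, 81, 85],
        "2011": [48, 57, 60, 63, 66, 70, 77, 80],
    }

    for year, marks in marks_by_year.items():
        print(f"{year}: n={len(marks)}, mean {mean(marks):.1f}, sd {stdev(marks):.1f}")

    # A drop in the mean that is large relative to the spread is worth a
    # closer look: the problem may be with the course, not the students.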

Action research

Groundwater-Smith and Cusworth go into detail on problems with Action Research (AR), but without first explaining what it is. As far as I can work out from the Wikipedia entry, AR involves practitioners working together (in a "community of practice") to investigate current practices and improve them. But what is not clear is where the research comes into this. The average teacher does not have the resources to conduct experimental research, and using their students as subjects of an experiment would require ethical clearance. Having a group of teachers discuss teaching practice could be useful, but dressing it up with fancy terms does not make it "research".

Reference:

Groundwater-Smith, Susan & Cusworth, Rosie Dobbins (1998). Teaching: Challenges and Dilemmas. Harcourt Brace.

Monday, December 19, 2011

Introduction to Rubrics

The book "Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback and Promote Student Learning" by Dannelle Stevens and Antonia J. Levi gives a good, short (131 pages) overview of how to make marking for university assignments easier. A rubric is a table laying out what and how the assignment will be marked. This has particular value with e-learning, where the assignments are submitted on-line and the marking sheet can also be in electronic format (are there rubrics built as add-on modules for Moodle?).

But as with many educational innovations, rubrics require more up-front work by the teacher (or educational designer). The rubric may save time later when marking (and from not having to justify the marking to students individually), but takes work in advance to create.

Also the idea of reducing marking to ticking or circling some items in a table may offend academics' view of themselves. They want to be seen as providing detailed scholarly advice to students, not just doing tick-and-flick multiple choice marking. But as the book points out, students have difficulty understanding detailed comments and find detailed corrections of their work insulting.

Sunday, November 27, 2011

Making Assessment Matter

The book "Making Assessment Matter" (Graham Butt, Continuum, 2010) is a short, 145 page, introduction to assessment issues for teachers. While it is aimed at school teachers in the UK, I found it useful for my study of university assessment in Australia.

Description

Teachers often spend a considerable amount of their time monitoring and assessing their pupils’ performance. But what are we assessing for, and can assessment practices be changed to make them more useful to teachers and learners?

Assessment activities in schools are frequently criticised by government inspectors – often being reported as the least successful aspect of schools' work.

Drawing on established research, Making Assessment Matter focuses on the purpose of assessment, and suggests strategies for managing assessment in a more effective way. The author considers the role of assessment in promoting learning, rather than simply measuring it, provides tips on setting and attaining assessment targets, and brings together considerations of ‘high stakes’ assessment at the national level with day-to-day assessment practice in the classroom.

This timely and informative book will be essential reading for anyone involved with, or interested in, the role of assessment within schools, including teachers, trainee teachers and managers.

Table of Contents

Preface \ 1. Introducing assessment \ 2. Using assessment to promote learning \ 3. Can assessment raise standards? \ 4. The influence of 'high stakes' and 'low stakes' assessment \ 5. Formative and summative assessment: 'We need to talk' \ 6. Marking, feedback and self-assessment communicating the results of assessment \ 7. Achieving assessment targets \ 8. Equality of opportunity in assessment - the case of boys' underachievement \ 9. Making assessment easier? The role of e-assessment \ 10. Making assessment matter \ References \ Index

Author(s)

Graham Butt is Reader in Geography Education, Director of Academic Planning and Deputy Head of the School of Education at the University of Birmingham, UK.

Wednesday, November 16, 2011

Benchmarking Education Quality

Benchmarking of education quality came up on day two of the ANU Educational Research Conference at the Australian National University, which is concentrating on research and postgraduate education. The Australian National University has a "Benchmarking of Educational Quality and Standards" policy. ANU will be using the International Foundations of Medicine (IFOM) for testing medical students in the ANU Medical School.

Tuesday, November 15, 2011

Mining Learning Management System Data to Improve Education

Paul Francis from Astronomy at the Australian National University is speaking at the ANU Educational Research Conference about data mining he did on data in the Learning Management System for his course. He found a smooth progression for his students: students who did well in one form of assessment tended to do well in others. He quipped that the assessment could have been reduced to one multiple choice test, noting more seriously that assessment has multiple purposes, not just giving one number at the end. These are similar results, using similar techniques, to the work by Colin Beer at Central Queensland University, reported at Moodle Moot AU 2010.
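
This kind of analysis is not hard to reproduce on a small scale. The Python sketch below checks whether marks on two assessment components move together, using made-up marks; in practice the numbers would come from the LMS gradebook.

    # Sketch: correlation between two assessment components.
    # Requires Python 3.10+ for statistics.correlation; marks are illustrative.
    from statistics import correlation

    quiz_marks       = [55, 60, 65, 70, 75, 80, 85, 90]
    assignment_marks = [50, 58, 70, 68, 78, 79, 88, 92]

    r = correlation(quiz_marks, assignment_marks)
    print(f"Pearson correlation between components: {r:.2f}")
    # A value close to 1 suggests the components largely rank students the
    # same way, which is the point behind the "one test would do" quip above.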

Thursday, September 22, 2011

Why students are failing

Greetings from the University of Canberra, where Dr Jon Scott, from the University of Leicester, is talking on "Why are students failing? What works to improve retention and success?". He was involved in UK research on the student experience. This showed that most of the problems for new students were not to do with their studies, but with the social adjustments of leaving the family home and becoming a young adult on campus.

Some of the issues also relate to the adult postgraduate students I deal with. As an example, Jon's research shows the students do not have recent experience of writing at length. My students comment on the amount of reading they have to do and the need for detailed referencing. Adult students have less difficulty with planning their work, but have the pressures of work and family.

One interesting aspect is where students get advice from during their university time and how helpful they find it. Jon's research shows that the student's personal tutor is the primary source of advice, but students are not that happy with the quality of that advice. Students instead are more comfortable with advice from family and friends. As a recent postgraduate student enrolled in both face-to-face and on-line courses, I have found the quality of advice much higher with an on-line course. I can ask a question and get an answer later, and the person preparing the answer has my details, including the history of previous questions, in front of them. With a face-to-face query, I have to find the staff member so I can ask a question, then remind them of who I am and what I am doing.

While Jon presented an interesting and well researched analysis of what the issues are for new students, the proposed ways to improve this were disappointing. As an example, the research shows that the personal tutors are relatively ineffective. The proposed solution was better training for the tutors. Instead I suggest replacing the tutors with an on-line help service, staffed by a group of tutors. Another example was that Jon identified the difference between school education, which allows students to resubmit until they reach the required standard, and university examinations, where the student has one chance to pass or fail. It occurs to me that the solution is to adopt the school approach.

Jon is also presenting "Student and staff perceptions of assessment feedback: Myth and reality" on Friday.

Student Cheating in the Digital Age

Greetings from the University of Canberra, where Dr Jon Scott, from the University of Leicester, is talking on "Academic integrity: To cheat or not to cheat in the digital age?". One point he made was about the honor codes and student tribunals used in some US universities and whether these would work in the UK/Australian university culture.

In my view the universities could learn from the work which the professions, particularly the Australian Computer Society, have done on the teaching and adoption of professional ethics.

I discuss this in my lecture on ethics, "Professional Ethics and Social Issues in Networked Information Systems". The researchers conducted a survey and interviews of ICT professionals on their attitudes to ethics and the IT industry (Lucas, 2008). One finding was that those born 1981 to 1999 (so called "Generation Y") thought:

  1. Ethical regulations should be less important.
  2. Job security made a difference to ethical behavior.
  3. They had more ethics education than previous generations.

In my view, plagiarism can be dealt with by teaching research writing to the students and assessing it. If students fail to reference material correctly, they will fail those courses and other courses. There is no need to impose some sort of moral condemnation on the students: just identify where they need help with their work and provide it (and ensure those who will not or do not meet the standard never pass).

Jon is also presenting "Why are students failing? What works to improve retention and success?" at 4.30-6.00pm and "Student and staff perceptions of assessment feedback: Myth and reality" tomorrow.

Saturday, August 27, 2011

Teaching Performance Bonus Evaluation

In "A Big Apple for Educators: New York City's Experiment with Schoolwide Performance Bonuses", RAND Corporation reports that providing extra money to schools as an incentive did not improve student results. This may be because the bonuses were not large enough, or because in most cases the money was not allocated to the top teachers in the schools. Or it may just be that such bonuses do not work.

A Big Apple for Educators

New York City's Experiment with Schoolwide Performance Bonuses: Final Evaluation Report

by Julie A. Marsh, Matthew G. Springer, Daniel F. McCaffrey, Kun Yuan, Scott Epstein, Julia Koppich, Nidhi Kalra, Catherine DiMartino, Art (Xiao) Peng

In the 2007–2008 school year, the New York City Department of Education and the United Federation of Teachers jointly implemented the Schoolwide Performance Bonus Program in a random sample of the city's high-needs public schools. The program lasted for three school years, and its broad objective was to improve student performance through school-based financial incentives. The question, of course, was whether it was doing so. To examine its implementation and effects, the department tasked a RAND Corporation-led partnership with the National Center on Performance Incentives at Vanderbilt University to conduct a two-year study of the program that would offer an independent assessment. This report describes the results of our analyses for all three years of the program, from 2007–2008 through 2009–2010. This work built on past research and was guided by a theory of action articulated by program leaders. Researchers examined student test scores; teacher, school staff, and administrator surveys; and interviews with administrators, staff members, program sponsors, and union and district officials. The researchers found that the program did not, by itself, improve student achievement, perhaps in part because conditions needed to motivate staff were not achieved (e.g., understanding, buy-in for the bonus criteria) and because of the high level of accountability pressure all the schools already faced.

Document Details

  • Format: Paperback
  • Pages: 312
  • ISBN/EAN: 9780833052513
  • Document Number: MG-1114-FPS
  • Year: 2011

Contents

  1. Introduction

  2. Background on Pay-for-Performance Programs and the New York City SPBP

  3. Research Methods

  4. Implementation of the Schoolwide Performance Bonus Program: Attitudes About and Understanding of the Program

  5. Implementation of the Schoolwide Performance Bonus Program: Compensation Committee Process and Distribution Plans

  6. Implementation of the Schoolwide Performance Bonus Program: Perceived Effects of the Bonus and Program Participation

  7. Effects on Progress Report and Student Test Scores

  8. Teacher Attitudes and Behaviors in SPBP and Control Schools

  9. Conclusions and Implications

Thursday, June 30, 2011

Learning University Teaching: Reflection

Last week I attended day four of an introductory course on university teaching. This was the last day of the course and it was time to reflect on what I learned and suggestions on how to improve the course:

Background to the Course

The course is a non-assessed introduction to university teaching intended for academics just starting their career. It is four days, one day per week, for four weeks, plus some on-line activities using the Moodle Learning Management System. The course is offered to staff by the university's academic education center.

One issue this raises is that not all teachers at a university start out as "early career academics". Some are "late career professionals", who find they have a taste for teaching. Universities in Canberra, for example, have current and former public servants assisting with teaching. There are also staff of organizations which support the public service. These staff need an introduction to teaching for professionals who have a depth of experience in the workforce, but limited understanding of the academic environment and limited time to acquire it.

Recommendation for Improvements in the Course

The introductory course covers educational theory, but could better use the techniques discussed, to put theory into practice. This would be more efficient, and more credible, demonstrating how the educational theories work:

1. Offer a blended course, with optional face-to-face components: Research presented indicates that having oral presentations is not the most efficient teaching technique. Therefore it would be appropriate to reduce the use of live oral presentations and increase the use of student activities in this course. An appropriate format would be two hours per week optional face-to-face contact (down from five hours mandatory), with the rest of the course on-line.

Also the course covers Equal Opportunity Policy, emphasizing an inclusive approach in both education and employment. One way to do this is to provide flexibility for attendance at courses. Currently the course requires attendance at all four daytime events. These are only offered once per teaching period, at only one location in one city, and only in Australia. Offering the course at multiple times and locations would be prohibitively resource intensive. However, on-line participation could be offered as an alternative to attendance in person.

2. Convert the course materials to an accessible on-line format via the learning management system: Disability Policy is covered in the course. The policy "... incorporates the inclusion of people with disabilities in employment and education to enable them to perform at their best in University life ...". Implementing this policy in the course would be aided by checking that content of paper handouts and screen displays used were large enough for comfortable reading. Also providing the material in alternate formats via the Learning Management System would be useful.

A simple way to provide course materials is with an e-Book, available via the course web site at the beginning of the course. This can contain summaries of what is to be covered, links to readings and the work sheets for activities to be undertaken. The number of readings should be achievable in the time allocated for the course, and the class exercises should reference the readings to prompt the students to read them.

3. Add assessment to the course: The course emphasized that students value assessment and the assessment can be used for aiding learning, not just as a test at the end. It would therefore be appropriate to design assessment activities into the course, rather than as an optional extra at the end. At least weekly assessment should be mandatory, along with at least weekly feedback from the tutor to each individual student. The optional credit for "reflection" should be eliminated and replaced with mandatory assessable items.

4. Construct a showcase active learning classroom: It is suggested that the university commission the design and construction of an Active Learning Classroom (ALC) for teaching education techniques. This would be similar to the INSPIRE Centre for ICT Pedagogy, Practice and Research under construction now at the University of Canberra. It would be equipped with computer systems and screens for one large class and for breakout groups. Space for this might be found in the City West Precinct. A design similar to the University of Canberra Teaching and Learning Centre could be used.

5. Design an on-line learning environment: A web site for the course, which conforms to the same standards as other courses, should be developed. This should be tested before the course commences and the design not changed during the course. The web site should be integrated with the face-to-face content.

6. Put assessment first: The course referred to research indicating that assessment is important to students and should be integrated in course design, but this topic is left to last in the course. It is suggested that the topic of assessment be discussed alongside that of preparing course descriptions, early in the course.

My Background

Like many staff I came to university as a Visiting Fellow and ended up doing teaching as an adjunct. We teach material developed in our day-to-day work.

While I had provided conventional lectures and examination based teaching for years, I was never comfortable with this mode, as I teach about on-line communication. In 2005 and 2007 I made some ad-hoc attempts at blended learning ("Workshop on the Use of Technology for Museums of the Pacific Islands Region" and "Electronic Document Management, Module 2 of Systems Approach to the Management of Government Information"). Later I was commissioned by the Australian Computer Society to design an e-learning course as part of the ACS Computer Professional Education Program, which is part of a globally accredited postgraduate program.

ACS provided training in techniques for mentored and collaborative e-learning, based on those used at the Open University. I then adapted the same course content for university. The course has worked well, winning an industry award and with one of the students now running it in Canada. My ambition is now to design more such courses, explore the theory behind them and teach others how to do this.

Goals for the Course

My reasons for enrolling in this course were:

  1. Cost: The course is free for current staff,
  2. Get in before it is compulsory: While there is no requirement for university staff to have training in teaching, this is likely to become more strongly encouraged.
  3. Validate vocational training: While I have undertaken a considerable number of vocational short courses, it was useful to check I was up to date with the latest thinking on educational theory for higher education.
  4. Help university implement e-learning: While I have been successful at implementing e-learning, my academic colleagues are skeptical of my approach, perhaps due to my not being able to explain it using the correct academic terms to describe it,
  5. Ease into postgraduate studies: To see if it would be worth undertaking the certificate in teaching.

Results

  1. Completed course: I was able to attend all four days and so was awarded an attendance certificate.
  2. Validated vocational training: I was able to verify that my previous vocational training is consistent with university thinking on educational theory.
  3. Ready to help advance university teaching practice: I was able to see that the university was striving to implement blended and e-learning.
  4. Applying for the certificate in teaching: I have applied to study the certificate in teaching.

Thursday, June 23, 2011

Framework for teaching standards in Australian universities

The discussion paper "Developing a framework for teaching and learning standards in Australian higher education and the role of TEQSA" has been released by the Tertiary Education Quality Standards Agency (TEQSA) on 22 June 2011. This is a 22 page report available in PDF (90kb) and RTF (2mb). Comments on the paper are invited by email, until 22 July 2010.
This paper initiates a process of discussion on possible approaches to articulating, reviewing and reporting on teaching and learning standards in Australian higher education. It presents the policy context, including the role of the Tertiary Education Quality and Standards Agency (TEQSA); incorporates an analysis of relevant developments as background; and proposes a way forward.

The TEQSA legislation introduced into the Parliament of Australia in March provides, among other things, that a Higher Education Standards Panel (Standards Panel) will be responsible for developing the Higher Education Standards Framework, including teaching and learning standards. The Standards Panel must consult with interested parties when developing the standards.

The Standards Panel will be independent of the TEQSA Commission and will provide advice and recommendations directly to the Minister for Tertiary Education and the Minister for Research. This will ensure the separation of standard setting from the monitoring and enforcement functions carried out by TEQSA.

The Interim TEQSA Commission seeks feedback from higher education providers, professional associations, industry bodies and government agencies about directions for development before detailed work begins. The outcomes from this discussion process will be provided to the Standards Panel for further consideration once the Commission is formally established.

The contribution of Professor Richard James and Dr Kerri-Lee Harris of the University of Melbourne’s Centre for the Study of Higher Education to the preparation of this paper is gratefully acknowledged.

There are three sections in the paper, each with associated discussion points:
  1. The policy context for national teaching and learning standards, including proposed statements of principle for TEQSA’s approach. Feedback is sought on the proposed definition of teaching and learning standards.
    Feedback is also sought on the proposed statements of principle describing TEQSA’s approach to teaching and learning standards.
  2. A brief review of international and domestic developments, including student surveys, qualification frameworks, explicit statements of learning outcomes, common tests and peer review.
    Feedback is sought on the analysis of these developments in terms of their utility in developing a teaching and learning standards framework.
  3. Steps toward Australian teaching and learning standards, how Australian higher education, including TEQSA, might further develop a national approach to teaching and learning standards.
    Feedback is sought on the proposed structure of the framework, including on the relationships between the various elements. Feedback is also sought on the particular considerations and possibilities described for developing standards statements, measures and indicators, and processes for expert review. ...
From: "Developing a framework for teaching and learning standards in Australian higher education and the role of TEQSA", Tertiary Education Quality Standards Agency (TEQSA), 22 June 2011

Lack of on-line and global perspective in the framework

My interest is in on-line learning, so I was curious to see how prominently it featured in the discussion paper. The words "Internet" and "computer" do not appear in the paper at all and there is no mention of the World Wide Web. "Online" occurs once, under the heading "Standards categories":
Standards categories

Broad categories are needed for identifying and locating teaching standards and learning standards within a coherent, explicit framework. The categories are purely for organisational purposes and should be broad, identifiable areas of significance that will bring structure to the standards framework. Within teaching standards, for example and for illustrative purposes, such categories might be course design, course resourcing, quality of teaching, quality of learner support, quality of provision for student diversity, quality of provision for online learning and so on. It is feasible that some categories, once they are agreed to, may not be applicable to certain providers or certain courses, and thus a mechanism for diversity would be embedded within the framework. ...
Australian higher education needs to address new techniques in education. Accompanying on-line education are new approaches to student directed learning. This is similar to the situation with organizations failing to grasp that "social networking" is not a new media channel to market to their customers, but a way to genuinely involve the community in decision making.

The discussion paper also lacks a global perspective. The Australian tertiary sector does not have the option of setting its own standards for teaching, or for anything else. Australian institutions are part of a global system of education and so must comply with global standards, or go out of business. Australia can remain competitive by being involved in setting those standards, or remain aloof and decline.

Australia is competing with other countries for international students and, as online systems become established (particularly in India and China), Australian universities will be competing for Australian students with overseas institutions.

In my area of teaching IT professionals, the standards for education, as well as the technical standards, tend to come from the USA and the UK. Australia is a leader in the development of education standards in IT and is able to influence those standards by acting as a bridge to Asia. I suggest this is a strategy which could be adopted generally by Australian higher education.

As an example of how course standards are set against global standards, my course "Green Information Technology Strategies" run at ANU as COMP7310, addresses the Skills Framework for the Information Age (SFIA) Level 5 competencies:
"ensure, advise: Broad direction, supervisory, objective setting responsibility. Influences organisation. Challenging and unpredictable work. Self sufficient in business skills".

The course outline lists the Category/Subcategory/Skill from SFIA.

Wednesday, June 22, 2011

Learning University Teaching: Lesson 4

Last week I attended day three of an introductory course in university teaching. The fourth and last day this week is about "Designing and Marking Assessment Tasks". Here are my thoughts in preparation:

Designing and Marking Assessment Tasks

It is useful to see what the public perception of university assessment is, or at least the media's perception. So here are recent top stories from Google News which mention "University Assessment":
  1. Two Mumbai varsity staffers caught stealing answer papers, Hindustan Times,23 Jun 2011‎: The University of Mumbai on Wednesday caught two of its temporary staff trying to steal engineering answer papers from the university's central assessment centre at Kalina. The duo had tied eight answer papers to their legs and was walking out when the ...
  2. Assessment and learning in the digital age, Media Newswire (press release): The symposium, Assessment and learning in the digital age, will take place at the University of Bristol's Graduate School of Education ( GSOE) on Friday 17 June from 1 to 5 pm in Room 4.10, GSOE, 35 Berkeley Square, Bristol. ...
  3. KCPE and KCSE face scrapping, The Standard, Augustine Oduor, ‎Jun 21, 2011‎: The exams whose hallmark has been cutthroat competition among schools, and which hold the key to good secondary places and lucrative university courses, might be replaced with a list of assessment tests spread across the learning system. ...
  4. Teaching quality under pressure as unis chase money, The Australian, Julie Hare, ‎Jun 20, 2011‎: SPIRALLING class sizes, overcrowding, tutorials replaced by seminars, few avenues for feedback and interaction, a shift to online and peer-assessment as a cost saving measure -- the dire state of teaching in Australian universities emerges from just a ...
These indicate that assessment issues are of concern globally, including: fairness, adaption to on-line delivery, reduction in the use of large end of semester examinations and peer assessment.

A search of Google Scholar shows six documents featuring the words "university assessment" in the title for 2011:
  1. Talking the talk: oracy demands in first year university assessment tasks: C Doherty, M Kettle, L May… - Assessment in Education: …, 2011 - informaworld.com... 18, No. 1, February 2011, 27–39 ISSN 0969-594X print/ISSN 1465-329X
  2. Towards Fairer University Assessment: Recognising the Concerns of Students, N Flint… - 2011 - books.google.com: After all the hours of studying, reading and preparation, the nights spent revising and the writing and re-writing of assignments, 'success' for university students can often be represented with a single grade or digit, ...
  3. 'Worldmarks': Web guidelines for socially and culturally responsive assessment in university classrooms, CE Manathunga, D MacKinnon - … Conference 2002: The …, 2011 - espace.library.uq.edu.au ... understanding. Yet university assessment in Australia is often based on a western template of knowledge, which automatically places International, Indigenous, as well as certain groups of local students at a study disadvantage. ...
  4. 'In Press' Measuring up? Assessment and students with disabilities in the modern university, J Bessant - International Journal of Inclusive Education, 2011 - researchbank.rmit.edu.au, International Journal of Inclusive Education, vol. TBA, no. TBA, pp. ...
  5. Teaching in the Corporate University: Assessment as a Labor Issue, J Champagne - 2011 - academicfreedomjournal.org ... Following the self-study, our provost established the Coordinating Committee on University Assessment. In March 2006, that ...
  6. Towards fairer university assessment: recognizing the concerns of students, A Iredale - 2011 - eprints.hud.ac.uk: This book is aimed at higher education academics, administrators and managers, researchers, and to some extent undergraduate and postgraduate students. It explores assessment as a determiner of student satisfaction, and is based upon Nerilee Flint's PhD thesis. A ...
These articles address fairness, cultural responsiveness and dealing with disability in assessment. Most interesting is that two of the six refer to the book: "Towards Fairer University Assessment: Recognizing the Concerns of Students" by Nerilee Flint (Routledge, 2011). Dr Nerilee Flint is Education Advisor, Student Equity, ANU. In the paper "Unfairness in educational assessment: Modifiers that influence the response students have to a perception of unfair" (2007) and later in her book, she suggests assessment is important to universities and is a powerful way to influence student behavior.

Integrating Assessment with Course Design

While education theory and items in the media suggest assessment is important, in practice it tends to be left to later, both in design and delivery of courses. Design of assessment is generally left until after course content is decided. Also much of the assessment of a university course is by way of an end of course examination, where the results of that examination cannot be used to help the student with learning in that course (as the course is over). If assessment is important, then it should be designed alongside the content and delivered before the end of the course.

My approach is to provide all assessment items at the beginning of the course (or preferably before the student enrolls). As an example, all assessment items for my two e-learning courses "Green Technology Strategies" and "Electronic Document and Records Management" are available before the student starts the course. The assessment items are based on real world tasks the student will be expected to be able to carry out after the course. This avoids the philosophical conundrum of attempting to assess what the student "knows", instead assessing what they can do. It also appeals to students looking to do the course in order to get a better job.

To promote a sense of fairness and to avoid unnecessary requests for remarking, when marking assignments I first make detailed comments, giving the students examples of what is good, what could be improved, how and why. Rather than add up some marking scheme to give an arbitrary total, I instead form an assessment of the grade of the work (fail, pass, credit, ...) and then a mark within that grade. I provide the student with the detailed comments, the grade and the mark. This is a way to clearly tell the student my assessment of their work (this is "credit" level work). It avoids the time wasting scramble for marks, where the student is tempted to ask for a few more marks to push them up into the next grade.
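
A minimal sketch of this grade-first approach is below. The grade bands are the common Australian ones and are my assumption; the rules at any particular university will differ.

    # Sketch: choose a grade first, then a mark within that grade's band.
    # Band boundaries are assumed (typical Australian grading), not official.
    BANDS = {"fail": (0, 49), "pass": (50, 64), "credit": (65, 74),
             "distinction": (75, 84), "high distinction": (85, 100)}

    def mark_for(grade, position=0.5):
        """Return a mark within the band; position 0 is the bottom, 1 the top."""
        low, high = BANDS[grade]
        return round(low + position * (high - low))

    print(mark_for("credit", 0.5))  # solid "credit" level work -> 70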

What is assessment for?

The accepted wisdom in educational theory is that there is formative and summative assessment. Formative assessment helps guide the student with their learning while it is in progress, whereas summative assessment is for an external report on the results at the end. While Harlen and James ("Assessment and Learning: differences and relationships between formative and summative assessment", 1997) argue that things are not as clear cut in the real world and there is a creep towards using assessment originally developed for formative use as summative, this still seems a useful distinction to make. However, I like to provide marks for formative work, as a way to motivate students to do it (24% of the total marks seems to be sufficient).

When deciding how much assessment, and in what form, to use in an ANU course, I did a quick survey of Australian university assessment. For the usual 13 week course (of 9 to 10 hours work per week), universities typically require 40 to 60 words per percent. That is 4,000 to 6,000 words of assignments written by students for a complete course, with a set number of words corresponding to a length of examination or oral presentation. As an example, the University of Melbourne equates one hour of examination or ten minutes of individual oral presentation to 1,000 words of assignment.
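
As a worked example of this rule of thumb (using the mid-point of the 40 to 60 words per percent range and the Melbourne equivalence quoted above):

    # Worked example: converting assessment weight to an expected word count.
    words_per_percent = 50      # mid-point of the 40-60 words/percent range
    assignment_weight = 20      # a 20% assignment

    words = assignment_weight * words_per_percent
    print(f"A {assignment_weight}% assignment is roughly {words} words,")
    # Melbourne equivalence: 1,000 words = 1 hour of exam or 10 minutes of oral.
    print(f"or about {words / 1000:.0f} hour(s) of examination, "
          f"or {words / 100:.0f} minutes of oral presentation.")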

Automated Assessment

Learning Management Systems, such as Moodle, have provision for automated quizzes built in. This would seem to be ideal for formative assessment, particularly when assessing what the student already knows at the start of a course, so they can be guided to concentrate on what they don't know.
Professor Geoffrey Crisp, Director of the Centre for Learning and Professional Development at the University of Adelaide and author of the "e-Assessment Handbook", has given seminars in Canberra on how to automate assessment. This could be useful, but it takes considerable work to set up in advance and so far none of the organizations who commission me to design courses have been willing to fund the work needed for this.

In designing automated assessment, it needs to be kept in mind that the system must, if possible, be designed to be usable by a wide range of students. As an example, the student may be remote from the campus, on a slow telecommunications link, have a low powered computer or have a disability. This needs to be taken into account when designing the assessment. As an example, if the assessment is in the form of a Flash animation, the student may not be able to use it, thus be disadvantaged and leaving the institution open to charges of unlawful discrimination.

In addition assessment should not be designed to make up for inappropriate course design. As an example, it is not possible to provide individual feedback in a live lecture to hundreds of students. "Clickers" can be used to conduct a quick quiz, students can submit questions via pieces of paper or Twitter. But one lecturer can't deal with all the questions from hundreds of students in a live environment and should not pretend they can.

Saturday, May 14, 2011

Making Course Assessment Palatable for Students and Teachers

New ANU staff member Dr Nerilee Flint, Education Advisor, Student Equity, presented a talk on "Assessment--making it fair" at the Australian National University, 13 May 2011. This was based on her PhD research, now published in a new book, "Towards Fairer University Assessment: Recognizing the Concerns of Students" (Routledge, 2011).

Nerilee has a background as a teacher who then moved to the university sector. She used grounded theory methodology for her work. After extensive research she concluded that the central issue in students' view of assessment was frustration. This frustration comes from different views of assessment held by students and teachers, and between different students.

Several examples of areas for misunderstanding and frustration were given. The first was assignment deadlines which have no rationale and are not consistently enforced. Another example was vague definitions of word counts for work. These seemed to me not to be fundamental issues with assessment, but simply examples of poor assessment design. Two ways to overcome this are better training for individual teachers and the use of standards for assessment within and between institutions.

Nerilee presented a theoretical framework showing what the student takes into account in their response to assessment. One comment was that some students will check what the assessment is before enrolling and use this to help decide which course to do. To me this seems a statement of the obvious. As a student I avoid courses which have an end-of-semester examination as the primary form of assessment (and as a teacher I think this is not good for learning). But apparently many course designers do not realise that assessment is very important to students when choosing courses.

Factors Nerilee found in students' view of assessment included the teacher's perceived skill in teaching and the extent to which they are seen to care about students. One example is a tutor who does not moderate the mature-age students' contributions so that the younger ones get a chance to speak. This is not an issue for my tutorials, which are online, so everyone can talk (and behind the scenes I provide the hesitant students with individual encouragement to post). Also I provide an individual response to every student every week, which shows the tutor is taking an interest in the student as an individual.

One useful reminder was that not all students understand what the assessment process is. In my view the use of templates, and particularly of a Learning Management System (such as Moodle), can help with this. With an ad-hoc paper-based approach, each teacher prepares their own assessment items and so it is easy to have inconsistency across an institution. If templates with standard wording are used, this is much more difficult. Also, if an LMS is used which links to the same assessment guidelines on every course, this can help.

While Nerilee concentrated on the student's view of assessment, the suggestions presented could equally help make assessment more palatable to teachers. Assessment is mostly seen by teachers as a drain on time and resources, distracting them from the real teaching. In my view this is a misguided approach. Assessment is an essential part of learning (in one case I enrolled in a course with no assessment and found it frustrating). By integrating the assessment into the way they teach, staff can reduce their workload and also remove many of the day-to-day annoyances of students grumbling about assessment problems.

Before Nerilee's presentation, Professor John Dearn talked about the new policy on assessment he is developing for ANU. It is unfortunate that this did not follow Nerilee's talk, as her research would have informed the discussion of the issues.

Professor Dearn mentioned the university had dozens of policies mentioning "assessment" (I found 254). He discussed the difficulties of formulating a policy for an institution, with differences between disciplines on what assessment is, and the link between policy and practice. In my view, one way a university could better link policy and practice would be by integrating the teaching of teaching and learning. The university currently has separate units dealing with teaching teachers, helping students with learning and providing learning technology. Institutions such as the University of South Australia (which I visited last week) have a more integrated approach.

Professor Dearn then listed some issues:
  1. Hard to write learning outcomes and align them with teaching strategies: Students need to see a link between the assessment and what the course is about. Professor Dearn mentioned that students get cynical where the assessment is an examination at the end. To me the solution is obvious: stop putting so much assessment at the end of courses and use forms of assessment based on simulations of real-world experience, not paper-based tests. I now use assessment every week in my courses (starting from week one) and have mostly given up using end-of-semester examinations. Instead I get the students to do what they are being trained to do and assess that.
  2. Feedback for formative assessment: Students complain about the lack of feedback. Professor Dearn said that staff are concerned that early feedback in a course increases the workload. To me the answer is obvious: include regular small assessment items which are easy to mark but do contribute to the final result. The staff time needed to do this can be provided by eliminating traditional lectures, which have been shown not to be useful for learning, are not popular with students and waste resources.
  3. Blind marking: It is suggested marking should be blind, so that staff do not know whose work is being marked. This should be reasonably easy to do using a Learning Management System, where submissions are made electronically and the system keeps track of which assignment is from whom, without showing that information to the assessor (see the sketch after this list). An interesting issue not raised by Professor Dearn is whether assessment should be double blind: that is, should the student not know which staff member did the marking, so that staff can give full and frank comments (as is done with reviewing of academic papers for publication).
  4. Marks for attendance: The university currently allows marks for attendance. Professor Dearn appeared to have concerns about this practice, which in my view is not a good idea. There is a very simple alternative: provide a small mark for a small assessment item carried out after a session. The student is then not marked for attendance, but on what they learned from attending. This encourages the students to attend, and also helps them identify what they have to work more on. This can be easily done with the LMS and I use it routinely.
  5. Normative marking: Should students be assessed on objective criteria, not according to a marking distribution for the class? In my view a bit of both can be used. Assessment can be done objectively and then double checked to see how the student does against others in that class, the same student in other classes and students in the past. The University's Research School of Computer Science has a sophisticated locally developed computer system, which is used by examination committees to compare courses and students, to ensure consistency of marking. This is a process I was skeptical of (and a little afraid of) until I participated in it and saw the system in use in a collegiate environment. This is something which could be added to the LMS to improve results.
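On the blind marking point above, the book-keeping an LMS does is straightforward. The following is a hypothetical Python sketch of the idea only, not of how Moodle or any particular system actually implements it:

    # Hypothetical sketch of blind marking book-keeping: the system issues an
    # anonymous ID for each submission, the marker only ever sees that ID, and
    # the mapping back to the student is kept by the system, not the marker.
    import uuid

    class BlindMarkingRegister:
        def __init__(self):
            self._id_to_student = {}

        def register_submission(self, student_id):
            """Return an anonymous ID the marker sees instead of the student's name."""
            anon_id = uuid.uuid4().hex[:8]
            self._id_to_student[anon_id] = student_id
            return anon_id

        def reveal(self, anon_id):
            """Only the system (for example when releasing results) maps the ID back."""
            return self._id_to_student[anon_id]

    # Example use: the marker records a mark against an eight-character code,
    # never against a student name.
    register = BlindMarkingRegister()
    anon = register.register_submission("u1234567")
    print(anon, register.reveal(anon))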

Thursday, November 11, 2010

Principles of good education practice

Lauren Kane, from the Flexible Learning Unit of the College of Engineering & Computer Science, facilitated an Education Design Workshop this morning at the Australian National University. The topic was "How can educational technologies, including ANU's learning management system (Wattle), enhance your teaching practice?".

We worked through the "Seven Principles of Good Practice in Undergraduate Education" (Arthur W. Chickering and Zelda F. Gamson, The American Association for Higher Education Bulletin, March 1987):
  1. encourages contact between students and faculty,
  2. develops reciprocity and cooperation among students,
  3. encourages active learning,
  4. gives prompt feedback,
  5. emphasizes time on task,
  6. communicates high expectations, and
  7. respects diverse talents and ways of learning.
Lauren pointed out that this was written long before the current e-learning technology was developed, but the principles are still applicable. Also I thought they could be equally applied to postgraduate teaching, as for example in my Green ICT course COMP7310 and in Unravelling Complexity VCPG6001.

Tuesday, November 02, 2010

Origins of University Assessment

Stephen Darwin, Academic Developer, ANU College of Law, will speak on "Hunting the origins of assessment", at the Australian National University in Canberra, 12.30 pm, 11 November 2010:
ANU TEACHING FORUM

Hunting the origins of assessment: what shapes it (and is shaped by it).

A forum led by Stephen Darwin, Academic Developer, ANU College of Law

Facilitated by Luara Ferracioli and Ryan Bellevue

WHEN: 12.30 - 1.30pm, Thursday, 11 November 2010. Light lunch provided, starts 12.00.
WHERE: Seminar Room, Research Student Development Centre, Building 10T, Ellery Crescent

OVERVIEW

The origins of our assessment approaches are often unexplored and can even - on closer examination - prove to be quite mysterious. Yet understanding what has shaped the way we approach the design of assessment process and practice is essential to critically reflecting on the work assessment does for us in evaluating student learning.

In this forum, we will consider the range of influences that conventionally shape (and constrain) assessment design - such as powerful subject histories, influential discipline norms and the personal assessment experiences of academics. Based on our own practice in course design and teaching, we will debate the effect that assessment may have on shaping student approaches to learning, and consider whether, and how, different approaches to assessment design can be used to enhance students' learning experience.

RSVP (for lunch numbers): Peter Trebilco

Forum organisers: Ryan Bellevue, Guy Emerson, Luara Ferracioli, Jane Sisley, Lorna Tilley, Peter Trebilco.