- Development of Performance Measurement Instruments in Higher Education – exploring the balance of performance measurement instruments, potential uses, deployment methods and participation and selection of students;
- Review of the Australian Graduate Survey (AGS) – examining the strategic position of the AGS in its relationship with other survey instruments, administration methods, timeliness and capacity to measure the diversity in student experience; and
- Assessment of Generic Skills – focusing on the development of an instrument appropriate for Australian universities in the context of the OECD’s Assessment of Higher Education Learning Outcomes (AHELO) project.
Development of Performance Measurement Instruments in Higher Education
Discussion Paper
December 2011
Table of contents
1. Introduction
1.1. Performance Funding
1.2. Advancing Quality in Higher Education
1.3. Purpose of this paper
2. Principles and the student life cycle framework
2.1. Principles
2.2. Student life cycle framework
3. Existing surveys
3.1. National and cross-institution surveys
3.2. Links with new instruments
4. New instruments
4.1. University Experience Survey
4.2. Assessment of Generic Skills: The Collegiate Learning Assessment
4.3. Review of the Australian Graduate Survey
5. Issues
5.1. Administration of new instruments
5.2. Student selection
5.3. Central sampling of students
5.4. Uses of data
5.5. Intersection of existing and new instruments
6. Next Steps
6.1. The AQHE Reference Group
6.2. Discussion papers
6.3. Roundtable discussions
6.4. Next steps
Appendix 1 – References
Appendix 2 – How to make a submission
Appendix 3 – Terms of Reference for the AQHE Reference Group
Appendix 4 – Membership of the AQHE Reference Group
Appendix 5 – Summary of existing surveys
Introduction
In 2008, the Government launched a major review to examine the future direction of the higher education sector, its fitness for purpose in meeting the needs of the Australian community and economy, and options for reform. The review was conducted by an independent expert panel, led by Emeritus Professor Denise Bradley AC. The panel reported its findings to the Government in the Review of Australian Higher Education (the Review) in December 2008. The Review made 46 recommendations to reshape Australia’s higher education system.
In the 2009-10 Budget, the Government responded to the recommendations of the Review with a ten-year plan to reform Australia’s higher education system, outlined in Transforming Australia’s Higher Education System. The Government’s response was based on the need to extend the reach and enhance the quality and performance of Australia’s higher education system to enable it to prosper into the future.
To extend reach, the Government aims to increase the educational attainment of the population such that, by 2025, 40 per cent of all 25-34 year olds will hold a qualification at bachelor level or above. The Government also seeks to increase the higher education participation of people who are currently underrepresented in higher education. In particular, by 2020, the Government expects that 20 per cent of undergraduate higher education enrolments will be students from low socio-economic backgrounds.
Reforms announced to achieve these ambitions included the establishment of the Tertiary Education Quality and Standards Agency, the introduction of the demand-driven funding system and significantly improved indexation on grants, Performance Funding and mission-based Compacts.
The Government’s response to the Review shares a number of features with reform agendas in other areas of “human capital development” such as health, employment services, and disability services that are being implemented in Australia and internationally. These common features include: opportunities for citizens to exercise greater choice between alternative providers; the introduction of funding that “follows the consumer” and thus gives them more power in the service relationship and strengthens incentives for providers to tailor their offerings to citizens’ requirements; improved regulation to ensure minimum quality standards; and improved information on performance to allow citizens to make better informed choices.
The Government’s efforts to improve performance reporting and transparency are aimed at enhancing the quality of information available to students, to give them greater confidence that the choices they make are the right ones for them. The performance of universities has a number of domains, including but not limited to: research, teaching, financial performance, student experience, the quality of learning outcomes and access and equity. Each of these domains has a specific mechanism or tool (sometimes more than one) designed to capture relevant information about performance in that domain. For example, the Excellence in Research for Australia (ERA) process captures information about research performance; access and equity outcomes are captured through student data collections that include markers for low-SES status; and TEQSA will be implementing teaching standards by which the performance of universities will be measured.
Similarly, the three performance indicators that are the subject of this paper are designed to capture information about how universities perform in the domains of student experience and the quality of learning outcomes. There are likely to be synergies and complementarities with other tools, for example, TEQSA’s teaching standards. They therefore should be seen as part of an overarching suite of performance measures and mechanisms that are designed to capture information across the most relevant domains of university performance, necessary for improving the information available to students as they seek to exercise the choices that are now open to them in the demand-driven system. It should be noted that the newly created MyUniversity website will be used for presenting information to students about performance across the various domains.
Performance Funding
In late 2009, the Department convened an Indicator Development Group comprised of experts in the higher education sector. The group assisted in the development of a draft indicator framework, outlined in the discussion paper, An Indicator Framework for Higher Education Performance Funding, which was released for consultation in December 2009. The paper proposed 11 possible performance indicators in four performance categories. Sixty-one submissions were received from the sector in response to the discussion paper.
The Government considered the feedback received and refined the framework to include seven indicators in three performance categories: participation and social inclusion, student experience and the quality of learning and teaching outcomes.
The Government released draft Performance Funding Guidelines for discussion in October 2010. The draft Guidelines provided details on the proposed implementation of the Performance Funding arrangements. The Government received 44 responses to the draft Guidelines.
In the 2011-12 Mid Year Economic and Fiscal Outlook (MYEFO), the Government announced that it would retain Reward Funding for universities that meet participation and social inclusion targets. The Government discontinued Reward Funding for the student experience and quality of learning outcomes indicators, in the context of its fiscal strategy and on the basis of feedback from the sector that there was no consensus on whether it is appropriate to use such indicators for Reward Funding.
Universities have acknowledged the need to develop a suite of enhanced performance measures for providing assurance that universities are delivering quality higher education services at a time of rapid expansion. The Government will focus on the development of student experience and quality of learning outcomes indicators for use in the MyUniversity website and to inform continuous improvement by universities. The Government has agreed that three performance measurement instruments will be developed over the duration of the first Compact period: a new University Experience Survey (UES), an Australian version of the Collegiate Learning Assessment (CLA) and a Review of the Australian Graduate Survey (AGS). The Government has removed the composite Teaching Quality Indicator (TQI) from the performance indicator framework since universities have made a reasonable case that university performance is best measured using output rather than input indicators.
Final Facilitation Funding and Reward Funding Guidelines and Administrative and Technical Guidelines will be released on the Department’s website in December 2011. These will provide an outline of the final Performance Indicator Framework and Performance Funding arrangements.
Advancing Quality in Higher Education
In the 2011-12 Budget, the Government released details of its Advancing Quality in Higher Education (AQHE) initiative designed to assure and strengthen the quality of teaching and learning in higher education. The announcement provided more information on the new performance measurement instruments being developed and the available funding for developing the instruments. The consultation processes for the initiative were also outlined including the establishment of an AQHE Reference Group to advise on the development and cohesiveness of the performance measurement instruments. The AQHE Reference Group will also assist in the development of discussion papers for each of the instruments. Roundtable discussions with universities, business and students will also be held later in 2012.
Purpose of this paper
This paper discusses key issues in the design of the performance measurement instruments, and assesses their fitness for purpose and their ability to operate together in a coherent way to obtain a comprehensive view of the student's undergraduate university experience and learning outcomes. This paper does not describe in detail how the measurement instruments will be implemented; these issues will be outlined in separate discussion papers on each of the instruments.
The second section of this discussion paper considers some principles to guide the development of new performance measurement instruments. It also proposes that a framework of the student life cycle be used to situate the development of new performance measurement instruments. The third section briefly discusses existing survey instruments. The development of new performance measurement instruments for use in performance reporting and for other purposes is discussed in the fourth section. The fifth section considers key issues that impact on the coherence and balance of performance measures. The final section outlines a proposed implementation strategy for the new performance measures. ...
Issues
This section discusses key issues that arise from consideration of existing and new performance measurement instruments, including the administration of performance measurement instruments, student selection, central sampling of students, uses of data and the intersection of existing and new instruments. Discussion of these key issues facilitates an assessment of the fitness for purpose of performance measurement instruments and their ability to operate together in a coherent way to obtain as comprehensive a view as possible of the student's undergraduate university experience.
Administration of new instruments
Student surveys tend to be conducted in Australian universities using one of two broad deployment approaches (ACER, 2011, p.14), specifically:
an independent (or centralised) deployment, in which most if not all survey activities are conducted by an independent agency; or
a devolved (or decentralised) deployment, in which institutions and a coordinating agency collaborate on survey operations.
An independent deployment approach involves participating universities providing the independent agency with a list of all students in the target sample at their institution, including students' contact details. After receiving institutions' population lists, the independent agency would identify the target population, which could be either a census or a sample of students, and invite students to participate in the survey. Responses would then be returned directly to the independent agency for analysis.
A devolved approach involves participating universities supplying the independent agency with a de-identified student list that excludes student contact details. A sample of students would be drawn, online survey links would be allocated to student records, and this list would be sent back to universities, which would then merge in the student contact details. Under a devolved approach, universities manage the deployment of the survey by sending invitations to sampled students and following up with non-respondents. Responses are provided directly to the independent agency for analysis.
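For illustration only, a minimal sketch of the data flow under a devolved deployment is shown below, assuming a simple random sample and generic field names (student_id, survey_url, email); the names and the survey URL pattern are hypothetical and do not represent any actual agency system.

```python
# Minimal sketch of the devolved deployment data flow described above.
# All field names and the survey URL pattern are illustrative only.
import random
import secrets

def draw_sample_and_allocate_links(deidentified_records, sample_size):
    """Agency side: draw a simple random sample of de-identified records
    and attach a unique online survey link to each sampled record."""
    sampled = random.sample(deidentified_records, sample_size)
    for record in sampled:
        record["survey_url"] = "https://survey.example.org/r/" + secrets.token_urlsafe(8)
    return sampled

def merge_contact_details(sampled_records, contact_lookup):
    """University side: merge contact details back in, keyed on the
    institution's own student identifier, before sending invitations."""
    for record in sampled_records:
        record["email"] = contact_lookup[record["student_id"]]
    return sampled_records
```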
Both deployment approaches have benefits and limitations. A devolved approach has benefits in that it can accommodate the needs and circumstances of a diverse array of universities. On the other hand, the fact that universities are primarily responsible for collecting data about their own performance can be seen as a conflict of interest, leading to perceptions that universities may ‘game’ the system. Given the stakes involved and the uses to which the data collected from the new instruments will be put, on balance an independent approach is favoured, since this will promote validity, consistency and efficiency.
A key issue for any independently deployed survey is privacy and the responsibility of universities (and the Department) to preserve the confidentiality of student information they hold. Universities may be required to amend their agreements with students to permit disclosure of personal information to third parties for the purposes of conducting surveys. Providing privacy laws are satisfied in the development of an instrument, this approach has to date received broad support from the higher education sector, as measured through consultation in the development of the UES.
While the deployment approach is a significant consideration in terms of administration of a new survey instrument, there are other issues which should be considered. These include, but are not limited to, the administration method (online, telephone, paper-based), context and timing. These issues will be considered in the context of individual instruments.
Questions for Discussion
What concerns arise from an independent deployment method?
What are the obstacles for universities in providing student details (such as email addresses, first names and phone numbers) to an independent third party?
Would universities agree to change their privacy agreements with their students to permit disclosure of personal information to third parties for the purposes of undertaking surveys?
What are the other important issues associated with administration of survey instruments?
Student selection
The selection of students for the new performance measurement surveys is an issue which has raised a number of concerns. The burden that the existing range of survey instruments places on both students and university resources is foremost in the minds of universities. A balance therefore needs to be struck between the need to collect new data for performance reporting and quality assurance, and the demands placed on students and universities.
The surveys could be run as a census of all students in scope or by administering the survey to a sample of the students in scope. Deciding between a census and a sample is a complex process that necessarily takes into account many technical, practical and contextual factors.
Two major issues are non-response biases and general confidence in the precision of results, particularly at the sub-institutional level. The advantage of a sample survey approach is that a structured sample can be constructed that deliberately focuses on target groups for which it is necessary to generate meaningful results (for example at the course level or for particular demographic groups) and this may assist in overcoming non-response biases. A sample survey approach would require a relatively sophisticated sampling frame to give adequate coverage across fields of education and demographic characteristics. This process would be simplified if the Higher Education Information Management System (HEIMS) database could be used to construct the sample frame, given that it already records detailed information on student characteristics. Standard techniques to measure the precision of sample survey results could be systematically applied across all results, for example, calculating confidence intervals or standard errors.
On the other hand, given the small student populations sometimes under consideration (for example, courses where only a small number of students are enrolled at a particular institution), sample sizes needed to provide confidence in survey results would approach the total population. The intention to publish data from the new performance measurement instruments on the MyUniversity website disaggregated to subject level may influence the decision on whether to conduct a census or survey since a sufficiently large number of responses will be required to ensure data are suitably robust and reliable. In this case, it may be preferable to continue on a ‘census’ basis where the whole population is approached to participate.
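By way of illustration only, the following sketch shows a standard approximate 95 per cent confidence interval for a survey proportion, with a finite population correction reflecting the small course-level populations discussed above; the figures used are invented.

```python
# Illustrative only: approximate 95% confidence interval for a proportion
# estimated from a sample survey, with a finite population correction.
import math

def proportion_ci(p_hat, n, population_size, z=1.96):
    """Return (lower, upper) bounds of an approximate 95% confidence interval."""
    fpc = math.sqrt((population_size - n) / (population_size - 1))  # finite population correction
    se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc                   # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# e.g. 120 of 150 sampled students satisfied, drawn from a course of 400 students (figures invented)
print(proportion_ci(p_hat=120 / 150, n=150, population_size=400))
```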
Other issues for consideration in deciding between a census or survey approach include:
Support by participating institutions;
The size and characteristics of the population;
Providing students with opportunities for feedback;
Relationship with other data collections, in particular other student surveys;
Analytical and reporting goals, in particular sub-group breakdowns;
Anticipated response rates and data yield;
Consistency and transparency across institutions;
Cost/efficiency of data collection processes; and
The availability of supplementary data for weighting and verification.
The method of student selection may vary between instruments, and regardless of whether a census or sample approach is used, proper statistical procedures will be used to evaluate the quality and level of response in the long term.
Questions for Discussion
What are key considerations in choosing between a sample or census approach to collection of performance data?
Central sampling of students
As discussed above, an important issue regarding the introduction of new surveys within the higher education sector is the perceived burden on university resources and the students who are required to participate.
One method proposed to reduce this burden is to use DEEWR's Higher Education Information Management System (HEIMS) data to better control student sampling and to pre-populate survey questions with stored student demographic data where appropriate.
Using HEIMS data in this way could potentially improve random sampling and avoid oversampling of students invited to participate in surveys, while also reducing the number of questions students are required to answer per survey by pre-filling or skipping questions where data is already available. In addition, involving the Department at this level of the survey process could improve perceptions of the overall integrity of surveys by making clear that samples are independently constructed.
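As an illustration only, the sketch below shows how a stratified random sample might be drawn centrally from an enrolment frame; the stratification variables shown (field of education and a low-SES flag) are hypothetical and are not actual HEIMS data elements.

```python
# Illustrative sketch of centrally drawn, stratified random sampling from an
# enrolment frame. Field names and strata are hypothetical.
import random
from collections import defaultdict

def stratified_sample(frame, strata_keys, per_stratum):
    """Group the frame into strata and draw a fixed-size random sample from each."""
    strata = defaultdict(list)
    for student in frame:
        strata[tuple(student[k] for k in strata_keys)].append(student)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

# e.g. stratify by field of education and low-SES flag, 50 students per stratum:
# sample = stratified_sample(frame, ["field_of_education", "low_ses"], per_stratum=50)
```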
Note that the Higher Education Support Act places restrictions on the use of HEIMS data. The Department is therefore investigating options for the use of this data in the context of individual performance measurement instruments and of the suite of instruments as a whole.
Questions for Discussion
What are the advantages and disadvantages of central sampling of students?
Uses of data
The collection of data from the new performance measurement instruments will be of major benefit to universities for continuous improvement and to assure the quality of teaching and learning in the sector. The Government has also indicated that, subject to their successful trial and implementation, it is intended that data from the new instruments will be published on the MyUniversity website.
While a number of uses are proposed for the new data collections, other uses may be discovered throughout the development of the new instruments. It will be important, therefore, to consider potential future uses of the data while being aware of the potential for misuse of the data.
By utilising the principles referred to in section 2.1 of this paper in the development of the instruments, the data available should be relevant, reliable, auditable, transparent and timely. These principles will also provide guidance as to the appropriate uses of the data. Further, the range of consultations being undertaken in the development of the new instruments will include roundtables where stakeholders will be able to discuss how the results from the new instruments can be used while raising any concerns regarding the future use of the data. In addition, it may be appropriate to establish codes of practice that guide appropriate uses and interpretation of performance information.
Continuous improvement
It will be important that the data collected from the performance measurement instruments is useful for universities, so that they can implement processes and policies for continuous improvement and maintain a high level of quality of teaching and learning.
To ensure the performance measurement instruments collect data that is useful and relevant to universities, the instruments are being developed with significant sector consultation. Stakeholders will be invited to provide feedback throughout the development of the new instruments to allow these to take account of their data and measurement needs.
Using the student life cycle model, consideration needs to be given to what information can and should be collected from the new instruments given their implementation at different stages of the life cycle, and how this information will assist to assure the quality of teaching and learning in Australian universities.
MyUniversity
It is expected that in future releases, the MyUniversity website may include performance data from the new performance measurement instruments. This would provide prospective students with additional information with which to assess the performance and quality of different institutions in the three performance categories. How the data will be presented on the MyUniversity website will be a consideration once the instruments have been tested and the level of data analysis is known.
Information regarding the use of performance data on the MyUniversity website will be made available throughout the development process for the instruments and the website itself.
Another issue that arises in consideration of the MyUniversity website is the level of reporting. A major purpose of the website is to inform student choice about courses and subjects. In this environment, more detailed reporting is likely to be desired, for example, at the field of education level. There is likely to be a trade-off between collecting and reporting data from the new performance measurement instruments at a finer level of disaggregation and adding to the complexity and burden of reporting.
Questions for Discussion
What are appropriate uses of the data collected from the new performance measurement instruments?
Intersection of existing and new instruments
The development of the University Experience Survey (UES) has raised the issue of whether the new instruments should be focused instruments for the purposes of performance reporting, or whether they could potentially be expanded to replace existing surveys and institution/course specific questionnaires. For example, what is the potential overlap between the newly developed University Experience Survey and the Course Experience Questionnaire in measuring student experience?
This will be a consideration in the development of all new instruments to ensure there is balance between the additional burden on both students and universities of the new instruments and ensuring they are able to capture targeted and purposeful information. Further, there needs to be consideration of the data needs of individual universities and how these differ across the sector.
A key issue in considering the overlap between the new instruments and existing survey instruments is the uses to which performance measurement data will be put as discussed above. It is expected students will benefit from the availability of new data via the MyUniversity website. The new performance measurement instruments will potentially be used to enhance continuous improvement processes within universities. There is also the potential for international benchmarking.
A major dilemma is balancing the need for broad national-level data against the requirement for more detailed data to suit universities' diverse needs and missions. Satisfying both of these goals raises the important issue of costs and resource burden. One suggested solution for the UES is that there could be a core set of items which would be asked of students at all universities, and an optional set of non-core items which universities could select to suit their individual requirements.
Ultimately, universities will decide in which instruments they participate, and this will hinge on a range of factors, not limited to the type of data collected. By considering the range of existing surveys, what data are most useful to universities, and what additional data universities would like to collect from the new performance measurement instruments, the development of the new instruments has the potential to make this decision by universities considerably easier.
Questions for Discussion
Are there other issues that arise when considering the overlap of existing and new instruments?
...
From: Development of Performance Measurement Instruments in Higher Education, Department of Education, Employment and Workplace Relations, 9 December 2011
Review of the Australian Graduate Survey
Discussion Paper
December 2011
Table of Contents
1. Introduction
1.1. Policy context
1.2. Consultation
2. Principles and the student life cycle framework
2.1. Principles
2.2. Student life cycle framework
3. Strategic position, role and purpose of the AGS
3.1. Overview of the AGS
3.2. Role and purpose of the CEQ
3.3. Role and purpose of the GDS
3.4. Future strategic position of the AGS
4. Administration issues
4.1. Administrative model
4.2. Timeliness
4.3. Funding
5. Survey methodology and data quality issues
5.1. Methodology and standardisation
5.2. Data quality
6. Aspects of student experience
7. Next steps
Appendix 1 – References
Appendix 2 – How to make a submission
Appendix 3 – Current AGS survey instrument
...
Consultation
The Australian Graduate Survey (AGS) is a national survey of newly qualified higher education graduates, conducted annually by Graduate Careers Australia (GCA). A strengthened AGS is part of the suite of performance measurement instruments that were announced as part of the AQHE initiative. The Department of Education, Employment and Workplace Relations (DEEWR) is working with GCA and the higher education sector to review the AGS. The review is examining the strategic position of the survey, and aims to improve the survey content, data collection methods and timeliness of reporting. The review is also considering how to better capture aspects of student experience for external, Indigenous, international and low socio-economic status students.
Consultation for the AQHE initiative includes the establishment of an AQHE Reference Group to advise on the cohesiveness of the three instruments, and the development of an overarching discussion paper, Development of Performance Measurement Instruments in Higher Education. In addition, the AQHE Reference Group will assist in the development of discussion papers on each of the instruments. Consultations and roundtable discussions with universities, business and students will also be held later in 2011 and in 2012.
The Department has prepared this discussion paper based on consultation with and advice from the AQHE Reference Group. The paper raises issues and options for the future of the AGS, with the aim of canvassing views from universities and other stakeholders in the sector. Information on how to contribute to the process can be found below. ...
Future strategic position of the AGS
The Government announced as part of the Advancing Quality in Higher Education initiative the development of a suite of performance indicators for the higher education sector (a summary of the new indicators can be found in the Development of Performance Measurement Instruments in Higher Education discussion paper). These indicators will provide greatly enhanced information on university performance. At the same time, the sector is moving towards a student centred funding model. To assist students in making informed decisions about their tertiary education, it is intended that the MyUniversity website will from 2013 include detailed results from the new performance indicators. Higher education sector stakeholders, including institutions, students and Government, will therefore be responding dynamically to a greater range of information sources than has previously been available. Importantly in this respect, institutional level and institution by field of education level data will be made public. While this does not accord with current AGS practice, it is consistent with approaches to publishing performance information previously undertaken by the Department.1
This changed environment presents a major challenge to the ongoing relevance and strategic position of the AGS. From being the prime source of nationally benchmarked data on university performance, the AGS will become one of several available data sources. In this context, the ongoing role and value of the AGS needs to be clearly articulated. The AGS may need to be modified to enable the survey to establish a coherent place among the range of new indicators, and to ensure it continues to meet the evolving needs of higher education sector stakeholders.
Given the increasing number of surveys in which university students are being asked to participate, and for which universities are being asked to provide administrative support, the additional value offered by the AGS needs to be clearly articulated. One option to reduce cost and respondent burden would be to move from the current census basis of the AGS, where all eligible students are invited to participate, to a survey sample. This question also has ramifications for data quality, as discussed below.
Consideration should also be given as to whether the CEQ should move to surveying students, rather than graduates, in line with the other performance indicators being developed. The CEQ was originally developed and tested for use with undergraduate students in the United Kingdom. In Australia, however, it has always been administered to graduates, which may lead respondents to focus on overall course experience (as intended), rather than specific subjects or instructors. Surveying graduates has also allowed the CEQ to be administered simultaneously with the GDS. Conceptually, however, the CEQ measures satisfaction across the whole of the student lifecycle, and there is no inherent reason why this need take place after graduation.
The most notable challenge to the ongoing relevance of the CEQ comes from the new University Experience Survey (UES). The UES will gauge student attitudes towards a number of aspects of their university course, initially at the end of their first year and potentially in their final year of study. The UES will measure aspects of students' university experience associated with high-level learning outcomes such as teaching and support, student engagement and educational development. While not identical to the information garnered by the CEQ, the UES will provide an alternative measure of student satisfaction and course experience perceptions across the student lifecycle. Consideration needs to be given to the value of continuing the CEQ as an additional survey instrument.
Information provided by the GDS will not be replicated by any of the new performance indicators. By its nature, the GDS is a measure of a university’s contribution to skill formation in relation to labour market outcomes and can only be administered at the end of the student lifecycle. Information on graduate outcomes will continue to be of value to the sector. Nonetheless, consideration should be given as to whether the GDS as currently configured is appropriate for the needs of the sector in the future.
Questions for Discussion
Is joint administration of the GDS and CEQ under the AGS still appropriate?
Will the GDS and CEQ adequately meet future needs for information in the student driven environment?
Should the basis of the AGS be modified to improve fit with other indicators or to reduce student burden? Would a survey sample be a more appropriate option? What are the implications of the development of the UES for the CEQ?
...
Funding
The AGS is primarily funded by a direct grant from DEEWR to GCA, made under the Higher Education Support Act. In 2011, this grant was around $660,000. Funding is also sourced from Universities Australia and from subscriptions paid by individual universities. In addition, universities provide substantial in-kind funding by administering the survey instrument. In recent years, GCA has incurred financial losses in administering the AGS, and additional funding would likely need to be found if current administrative arrangements are to continue. ...
Questions for Discussion
Is the current partially decentralised mode of delivering the AGS still appropriate?
How can the timeliness of AGS reporting be improved?
Are current funding arrangements for the AGS appropriate? What alternative funding arrangements should be considered?
...
Data quality
Consideration should also be given to the broader data quality issues. Conceptually, the AGS currently operates on a census basis, in that all eligible graduates are invited to respond. GCA procedures mandate that institutions achieve at least a 50 per cent overall response rate. For the 2010 AGS the national response rate was 53 per cent for the CEQ and 57 per cent for the GDS.1 This is a high response rate compared with other surveys of university students, but is still low enough to raise questions as to data reliability. The two main issues are non-response biases and general confidence in the precision of results, particularly at the sub-institutional level. ...
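One standard response to differential non-response, noted earlier in relation to the availability of supplementary data for weighting, is post-stratification. The sketch below is illustrative only, with invented response figures, and does not describe current GCA practice.

```python
# Illustrative only: simple post-stratification weights that re-align respondents
# to known population shares, a common way of reducing non-response bias.
def post_stratification_weights(population_counts, respondent_counts):
    """Weight each stratum by (population share) / (respondent share)."""
    pop_total = sum(population_counts.values())
    resp_total = sum(respondent_counts.values())
    return {
        stratum: (population_counts[stratum] / pop_total)
                 / (respondent_counts[stratum] / resp_total)
        for stratum in population_counts
    }

# e.g. external students respond at lower rates, so they receive a weight above 1 (figures invented)
print(post_stratification_weights(
    population_counts={"internal": 8000, "external": 2000},
    respondent_counts={"internal": 4800, "external": 900},
))
```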
Questions for Discussion
Will AGS data continue to be reliable enough to meet the needs of the sector in the future? How can data reliability best be improved?
Would moving the AGS to a centralised administrative model improve confidence in results?
Would moving the AGS to a sample survey basis improve data quality?
Aspects of student experience
...
Questions for Discussion
Does the AGS adequately measure the diversity of the graduate population and how might it be strengthened in this regard?
1 Graduate Careers Australia, 2011(a), p. 2; Graduate Careers Australia, 2011(b), p. 3.
2 Graduate Careers Australia, 2006, pp. 61-92.
1 Department of Education, Training and Youth Affairs (1998); Department of Education, Science and Training (2001).
From: Review of the Australian Graduate Survey, Department of Education, Employment and Workplace Relations, 9 December 2011
Assessment of Generic Skills
Discussion Paper
December 2011
Table of Contents
1. Introduction
1.1. Policy context
1.2. Consultation
2. Principles and the student life cycle framework
2.1. Principles
2.2. Student life cycle framework
3. Purpose
3.1. Direct Assessment of learning outcomes
3.2. Uses
4. AHELO – progress report
4.1. Outline
4.2. Generic Skills
4.3. Progress
4.4. Australia’s participation
5. Issues
5.1. Quality assurance framework
5.2. Discipline-specific assessments
5.3. Measurement
5.4. Participation
6. Next steps
...
Purpose
Direct Assessment of learning outcomes
Direct assessment of learning outcomes represents the ‘holy grail’ of educational measurement. Objective data on student outcomes provides direct evidence that higher education is meeting economic, social and community needs. Knowledge of what students have learned and achieved, and whether they have attained the expected outcomes of their degrees, provides assurance about the quality of higher education.
External assessment of students' cognitive learning outcomes, at least in the higher education environment, is rare and has to date been difficult to achieve. In the schools sector, the Programme for International Student Assessment (PISA) is probably the most notable international example of measuring reading, mathematical and scientific literacy outcomes. Alternative measures of learning outcomes in the higher education sector, such as employment and further study, are thought to be problematic because they may be confounded by factors such as the influence of field of education and differences in state/regional labour markets. In the absence of robust direct assessment of outcomes, reliance is frequently placed instead on student self-reports, for example, as measured through the Course Experience Questionnaire (CEQ) generic skills scale, which records student perceptions of their achievement of generic skills.
The Bradley Review of Higher Education argued that, “Australia must enhance its capacity to demonstrate outcomes and standards in higher education if it is to remain internationally competitive and implement a demand-driven funding model.” (DEEWR, 2008, p. 128). As part of the new quality assurance framework, Recommendation 23 of the Bradley Review of Higher Education proposed:
“That the Australian Government commission and appropriately fund work on …
a set of indicators and instruments to directly assess and compare learning outcomes; and
a set of formal statements of academic standards by discipline along with processes for applying those standards.”
Direct assessment of learning outcomes has many uses and benefits including providing assurance about the quality of higher education, encouraging continuous improvement among universities, meeting employer needs for more skilled graduates and informing student choice.
Quality assurance
External assessment and reporting of the attainment of generic skills provides assurance about the quality of graduates from the higher education system. With significant public and student investment in higher education, the community is entitled to understand that graduates have acquired the skills expected of them when they have completed their degree. The performance indicator framework proposes that the Collegiate Learning Assessment, or some variant of the instrument, be developed for use as an indicator of the acquisition of generic skills. A key principle in the design of performance measures is that they be ‘fit for purpose’, and it is intended that an instrument assessing generic skills be designed that is capable of being used for performance reporting.
Continuous improvement
Assessment of learning outcomes offers the prospect of a virtuous circle whereby assessment and reporting inform improved teaching and learning, in turn leading to improved assessment and reporting of learning outcomes. For example, the Collegiate Learning Assessment offers assessment tools as well as resources to support improvement of curriculum and pedagogy to promote the development of generic skills. In administering the Collegiate Learning Assessment, the Council for Aid to Education (CAE) also conducts professional development activities that train staff in creating better teaching tools (and classroom-level measurement tools) aimed at enhancing student acquisition of generic skills.
Arguably, the process of teaching and learning generic skills is more effective if undertaken at discipline rather than university level. The issue of the appropriateness of university and/or discipline-specific assessments will be addressed in more detail below. In part, the focus on discipline follows the Bradley Review's recommendation that more work be undertaken on academic standards by discipline and on processes for applying those standards.
Employer needs
To be successful in the workplace, graduates must acquire generic skills that enable them to fully utilise their discipline-specific knowledge and technical capabilities. As suggested by the Business Council of Australia:
“The challenges involved in adapting to new and changing workplaces also require effective generic skills. Generic skills including communication, teamwork, problem solving, critical thinking, technology and organisational skills have become increasingly important in all workplaces.” (BCA, 2011, p.8)
Reliable empirical studies of employer needs and satisfaction with graduates are few and far between, which is both unfortunate and somewhat surprising given the strongly vocational orientation of much of Australian higher education. An earlier study of employers found that the skills in which new graduates appear most deficient are the generic skills of problem solving, oral business communication skills and interpersonal skills with other staff (DETYA, 2000, p.22). The measure of skill deficiencies used in this study was the gap between employer ratings of the importance of skills and their ratings of graduate abilities in those skills. A study conducted by the University of South Australia gave broadly similar findings about employer demand for graduate skills and capabilities (UniSA, 2008).
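A minimal worked example of the 'gap' measure used in that study is sketched below; the ratings are invented and serve only to show the arithmetic.

```python
# Illustrative only: the skill-deficiency 'gap' is the difference between employer
# ratings of a skill's importance and of graduates' ability in that skill.
def skill_gaps(importance_ratings, ability_ratings):
    """Positive gaps indicate skills where graduates appear most deficient."""
    return {
        skill: importance_ratings[skill] - ability_ratings[skill]
        for skill in importance_ratings
    }

print(skill_gaps(
    importance_ratings={"problem solving": 4.5, "oral communication": 4.3, "interpersonal skills": 4.2},
    ability_ratings={"problem solving": 3.4, "oral communication": 3.3, "interpersonal skills": 3.5},
))
```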
Informing student choice
Greater transparency will inform student choice in a demand-driven funding model of higher education. The Collegiate Learning Assessment is part of a suite of performance measurement instruments designed to improve transparency in university performance. Subject to the successful development and trial of the Collegiate Learning Assessment, it is intended that universities' results on this performance indicator will be published on the MyUniversity website from 2013 onwards.
Another issue that arises in consideration of the MyUniversity website is the level of reporting. A major purpose of the website is to inform student choice about courses and subjects. In this environment, it would be appropriate to report information at the discipline/field of education level as well as the institution level. This is another factor that impacts on the development of an appropriate assessment of generic skills. The issue of the assessment of discipline-specific generic skills is discussed in more detail below.
Questions for Discussion
...
Are there other uses of the assessment of generic skills?
Issues
...Questions for Discussion
...
Which criteria should guide the inclusion of discipline specific assessments in the development of a broader assessment of generic skills?
Are there other criteria, not listed above, which need to be considered?
Questions for Discussion
...
What factors should guide the design of performance measurement instruments to assess generic skills?
Questions for Discussion
Is value-add an appropriate measure of generic skills?
Is it necessary to adjust measures of generic skills for entry intake and how should this be done?
...
Are there other more appropriate measures of generic skills?
Questions for Discussion
...
What would be an appropriate measure of generic skills for reporting university performance?
Questions for Discussion
...
What level of student participation is desirable and for what purposes?
What design features or incentives would encourage student participation?
From: Assessment of Generic Skills, Department of Education, Employment and Workplace Relations, 9 December 2011