Monday, October 18, 2010

The ERA era of measuring research output in Australia

Greetings from "RESEARCH ASSESSMENT AND PUBLICATION METRICS - THE BEGINNING OR END OF AN ERA?" at the Australian National University in Canberra. The question three speakers will answer is: "How is research excellence measured and evaluated? What are its key signs and indicators?" This is in the light of the Australian Research Council Excellence in Research for Australia scheme (ERA).

Colin Steele amused the audience with the story that the UK medical council showed that papers with a colon ":" in the title received a higher ranking, and that as a result more authors put colons in their titles (Colin made the obvious pun about colons and medicine).

Andrew Calder, Director, Research Performance and Analysis, ARC, then spoke. He pointed out that no single measure would be used for rating research; instead several measures may be combined. This suggests a logical flaw in the argument: if none of the individual measures is a good one, combining them does not necessarily give a better result, particularly if they share the same biases. He explained that hundreds of experts will carry out reviews, rather than relying on an automated metric. As discussed later, I suggest an automated metric is likely to be more accurate and more accountable, and the cost of carrying out the manual evaluations must be high. Ranked conferences and journals are used, with the proportion varying by discipline (conferences are more important in IT, for example).
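
To make the point about combining measures concrete, here is a minimal simulation (my own sketch, with invented numbers, assuming five indicators that all share the same bias, such as a preference for fashionable topics):

    # A small simulation: averaging several indicators removes their independent
    # noise but not a bias they share, so the combined score is only slightly
    # better than a single indicator. All numbers here are invented.
    import random
    import statistics  # statistics.correlation needs Python 3.10 or later

    random.seed(1)
    n = 1000
    true_quality = [random.gauss(0, 1) for _ in range(n)]
    shared_bias = [random.gauss(0, 1) for _ in range(n)]

    def indicator(quality, bias, noise_sd=0.5):
        """One noisy indicator mixing real quality with the shared bias."""
        return [q + b + random.gauss(0, noise_sd) for q, b in zip(quality, bias)]

    measures = [indicator(true_quality, shared_bias) for _ in range(5)]
    combined = [statistics.mean(vals) for vals in zip(*measures)]

    print("single measure vs quality:  ",
          round(statistics.correlation(measures[0], true_quality), 2))
    print("combined measure vs quality:",
          round(statistics.correlation(combined, true_quality), 2))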

John Wellard, Director, ANU Research Office, pointed out that the ERA is likely to be used by the federal government for measuring the performance of universities and deciding their funding. One flaw with the process John described was the use of the term "employed". As an Adjunct Lecturer I am not "employed" by the ANU, although I am regarded as a staff member. Clearly if I write a paper it should count as an ANU paper (the ANU staff keep asking me for papers to add to their census list).

Professor Andrew Cockburn, Director, College of Medicine, Biology and Environment, ANU, pointed out flaws in many research rankings. As an example he discussed the "H-index" (also called the "H-score"). Such scores give distinguished pioneers in a field a much lower ranking than later, more prolific but less distinguished authors. He also pointed out that those who benefit from the rankings have a strong incentive to publish in a way that maximises the measures. As a result the ranking systems will need to be continually changed, removing the possibility of long-term measures. In addition, papers are being written without a "methods" section, with a resulting loss of knowledge.
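
As a rough illustration of the pioneer-versus-prolific point (my own sketch, with invented citation counts, not figures from the talk): an author's H-index is the largest h such that h of their papers each have at least h citations, so a pioneer with a few very highly cited papers can score far below a prolific author with many moderately cited ones.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Hypothetical citation records, invented for illustration.
    pioneer = [900, 450, 300, 120, 80]   # few papers, very highly cited
    prolific = [25] * 30                 # many papers, moderately cited

    print(h_index(pioneer), sum(pioneer))    # 5 1850
    print(h_index(prolific), sum(prolific))  # 25 750
    # The pioneer has more than twice the citations but an H-index of 5,
    # while the prolific author scores 25.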

Professor John Houghton, Victoria University, talked about his extensive work on the economics of scholarly publishing, including the report for the UK's JISC, "Economic Implications of Alternative Scholarly Publishing Models", and the more recent "Costs and Benefits of Research Communication: The Dutch Situation" and "Costs and Benefits of Alternative Publishing Models: Denmark". He also mentioned related work in Germany and by Alma Swan in the UK. John went on to discuss the positive role of open access and how journal reference measures may inhibit it.

Dr Danny Kingsley, Manager, Scholarly Communications, ANU, pointed out that the previous speakers had already covered many of the issues. One problem she mentioned was that citations would be measured over only two years, so papers of long-term worth will not count. A second problem is that the measures are of the quality of journals, not of individual papers. A third problem Danny identified is that high rejection rates in "quality" journals result in rejected papers being recycled through lower-ranked journals, with duplicated reviewing effort.
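
To make the two-year problem concrete, here is a sketch with invented citation histories and a simple fixed window (an illustration only, not the exact ERA calculation):

    def citations_in_window(per_year_citations, window_years=2):
        """Count the citations received in the first window_years after publication."""
        return sum(per_year_citations[:window_years])

    # Citations per year since publication (year 1, year 2, year 3, ...),
    # invented for illustration.
    fashionable_paper = [30, 25, 5, 2, 1, 0]   # cited quickly, then forgotten
    slow_burner = [1, 2, 8, 20, 35, 40]        # a long-term classic

    for name, history in [("fashionable", fashionable_paper),
                          ("slow burner", slow_burner)]:
        print(name, citations_in_window(history), "of", sum(history))
    # fashionable 55 of 63
    # slow burner 3 of 106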

As a solution Danny suggested that the measures should support the processes each discipline uses, rather than forcing a rigid "publication" measure on them. She suggested "Web 2.0" measures. Danny also suggested that social impact should be measured, pointing out this might come from practitioners using research results, rather than from other researchers citing them. This all makes sense to me, as I don't publish research papers myself, but I do use them in industry publications. Therefore my work does not count towards the ANU's publication output, nor do my citations of that work. However, in terms of impact on the community in the IT discipline, my publications and the papers I cite probably have more effect than all the other IT papers published by ANU researchers, because my work is read by practitioners and government policy makers and implemented by them: directly by IT professionals writing computer programs and building hardware using the techniques I suggest, by companies and government agencies adopting them as policies, and through standards and laws.

Dr Claire Donovan, Lecturer in Sociology, ANU, talked about Research Impact – the Wider Dimension.

At that point there were supposed to be 40 minutes of questions and discussion. However, as the speakers ran over time, only four minutes were left for questions. This perhaps illustrated a problem with traditional university publishing: apart from the invitation to the event, with the names and topics of the speakers, the ANU published no materials as part of the process. Apart from my blog posting, there are no details available as a result of this event. In terms of impact, the event will therefore have little effect, as few people will be able to find details.

Some thoughts on the issue

The ARC has produced ranked journal and conference lists. Ranked journals require an ISSN (this is a problem for conferences). Andrew Calder pointed out that a publication in a "B" ranked journal may be better for the author than one in an "A" ranked journal, if it results in more local citations.

I did a quick search of the list for publications I am familiar with. Curiously, I could find only one of the hundred or so volumes of the "Conferences in Research and Practice in Information Technology". The entries I did find were:

17766: Journal of Research and Practice in Information Technology, rank B, FoR 08 Information and Computing Sciences, ISSNs 1443-458X and 0004-8917

19280: Australasian Journal of Information Systems, rank B, FoR 0806 Information Systems and 1702 Cognitive Science, ISSNs 1449-8618, 1326-2238 and 1039-7841

42358: ACS/IEEE International Conference on Computer Systems and Applications (AICCSA), rank C, FoR 08 Information and Computing Sciences and 12 Built Environment and Design

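Anyone wanting to repeat this kind of check can do it with a few lines of code once the list is exported to CSV. The file name and column names below ("era_journal_list.csv", "Title", "Rank") are my assumptions for illustration, not the actual ARC download format:

    # Sketch: searching an exported CSV copy of the ranked journal list by title.
    # Adjust the file name and column names to match the file actually downloaded.
    import csv

    def find_titles(csv_path, keyword):
        """Return rows whose title contains the keyword (case-insensitive)."""
        matches = []
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if keyword.lower() in row.get("Title", "").lower():
                    matches.append(row)
        return matches

    for row in find_titles("era_journal_list.csv", "information technology"):
        print(row.get("Title"), row.get("Rank"))
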
In my view the answer to ranking papers is reasonably obvious and along the lines Danny suggested. As research publishing moves online it will evolve to include social networking techniques, which can rank people based on peer assessment. Essentially the current publication metrics are a crude form of such rankings, but these can be improved, refined and made much cheaper and more auditable. There is now a credible body of research literature on this topic, although it is unknown to all but a few IT researchers. An example of how to do this is Soo Ling Lim's work, reported at the ANU on Thursday: "Using Social Networks to Identify and Prioritise Software Project Stakeholders".
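
As a rough sketch of the kind of peer-assessment ranking I have in mind (my own illustration, not the method used in Lim's work): if researchers endorse each other's work, through citations, reviews or recommendations, a PageRank-style score over the endorsement graph ranks people by the standing of those who endorse them, and the whole calculation is automatic, cheap and auditable.

    # Minimal sketch of a PageRank-style ranking over a peer-endorsement graph.
    # The graph, damping factor and iteration count are made-up assumptions.
    def pagerank(graph, damping=0.85, iterations=50):
        """graph maps each person to the list of people they endorse."""
        people = list(graph)
        n = len(people)
        rank = {p: 1.0 / n for p in people}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in people}
            for person, endorsed in graph.items():
                if endorsed:
                    share = damping * rank[person] / len(endorsed)
                    for target in endorsed:
                        new_rank[target] += share
                else:
                    # Someone who endorses no one spreads their weight evenly.
                    for target in people:
                        new_rank[target] += damping * rank[person] / n
            rank = new_rank
        return rank

    # Hypothetical endorsement graph: who endorses whose work.
    endorsements = {
        "alice": ["bob", "carol"],
        "bob": ["carol"],
        "carol": ["alice"],
        "dave": ["carol"],
    }

    for person, score in sorted(pagerank(endorsements).items(),
                                key=lambda item: -item[1]):
        print(f"{person}: {score:.3f}")

The point of the iteration is that an endorsement from someone who is themselves highly ranked counts for more than one from an unknown, which is the refinement that simple publication and citation counts lack.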

1 comment:

  1. The transcripts of the presentations are now available:

    1. Overview: Research Excellence Evaluation, Colin Steele, Emeritus Fellow, ANU.

    2. The Australian Research Council -ERA National Perspectives, Andrew Calder, Director, Research Performance and Analysis, ARC.

    3. The ANU Experience, John Wellard, Director ANU Research Office.

    4. Perverse and Beneficial Outcomes From Research Assessment, Professor Andrew Cockburn, Director, College of Medicine, Biology and Environment, ANU.

    5. Scholarly Publishing Economics: an International Research Perspective, Professor John Houghton, Victoria University.

    6. Future Publishing Metrics – The Way Forward?, Dr Danny Kingsley, Manager Scholarly Communications, ANU.

    7. Research Impact – the Wider Dimension, Dr Claire Donovan, Lecturer in Sociology, ANU.

    Also Colin Steele has pointed out that the event "RESEARCH ASSESSMENT AND PUBLICATION METRICS - THE BEGINNING OR END OF AN ERA?", was organised by the ANU Emeritus Faculty, not the university overall, as I suggested.

    ReplyDelete