ERA and the Ranking of Australian Humanities Journals

By Paul Genoni and Gaby Haddow

© all rights reserved. AHR is published in PDF and Print-on-Demand format by ANU E Press


In Australian Humanities Review 45 Guy Redden draws upon his experience with the Research Assessment Exercise (RAE) in the UK to warn Australian researchers of the dangers posed by similar methods of evaluation that may be introduced under the banner of Excellence in Research for Australia (ERA). Redden is concerned with the tendency of emerging forms of research evaluation to privilege a small number of ‘high ranking’ journals, and with the distorting effect this has on research communication as authors obsessively target these journals. This in turn results in research funding being concentrated on a small number of institutions and research units that are (predictably) assessed as high-achievers.

It is the intention of this paper to focus more closely on journal ranking as the mechanism that will be central to the evaluation of Australian research. Although the link between research evaluation and journal ranking has been at the heart of recent developments in accountability for the Australian research sector, journal ranking has remained something of a dormant issue for Australian humanities researchers. Despite some dissatisfaction with aspects of the existing form of evaluation and reward—in particular the sense that the so-called ‘quantum’ was designed with science-based models of scholarship in mind—humanities scholars have generally continued to research and publish secure in the knowledge that all refereed journal articles were rewarded equally. However, as research evaluation narrows in ways that focus reward on a small number of elite journals, issues related to the ranking of journals will have far-reaching implications for the manner in which humanities research is valued and rewarded.

It is apparent that journal ranking creates particular problems for the humanities. One of the critical issues is the ranking of ‘local journals’—that is, journals with a local, regional or national focus and readership—in a system which is designed to establish international benchmarks for ‘impact’ or ‘quality’. The purpose of this paper is therefore to investigate aspects of the journal ranking undertaken for ERA to date, with an emphasis on the ranking of Australian local journals.

Journal ranking
Although journal ranking is novel to many Australian researchers, it has a lengthy history in information science as an aid to selecting journals for academic library collections. The most established method of ranking depends upon using ‘bibliometrics’ to measure the volume and pattern of citations given to journals. The volume of citations is said to be an indicator of the impact of a journal within its discipline. The most frequently cited metric used in this regard is the Thomson Scientific Journal Impact Factor (JIF).1

The concept of impact is important to understanding the challenges faced when ranking humanities journals. As most researchers understand, impact is essentially a measure of the number of times a typical article in a specified journal has been cited over a preceding assessment period (two years in the case of the JIF). While the JIF is widely used, there is considerable dispute as to exactly what the metric means with regard to a journal’s status—in particular, the reliability of the JIF as an indicator of ‘quality’ as opposed to impact.
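For readers unfamiliar with the calculation, the standard two-year JIF can be sketched as follows (this is the published Thomson Scientific definition in outline; it is not a formula specified anywhere in the ERA documentation):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

A journal whose articles are rarely cited within two years of publication, a common pattern in humanities scholarship, will therefore record a very low JIF regardless of the quality of its content.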

What is generally agreed is that the usefulness of the JIF and other citation-based metrics differs considerably between disciplines, and that they are of least value in the humanities. The reasons why this is the case have long been understood and discussed, and do not require repeating here in detail. Suffice to say, citation-based metrics are considered to have validity in the sciences, but far less so—to the point of being almost worthless—in the humanities, where citations are a substantially inferior predictor of either impact or quality. Indeed, such is the incompatibility of the humanities with citation metrics that a meaningful JIF cannot be calculated for many humanities journals.

Given the recognised problems with the JIF (Browman and Stergiou; Cameron; Monastersky) and the ongoing search for a metric that can be usefully applied in all disciplines, various other methods have been proposed for the ranking of journals. A number of these (for example, the journal diffusion factor, the g-index and the h-index) are also based on citations. These ‘new’ measures have been the focus of a substantial amount of research, which continues to indicate their inadequacy in reflecting scholarly communication in the humanities.

Another aspect of citation-based evaluations causing concern is their bias in favour of English-language journals published in North America and Western Europe. Journals originating in middle-ranking countries in terms of research production—such as Australia—are demonstrably under-represented in the major citation indexes (Bordons, Fernandez and Gomez). This has led to a small body of research assessing the ranking of Australian journals (Butler), and of Australian humanities and social science journals in particular (East; Genoni, Haddow and Dumbell; Haddow; Royle).

Ranking of local journals in Europe
Since its introduction in 1986 the RAE has used peer review of individual articles as the preferred means of assessment. This article-by-article assessment has now been abandoned because of the scale of the work involved, and the UK is revisiting the possible use of journal ranking as a means of evaluating research. This is one of a range of likely changes that will revamp the RAE in ways that will bring it closer to the Australian quantum in shifting from an outputs focus to a broader assessment of research quality. A 2006 report prepared for the Arts and Humanities Research Council (AHRC) and the Higher Education Funding Council for England (HEFCE) nominated ‘significantly lighter-touch peer review and greater use of metrical data’ (8) as the way forward for the RAE.

The issue of ‘lighter-touch peer review’ was taken up in a subsequent British Academy report (2007). This report discussed the reasons why citation metrics are flawed when applied to the social sciences and humanities, although it suggested they should nonetheless form part of the future research assessment. The report introduced the possibility of subjective journal ranking as a supplement to citation metrics, and discussed the widely cited attempt to do so by the European Science Foundation (ESF).

The ESF was concerned about the outcome for humanities of an over-reliance on citation metrics for journal ranking. They noted that citation-based assessment would severely disadvantage smaller journals, which were nonetheless essential to communicating research with a local, regional or national focus. The important development in the ESF method is that journals are not ‘ranked’ by impact or quality, but rather categorised using a third principle based on a journal’s ‘profile’. The ESF described the categories as being ‘determined by a combination of characteristics related to scope and audience’.

Journals have different profiles, and their scope of audience and field vary. Papers in journals with wide international prestige are not automatically of higher quality than papers in journals which are known and read only in a very specialised field. Similarly, a paper published in the native language in a journal which has only local importance can lead to a strong impact for a certain type of research. (European Science Foundation)

In 2007 the ESF produced the European Reference Index for the Humanities (ERIH), which used discipline experts to categorise humanities journals from fifteen subject areas. The ERIH categorised journals as A, B or C.

A: high ranking international publications with a very strong reputation among researchers of the field in different countries, regularly cited all over the world.

B: standard international publications with a good reputation among researchers of the field in different countries.

C: research journals with an important local/regional significance in Europe, occasionally cited outside the publishing country though their main target group is the domestic academic community.

It can be argued that these descriptions, with their reference to reputation and citation, invoke both subjective and metric-driven forms of evaluation, although it should be noted that the methodology for categorising journals does not include specified citation metrics.

Despite the ESF’s attempt to reconceptualise the ranking of humanities journals, the ERIH attracted a substantially adverse response (Gorman; Howard). It was argued that devising categories in this way remains antithetical to the process of humanities scholarly communication, and that it ‘creates the hierarchies it claims only to be describing’ (Howard). Critics also point out that despite the claim that the categories indicate ‘scope and audience’ they will inevitably be used as a guide to quality, thereby directing the flow of papers away from the local journals in category C.

The British Academy report was critical of the methodology employed by the ESF, concluding that ‘the ERIH does not at present represent a viable way in which summary measures of peer reviewed publications can be constructed’ (35). The report recommended that further development should be undertaken on metrics that ‘reflect the distinctive nature of the humanities and social science research’, and that the HEFCE ‘explore whether there is scope to modify the Thomson Scientific indices to accommodate the special features of humanities and social science research’ (35). The report concluded that, ‘metrics should remain an adjunct to the research assessment (RAE) panel peer review process rather than a substitute’ (35).

The lack of acceptance of the ERIH process means that questions regarding the best means for ranking or categorising humanities journals are unresolved. On one hand there is widespread scepticism regarding the available citation-based metrics, and little understanding of how they might be modified in such a way that they are suitable to the humanities. On the other hand, there is a seemingly failed attempt in the ERIH to devise a method that categorises journals in a manner that acknowledges that many humanities titles—in particular local journals—will be severely disadvantaged by the use of international benchmarks of either impact or quality.

Journal ranking in Australia: the policy
As it became clear that journal ranking would be part of research evaluation in Australia, commentators interested in humanities and social science research expressed their concern. The most substantial report leading up to this period was produced for the Social Sciences and Humanities Research Council of Canada in 2004 (Archambault and Vignola-Gagne) and highlighted the problems these disciplines faced if required to use citation-based ranking. This conclusion was echoed by Australian commentators, with Claire Donovan of the Research Evaluation and Policy Project of the Australian National University warning in 2005 that:

any policy move to amplify the importance of citation-based indicators will place HASS [Humanities and Social Science] research at a disadvantage, a concern now elaborated further using insights gained from the specialist bibliometrics literature on citation indicators. (17)

And in 2006 Steele, Butler and Kingsley concluded that:

Thomson Scientific bibliometrics can still be powerful tools to supplement peer-review in certain science disciplines. The same cannot be said for the social sciences and humanities. (281)

Probably influenced by these arguments, the Australian Research Council (ARC) Consultation Paper on the proposed form of ERA, issued in June 2008, made it clear that although journal ranking was to be an integral part of research evaluation, it would not be metric driven. The ‘overall criterion’ to be used in determining a journal’s rank was described as the ‘quality of the papers’. What ERA has in common with the ERIH, therefore, is a rejection of metrics as the primary means by which the rankings are determined; unlike the ERIH, however, there is no suggestion that ERA ranking is anything other than a hierarchy determined by quality.

Four ‘tiers’ have been instituted for ERA ranking, with each tier incorporating an approximate percentage of the titles within a discipline. These are A* (top 5%); A (next 15%); B (next 30%); and C (next 50%).

What is notable are the descriptions of the characteristics of journals in each of the tiers. In the absence of a specified process for journal ranking, or of designated metrics, these descriptions are the sole guide to the ranking process. A* journals are described in the following terms:

Typically an A* journal would be one of the best in its field or subfield in which to publish and would typically cover the entire field/subfield.  Virtually all papers they publish will be of a very high quality.  These are journals where most of the work is important (it will really shape the field) and where researchers boast about getting accepted.  Acceptance rates would typically be low and the editorial board would be dominated by field leaders, including many from top institutions.

The emphasis on quality (‘one of the best’; ‘of very high quality’) is apparent. This appears to assist the humanities in that the requirements for an A* journal include no reference to the troubling citation metrics. Conversely, however, this description of the A* tier indicates the advantage of a metrical component, in that in the absence of metrics evaluation becomes largely (or even entirely) subjective. Those responsible for the ranking have little or no guidance in determining what constitutes ‘virtually all papers’; a ‘low’ acceptance rate (such figures are often unavailable); ‘field leaders’; or ‘top institutions’.

The reference to journals where ‘researchers boast about getting accepted’ is also a subjective oddity, not only because the thresholds required to induce boasting vary substantially between individuals, but because it highlights one of the criticisms made of the ERIH rankings: that ranking creates damaging divisions by assigning journals to ‘artificial’ tiers. Whereas previously authors may well have been satisfied (indeed proud) to have their work included in a peer-reviewed, well-regarded title, selected for its particular (local) focus and audience, they are now told that they have little to ‘boast about’ unless they are published in A* titles.

Some of the same problems are apparent in the description of the characteristics of Tier A journals.

The majority of papers in a Tier A journal will be of very high quality. Publishing in an A journal would enhance the author’s standing, showing they have real engagement with the global research community and that they have something to say about problems of some significance. Typical signs of an A journal are lowish acceptance rates and an editorial board which includes a reasonable fraction of well known researchers from top institutions.

In this case the inclusion of the phrase ‘would enhance the author’s standing’ suggests that publishing in any of the remaining 80% of more lowly ranked journals may be of no benefit, or perhaps even detrimental, to an author’s reputation—a judgment made not on the basis of the quality of his or her article, but on a perception of where a journal sits in the artificially constructed hierarchy.

The reference to having ‘real engagement with the global research community’ is also odd, in that it implies that articles appearing in ‘lesser’ journals have no such ‘engagement’. It should, however, be self-evidently difficult for work to be accepted in any refereed journal (irrespective of its tier) without the author demonstrating familiarity with international scholarship.

Those engaged in ranking are again faced with subjective assessments; in this case trying to ascertain what constitutes a ‘reasonable fraction’ and ‘problems of some significance’, and the need to distinguish between ‘low’ (A*) and ‘lowish’ (A) acceptance rates; and between ‘field leaders’ (A*) and ‘well known researchers’ (A).

The description of the Tier B journals indicates additional problems in constructing hierarchies of quality, and in doing so raises the issue of local (‘regional’) journals.

Tier B covers journals with a solid, though not outstanding, reputation. Generally, in a Tier B journal, one would expect only a few papers of very high quality. They are often important outlets for the work of PhD students and early career researchers. Typical examples would be regional journals with high acceptance rates, and editorial boards that have few leading researchers from top international institutions.

This description concedes that journals ranked B will include articles of the same ‘very high quality’ as those found in Tiers A* and A. In the absence of individual assessment, however, articles in Tier B journals will nonetheless be deemed worthy of substantially less reward than those appearing in ‘better’ journals.

The reference to Tier B journals serving as ‘important outlets for the work of PhD students and early career researchers’ is also puzzling. These authors are free to submit their work to any journal irrespective of its ranking, and many achieve publication in the most prestigious journals in their field. The implication in the description is that some journals have an almost designated role in ‘training’ researchers for publication in better journals.

A potentially troubling element of the description of Tier B journals from a humanities perspective is the reference to ‘regional journals’. This problem arises because what constitutes a regional/local journal will differ significantly between disciplines. Whereas the term may have pejorative connotations for a science journal, this is far less likely to be the case for humanities titles. The international basis of most science disciplines suggests that authors should be seeking an international readership, and local journals (insofar as they exist) may by implication be outlets for minor work. In many humanities and social science disciplines, however, the nationally or regionally bounded areas of study mean that the readership of many journals will also be localised, and that journals servicing these disciplines will have limited international distribution. This critically impacts upon the prospects of these journals being ranked above B. This issue of local journals will be discussed further below in relation to Australian literature.

Finally, it should be noted that the very brief description of Tier C is in the form of a description of what these journals don’t have—the features of the journals in Tiers A*, A and B—rather than what they do.

Tier C includes quality, peer reviewed, journals that do not meet the criteria of the higher tiers.

It is, however, difficult to see how Tier C titles represent ‘quality’ if they don’t meet the requirements of A*, A or B journals. That is, the titles in Tier C are apparently journals in which articles don’t demonstrate engagement with international research; don’t deal with significant problems; are of a lower standard than that expected of early career researchers and postgraduates; do not shape their fields; and will not enhance an author’s reputation. It should be remembered that Tier C accounts for 50% of titles published in each discipline!

Therefore while the ARC can be seen to be attempting to accommodate the humanities by rejecting a metrics-driven assessment of impact, the attempt to devise meaningful descriptions of quality is also problematic. As with the ERIH, the ERA tier descriptions include elements of impact, quality and journal type, and in doing so reveal the largely subjective and contingent nature of the ranking enterprise.

The shift away from a metrics approach is not the only way in which ERA might be seen to be accommodating disciplinary differences. The ARC has also established an Indicators Development Group (IDG), tasked with devising ‘discipline-specific indicators, including metrics and other proxies of quality and activity’. In an acknowledgement of the particular problems faced by the humanities, the IDG appointed a Humanities Sub-committee and a Creative Arts Sub-committee.

An important development, incorporating the work of the IDG, was the December 2008 release of the ERA Indicator Principles and the ERA Indicator Descriptors. These documents clarify a number of issues regarding ERA’s attempt to ensure that research evaluation is responsive to disciplinary differences. The Principles document states that in addition to journal ranking, assessment of outputs will include citation analysis, and peer review of articles in cases where there is an ‘insufficient number of valid quantitative indicators to provide a reliable evaluation of research quality’ (ARC Principles, 5). The ARC stresses that these methods—including the ‘field-normalised’ citation metrics—have been devised to be sensitive to the needs of disciplines:

In developing the Principles, it is recognised that there are a wide range of discipline-specific behaviours and norms, and also that no single indicator could necessarily be applied across all disciplines. (ARC Principles, 1)

The use of metrics will be integral to the citation analysis, but not prescribed as an element of the journal ranking. What role (if any) citation metrics might play in ranking will be a decision made by the discipline-based bodies who undertake the task.

Journal ranking in Australia: the process
What is notably absent from the ERA Indicator Principles and the ERA Indicator Descriptors are details regarding the manner in which the journal ranking will be (or was) undertaken. The initial journal ranking exercise in Australia was conducted during 2008. The task was devolved to the four learned academies and a number of disciplinary peak bodies, with at least 27 organisations being engaged in the task (ARC Consultation). It is apparent from the available evidence that these organisations handled the task very differently.

For example, the Computing Research and Education Association of Australasia (CORE) proceeded with an emphasis on metrics, using both the standard Thomson JIF and an amended metric referred to as the REPP (Research Evaluation and Policy Project) impact factor. The CORE process was managed by a ranking committee, with the assistance of ‘rankings from various Australian universities, from sections of the ICT community such as IS and from individual subdiscipline experts’.

The Australian Association for Research in Education (AARE) took a very different approach due to the poor coverage and inappropriateness of citation metrics for their discipline.

Whereas some fields of study (notably science, engineering and medicine) are well served by citation indexes, education is not. But even if it were, academic citation would be inadequate in ‘practical’ fields such as education, engineering and business. Although academic excellence of journal content is highly valued, in these fields research and publication are intended, ultimately, to benefit professional practice. (AARE)

The AARE instead undertook a subjective assessment of quality. They adopted a qualified concept of ‘quality’ that privileges the needs of practitioners over ‘academic excellence’. Members of the AARE were asked to complete a survey which collected respondents’ opinions on the ‘10 best journals’; ‘Journals you publish in or read’; and ‘Journals that impact on policy or professional practice’ (AARE). The concept of ‘impact’ is therefore retained, but only in so far as it affects ‘policy or professional practice’.

The Australian Political Studies Association (APSA) devised ‘a four-step process’. This included provisional ranking by a ‘small panel’; invited response to this provisional list from APSA members; collation and adjudication of responses by the panel; and final revision by the Association’s national office. The APSA relied upon a qualified use of citation metrics to inform this process, noting that:

we follow the science disciplines which are ranking their journals on this basis. There is one important difference. The sciences rely heavily on the impact factor. The social sciences and humanities cannot rely on impact factors to the same extent because ISI coverage is incomplete or non-existent. So, the judgement of the APSA panel which played a greater role in the final rankings and journal reputation in the political science community was an important factor. (Hamilton and Rhodes)

What can be concluded from the experience of CORE, AARE and APSA is that, in the absence of a standard process, disciplines will adopt a method for journal ranking that is suited to their own research culture. They may approach the task as an assessment of either ‘impact’ or ‘quality’, and they will use metrics in accordance with their perceived relevance.

Although bodies such as CORE, AARE and APSA have made their ranking processes transparent, this is not a requirement. Knowledge about how the task of ranking was undertaken by other disciplines is generally lacking. This information would be valuable not only in terms of ensuring accountability, but because journal ranking is a new process for many disciplines, and they would benefit from learning about the various methods used for assessing impact and/or quality. For a number of disciplines the ERA process has produced an outcome, but with little knowledge by stakeholders of the process, organisations or individuals involved.

Ranking humanities journals: the case of Australian literature
One important matter raised by the ERIH experience, and that has been little addressed by ERA, is that of the status of ‘local’ journals. As discussed, this issue is critical in the humanities and social sciences, as regional or national studies frequently form the basis of sub-disciplines with journals meeting the needs of a local readership. It is extremely difficult to assess such journals by their international standing or impact, as it will inevitably be low. Clearly, however, these local journals can be core within their sub-field of regional studies, within which they will have established their own hierarchy of quality.

The APSA report on the ranking of politics journals raises concerns about the way in which Australian journals of considerable importance to local political scientists fare in a process which attempts to establish international benchmarks of quality. APSA record that they undertook ‘rankings not weighted for or against local journals’, but they also note the problems this creates for local authors when highly-ranked international journals are not interested in publishing research on Australian politics.

The practice of applying international benchmarks has been criticised from within other disciplines. In his commentary on the preliminary ERA ranking of law journals David Hamer notes the underrepresentation of Australian titles in the upper tiers. Hamer argues that the explanation is that ‘the ARC’s bibliometrics have a strong built-in bias in favour of US law journals’. He suggests that a citation-based assessment was the sole method used, and that if allowed to stand the results have the potential to dilute Australian legal research and even lead to a ‘drain’ of researchers to North America. Hamer argues for a revised, subjective ranking process that would privilege Australian journals based on their essential contribution to national jurisprudence.

As noted, the only acknowledgement of local journals within the ERA tier descriptions is the reference to ‘typical’ inclusions in Tier B being ‘regional journals with high acceptance rates’. This raises the possibility that in the absence of more detailed guidelines on the status of local journals Tier B will likely become the default ranking for these titles.

The issue may also be raised somewhat obliquely in the ERA Indicator Descriptors with the instruction that:

The quality rating is defined in terms of how it compares with other journals and should not be confused with its relevance or importance to a particular FoR. (4)

This seems to imply that local journals should be subjected to comparison with international journals without consideration of their crucial function in supporting national scholarship. As noted by the APSA report and argued by Hamer, for ERA to insist that local journals have no particular status could have dire consequences for the viability of local journals, research and careers.

What appears to be the case thus far is that whereas some disciplines have strictly applied international benchmarks to the task, others have weighted the process in order to favour local journals. We can glean something of the way in which local journals have been favoured in some disciplines by considering the case of Australian literature titles. As there may be some uncertainty as to what constitutes a journal of ‘Australian literature’, eleven titles have been drawn from the response submitted to the ERA rankings by the Association for the Study of Australian Literature (ASAL): nine titles listed under the heading ‘Australian Literature’, and two Australian journals included under the heading ‘Postcolonial literature’. These are all titles that have been ranked by ERA, and with one exception (Kunapipi) have 2005 (Literature) as the designated Field of Research (FoR). All of the journals are published in Australia with the exception of Antipodes, which is the journal of the American Association of Australian Literary Studies.

It should be noted that ASAL was not consulted in the creation of the initial ranking of literature journals. Indeed it is not clear which organisations or individuals were responsible or what process was followed in ranking literature journals, as there are no obviously relevant peak bodies amongst the 27 that were apparently consulted. This once again points to a lack of transparency in the ERA ranking process.

Table 1 lists the eleven Australian literature journals, together with citation metrics from the three most commonly cited sources. These are the large citation databases Thomson’s Web of Science (source of the JIF); Elsevier’s Scopus; and Google Scholar (as harvested by the Publish or Perish website: see Harzing). In each case the metric reported is the number of citations received by the journal for 2001-2006, a period of time equivalent to that which will be used in the first round of the ERA evaluation.

Also included in Table 1 is the ERA tier allocated to each journal, and the proposed tier suggested by ASAL in their response to the ERA ranking.2

Journal Web of Science Scopus Google Scholar ERA tier ASAL proposed
Antipodes 11 0 11 B A
Australian Literary Studies 32 3 52 A* A*
HEAT 5 0 0 B A
JASAL 3 2 12 A A
Kunapipi 17 4 42 B B
LINQ 0 0 0 B B
Meanjin 51 24 111 B B
New Literatures Review 0 4 0 B B
Overland 70 23 109 B B
Southerly 10 3 5 B A
Westerly 9 2 10 B A

Table 1: Australian Literature journals citation statistics and rankings

What is apparent from the citation data is the extent to which the three sources differ. This is largely explained by the scope of their coverage. Scopus is known to have poor coverage of the humanities and this is reflected in consistently low citation counts. Google Scholar (Publish or Perish) generally reports the highest number of citations for humanities and social science articles due to its better coverage of non-journal literature such as conference papers, edited collections and institutional repositories. It could be argued that Google Scholar provides the most accurate reporting for the humanities in that its coverage includes the ‘grey literature’ that is important to these disciplines. However, critics have drawn attention to Google Scholar’s inability to prevent citations being ‘double-counted’ when they appear in several versions of the same article (for example a conference paper which also appears in a journal) (Harzing and van der Wal; Jacso).

It is clear that there is little correlation between the citation metrics and the ERA rankings. The single A* journal (Australian Literary Studies) is the third most frequently cited, while the single A journal (JASAL) is quite lightly cited. The two most frequently cited journals, Meanjin and Overland, are both ranked in Tier B. In part these results can be explained by the nature of the journals. Whereas both Australian Literary Studies and JASAL are outlets for ‘traditional’ academic literary scholarship, Meanjin and Overland have a somewhat different remit, including not only literary criticism, but also creative writing, and social and political commentary and opinion. These latter categories create problems with regard to peer-reviewing, in that it is difficult to ‘referee’ creative material or opinion pieces in the same way as traditional academic research. For this reason the academic status of these journals may have been questioned in the ranking process. There is, however, no doubt that Southerly (established 1939), Meanjin (1940), Overland (1954) and Westerly (1956) are not only long established, but have reputations for delivering high quality academic and creative writing.

What is most striking about the rankings is that none of these Australian journals has been placed in Tier C, despite the expectation that 50% of journals within each of the broad disciplinary groupings will fall within this tier. This is a very strong indication that those charged with ranking these journals have taken into account their significance within the context of a national scholarly discourse. Notwithstanding that the task is to rank journals according to criteria of quality using international benchmarks, there is nonetheless an ‘inbuilt’ recognition of the crucial role Australian journals play for their national audience. These results also suggest that the reference in the description of Tier B to ‘regional journals’ has indeed resulted in this being the default for these titles.

This ‘distortion’ is apparent in Table 2, which compares the ERA results for the eleven journals listed in Table 1 with the results for the FoR overall (i.e. all Australian and non-Australian titles) and with the recommended distribution in the ERA guidelines.

A*% A% B% C%
ERA guidelines 5 15 30 50
ERA ranking, all literature journals 9.9 14.4 23.9 51.8
ERA ranking, Australian literature journals 9.1 9.1 81.8 0
ASAL, proposed Australian literature journals 9.1 45.4 45.4 0

Table 2: Australian Literature journals ranking distributions

The preliminary rankings for ‘all journals’ reflect the recommended ratios for each tier in the ERA guidelines (although there are notably more A* journals). The ERA rankings for the Australian literature journals, however, produce a very different result, in that none of these journals is ranked in Tier C, as opposed to the expectation that 50% will fall within this category. Somewhat paradoxically then—given the preference for ‘quality’ rather than ‘impact’ to rank humanities journals—this result can be read as a reflection of the ‘impact’ these journals have in contributing to localised scholarship, rather than being a reflection of their innate ‘quality’.

Table 2 also includes the adjusted figures that would result if the ASAL recommendation for the ranking of these journals was implemented. It can be seen that the deviation from the ‘norm’ is even greater, with four journals moved ‘up’ from B to A, and none relegated. This results in 54.5% of journals ranked as A* or A. The ASAL recommendations reflect the opinion of the user group for these local journals, and are further testimony to their importance to national scholarship.
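For those who wish to check the arithmetic, the Australian literature rows of Table 2 can be reproduced directly from the tier assignments in Table 1. The short script below is offered purely as a worked illustration (the journal names and tiers are transcribed from Table 1; the script plays no part in any ERA process):

```python
from collections import Counter

# Draft ERA tiers for the eleven Australian literature journals (Table 1).
era_tiers = {
    "Antipodes": "B", "Australian Literary Studies": "A*", "HEAT": "B",
    "JASAL": "A", "Kunapipi": "B", "LINQ": "B", "Meanjin": "B",
    "New Literatures Review": "B", "Overland": "B", "Southerly": "B",
    "Westerly": "B",
}

# ASAL's proposal differs only in moving four journals from B to A.
asal_tiers = dict(era_tiers, Antipodes="A", HEAT="A", Southerly="A", Westerly="A")

def distribution(tiers):
    """Percentage of journals falling in each tier, rounded to one decimal place."""
    counts = Counter(tiers.values())
    return {t: round(100 * counts[t] / len(tiers), 1) for t in ("A*", "A", "B", "C")}

print(distribution(era_tiers))   # {'A*': 9.1, 'A': 9.1, 'B': 81.8, 'C': 0.0}
print(distribution(asal_tiers))  # {'A*': 9.1, 'A': 45.5, 'B': 45.5, 'C': 0.0}
```

(The small difference between the script’s 45.5 and the 45.4 shown in Table 2 is simply a matter of rounding: five of eleven journals is 45.45 per cent.)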

Conclusion

There are good arguments that can be made to suggest that ranking of journals, particularly in the humanities, is a futile pursuit. Not only is it likely to produce questionable results in terms of the accuracy of the hierarchy it constructs, but also—as Redden and others have argued—it may lead to undesirable outcomes by inducing authors to seek publication in journals that are sub-optimal in terms of communicating with a specific audience, while threatening the existence of the bulk of B and C journals that depend upon a regular flow of quality submissions. These ramifications will be felt most keenly in disciplines that depend on local journals, and in middle-ranking research nations where there is an incentive to ‘export’ research publishing to leading international journals. In other words: the humanities in Australia. Scholarly communication is a complex and finely-tuned system, and the imposition of mechanistic forms of control (in the case of ERA through artificially-constructed hierarchies that fail to reflect the nexus between humanities scholarship and a local readership) may easily upset its optimal functioning.

Despite the attempts by the ARC to accommodate the humanities in its approach to journal ranking, the results have nonetheless met with criticism from the sector. In a submission to ERA responding to the preliminary rankings, the Deans of Arts, Social Sciences and Humanities (DASSH) argued that irrespective of the method used, the very concept of ranking continued to be problematic.

DASSH welcomes discussion and research into methods for determining academic quality and benchmarking. Journal ranking for this purpose is not yet as well developed for the Arts, Social Sciences and Humanities (ASSH) as for the natural sciences and engineering. It is of concern to DASSH that the ARC should be using this underdeveloped tool. (1)

In addition to complaints made about the preliminary ERA rankings on behalf of the humanities generally (Byron), attacks have also been mounted on behalf of particular disciplines, including mathematics (Trounson); French studies (Hainge); philosophy (Academics Australia); and law (Hamer). Despite these objections—which focus on both the concept of journal ranking and on specific outcomes—there is, from the humanities point of view, some cause for optimism regarding the way in which ERA has progressed. At the very least, the combination of non-metric-driven tier descriptions and rankings determined by disciplinary peak bodies, based on their own assessment of appropriate indicators, suggests that the ARC has heeded advice given from within the humanities and social sciences. But while ERA has dispelled fears that journal ranking would simply be driven by citation metrics, it has not as yet resolved other issues with regard to the challenges faced in ranking humanities journals.

The immediate need for the humanities is for a means to be devised for treating Australian local journals equitably and consistently within the ranking framework. This should be done in a manner which acknowledges that assessments regarding the impact and quality of these journals are determined by the contribution they make in the context of local, regional or national scholarship.

The desired outcome can be achieved by revising the ERA tier descriptions in such a way that they articulate the critical role played by local journals, and by the Humanities Sub-Committee of the IDG developing detailed guidelines for the consistent treatment of local journals.

Addendum (2 April 2009) 3

The final ERA journal rankings for Cluster 2, Humanities and Creative Arts, were released in late March, after the completion of the article above. This brief addendum to the article will ignore some of the larger issues flowing from the revised list—which will take some time to absorb—and focus on the particular case of the Australian literature titles.

There were a number of changes from the draft ranking, with six of the eleven journals having their ranking amended. Three journals (HEAT, Meanjin and Southerly) had their ranking improved; and three (JASAL, Kunapipi and LINQ) had their ranking reduced. The big ‘winner’ from the revisions was HEAT, which had its ranking increased from B to A*.

Table 3 summarises the various changes, and also includes again the ASAL proposed rankings.

Journal ERA draft tier ASAL proposed ERA final tier
Antipodes B A B
Australian Literary Studies A* A* A*
HEAT B A A*
JASAL A A B
Kunapipi B B C
LINQ B B C
Meanjin B B A
New Literatures Review B B B
Overland B B B
Southerly B A A
Westerly B A B

Table 3: Australian literature journals, draft and revised ERA rankings

It is relevant to note that the ASAL proposed ranking and the ERA final tier ranking coincide on only four of the eleven titles. Of the four changes to the draft rankings recommended by ASAL only one (Southerly) is reflected in the final rankings. In addition four journals (JASAL, Kunapipi, LINQ, Meanjin) have had their final rankings amended when the ASAL recommendation had been that the draft ranking should remain. Whatever process has been followed in moving from the draft tier rankings to the final tier rankings, it is clear that the ASAL input has not been influential.

The other changes that have been made in the final version are in the FoR allocation. HEAT is now allocated an FoR of 1904 (Performing Arts and Creative Writing), and Kunapipi has now been allocated 2005 (Literature). A number of the other journals (JASAL, Meanjin, Overland, Southerly, and Westerly) are now allocated a joint 1904/2005. It is possible the changes of FoR have influenced changes in the tier rankings, particularly in the case of HEAT which is now better positioned to reflect its role in publishing creative writing. In the absence of an explanation, however, there can only be speculation. It is worth noting in passing that the FoR allocations can be erratic, even spectacularly nonsensical, with both UTS Law Review and History of Psychiatry receiving an FoR of 2005.

It is not easy, however, to learn much about the logic behind the revisions simply by studying the outcomes. The relegated LINQ certainly fits into the category of a local journal, with its focus on literature from northern Queensland/Australia, and an editorial team largely drawn from James Cook University. Kunapipi is anything but local, a journal of international postcolonial writing with an international authorship and editorial board, but which may suffer from its association with a regional university (Wollongong).

What LINQ and Kunapipi have in common is that while they may be ‘small’ journals in terms of circulation, they are seriously ‘academic’ journals with a focus on literary scholarship. As noted in the preceding paper, this makes them unlike several of the journals now ranked above them, which have a remit to attract a wider non-subscription based readership, and therefore include a component of opinion, commentary and creative writing, all of which might be considered ‘non-scholarly’ content.

It is also worth briefly noting the quite different outcomes for Australian Literary Studies (A*) and JASAL (B). If there are two journals on this small list that might be compared it is these two. They share a similar purpose in providing outlets for interpretive scholarship relating to all aspects of Australian literature. Neither includes creative writing. They are both fully refereed and have distinguished international editorial boards. They both draw their authors from the same pool of established and emerging Australian (and some international) scholars. On an objective assessment they seem to be demonstrably of a similar ‘quality’, with the seniority of Australian Literary Studies seemingly acknowledged in the draft rankings (A* for Australian Literary Studies and A for JASAL). The relegation of JASAL to tier B now creates a significant—and spurious—divide between these two journals to the point where the flow of papers will inevitably be distorted in a way that is detrimental to editors, authors and the discipline as a whole.

It is difficult to draw any safe conclusions from the final rankings as to what constitutes ‘quality’ in the ERA ranking process for Australian literature journals. Without far more comprehensive information regarding the logic by which the tiers have been determined (other than the inadequate tier descriptions) editors and authors alike are left wondering.

This puts those journals that wish to improve their ranking in an extraordinarily difficult position. To do so may inevitably compromise features of the journal that have been valued by contributors and readers. On the other hand, deciding not to play the ranking ‘game’ will have consequences for the sustainability of journals, as authors—and indeed editors—choose to pursue the rewards associated with more highly ranked journals. In the current environment, however, editors seeking a higher ranking for their journal do so with inadequate information regarding exactly what constitutes the vision of quality to which they should be aspiring.

The final ERA rankings for Cluster 2 do nothing to dispel the concern felt within the humanities about the concept of journal ranking, and only add to the urgent need for greatly improved clarity and accountability in the process.


Paul Genoni is a Senior Lecturer with the School of Media, Culture and Creative Arts at Curtin University of Technology. He has a PhD in Australian literature from the University of Western Australia, and is the author of Subverting the Empire: Explorers and Exploration in Australian Fiction (2004).

Gaby Haddow is a Lecturer with the School of Media, Culture and Creative Arts at Curtin University of Technology. She was previously Humanities Librarian with the Curtin University Library, where she was closely involved in ERA discussions and activities. Gaby has a PhD in information science from the University of Western Australia.

Notes

1 Thomson Scientific acquired the citation databases from the Institute of Scientific Information (ISI), and therefore the Thomson Scientific JIF is still referred to on occasion as the ISI JIF. Thomson Scientific has recently been acquired by Reuters and this may lead to a new variation on the name. References in this paper to the JIF are therefore to the ISI/Thomson Scientific/Reuters JIF.

2 The final journals rankings for the Humanities and Creative Arts cluster were due in March 2009.

3 Editor’s note (8 April 2009): Shortly after their release, the revised ERA journal rankings for Humanities and the Creative Arts (HCA) were withdrawn from the ARC website, with the following note:

Please note: The ARC is aware of issues with the HCA lists, and as such it [ sic ] is currently unavailable. The ARC is investigating the problems and will replace the lists as soon as possible.

Please check the ARC website at http://www.arc.gov.au/era/journal_list.htm for updates.

 

Works Cited

Academics Australia. The ERA Journal Rankings [Letter to the Honourable Kim Carr, Minister for Innovation, Science and Research. 11 August 2008]. http://www.academics-australia.org/ Accessed 10 March 2009.

Archambault, Eric, and Etienne Vignola-Gagne. The Use of Bibliometrics in the Social Sciences and Humanities. Montreal: Social Sciences and Humanities Research Council of Canada, 2004. http://www.science-metrix.com/pdf/sm_2004_008_sshrc_bibliometrics_social_science.pdf Accessed 10 March 2009.

Arts and Humanities Research Council and the Higher Education Funding Council for England. Use of Research Metrics in the Arts and Humanities. http://www.hefce.ac.uk/research/ref/about/background/group/metrics.pdf Accessed 10 March 2009.

Australian Association for Research in Education (AARE). Journal Banding Survey. http://www.newcastle.edu.au/forms/bandingsurvey/ Accessed 10 March 2009.

Australian Research Council. Consultation to Develop Outlet Journal Rankings. Canberra: Commonwealth of Australia, 2008. http://www.arc.gov.au/era/consultation_ranking.htm Accessed 10 March 2009.

—. ERA Indicator Descriptors. Canberra: Commonwealth of Australia, 2008. http://www.arc.gov.au/pdf/ERA_Indicator_Descriptors.pdf Accessed 10 March 2009.

—. ERA Indicator Principles. Canberra: Commonwealth of Australia, 2008. http://www.arc.gov.au/pdf/ERA_Indicator_Principles.pdf Accessed 10 March 2009.

Bordons, Maria, Maria Teresa Fernandez, and Isabel Gomez. ‘Advantages and Limitations in the Use of Impact Factor Measures for the Assessment of Research Performance in a Peripheral Country.’ Scientometrics 53.2 (2002): 195-206.

Browman, Howard I., and Konstantinos I. Stergiou. ‘Factors and Indices are One Thing, Deciding Who is Scholarly, Why they are Scholarly, and the Relative Value of Their Scholarship is Something Else Entirely.’ Ethics in Science and Environmental Politics 8 (2008): 1-3.

Butler, Linda. ‘Identifying ‘Highly-Rated’ Journals—An Australian Case Study.’ Scientometrics 53.2 (2002): 207-227.

British Academy. Peer Review: The Challenges for the Humanities and Social Sciences: A British Academy Report. British Academy, 2007. http://www.britac.ac.uk/reports/peer-review/index.cfm Accessed 10 March 2009.

Byron, John. ‘The Measure of the Humanities’. The Australian. 16 July 2008. http://www.theaustralian.news.com.au/story/0,25197,24025202-25192,00.html Accessed 10 March 2009.

Cameron, Brian D. ‘Trends in the Usage of ISI Bibliometric Data: Uses, Abuses, and Implications.’ Portal: Libraries and the Academy 5.1 (2005): 105-25.

Computing Research and Education Association of Australasia (CORE). Homepage. http://www.core.edu.au Accessed 10 March 2009.

Deans of Arts, Social Sciences and Humanities. Submission to Excellence in Research Australia (ERA) from DASSH. August 2008.

Donovan, Claire. A Review of Current Australian and International Practice in Measuring the Quality and Impact of Publicly Funded Research in the Humanities, Arts and Social Sciences. Canberra: Australian National University. Research School of Social Sciences. Research Evaluation and Policy Project, 2005. http://repp.anu.edu.au/papers/2005_Setting_the_Scene.pdf Accessed 10 March 2009.

East, John. ‘Ranking Journals in the Humanities: An Australian Case Study.’ AARL: Australian Academic & Research Libraries 37.1 (2006): 3-16.

European Science Foundation. Frequently Asked Questions. http://www.esf.org/research-areas/humanities/research-infrastructures-including-erih/frequently-asked-questions.html Accessed 10 March 2009.

Genoni, Paul, Gaby Haddow, and Petra Dumbell. ‘Assessing the Impact of Australian Journals in the Social Sciences and the Humanities.’ Paper presented at Information Online 2009, Sydney Convention and Exhibition Centre, 20-22 January 2009. http://www.information-online.com.au/sb_clients/iog/data/content_item_files/000001/PresentationC16.ppt Accessed 10 March 2009.

Gorman, Gary. ‘“They Can’t Read, But They Sure Can Count”: Flawed Rules of the Journal Rankings Game.’ Online Information Review 32.6 (2008): 705-08.

Haddow, Gaby. ‘Quality Australian Journals in the Humanities and Social Sciences.’ AARL: Australian Academic & Research Libraries 39.2 (2008): 79-91.

Hainge, Greg. ‘Society for French Studies Slams ERA Journal Rankings.’ The Australian. 12 August 2008. http://www.theaustralian.news.com.au/story/0,,24167883-21682,00.html Accessed 10 March 2009.

Hamer, David. ‘ARC Rankings Poor on Law.’ The Australian. 25 June 2008. http://www.theaustralian.news.com.au/story/0,25197,23921819-25192,00.html Accessed 10 March 2009.

Hamilton, Margaret and R. A. W. Rhodes. ‘Australian Political Science: Journal and Publisher Rankings: Prepared for the Australian Political Studies Association.’ http://eprints.utas.edu.au/7236/1/AusPSA_RQF_final_2007.pdf Accessed 10 March 2009.

Harzing, Anne-Wil. Publish or Perish. http://www.harzing.com/pop.htm Accessed 10 March 2009.

—, and Ron van der Wal. ‘Google Scholar as a New Source for Citation Analysis.’ Ethics in Science and Environmental Politics 8 (2008): 61-73.

Howard, Jennifer. ‘New Ratings of Humanities Journals do more than Rank—They Rankle.’ The Chronicle of Higher Education. 10 October 2008.

Jacso, Peter. ‘Deflated, Inflated and Phantom Citation Counts.’ Online Information Review 29.5 (2006): 297-309.

Monastersky, Richard. ‘The Number That’s Devouring Science.’ The Chronicle 52.8 (2005). http://chronicle.com/free/v52/i08/08a01201.htm Accessed 10 March 2009.

Redden, Guy. ‘From RAE to ERA: Research Evaluation at Work in the Corporate University.’ Australian Humanities Review 45 (2008): 7-26. http://epress.anu.edu.au/ahr/045/pdf/essay01.pdf Accessed 5 December 2008.

Royle, Pam. ‘A Citation Analysis of Australian Science and Social Science Journals.’ Australian Academic and Research Libraries 25.3 (1994): 162-71.

Steele, Colin, Linda Butler, and Danny Kingsley. ‘The Publishing Imperative: The Pervasive Influence of Publication Metrics.’ Learned Publishing 19.4 (2006): 277-90.

Trounson, Andrew. ‘Citations Not Only Measure of Quality.’ The Australian. 9 July 2008. http://www.theaustralian.news.com.au/story/0,25197,23990703-12332,00.html Accessed 10 March 2009.
