Legal Education Digest

Svantesson, D J B --- "International ranking of law journals – can it be done and at what cost?" [2010] LegEdDig 11; (2010) 18(1) Legal Education Digest 38


International ranking of law journals – can it be done and at what cost?

D J B Svantesson

Legal Studies, Vol 29, No. 4, 2009, pp 678-691

For a variety of reasons, attempts are made at creating international journal rankings. For example, in the USA, there are several journal ranking lists, such as the well-known ranking list provided by the Washington and Lee Law School. While it may be true that ‘Americans love rankings – of practically anything’, and while the issue of journal ranking has gained much more attention in the USA than elsewhere, international journal rankings are no longer a US phenomenon that academics in other parts of the world can ignore. In Europe, the European Science Foundation is ranking humanities journals within its European Reference Index for the Humanities. Further, the Australian Government is seeking to adopt journal rankings for its Excellence in Research for Australia (ERA) scheme, and, in doing so, has convinced the Council of Australian Law Deans (CALD) to produce an international law journal ranking. If introduced, these ranking exercises will have a significant, long-lasting and possibly irreversible impact on scholarship.

With few exceptions, it seems that we are heading towards a global marketplace for legal knowledge. In such a climate, those engaged in the ranking of law journals are faced with a difficult decision: limiting their ranking to domestic journals and thereby making the ranking parochial and disconnected from reality, or attempting the impossible task of constructing a fair, comprehensive and internationally acceptable ranking of law journals.

One obvious purpose for journal rankings is to act as a guide to quality, both for authors and readers. Yet, perhaps most importantly, groundbreaking ‘must-read’ articles are as likely to be published in less prestigious journals as in those held in particularly high regard.

The most dangerous purpose for which journal ranking can be undertaken is as a shortcut for assessing research quality. For example, the Australian ERA scheme ‘will use a range of indicators and other proxies to support the evaluation of research excellence. One of these indicators is discipline-specific tiered outlet rankings’.

A study assessing the perception of academic lawyers in the UK found that ‘56 per cent of respondents believed that perceptions of how the RAE [Research Assessment Exercise] operates were important or very important in determining the type of publications they produced’. Where such perceptions lead to a strong focus on highly academic journals, with the result that little or nothing is written for, say, the practitioner market (which is typically served by more accessible and less ‘academic’ journals), the legal system suffers.

The simple truth is that the legal system, and ultimately society as a whole, benefits from a diversity of legal research published in a diverse range of law journals.

The law panel in the UK’s Research Assessment Exercise 2001 concluded that:

Work of internationally-recognised excellence was found in a wide range of types of outputs and places, and in both sole and jointly authored works (the Panel adhered to its published criteria in allocating credit for joint pieces). First-rate articles were found in both well-known journals and relatively little-known ones. Conversely, not all the submitted pieces that had been published in “prestigious” journals were judged to be of international excellence. These two points reinforced the Panel’s view that it would not be safe to determine the quality of research outputs on the basis of the place in which they have been published or whether the journal was “refereed”.

This reasoning has been maintained, and is now firmly established, in the UK, with the law panel in the Research Assessment Exercise 2008 echoing the warnings issued by its colleagues seven years earlier.

The UK law panels’ observations and associated conclusions stand in stark contrast to the current thinking of the Australian ERA scheme. The author agrees with the UK approach – journal ranking cannot be viewed as a credible measure for assessing the quality of individual articles, or even their dissemination and impact.

Several methods are applied in ranking journals. For example, the George and Guthrie ranking is based on the average perceived prominence of the authors of the articles appearing in the ranked journals: authors’ job titles/positions are ranked using a point scale and an average score is calculated for each journal. This approach is subjective and parochial.
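
To make the arithmetic concrete, here is a minimal sketch (in Python) of a George and Guthrie style calculation for a handful of invented journals. The point scale, position labels and sample data are assumptions made for illustration only; they are not values from George and Guthrie’s study.

    # Toy sketch of a George and Guthrie style prominence score.
    # The point scale and the sample data are invented for illustration.
    POSITION_POINTS = {
        "professor": 5,
        "associate professor": 4,
        "senior lecturer": 3,
        "lecturer": 2,
        "practitioner": 1,
    }

    def journal_score(author_positions):
        """Average prominence score over a journal's published authors."""
        points = [POSITION_POINTS.get(p, 0) for p in author_positions]
        return sum(points) / len(points) if points else 0.0

    # Hypothetical journals, scored by the average prominence of their authors.
    journals = {
        "Journal A": ["professor", "professor", "lecturer"],
        "Journal B": ["senior lecturer", "lecturer", "practitioner"],
    }
    for name, positions in journals.items():
        print(name, round(journal_score(positions), 2))

Even this toy version shows how sensitive the result is to how the point scale is drawn up, which is part of the subjectivity noted above.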

The George and Guthrie approach also provides undesirable incentives to journal editors, as it encourages selection based on who the author is rather than on the quality of the article in question.

Other methods focus on the journals’ perceived impact: for example, on citation frequency (an approach also referred to as ‘bibliometrics’).

Furthermore, if one is to base ranking on citations, one has to decide whether to give any weight to the perceived status of the journal in which the citing article appears. This may lead to circular reasoning, where one needs to have ranked the journals in order to assess the value of the citations that will constitute the basis for the journal ranking.
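
The circularity can be made concrete: if each citation is weighted by the score of the citing journal, the scores appear on both sides of the definition, and the definition can only be resolved by iterating towards a fixed point. The sketch below illustrates this with an invented three-journal citation matrix; nothing in the article prescribes or endorses this computation.

    # Toy illustration of the circularity: journal scores defined in terms
    # of citations weighted by the scores of the citing journals.
    # cites[i][j] = citations from journal i to journal j (invented data).
    cites = [
        [0, 4, 1],
        [2, 0, 3],
        [1, 1, 0],
    ]
    n = len(cites)
    scores = [1.0] * n  # start from equal scores

    for _ in range(50):  # iterate towards a fixed point
        new = [sum(scores[i] * cites[i][j] for i in range(n)) for j in range(n)]
        total = sum(new)
        scores = [s / total for s in new]  # normalise so scores sum to 1

    print([round(s, 3) for s in scores])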

Even ignoring these problems, basing law journal rankings on citation frequency is problematic as there are no accurate and comprehensive sources of citation frequency data.

Another possible ranking method focusing on the impact of journals is the study of user statistics, such as how often the relevant journals are taken off the shelf and/or how often articles from the journals are downloaded. Again, the problem is gaining access to reliable data.

Peer-review-based assessment is yet another alternative. For example, surveys in which journal users get to indicate how they rank the relevant journals can be used to rank the journals. Alternatively, a panel of experts can assess the quality of all the journals by reading all, or a selection of, the articles published in those journals. Korobkin suggests that these methods are unsuitable for generalist journals but might be feasible for some types of specialist journals.

Using the data from the Research Assessment Exercise 2001, Campbell, Goodacre and Little developed three different methods for ranking journals. ‘[T]he first method used to rank journals was to record the total number of articles from a particular journal that were submitted to the Law Panel by the 60 departments assessed’. However, as acknowledged by the authors, this method is problematic as it favours journals which publish more articles than others (eg by publishing more articles per issue or by publishing more frequently). Campbell, Goodacre and Little’s second, and more refined, method sought to address this problem ‘by dividing the volume of submissions by the number of publication outlets’.

The third method is even more advanced: it takes account of how highly the departments at which the articles in the various journals were produced were ranked in the Research Assessment Exercise 2001.
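
Read together, the three methods amount to progressively refined arithmetic over the same submission data. The following sketch works through all three on invented figures; the outlet counts and department grades are illustrative assumptions, not data from the study.

    # Toy walk-through of the three Campbell, Goodacre and Little methods,
    # using invented submission figures rather than data from their study.
    from collections import Counter, defaultdict

    # Each tuple: (journal the article appeared in, RAE grade of the
    # submitting department); the grade is used only by the third method.
    submissions = [
        ("Journal A", 5), ("Journal A", 4), ("Journal A", 5),
        ("Journal B", 5), ("Journal B", 3),
    ]
    # Assumed number of publication outlets (eg issues) per journal.
    outlets = {"Journal A": 6, "Journal B": 2}

    # Method 1: raw count of submitted articles per journal.
    method1 = Counter(journal for journal, _ in submissions)

    # Method 2: submissions divided by the number of publication outlets,
    # correcting for journals that simply publish more.
    method2 = {j: method1[j] / outlets[j] for j in method1}

    # Method 3: weight each submission by the RAE rank of its department,
    # so articles from highly ranked departments count for more.
    method3 = defaultdict(float)
    for journal, dept_grade in submissions:
        method3[journal] += dept_grade

    print(dict(method1), method2, dict(method3))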

While the second method is better than the first method, and while the third method may be even better, all three methods are problematic if used for international journal ranking, and the inadequacies for such use cannot be avoided by combining the three methods. For example, as is recognised in the study, a journal that is very selective in what it publishes (perhaps due to it receiving a large quantity of excellent articles) may score poorly or not at all.

While the details may vary considerably, as seen above, at the most fundamental level there are only two possible methods for journal ranking: statistical data and a ‘wet finger in the wind’ approach.

‘Wet finger in the wind’ studies involve journals being ranked by a selection of experts and/or journal users. In carrying out the ranking, the experts and/or journal users may look at a multitude of factors, such as how highly they regard the respective journals, how highly regarded they believe the journals to be in the eyes of other experts/users, the status of any pre-publication review process, how well respected the respective journals’ editors and editorial boards are, and the ratio of accepted articles to the total number of submissions.

When examining the accuracy of such an approach, perhaps the first question to ask is: who is in a position to rank journals? Where the scope is meant to be international or even global, the answer, unfortunately, is that no one is in a position to rank all the journals.

Ranking exercises may also be complicated by vested interests. A law faculty’s journal is an important aspect of the faculty’s image and personality. It cannot, therefore, be expected that faculties, and faculty members, will rank their own journal(s) in an unbiased manner. For similar reasons, it cannot be expected that members of editorial boards and the like will rank the journal(s) they are associated with in an unbiased manner.

Statistical studies may appear to avoid the subjectivity that is so detrimental to ‘wet finger in the wind’ studies. However, any statistical study with the aim of creating a journal ranking will inevitably involve subjective decisions as is illustrated in the following example.

To assess the impact that various journals have had on Australian law, it would seem reasonable, for example, to examine the extent to which they have been taken into account by the High Court of Australia. A simple (possibly too simple) method of doing so is to use the Australasian Legal Information Institute (AustLII) to search for the relevant journals’ names in the database of High Court decisions.
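
A crude version of that search can be expressed as counting journal-name occurrences across a corpus of decision texts. The sketch below assumes the decisions have already been saved as plain-text files in a local directory; it stands in for an AustLII search rather than modelling AustLII’s actual interface, and the journal names are arbitrary examples.

    # Crude sketch of the citation-count idea: count how often each journal's
    # name appears across a corpus of High Court decisions, assumed to have
    # been saved as plain-text files under decisions/ beforehand.
    import pathlib

    journal_names = [
        "Melbourne University Law Review",
        "Sydney Law Review",
    ]

    counts = {name: 0 for name in journal_names}
    for path in pathlib.Path("decisions").glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for name in journal_names:
            counts[name] += text.count(name)

    # A raw count like this cannot distinguish a passing mention from a
    # citation that actually guided the court's reasoning.
    print(counts)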

It would seem illogical to ignore citations by other significant courts such as the various State Supreme Courts, the Federal Courts and specialist courts such as the Family Court. One must also decide whether an instance of a court merely mentioning an article is as significant as an instance of the court specifically being guided by the reasoning of an article.

Finally, some areas of legal research may be more likely to get the attention of the courts than others. There is, consequently, a risk that a ranking focused on citations by courts favours publications in certain areas over publications in other fields.

An international ranking scheme with an aim of global validity will inevitably be accused of comparing the proverbial apples and oranges.

First, how can one compare a national journal (limited by language) with great impact in a small jurisdiction, with an international journal with limited impact? Instead of attempting the impossible task of comparing the quality of such diverse types of journals, it should simply be acknowledged that both types serve a significant purpose.

Even leaving aside the international issues, the problem is serious, as there simply is no good way of comparing generalist and specialist law journals. For example, a specialist law journal may typically have fewer subscribers than a generalist journal. However, while few scholars would ever read all the articles in an issue of a generalist journal, many scholars keep up to date with their field of study by reading all, or nearly all, the articles in reliable specialist journals.

Furthermore, specialist journals from different fields cannot be compared. With scholars generally being specialists within one or a few defined fields, it is not clear how the impact and significance of journals from different fields can legitimately be compared.

Indeed, it can be argued that not even different generalist journals can be compared to each other. A generalist journal normally consists of a number of specialist articles stapled together. As there simply are no generalist scholars left, there is no one available to assess and compare generalist journals in an informed manner.

To conclude, a journal’s ‘scope and readership say nothing about the quality of their intellectual content’, and it is dangerous to confuse ‘internationality’ with ‘quality’.

There is no lack of other controversial questions calling for subjective decisions to be made. For example, any ranking scheme must decide how it will approach the difference between student-edited and staff-edited journals. Many highly regarded law journals, such as the Harvard Law Review, the Yale Law Journal and the Columbia Law Review, are student-edited. While they enjoy strong reputations, it nevertheless seems reasonable to suggest that students are typically not as well placed as law professors to edit a law journal. In other words, logic suggests that staff-edited journals ought to be more highly ranked than student-edited journals, where all other aspects of the journals are the same.

As there are substantial cultural differences between different countries, a choice of favouring one editorial structure over another (eg favouring staff-edited journals over student-edited journals) will automatically favour journals from certain countries over journals from other countries. While student editors are the norm in the USA, student editors are rare in, for example, the UK. Thus, favouring staff-edited journals over student-edited journals will make it comparatively more attractive to publish in the UK than in the USA.

In light of the above, the obvious solution would be not to take account of editorial structure when ranking journals. However, such an approach may be unpalatable both to those who hold staff-edited journals in higher regard and to those who regard student-edited journals as superior.

Another problem stems from the fact that the journal market is constantly developing: new journals appear and old journals disappear. Consequently, it is necessary for any ranking scheme to include a system for the assessment of journals developed after the scheme is in place. Newly developed journals should be neither disadvantaged nor advantaged relative to those journals that existed at the time of the ranking scheme’s introduction.

Ranking exercises can, by their nature, only produce historic data; even if an accurate and widely accepted ranking could be produced, it could tell us only which journals have published high quality articles in the past. Ranking will not necessarily predict where high quality articles will be published in the future, and there is no guarantee that highly ranked journals will continue to publish high quality articles. A journal’s continued publication of high quality articles depends on the choices made by both authors and editors.

Finally, the focus on historical levels of quality brings attention to the circularity of journal ranking based on assessments of the quality of the articles published by the ranked journals – a journal is highly ranked because it contains high quality articles and a particular article is viewed as being of high quality as it is published in a highly ranked journal.

Even where the problems discussed above can be overcome, the ranking of journals may carry with it serious negative consequences. As some authors will target the highly ranked journals, which then automatically become more prestigious, the issuing of a formal ranking is a self-fulfilling prophecy. Consequently, it is of the utmost importance that the first official ranking issued under any ranking scheme is of the highest possible quality.

With few, if any, Australian journals legitimately classed as top-tier from a global perspective, Australian scholars will be encouraged to publish in overseas journals. As few foreign journals would be interested in publishing articles concerning purely Australian legal issues, authors will be forced to re-focus their writings to include universal or comparative elements. While a degree of internationalisation is to be encouraged, it is strongly undesirable for writing of domestic significance to be discouraged.

This problem can perhaps be avoided to a degree by overstating the international recognition of Australian journals, as was done by CALD, so as to provide a selection of them with top rankings. However, doing so will undermine the ranking scheme’s international legitimacy.

Further, as an international ranking will favour journals from larger jurisdictions, many European, African and Asian scholars would not even have heard of some of the top-ranked journals and would question the absence of journals crucial to them.

Australia currently enjoys a healthy law journal climate, with a variety of commercial and non-commercial, specialist and generalist, academic and practitioner-oriented journals. As authors are driven towards overseas journals in pursuit of high ranking publications, Australian journals will inevitably suffer.

A ranking scheme may also favour certain disciplines over others. For example, where the journals of one specialist area are all highly ranked, research in that area is automatically more highly valued than research in another area where the relevant journals are given a lower ranking. Looking at the latest draft ranking produced by CALD, of the five journals dealing with the law from a feminist perspective, one is ranked B and the other four are ranked A. Consequently, any published research in that area will automatically be highly regarded. In contrast, of 22 journals dealing with health and medical law, 13 are ranked C, six are ranked B and only three are ranked A. Consequently, the absolute majority of publications in the area of law, health and medicine is held in lower regard than research relating to a feminist perspective on the law. If this is an intended consequence, it should have been debated openly; if it is an unintended consequence, it highlights the need for more thought to go into the ranking process.

To conclude, it is arguable that journal rankings can serve some legitimate interests, such as guiding authors and readers to journals of perceived quality, and possibly even working to increase the quality of the articles being published in those journals. However, journal ranking exercises can never be used to assess research quality, and the very idea of using journal rankings to assess quality has, as discussed above, been rejected by leading UK experts.

In light of the above, it is the author’s view that it is impossible to create a fair, comprehensive and internationally acceptable ranking of law journals. Any attempt at such an exercise will at best showcase the hubris of those who try, and at worst be devastatingly harmful to the relatively healthy journal climate enjoyed in many countries.

