Assessing journal quality – alternatives to JCR
Journal Citation Reports (JCR) is the definitive source for journal impact factors, probably the most widely recognised indicators of journal quality. But what do you do if your subject area is not well covered by JCR, or you would like to see some alternative metrics?
There are a number of tools available. These use a combination of citation analysis, peer review and ranking algorithms to facilitate the evaluation of journals in a range of subject areas.
Summary of tools for assessing journal quality
| Tool | Source of data | Date produced and dates covered | Subject coverage | Metrics |
|---|---|---|---|---|
| ABS Academic Journal Quality Guide | Peer review, citation analysis and editorial judgement | Latest issue is version 4, 2010 | Social Sciences, particularly Business, Management and Economics | Quality rating (4*, 4, 3, 2, 1) |
| ARC Ranked Journal List | Expert review and public consultation | Issued 2010, covering period 2003-2008 | All disciplines | Quality rating (A*, A, B or C) |
| Eigenfactor.org | Thomson Reuters citations | Latest issue 2010, metrics based on citations of articles published in previous 5 years | Sciences and Social Sciences | Eigenfactor score; Article Influence score |
| Journal Citation Reports | Thomson Reuters citations | Latest issue 2011, metrics based on 2-year and 5-year citations | Sciences and Social Sciences | Journal impact factor (2-year and 5-year); citation counts; immediacy index; cited half-life; Eigenfactor score; Article Influence score |
| Journal Metrics | Scopus citations | Latest issue 2011, metrics based on previous 3 years | All disciplines | Source-Normalized Impact per Paper (SNIP); SCImago Journal Rank (SJR) |
| SCImago Journal and Country Rank | Scopus citations | Latest issue 2011, metrics based on previous 3 years | All disciplines | SCImago Journal Rank indicator; H-index; citations |
Coverage of the Association of Business Schools Academic Journal Quality Guide is, as you would expect, focused on the subject areas of business, management and economics, although it claims to take an ‘inclusive’ approach. The ratings shown in the guide are based “partly on peer review, partly on statistical information relating to citation, and partly upon editorial judgements following on from the detailed evaluation of many hundreds of publications over a long period” (Association of Business Schools). The most recent version of the guide was published in 2010; there are no plans to produce a new version before the REF 2014 results are published.
For a simple list of journals graded manually by subject experts, the ranked journal list commissioned by the Australian Research Council (ARC) may be of interest. Produced in support of the 2010 Australian research assessment exercise (ERA) and covering the period 2003-2008, the list covers 20,712 peer-reviewed journals in all subject areas. The list was created on the basis of expert review and public consultation and attracted both interest and criticism. Although an updated list was prepared for ERA 2012, it was eventually decided that this would not be used.
A complementary Ranked Conference List was also produced.
Eigenfactor.org uses Thomson Reuters JCR data to produce a score which measures “the journal’s total importance to the scientific community” (Eigenfactor.org). Unlike JCR, which uses simple citation counts, the Eigenfactor algorithm incorporates weightings for the size of the journal (other things being equal, a journal with more articles will get a higher Eigenfactor) and the source of the incoming citations (a citation from a more prestigious journal counts for more than one from a lesser known journal). Because of its different algorithm, it is claimed that Eigenfactor scores are more directly comparable across disciplines than are journal impact factors.
The Eigenfactor website explains its methods in some detail and provides links to further information for those interested in the underlying statistics.
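The prestige-weighting idea described above can be illustrated with a toy power iteration over a hypothetical citation matrix. This is only a sketch of the principle, not the published Eigenfactor algorithm, which among other things excludes journal self-citations and adds a damping ("teleportation") term; all figures below are made up.

```python
# Toy illustration of prestige-weighted citation scoring: each
# journal's score flows to the journals it cites, so a citation from
# a high-scoring journal is worth more. Hypothetical data; assumes
# every journal cites at least one other journal.
# cites[i][j] = citations from journal i to journal j.
cites = [
    [0, 4, 1],
    [2, 0, 1],
    [5, 3, 0],
]
n = len(cites)

# Normalise each row so a journal distributes its influence as
# fractions of its outgoing citations.
out_totals = [sum(row) for row in cites]

scores = [1.0 / n] * n
for _ in range(100):  # iterate until the scores stabilise
    new = [0.0] * n
    for i in range(n):
        for j in range(n):
            new[j] += scores[i] * cites[i][j] / out_totals[i]
    scores = new

print([round(s, 3) for s in scores])  # → [0.395, 0.395, 0.211]
```

Note that journal 2 sends out the most citations (8) but receives the fewest weighted ones, so it ends up with the lowest score: influence depends on who cites you, not on how much you cite.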
JCR’s key metric is the Journal Impact Factor. This is calculated as the number of citations in a single year to articles published over the previous two years, divided by the total number of articles published in that two-year period (more information in this earlier blog post). Although the Thomson Reuters databases from which JCR derives its citation counts cover all subject areas, only two editions of JCR are produced: Science and Social Science. Between them these cover approximately 12,000 journals.
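The two-year calculation described above can be written as a small function. The journal and its figures here are invented purely for illustration:

```python
def impact_factor(citations_in_year, articles_published):
    """Two-year journal impact factor.

    citations_in_year: citations received in year Y to items the
        journal published in years Y-1 and Y-2.
    articles_published: number of articles the journal published
        in years Y-1 and Y-2.
    """
    return citations_in_year / articles_published

# Hypothetical journal: 210 citations in 2011 to articles from
# 2009-2010, a period in which it published 140 articles.
print(impact_factor(210, 140))  # → 1.5
```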
Journal Metrics offers two bibliometric measures: the Source-Normalized Impact per Paper (SNIP) and the SCImago Journal Rank (SJR). Both are based on citation counts, but whereas SNIP measures citation impact within its disciplinary context (i.e. it takes into account the usual citation behaviour of the subject field), in calculating SJR the relative importance (or prestige) of the citing papers determines the eventual metric (i.e. a citation from a source with a relatively high SJR is worth more than a citation from a source with a lower SJR).
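The field-normalisation idea behind SNIP can be sketched roughly as follows. This is a simplification with hypothetical numbers, not the published definition (which derives the field's "citation potential" from the reference lists of citing papers):

```python
# Simplified sketch of the idea behind SNIP: raw impact per paper is
# divided by a measure of how heavily the journal's field cites, so
# journals in low-citation fields are not penalised.
# All figures below are hypothetical.
def snip_like(citations, papers, field_citation_potential):
    raw_impact_per_paper = citations / papers
    return raw_impact_per_paper / field_citation_potential

# Two hypothetical journals with the same raw impact per paper (2.0),
# one in a heavily-citing field, one in a sparsely-citing field:
biomed = snip_like(citations=400, papers=200, field_citation_potential=4.0)
maths = snip_like(citations=100, papers=50, field_citation_potential=1.0)
print(biomed, maths)  # → 0.5 2.0
```

Despite identical raw citation rates, the journal in the sparsely-citing field scores higher once its disciplinary context is taken into account.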
The SCImago ranking algorithm is quite different from that used for the JCR impact factor, although some correlation between the two measures has been reported (González-Pereira, B. et al.). SCImago uses data from Elsevier’s Scopus database and coverage is broader than that of JCR – for example it includes Arts and Humanities. In addition to the SCImago journal ranking, the website also displays each journal’s H-index and a range of other metrics.
Using SCImago it is possible to rank journals by Subject Area (e.g. Arts and Humanities), Subject Category (e.g. History), Country and Date. This report is a typical example.
Other tools for citation analysis
This post has focused on tools for assessing journal quality. If you are interested in article or author level metrics then our earlier post on tools for citation analysis may be helpful.
Updated 26th March 2013: Added SNIP and SJR from Journal Metrics