The Metric Tide: Are you using bibliometrics responsibly?
Recent years have seen an increase in the use of metrics for research assessment. Whether using citation data to support REF scores, calculating an h-index to compare researchers or research groups, or choosing a journal according to its impact factor, researchers, their managers and their funders have become increasingly reliant on quantitative evaluation.
On the face of it, an h-index is an attractive proposition: a single number summing up both the quantity and the quality of a researcher’s output. Likewise, a journal’s impact factor distils the reputation of the journal into a single metric. Unfortunately, many would say that, for evaluating the quality of research, these measures are reductionist at best and fundamentally flawed at worst.
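To see just how much a single number compresses, here is a minimal sketch of the standard h-index definition: the largest h such that a researcher has at least h papers with at least h citations each. The function name and the sample citation counts are illustrative only.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that at least h papers have >= h citations."""
    h = 0
    # Rank papers from most to least cited; walk down until a paper's
    # citation count falls below its rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: one well-cited paper, several barely-cited ones.
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3
```

Note how the two records `[25, 8, 5, 3, 3, 1]` and `[3, 3, 3]` both yield an h-index of 3: very different careers collapse to the same number, which is exactly the reductionism the reports below warn against.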
This backlash against the indiscriminate use of metrics has prompted three prominent responses: the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and the UK’s Metric Tide report. The main messages from these documents overlap and are quite clear:
- Use quantitative evaluation to support, not replace, qualitative expert assessment;
- Set research goals at the start and choose from a range of appropriate indicators to measure performance against those goals (no single method of evaluation will apply to all contexts);
- Be mindful of the limitations of particular indicators;
- Base metrics on the best possible data and be transparent in the data and methods used for calculating them (this applies to rankings and league tables, funding decisions and researcher appointments);
- Focus on the research value of a paper, not where it is published, and avoid using journal-level impact factors to assess individual outputs;
- Allow for disciplinary differences in publication and citation practice;
- Avoid unwarranted concreteness and false precision;
- Use multiple indicators, and review them regularly to detect ‘gaming’ and to confirm they remain fit for purpose.
The authors of the Metric Tide report have proposed the idea of ‘responsible metrics’ as a way of “framing appropriate uses of quantitative indicators in the governance, management and assessment of research” (Wilsdon et al. 2015, p.134). In their view, responsible metrics are characterised by robustness, humility, transparency, diversity and reflexivity. The report is well worth reading, not only for the responsible metrics proposal and the 22 recommendations from the review group, but also for its wide-ranging discussion of the issues surrounding the use of metrics in UK and international research.
If you would like to discuss any of these issues further then feel free to post a comment or get in touch.