The time has come, the OSC said, to talk of metric things.

Of JAR and JIF and H-index,
Of citations and ratings.

(with apologies to Lewis Carroll)

Love them or loathe them, metrics are here… and like the fabled buses, come along in groups. Navigating the coded language and acronym soup (JIF? SNIP? DOI? SJR?) can be off-putting at best. How can we engage with metrics, get the best from them, and use them appropriately?

“Responsible Metrics”?

Bibliometric indicators can be a helpful tool for monitoring dissemination of, and engagement with, a research output, but we must also recognise their limitations, particularly in certain disciplines.

Responsible metrics have been defined by the Forum for Responsible Research Metrics in terms of the following key principles:

  • Robustness – basing metrics on the best possible data in terms of accuracy and scope
  • Humility – recognising that quantitative evaluation should support, but not supplant, qualitative, expert assessment
  • Transparency – ensuring that those being evaluated can test and verify the results
  • Diversity – accounting for variation by research field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
  • Reflexivity – recognising and anticipating the systemic and potential effects of indicators, and updating them in response

There are also principles/frameworks outlined in the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide, and some institutions (such as Loughborough) have developed their own statements.

Recognise the limitations of metrics

  • Be careful when comparing across disciplines, because publishing practices differ. Not all subjects publish research in journal articles, so citation counts and impact factors are less relevant. For example:
    • in Computer Science and Economics some research is made available as conference papers
    • in History and English much research is published in books
    • in the Social Sciences research is sometimes made available via discussion papers and reports.
  • Bibliometrics are only as good as the databases they use – no database is complete or 100% accurate – providers make judgements when deciding what to include in their datasets.
  • Don’t assume that the very best articles only appear in high-impact journals. High-ranking journals contain low-performing articles and vice versa (see the sketch below).
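To see why, here is a minimal sketch in Python, using entirely invented citation counts. A journal impact factor is essentially a mean, and citation distributions are heavily skewed, so a few highly cited papers can pull the average far above what a typical article in the journal achieves:

```python
# Invented citation counts for ten articles in a hypothetical "high-impact"
# journal. An impact factor is essentially a mean, and citation counts are
# heavily skewed, so the mean says little about any one article.
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 4, 10, 150]

print(f"mean citations (what an impact factor reflects): {mean(citations):.1f}")
print(f"median article: {median(citations):.1f}")
# The mean is 17.3, but the median article picked up just 2 citations:
# the "average" here is driven almost entirely by a single outlier.
```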

You can neutralise the worst of these limitations by:

  • Presenting quantitative data in context – a ‘3’ on its own is meaningless – a 3 in a field of 100 20s is a vastly different thing to a 3 in a field of 100 0.1s. By providing context, and, where possible, appropriately normalised scores, you can get a better picture of what the number reflects (there’s a worked sketch of this after the list).
  • Comparing like to like – this can be challenging with so many variables (publication year, career stage, language, discipline) but being open about the parameters you use gives a more accurate reflection of the bigger picture.
  • Using a mixture of metrics and qualitative evidence – metrics are by no means yet at the stage where they can replace peer review or analysis of an output. Using the two in conjunction presents a more accurate picture of your work.
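To make the first bullet concrete, here is a rough sketch of the kind of normalisation meant above, using the made-up numbers from that example (the normalised_score helper is hypothetical, not a tool from any particular database): dividing a raw citation count by the average for comparable outputs turns a bare ‘3’ into a figure you can actually interpret.

```python
# A rough sketch of field normalisation (all numbers invented): divide the
# raw citation count by the average for comparable outputs (same field,
# publication year, and output type). A score of 1.0 means "cited about as
# much as the average comparable output".

def normalised_score(citations: float, field_average: float) -> float:
    return citations / field_average

# The same raw count of 3 tells two very different stories:
print(f"{normalised_score(3, field_average=20.0):.2f}")  # 0.15 -> well below average
print(f"{normalised_score(3, field_average=0.1):.2f}")   # 30.00 -> far above average
```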

Help is at hand!

Those of you who follow my Twitter feed will have noticed that on Tuesday I was at the “Responsible use of Bibliometrics in Practice” event, and the next day the very fabulous Metrics Toolkit was launched. This is a brilliant resource (all available CC-BY). In their own words:

We developed the Metrics Toolkit to help scholars and evaluators understand and use citations, web metrics, and altmetrics responsibly in the evaluation of research.

The Metrics Toolkit provides evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied. You’ll also find examples of how to use metrics in grant applications, CVs, and promotion dossiers. (http://www.metrics-toolkit.org/about/)

The toolkit provides easy-to-access information on a very wide range of possible metrics, with the option of filtering these by discipline and output type.

At Kent?

For researchers at Kent, we also have lots of information on ‘Impact and Sharing’, which includes introductions to the systems we use at Kent and ways of increasing your dissemination and academic impact. You might want to spend some time checking and tidying your author profiles to make sure you are getting the most out of your citations, and if you would like to know more, please get in touch.