Welcome to Day 10 of the U-M Library Research Impact Challenge!
Today, the final challenge explores several frameworks for the responsible use of research impact metrics. We hope these frameworks will inform your thinking about reporting on your own work, and will also prepare you to discern best (and not-so-good) practices for research evaluation when you encounter them out in the world.
Let’s get started!
Skepticism and resistance to metrics are far from new! The familiar adage that “not everything that can be counted counts, and not everything that counts can be counted” was coined by William Bruce Cameron in 1963. Even within the specific realm of research impact metrics, criticisms of the Journal Impact Factor as a measure of excellence appear in scholarly journals starting in 1997 (and continue to be published today).
However, there does seem to be a new wave of research experts coming together to advocate for the responsible use of research impact metrics in higher education, and to codify best practices for this work. These efforts date from within the last decade, a response to a growing sense of the “metricization of the university.” Today’s challenge introduces major contributions toward the notion of responsible metrics, and invites you to consider how they align with your own work.
- The San Francisco Declaration on Research Assessment (DORA) was developed at the 2012 meeting of The American Society for Cell Biology, although the declaration aims to cover all disciplines. DORA makes a total of 18 recommendations: a general recommendation, plus guidance broken out to address five distinct stakeholder groups: funding agencies, institutions, publishers, organizations that supply metrics, and researchers.
- The Leiden Manifesto was published as a comment in Nature in April 2015. Organized around 10 general principles, the manifesto emphasizes openness and transparency, as well as honoring local/regional excellence and the missions of individual institutions, rather than conforming to a universal measure.
- In July 2015, the UK’s Independent Review of the Role of Metrics in Research Assessment and Management concluded 15 months of investigation into the reliability and robustness of research impact metrics and published its final report: The Metric Tide. Among other things, the report proposes in its final chapter (from p. 134) a system of Responsible Metrics, laying out recommendations to ensure the fair and appropriate use of research impact metrics in future iterations of the UK’s Research Excellence Framework (REF) process. The report makes reference to both DORA and the Leiden Manifesto.
Take a look at all three of these documents, and consider the following:
- Does any of these documents resonate more with you than the others?
- Do you see the recommendations in these documents being carried out in practice around you?
- What do these recommendations have to say about commonly used metrics you've encountered in your career, such as the Journal Impact Factor or the h-index?
Now that these statements have been around for a few years, they are starting to influence policy and action.
While these calls for responsible metrics aim to be broad and inclusive in their scope, they still come out of and depend upon the culture of STEM, journal publishing, and citation-based metrics. What is a responsible way to evaluate scholarship that does not fit this framework? Check out:
- Ochsner, M., Hug, S., and Galleron, I. (2017). The future of research assessment in the humanities: bottom-up assessment procedures. Palgrave Communications 3. https://doi.org/10.1057/palcomms.2017.20
- U-M Library Research Guide on Responsible Metrics
- Roemer, R. and Borchardt, R. (2015). Meaningful METRICS: A 21st-Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact. Chicago: Association of College and Research Libraries.
- And for a refreshing palate cleanser, which questions the entire concept of excellence as a benchmark for research and higher education, check out: Moore, S., Neylon, C., Eve, M., O’Donnell, D., and Pattinson, D. (2017). “Excellence R Us”: university research and the fetishisation of excellence. Palgrave Communications 3. https://doi.org/10.1057/palcomms.2016.105
You’ve completed Day 10 of the U-M Library Research Impact Challenge! It’s been a privilege to spend these two weeks with you, and we hope you found it useful as well as enjoyable.
Many thanks—and see you in the library!