1. Research metrics should be used only to inform and support, not supplant, qualitative expert assessment.
Leiden manifesto principle 1
Peer review remains the method of choice for assessment. Quantitative indicators can be a valuable source of additional information and assist in making more equitable judgements.
2. Journal Impact Factors should not be used as a surrogate measure of the quality of individual research articles.
DORA institutional commitment 1
This principle reinforces the University’s code of practice for REF2021, which requires each UOA coordinator to develop a process based on the following guidelines:
- Selection decisions should be based on peer review conducted by at least two subject matter experts;
- External peer review should be undertaken where possible;
- Interdisciplinary research and non-traditional outputs should not be disadvantaged.
3. Where research metrics are considered in the assessment of individuals, including recruitment, probation, performance, reward and promotion, their use should be clearly stated in the guidance and application documentation. This documentation should also confirm that the research content of a paper is far more important than publication metrics or the identity of the journal in which it was published.
DORA institutional commitment 1
This principle further expands on the University’s commitment in signing up to DORA, ie journal-level metrics will not be used when assessing individuals.
4. Research metrics should be selected that best reflect the nature of the research discipline in terms of publication, citation and (external) funding practices, other types of research outputs and outcomes, impact, collaboration, supervision and career paths. Normalised metrics should be used where these are available and robust.
Leiden manifesto principle 6; DORA institutional commitment 2
How informative research metrics are depends on the discipline. Generally, research metrics are more informative in the natural and medical sciences than in the social sciences and humanities. This principle also aims to give recognition to the many different facets of research and, therefore, to the diversity of researcher career paths.
Although normalised metrics are considered the best choice, they are not always available, and the process of normalisation can reduce their transparency. Furthermore, when assessing within the same discipline, normalisation is less important.
5. The selection of research metrics should be accompanied by information on their source, format and level of precision (eg number of decimals), definition and context, including systemic effects and weaknesses.
Leiden manifesto principles 5, 8 and 9
Verifiability is key to ensuring that research metrics are used properly, and the ability to challenge and critique metrics is vital to their quality and credibility.
6. When choosing to use research metrics in assessment, no single metric should be used in isolation.
Leiden manifesto principle 7
This set of principles forms best practice guidance for the assessment of individuals or groups of individuals. One metric provides one perspective and is unlikely to support robust evidence-based decision making. When evaluating an individual or group of individuals, the aim should be to use a suite or basket of metrics; when only one (responsible) research metric is available, it should be used in combination with qualitative information.
7. Research metrics should be applied at the appropriate level of granularity. When evaluating individual researchers, metrics related to an individual’s performance should be used.
This principle is in line with the University’s commitment in signing up to DORA, ie Journal Impact Factors or other journal-level metrics will not be used when assessing individuals, but it also applies to the selection of other metrics.
8. When employing research metrics for comparative evaluation, whether between individual researchers or groups of researchers, the methodology applied to the research metrics for the comparative evaluation should be made available to any individuals directly affected.
Transparency is key to the responsible use of research metrics, not only in their selection and presentation but also in their use in assessment.
9. The selection of research metrics that reflect or introduce bias (eg gender) should be avoided or otherwise addressed in the relevant assessment.
The University has an active duty to consider the impact on equality in all decision making, and the selection of metrics that could have a negative impact on equality should be avoided or mitigated.
10. Research metrics should be scrutinised regularly to ensure they remain fit for purpose, taking into account newly available research metrics and ‘gaming’ practices.
Leiden manifesto principle 10
Systems of assessment can influence the research metrics employed; the research landscape is changing continuously; the priorities of assessment are subject to change; and, due to advances in technology and data capture, new research metrics are becoming available. It is therefore important that regular scrutiny takes place to ensure the selected metrics remain fit for purpose.