Research metrics

A guide to identifying and analysing research-related metrics, with support material and further guidance on related tools.

The UCL Bibliometrics Policy

UCL has developed a policy to govern the responsible use of bibliometrics. The policy outlines principles for the use of bibliometrics at UCL, setting out that while the use of metrics is not mandatory, in cases where they are used they should meet certain requirements. For example, the impact factor of journals should never be used to assess individual publications.

Information about the policy, and guidance to give practical advice on putting these principles into practice, is available below.

It was developed through a broad consultation process in 2018-19, and approved as a university policy in February 2020.

Key principles

The policy sets out eleven key principles:

  1. Quality, influence, and impact of research are typically abstract concepts that prohibit direct measurement. There is no simple way to measure research quality, and quantitative approaches can only be interpreted as indirect proxies for quality.
  2. Different fields have different perspectives on what characterises research quality, and different approaches for determining what constitutes a significant research output (for example, the relative importance of book chapters vs journal articles). All research outputs must be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes.
  3. Both quantitative and qualitative forms of research assessment have their benefits and limitations. Depending on the context, the value of different approaches must be considered and balanced. This is particularly important when dealing with a range of disciplines with different publication practices and citation norms. In fields where quantitative metrics are neither appropriate nor meaningful, UCL will not impose their use for assessment in that area.
  4. When making qualitative assessments, avoid making judgements based on external factors such as the reputation of authors, or of the journal or publisher of the work; the work itself is more important and must be considered on its own merits.
  5. Not all indicators are useful, informative, or will suit all needs; and metrics that are meaningful in some contexts can be misleading or meaningless in others. For example, in some fields or subfields, citation counts can estimate elements of usage, but in others they are not useful at all.
  6. Avoid applying metrics to individual researchers, particularly metrics which do not account for individual variation or circumstances. For example, the h-index should not be used to directly compare individuals, because the number of papers and citations differs dramatically among fields and at different points in a career.
  7. Ensure that metrics are applied at the correct scale of the subject of investigation, and do not apply aggregate level metrics to individual subjects, or vice versa. For example, do not assess the quality of an individual paper based on the impact factor of the journal in which it was published.
  8. Quantitative indicators should be selected from those which are widely used and easily understood to ensure that the process is transparent and they are being applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.
  9. If goals or benchmarks are expressed quantitatively, care should be taken to avoid the metric itself becoming the target of research activity at the expense of research quality.
  10. New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the other principles listed here for more traditional metrics. Additionally, consider the sources and methods behind such metrics and whether they are vulnerable to being gamed, manipulated, or fabricated.
  11. Bibliometrics are available from a variety of services, with differing levels of coverage, quality and accuracy, and these aspects should be considered when selecting a source for data or metrics. Where necessary, such as in the evaluation of individual researchers, choose a source that allows records to be verified and curated to ensure records are comprehensive and accurate, or compare publication lists against data from the UCL IRIS/RPS systems.
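To see why principle 6 cautions against comparing individuals with a single number, consider the h-index, which is defined as the largest h such that at least h of a researcher's papers each have at least h citations. The sketch below (an illustrative example, not part of the UCL policy; the citation lists are invented) shows how two very different publication profiles can produce the same h-index:

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts per paper for two researchers:
early_career = [10, 9, 8, 3]         # few papers, each well cited
established = [3, 3, 3, 3, 3, 2, 1]  # many papers, each modestly cited

print(h_index(early_career))  # 3
print(h_index(established))   # 3
```

Both profiles score h = 3, even though the underlying publication and citation patterns differ sharply; this is the kind of individual variation the metric cannot capture.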

Further guidance

Detailed guidance on various aspects of putting the policy into practice is available from UCL Research. This covers a range of topics including:

  • recommended metrics, and metrics to avoid;
  • recommended approaches for comparisons between departments and between institutions;
  • responsible use of new or alternative metrics;
  • general statistical guidance on interpreting metrics; and
  • interpreting metrics for multi-authored papers.