The ‘core’ or ‘institutional’ funding universities receive to do research of their own choosing is being subjected to quality control and competition in a growing number of countries, apparently with significant impact on their behaviour and productivity – but also with unanticipated negative effects.

During the post-War period, state universities were generally financed through a single block grant. But over time they have additionally been gaining external income from research councils, other government agencies and industry, so the block grant is a declining proportion of their income. Curiously, despite wide variations in the share of total income provided by the part of the block grant dedicated to research (for example, Danish universities received 72% of their funding as block grant in 2009, the UK 48% and Ireland 31%, according to Eurostat), there is no clear relationship between the share of institutional funding and research performance.

Since 1986, education ministries have begun to use performance-based research funding systems (PRFS) in awarding institutional funding. In general, they aim to stimulate performance by reallocating resources to the strongest universities and research groups. The UK Research Assessment Exercise (RAE), recently succeeded by the Research Excellence Framework (REF), was the first of these, and like other first-generation PRFS it relies heavily on peer review of submissions from the universities. From roughly 2000, more countries adopted a second generation of PRFS, mostly using bibliometric indicators rather than peer review. By that time the cost and difficulty of bibliometric analysis had fallen enough to make it affordable and, indeed, much cheaper than peer review. Up to this point, PRFS focused on scientific quality, typically viewed through the lens of scientific publication. But a third generation of PRFS now aims to incorporate the influence of research on innovation and society more widely. The evidence used ranges from patents, through counts of innovation outputs such as prototypes, to the textual descriptions of research impacts in the REF.

Key design decisions for PRFS include whether assessment relies on peer review or on metrics, which kinds of output count as evidence of quality and impact, how large a share of institutional funding is at stake, and how steeply the funding formula reallocates money towards the strongest performers.

Despite their widespread adoption, there is little evidence about whether and how PRFS work. What we know is mostly based on evidence from the UK, Norway and the Czech Republic, which suggests that effects depend both on policy purposes and on the effectiveness of implementation.

Norway introduced a PRFS in 2004 as part of the university ‘Quality Reform’ and subsequently set up a similar but separate system for the university hospitals and research institutes. Both act upon a very small fraction of total institutional funding. The PRFS distributes money in a linear way and in practice has rewarded the newer and weaker universities for their increasing research efforts, building national capacity rather than reallocating resources to the existing winners. It drove up the number of publications but not their average quality. (An early Australian PRFS did the same, a few years before.) The PRFS for the institutes had similar effects but failed to increase either the amount of institute-university collaboration or international income – perhaps because both were already high.
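
To make ‘linear’ concrete, the sketch below shows a points-proportional allocation in which every publication point earns the same amount of money whichever institution produced it. The budget, institution names and point counts are purely illustrative assumptions, not figures from the Norwegian system.

# Minimal sketch of a linear (points-proportional) PRFS allocation.
# The budget and publication-point counts are hypothetical, not the
# actual Norwegian indicator weights or budget shares.
def allocate_linear(budget, points):
    """Give each institution a budget share proportional to its points."""
    total = sum(points.values())
    return {name: budget * p / total for name, p in points.items()}

shares = allocate_linear(1_000_000, {"Established university": 900, "Newer university": 300})
print(shares)
# Every extra point is worth the same to any institution, so a newer
# university that raises its output gains funding without resources
# being concentrated further on the existing leaders.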

In response to dissatisfaction with peer-review-based approaches, the Czech Republic introduced a metrics-based PRFS in 2008, which ran annually and was intended, within a short period, to become the only mechanism for allocating institutional research funding. Universities doubled their academic paper production in three years, and the production of innovation-related outputs grew even faster. Allocations to individual organisations and fields became very unstable. Gaming was widespread, and despite repeated attempts to refine the formulae used, the system was abandoned in 2012 and is currently undergoing radical redesign.
