News Release

Report reveals potential of AI to help Higher Education sector assess its research more efficiently and fairly

Reports and Proceedings

University of Bristol

Topline summary

* Study indicates generative AI tools are being used widely by UK universities for the REF

* Findings show wide variation in the level and nature of usage

* Results highlight need for national oversight and guidelines

* With an innovative mindset and structured support…scope to improve efficiency and equitable access

Full release

A new national report has shown for the first time how generative AI (GenAI) is already being used by some universities to assess the quality of their research – and it could be scaled up to help all higher education institutions (HEIs) save huge amounts of time and money.

But the report, led by the University of Bristol and funded by Research England, also reveals widespread scepticism among academics and professionals working in the sector about its usage for this purpose, and highlights the need for national oversight and governance.

The UK’s system for assessing the quality of research in HEIs is known as the Research Excellence Framework (REF). Its outcomes influence how around £2 billion of public funding is allocated each year for universities’ research.

Lead author Richard Watermeyer, Professor of Higher Education at the University of Bristol, said: “GenAI could be a game-changer for national-level research assessment, helping to create a more efficient and equitable playing field. Although there is a lot of vocal opposition to the incorporation of it into the REF, our report uncovers how GenAI tools are nevertheless being widely – if currently, quietly – used, and that expectation of their use by REF panellists is high.”

The last REF took place in 2021 and, following a review, changes to guidance for the next one – REF2029 – are expected to be announced this month. Total costs of REF2021 are estimated at around £471 million, an average of £3 million per participating HEI, and the costs of REF2029 are anticipated to be much higher.

Prof Watermeyer said: “The report is timely given the immense financial pressures facing the sector. It’s widely accepted that the regulatory burden of the REF is high and will, in all likelihood, only increase. Our report demonstrates that GenAI has the potential to alleviate some of this, but offers no complete solution. It could also create new bureaucratic challenges of its own, including establishing new requirements and protocols for its appropriate use.”

Key findings

The report investigated the usage of GenAI at 16 HEIs across the UK, including Russell Group universities and more recently established institutions. Findings indicated that GenAI was being widely deployed in some capacity to prepare REF submissions, but the extent and nature of that use differed greatly between institutions.

For instance, some universities were using the tools to gather evidence of their research impact in the wider world and to help craft related stories. In others, there was evidence of in-house tools being developed to streamline REF processes or GenAI being used to review, assess, and score their research.

Prof Watermeyer added: “There is clear variation, in fact, disparity in how HEIs can and might use these tools for competitive advantage in the REF. The extent to which institutions might profit from these tools is, as you might expect, linked to their level of resourcing and local expertise.”

The study also included a survey of nearly 400 academics and professional services staff, which asked how they felt about GenAI tools being used for various aspects of REF2029.

For every aspect, a majority of academics and professional services staff strongly opposed the use of GenAI tools, with strong opposition ranging from 54% to 75% of respondents across different parts of the REF process. Support was highest, at almost a quarter (23%) of respondents, for deploying GenAI tools to help universities develop impact case studies.

What participants said

The study also included interviews with 16 university Pro Vice-Chancellors. Some were very positive and regarded the use of GenAI as inevitable: “This is the future…We need to lead into it…We’ve got to understand it and experiment a bit,” and “I do think that just to put our heads in the sand and say it’s not going to happen, or not on our watch, I think is very limiting of what the future might look like…I think there’s a lot of moral panic about…I think we’re not going to stop it happening.”

Other feedback voiced doubt, mistrust, and raised the possible limitations of AI: “I do just wonder if we’re just in a bit of an AI bubble at the moment, where we think everything’s going to be driven by AI and suddenly over the next six, seven years, actually we’re going to have a much greater clarity of what the limitations of AI are.” Another respondent commented: “We don’t trust them enough in order to really start backing away from conventional tools and supplementing them with AI tools…there are still very large numbers of administrators and researchers who have had no real experience of AI yet.”

Prof Watermeyer said: “Our findings show that opposition to the tools is concentrated among certain academic disciplines, chiefly arts and humanities and social sciences, and among non-users. Professional services staff tend to be much more enthusiastic about the potential of GenAI for REF, as are those in post-92 institutions, which have considerably less resource to devote to REF processes.”

Several HEI REF Lead respondents were receptive to, or at least resigned to, its usage. Comments included: “I think AI is here to stay and so we need to work out how we adopt it in a way that we’re comfortable with” and “I think, certainly from a professional services point of view, the best way to keep your job and not get replaced by AI is to be the person that can manage AI.”

Co-author Professor Lawrie Phipps, Senior Research Lead at Jisc, said: “The focus group findings highlight an urgent need for strong governance and shared standards. It is clear that without institutional policies and national guidance, universities risk fragmented approaches, uneven capacity, and growing inequity between those able to develop bespoke systems and those reliant on public tools.

“Participants consistently called for transparent REF guidelines to define what constitutes responsible AI use and how such use should be disclosed. Equally, there is consensus that human oversight must remain central to all evaluative processes.”

Next steps and closing remarks

The report makes a host of recommendations, including that all universities should establish and publish a policy on the use of GenAI for research purposes, encompassing REF; relevant staff should receive full training on the responsible and effective use of AI tools; and appropriate security and risk management measures should be implemented.

It also calls for robust national oversight, comprising sector-wide guidance on usage for REF2029 and a comprehensive REF AI Governance Framework. To help achieve equitable access to the technology, it advises that a shared, high-quality AI platform for the REF could be developed and made accessible to all HEIs.

Co-author Tom Crick, Professor of Digital at Swansea University, said: “The current disparate use of these tools needs clearer co-ordination into a standardised framework through which the sector can adopt and develop open, transparent and responsible practice. This will be critical to mitigating inequitable practice by HEIs and across disciplines, and possibly even for levelling the playing field for this and future national research assessment exercises.”

From a global perspective, other comparable performance-based research funding models have recently been discontinued, namely Excellence in Research for Australia and New Zealand’s Performance-Based Research Fund.

Prof Watermeyer said: “We can see that methods of national research assessment are no longer fit for purpose and fail to meet the needs of the datafied age. GenAI might not be the solution, but it also cannot be discounted from the necessary reform the UK is in prime position to lead on.”

Dr Steven Hill, Director of Research at Research England, said: “These findings offer both a caution and a call to action. They warn against haste and complacency alike, while inviting the sector to lead with principle, collaboration, and well-informed critique. With the right safeguards, the integration of GenAI can help us uphold excellence, fairness, and trust in the assessment of UK research.”

Professor Guy Poppy, Pro Vice-Chancellor for Research and Innovation at the University of Bristol, said: “It’s vital that UK research excellence can be robustly identified and measured, which is the primary purpose of the REF. This is a key report which sheds important new light on how AI tools are currently being deployed in the process, as well as the wider potential for its future application.

“As global leaders in AI research and expertise, it’s great that Bristol is playing a pivotal role in raising both the opportunities and challenges presented by this rapidly advancing technology at a sector-wide level. The work will no doubt fuel balanced, evidence-based discussions concerning REF, and inform carefully considered decisions about how UK university research is assessed and evaluated so processes remain rigorous, responsible and transparent.”

Report

‘REF-AI: Exploring the potential of generative AI for REF2029’ by Richard Watermeyer, Lawrie Phipps, Rodolfo Benites, and Tom Crick

