News Release

Official measures of research ‘impact’ are failing to keep pace with socially networked academics

A survey of how academics in the UK use social media to encourage people to interact with their research argues that much of the public value of their work is probably being overlooked in official ‘impact’ assessments.

Peer-Reviewed Publication

University of Cambridge

The study analysed more than 200 examples of how academics discuss and encourage the uptake of their scholarship on social media. Based on the usage patterns it uncovered, the study suggests that the present approach to assessing universities’ public ‘impact’, enshrined in the Research Excellence Framework (REF), is due an update, since academics are now more socially networked than they were when the model was devised.

The REF is the official system for measuring the quality of university research in the UK and informs the distribution of research funding. The results from the most recent assessment round were published last week (12 May).

As part of the exercise, university departments are asked to demonstrate the ‘impact’ of their work: effectively showing how it has enriched society. While the new study is supportive of requiring impact case studies, it questions how they are assessed. It argues that a gulf is opening up between how impact is measured in the REF and the true scope and range of scholarly engagement on social media platforms – some of which did not even exist when the framework was first devised.

In particular, the REF focuses on the extent to which the final outputs of completed research projects are taken up by public audiences. By contrast, the study found that today’s academics are often engaged in ongoing ‘feedback loops’ with organisations, community groups, policy actors and other publics during a project’s lifetime. These loops create opportunities to collaborate and share expertise while the research is still underway, often in ways that the REF is unlikely to capture.

The study’s author, Dr Katy Jordan, from the Faculty of Education at the University of Cambridge, said: “The official language presents impact as a top-down, outward flow from universities to a waiting public, but this is an outdated characterisation – if it was ever valid at all. Ask researchers about their most impactful interactions on social media, and you’ll get a much wider range of examples than the REF covers.”

“You could argue that this means too many researchers are misunderstanding what impact is; but it’s also potentially evidence that times have changed. There’s a huge amount to be said for asking universities to demonstrate their value to wider society, but it may be time to rethink how we measure this.”

The REF measures impact through two principal dimensions: ‘significance’ (the meaningful difference a project makes) and ‘reach’ (the quantifiable extent to which it does so). The definition of impact beyond this is very open-ended, varies across disciplines, and is often considered ambiguous.

The study points out that the REF also offers somewhat confusing advice on public engagement: encouraging it in general, while excluding it from what formally counts as impact. Official guidance states: “Engaging the public with research does not count as impact. Impact is what happens when people interact with the research, take it up, react, or respond to it. Public engagement doesn’t just happen when the research is complete.”

Jordan’s survey invited academics to provide examples of strong impact they had achieved through social media. She received responses from 107 scholars in 15 countries, ranging from postgraduate researchers to established professors; most were UK-based. Her analysis covered 209 of the examples they submitted.

Significantly, fewer than half related to cases in which research had been disseminated ‘outwards’ to the public as products, in the way the REF presumes. In these cases, the academics had typically used social platforms to share their findings with a wider audience, to stimulate discussions with colleagues, or to generate evidence of positive engagement with the research.

About 56% of the responses, however, described impacts arising from exchanges that were not simply one-way. In particular, participants used social media to test out research ideas, report interim findings, crowdsource information and data, or recruit research participants.

These discussions appear to have generated more than just ‘impact’ in the official sense. As a result of the exchanges, researchers were invited to give public lectures, participate in panel discussions, give evidence and advice to organisations, or run training sessions.

Crucially, these opportunities did not always focus on the research that had stimulated the initial interaction. In many cases, researchers who posted about their project were then asked to share their broader expertise – often with advocacy organisations or policy actors who were interested in finding out more about their research in general. For example, in one case, a post on social media led to a senior civil servant from the Cabinet Office visiting an entire group of academic colleagues to explore how their work as a whole might inform and shape policy.

Jordan argues that social media is blurring the distinction between ‘impact’ and ‘public engagement’. As information flows into academic projects from people, companies and organisations contributing ideas, questions and feedback through social platforms, these exchanges in turn generate both formal and informal opportunities for ‘outward’ exchange. This circuit of interaction seems to be influencing and benefiting society in multiple ways not tracked by the REF.

These more nuanced impacts are, however, difficult for assessors to monitor or measure. “One solution may be to amend the assessment so that it asks universities not just to provide evidence of research outcomes, but to explain the research process across a project’s lifetime,” Jordan said. “This isn’t a call for yet more ambiguity about what impact is, but for more open-mindedness about what researchers achieve. In an increasingly complex, socially-networked culture, this would help to ensure that the broader effects of their work are not forgotten.”

The research is published in Learning, Media and Technology.

