The Evaluators' Eye

Impact Assessment and Academic Peer Review

About this book

This book offers an empirical analysis of how academic peer review panels mediate the traditionally non-academic criterion of societal impact. The UK's 2014 Research Excellence Framework (REF2014) for the first time included an "Impact" criterion that considered how research had influenced society beyond academia. Using a series of interviews with REF2014 Main Panel A evaluators, the book explores how a dominant definition of Impact was constructed within panels and how this led to the development of strategies for valuing it as an ambiguous object. In doing so, Derrick brings a unique perspective to Impact that is currently overlooked in the dominant Impact evaluation discourse. By examining the evaluation procedure as a dynamic process, it is argued that the best models, strategies and insights for Impact evaluation are those constructed in practice, within peer review groups. In exploring the legitimacy of peer review as a tool to assess the societal impact of research, Derrick argues that the future for Impact evaluation is not to seek alternative tools where peer review seemingly fails, but instead to highlight ways in which peer review panels can work smarter. The book will be essential reading for students, academics and policy-makers working in Education, as well as researchers interested in peer review processes and in research evaluation frameworks and audit exercises globally.

Information

© The Author(s) 2018
Gemma Derrick, The Evaluators’ Eye, https://doi.org/10.1007/978-3-319-63627-6_1

1. Impact from the Evaluators’ Eye

Gemma Derrick, Lancaster University, Lancaster, UK
Critiquing peer review doesn’t always win friends among academic colleagues!
Personal correspondence sent to the author, July 2016
If you read nothing else about the UK’s Research Excellence Framework (REF2014) and the Impact criterion, then let it be this book.
Not because it is a critique of REF2014 (it isn’t), but because this book is about the most important mechanic of the REF2014, and one that has been vastly overlooked: the evaluators. The evaluators had a mammoth task. With no precedent, little experience, and under monstrous professional and political pressure, they embarked on evaluating an object that was considered a new influence, using the very traditional evaluation tool of peer review. Specifically, this book examines how evaluators navigated this object, together, and the importance of group dynamics in attributing value to ambiguous evaluation objects such as the Impact criterion.
I do not question the evaluation outcomes, but by examining how these outcomes were reached, I do question how these evaluators worked. To clarify, the question is not just whether these evaluators came up with the right answer, but how they worked and how, in the future, they can work smarter.
So while this is not a book about the REF2014 per se, it is a book about what goes on behind the REF2014 and its evaluation processes. This is the evaluators’ story.
This book is also about Impact.
When the UK government first announced its plans not only to recognise the importance of the societal impact (Impact) of research, but to award funding on the basis of its evaluation as part of the 2014 Research Excellence Framework (REF2014), there was an explosion of dissent from the academic community. Part of this discontent was based on a fear that a focus on Impact would steer research in undesirable directions; another part stemmed from misgivings surrounding the nature of Impact, its assessment and how value can be attributed to such a broad concept. Despite numerous studies on aspects of Impact and its evaluation, understandings of the concept and models of Impact evaluation remain merely theoretical. This book turns the focus away from Impact as the subject of an evaluation, and towards Impact as a process of valuation through the eyes and actions of REF2014 evaluators. For me, the value of Impact cannot be made independent of the process used to assess it.
So, finally, we have peer review, the domain where a value was assigned to Impact. Peer review, as with most evaluations, is a construct. It is not a naturally occurring process, but is instead constructed from the public’s need for accountability and transparency, the academic community’s desire for autonomy, and a political need for desirable outcomes achieved through a fair process (Chubin and Hackett 1990; Dahler-Larsen 2011). An ingrained pillar of academic life and governance, group peer review works by allowing contesting and conflicting opinions about a concept to be played out and negotiated in practice. All academics are conditioned towards the importance of peer review; we question its outcomes (How could I not get that grant!) but accept them because we believe that our peers and experts have valued our proposals as worthy (or not) based on a shared understanding of what is considered excellent in research. This shared understanding is less clear for Impact, which is a new, uncertain and ambiguous evaluation object, one that as a concept is forever in flux, and one for which our regular peers are not necessarily Impact experts. In theory, peer review appears the perfect tool for evaluating Impact as, during this flux, it provides an excellent forum where competing ideas can be aired in practice. However, as a construct, the practical necessities of the evaluation, where mechanics are used to frame and potentially infiltrate debate, call into question the purity of the process as expert-driven, as well as the suitability of peer review as a tool to value ambiguous objects.
Combining these three concepts is a difficult marriage to make, but by considering them together I bring the field out of the theoretical and hypothetical, and into an empirical world. In this book, all previous (and current) debates about Impact, including how to measure it, what it is and how to capture it, are put to the test within a peer review evaluation panel. Within these groups, panellists interpret and define these conceptual debates and meanings of Impact among themselves before producing evaluation outcomes. For this study, I was motivated by an overarching objective of exploring the suitability of peer review as a tool for assessing notions of Impact. Specifically, I focused on how the group’s dominant definition influenced the strategies developed to value Impact (the evaluators’ eye); the extent to which peer review as a constructed exercise helps or hinders the evaluation of ambiguous objects; and the extent to which the Impact evaluation process was at risk of the drawbacks associated with group behaviours. By considering the attribution of value to Impact as a dynamic process, rather than one that is static and dependent on the characteristics of the submissions alone, this book shifts the focus beyond sedentary debates about the definition, nature and pathways to impact, and instead looks at how notions of research excellence beyond academia are played out within groups of experts. What emerges is a totally different way of understanding Impact, one that considers that the real value of Impact cannot be divorced from how evaluators play out their evaluation in practice, within their groups. Viewing the challenges facing Impact evaluation at the group level, rather than solely at the level of the individual evaluator or individual case study, changes (for the better) the types of recommendations that are available for future assessments.

Why Study Peer Review and Impact Together?

Plenty is already known about how experts straddle the concept of excellence or scientific impact in peer review panels. Likewise, there has been a large amount of new research concerned with models of research impact assessment; however, few pieces of research bring these concepts together in order to study them empirically. In bringing together two difficult and, until now, independently considered areas of study, this book has its work cut out for itself. However, this book also testifies that there is no way of understanding Impact that can be separated from the practice of its evaluation and valuation by peer review panels. Within panels, concepts and meanings are assigned to submissions that demonstrate Impact, and the result of these evaluations is as much to do with the social interplay of evaluators as it is with the attributes of the submissions themselves.
Too many studies have focused on the attributes of the submissions (REF2014 Impact case studies) and cross-referenced these with the results of the evaluation, labelling them as examples of Impact without understanding how such assessments were formed. This rather simplistic assumption, where too much attention is paid to a submission’s attributes, overlooks the importance of the group-based dynamics behind the outcomes. It is somewhat foolish, and perhaps naïve, to assume that the value of different Impacts can be determined without considering how this value is deliberated by the peer review panel. In this way, the book takes you on a journey with the REF2014 Impact evaluators as they reason among themselves about what constitutes excellent and, by proxy, valuable Impact.
This is interesting because not only are academics essentially novices when it comes to evaluating Impact, so too, in essence, are the “users” or non-academic experts included in the evaluation process. In fact, previous studies have found that if Impact is to be evaluated by peers, rather than by indicators, then it is difficult to find peers with “experience” of this type of evaluation. Research has shown that “scientists generally dislike impacts considerations” as it “takes scientists beyond the bounds of their disciplinary expertise” (Holbrook and Frodeman 2011, p. 244), and, as such, many scientists and stakeholders alike struggle with evaluating the concept.
Whereas the involvement of experts and peers brings status and credibility to the process (Gallo et al. 2016), evaluation is highly subjective, and even more so when dealing with a unique and problematic criterion such as Impact. The incorporation of “societal impact” as a criterion for peer review can be described as a Kuhnian revolution for research evaluation...

Table of contents

  1. Cover
  2. Front Matter
  3. 1. Impact from the Evaluators’ Eye
  4. 2. Peer Review of Impact: Could It Work?
  5. 3. Evaluation Mechanics
  6. 4. Introducing Impact to the Evaluators
  7. 5. Peers, Experts and Impact
  8. 6. Risking Groupthink in Impact Assessment
  9. 7. Working Smarter with Multiple Impacts and One Eye
  10. Back Matter