Reconceptualising Feedback in Higher Education

Developing dialogue with students
About this book

Feedback is a crucial element of teaching, learning and assessment. There is, however, substantial evidence that staff and students are dissatisfied with it, and there is growing impetus for change.

Student surveys have indicated that feedback is one of the most problematic aspects of the student experience, and so particularly in need of further scrutiny. Current practices waste both student learning potential and staff resources. Until now, the ways of addressing these problems have been through relatively minor interventions based on the established model of feedback as the provision of information, but the change that is required is more fundamental and far-reaching.

Reconceptualising Feedback in Higher Education, coming from a think-tank of specialists in assessment feedback, is a direct and more fundamental response to the impetus for change. Its purpose is to challenge established beliefs and practices through critical evaluation of evidence and discussion of the renewal of current feedback practices. In promoting a new conceptualisation and a repositioning of assessment feedback within an enhanced and more coherent paradigm of student learning, this book:

• analyses the current issues in feedback practice and their implications for student learning;
• identifies the key characteristics of effective feedback practices;
• explores the changes needed to feedback practice and how they can be brought about;
• illustrates through examples how processes to promote and sustain effective feedback practices can be embedded in modern mass higher education.

Provoking academics to think afresh about the way they conceptualise and utilise feedback, this book will help those with responsibility for strategic development of assessment at an institutional level, educational developers, course management teams, researchers, tutors and student representatives.

Reconceptualising Feedback in Higher Education, by Stephen Merry, Margaret Price, David Carless and Maddalena Taras, is available in PDF and ePUB formats.

Information

Publisher: Routledge
Year: 2013
Print ISBN: 9780415692359
eBook ISBN: 9781134067626
Edition: 1
Part I
Current thinking
Overview
Maddalena Taras
Part I provides an overview of the wider aspects of the student voice and of current engagement, thinking and conceptualisations of feedback. Feedback is a much used and abused term, particularly in its adoption in the service of learning: exploring our collective conceptualisations and understandings of feedback is necessary at every level of education for future developments. This book serves such a function.
Government offices have developed questionnaires to measure and monitor the efficacy of institutional support of feedback to students in an attempt to link feedback processes with learning and ‘customer’ satisfaction.
In Section A (‘The student voice’), Yorke, and Bols and Wicklow evaluate inter/national surveys on aspects of assessment feedback. Yorke, at an international level, examines student surveys and the limited use of feedback data that impact on major national and institutional decisions. Bols and Wicklow delve deeper at national level into student wishes on feedback.
More specifically, Yorke examines government-led surveys designed to enable accountability and comparability across higher education institutions. He highlights the politicisation of feedback in the national undergraduate student surveys in Australia and the UK and comes to two main conclusions. First, two or three questions on feedback in a survey cannot hope to provide meaningful data on such a complex process. Second, despite the paucity of these data, results often prompt stop-gap, short-term remedial measures by institutions, frequently bypassing and neglecting the long-term planning needed to support and sustain developmental strategies that are embedded in learning. This perceived official student voice, despite the limited data used to represent it, has substantial impact, particularly at institutional level.
Bols and Wicklow evaluate what students want in relation to feedback, as reported in the NUS/HSBC UK Student Experience Research. Within a wider assessment and learning context, they seek feedback integrated into a developmental and communicative framework to support both formative and summative assessment. Students want five essential things: effective feedback situated within support for learning, innovative assessment approaches, anonymous marking, learning-focused assessment, and accessible, legible submission methods.
In Section B (‘The wider picture’), Taras examines theories of formative assessment across educational sectors, with the ensuing implications for feedback in a world where divisions are meaningless with the internationalisation of research communications and communities. Price et al. provide a detailed rationale for a new understanding of feedback that has provided the Osney Grange Group agenda and instigated this book.
Taras explores differences in theories of formative assessment across sectors through an evaluation of the definitions of feedback. Feedback appears to be described either in its own terms, that is, unrelated to standards, outcomes and criteria (which are often implicit and covert understandings), or as feedback against a standard, outcomes and criteria (explicit). Across sectors, the roles of feedback and formative assessment are linked variously to learning and require differing roles from tutors and learners. This is problematic, given that the sectors share common terms and literature. Furthermore, five self-assessment models are identified and used to demonstrate degrees of learner-centredness and participation in dialogue. How feedback is arrived at affects its quality, its ethical dimension and the communicability of its consequences.
Price et al. outline changing epistemologies of feedback as product to feedback as a dialogic process for negotiation of meaning within assessment, learning and teaching. Mandatory peer- and self-assessment are part of the contextual factors that support our new epistemological understandings of learning and negotiation of meaning. Learners cannot be excluded from assessment (of which feedback is just one aspect), otherwise they are excluded from learning. All artefacts that are used to mediate and reflect quality need to be ‘peer’ assessed to clarify these elusive ‘qualities’.
In Section C (‘Principles and practices’), Sadler and Bloxham examine the details of new processes, practices and conceptualisations. Sadler focuses on peer feedback as a means of developing expertise with criteria, which learners need in order to become cognisant of evaluative processes, be initiated into the higher education assessment community and develop an understanding of standards. Similarly, Bloxham examines how students can develop an understanding of standards comparable to that of their tutors, in order to support their assignment production.
Section C provides two examples of supporting students’ initiation into assessment cultures and their understanding of standards. Sadler uses peer feedback in a discussion process to develop shared understandings of criteria, standards and quality. Students prepare, share and compare their work, as does the tutor. Through assessment practice and discussion, they turn implicit understandings of their own standards into explicit comparisons. Without initial explicit criteria or guidance as to what to do, students learn to use their past knowledge and experience, and so become clearer about their own perceptions and understandings as a basis for new shared and explicit learning. Bloxham uses research to clarify and support best practices in enabling students to understand the requirements of assessment, task compliance and knowledge of standards. The natural processes of learning and acquiring this knowledge and these skills through practice and experience are the usual means by which lecturers are initiated into the assessment community. This process is also recommended as the most efficient and expedient way of inducting students into assessment processes, understandings and protocols, so that these align with those of their tutors.
Section A: The student voice
Chapter 1
Surveys of ‘the student experience’ and the politics of feedback
Mantz Yorke
The politics of feedback
Governments around the world acknowledge the importance of higher education for national economic performance. With varying levels of intensity, they also appear to view students as acting as rational ‘consumers’ or ‘customers’ of higher education, asserting that an individual investment in higher education pays off in terms of better career prospects and other life-benefits. The idea of student choice has, as a consequence, become a significant element in political rhetoric. The consumerist perspective, however, underplays two major considerations – that higher education contributes significantly to the social good, and that students have to contribute their own energies to their development.
In Australia and the United Kingdom, the focal countries of this chapter, governments invest considerably in higher education, even though students are required to contribute substantially to the costs of their studies. National surveys of students’ perceptions of their experience have been undertaken since 1993 in Australia (focusing on graduates a short time after they have completed their programmes, and based on the Course Experience Questionnaire [CEQ]) and from 2005 in the UK, though in this case focusing on students towards the end of their final year’s studies. These surveys were introduced for various reasons relating to accountability, such as justifying the investment of public money and assuring the quality of provision to a range of stakeholders, as well as making a contribution to informing potential students regarding their choice of institution.
As part of the ‘consumer choice’ agenda, the data are made available to the public, although in differing amounts of detail: CEQ data are published annually by Graduate Careers Australia (e.g. GCA 2010), and annual data from the UK National Student Survey (NSS) (previously made available through the national ‘Unistats’ website) are integrated into ‘Key Information Sets’ pertaining to all courses.
These surveys are politically and economically important to institutions as they seek to position themselves in the market for students. It is unclear how much use students make of the survey data, but ‘league tables’ or rankings of institutions (which incorporate survey data in varying degrees of detail) may for many be treated as proxies for the quality of ‘the student experience’. Feedback on students’ work is a significant component of perceptions of the value obtained from the money invested in higher education programmes by students or their sponsors. Satisfaction with feedback is one of the ‘measures’ incorporated into the league table of universities published by the Guardian, and into that newspaper’s disaggregated tables focusing on specific subject disciplines. In contrast, the league tables published by The Times and the Independent, along with their subject-specific disaggregations, use merely a global ‘measure’ of student satisfaction, to which perceptions relating to feedback contribute. In Australia, The Good Universities Guide (Hobsons 2010) indicates whether a field of study at a university has teaching quality, ‘generic skills’ achievement among its students and overall satisfaction that are average, better than average or worse than average, based on CEQ ratings.1
Institutional performance in general, as measured by national surveys, is palpably of national and international political and economic significance. There is also an intra-institutional micropolitical dimension to institutional performance in that differences between ratings given to academic organisational units can affect institutional strategy, with some units being privileged and others penalised.
This chapter focuses on surveys of first degree students in Australia and the UK, though surveys are also administered to postgraduate students in both countries.
Brief political histories
Australia
The National Inquiry into Education, Training and Employment raised the twin issues of the quality and efficiency of the Australian educational system, on which public expenditure had increased greatly (Williams 1979, Volume 1, para. 18.1). In the Williams Report there are hints of the interest in performance indicators that was to develop over the succeeding decade and a half: for example, Question (h) of a suggested checklist relating to tertiary education asked:
What arrangements are there to review methods of teaching and examining, and curricula, in the light of examination results, and comments from students, professional or para-professional associations, and employers?
Williams (1979, Volume 1, para R18.23)
In the succeeding years, political interest in evaluating the higher education system increased. Amongst the relevant policy documents were:
• reports from the Commonwealth Tertiary Education Commission (CTEC) for the 1979–81 and 1982–84 triennia, which stressed the need for improved evaluative practices within higher education;
• two studies supported by the CTEC – Linke et al. (1984) and Bourke (1986), the latter of which noted ‘the absence of systematic and routine scrutiny of performance at the departmental level’ (p.23);
• the Review of Efficiency and Effectiveness in Higher Education (Commonwealth Tertiary Education Commission 1986), which was charged, inter alia, with examining ‘measures to monitor performance and productivity in higher education institutions, to assist institutions to improve their efficiency and accountability’ (p. xv);
• the government White Paper, Higher Education: a policy statement (Dawkins 1988), which supported the development of a set of indicators that would include, inter alia, the quality of teaching (see pp.85–6);
• a response from the two bodies representative of the leaders of Australian universities and colleges (AVCC/ACDP 1988) in which a number of possible indicators were set out. Of relevance to this chapter is the suggestion that the quality of teaching should be evaluated through the use of a short student survey, though feedback was not explicitly mentioned in this broadly couched document (see pp.10–11).
This steadily intensifying policy interest in performance indicators led to the commissioning, by the Commonwealth’s Department of Employment, Education and Training, of a feasibility study into a number of possible indicators. The outcome was the two-volume report Performance Indicators in Higher Education (Linke 1991).
One of the indicators whose utility was researched was the Course Experience Questionnaire, a 30-item instrument focusing upon students’ perceptions of teaching quality in higher education. The Linke Report recommended that an indicator (initially along the lines of the trialled version of the CEQ) be included in any national system of performance indicators: this instrument could be incorporated in, or administered in conjunction with, the annual survey of graduates that was conducted by the (then) Graduate Careers Council of Australia (Linke 1991, Volume 1, pp.63, 65). The policy substrate to the CEQ was made apparent when its designer, Paul Ramsden, acknowledged that performance indicators entailed
the collection of data at different levels of aggregation to aid managerial judgements – judgements which may be made either within institutions, or at the level of the higher education system as a whole.
(Ramsden 1991, p.129)
In a paper circulated in 1989 to clarify various issues relating to the CEQ during its trial period, it was acknowledged that the CEQ had to produce findings that would allow appropriate comparisons to be made across the higher education sector:
[The CEQ’s] guiding design principle has been a requirement to produce, as economically as possible, quantitative data which permit ordinal ranking of units in different institutions, within comparable subject areas, in terms of perceived teaching quality.
(Linke 1991, Volume 2, p.81)
United Kingdom
The political desire for performance indicators relating to higher education in the UK can be tracked back at least as far as the Jarratt Report (CVCP 1985), which recommended that the then University Grants Committee (UGC) and the Committee of Vice Chancellors and Principals (CVCP, the representative body of the universities of that time) should develop performance indicators. A joint working group of the two organisations produced two statements on performance indicators (CVCP and UGC 1986; 1987) and during the ensuing decade the statements were followed up with compilations of university management statistics and pe...

Table of contents

  1. Cover Page
  2. Half Title page
  3. Title Page
  4. Copyright Page
  5. Contents
  6. Figures
  7. Tables
  8. Contributors
  9. Foreword
  10. Preface
  11. Part I Current thinking
  12. Section A The student voice
  13. Section B The wider picture – challenges to preconceptions
  14. Section C Principles and practices
  15. Part II Enhancing the student role in the feedback process
  16. Section A Students
  17. Section B Tutors
  18. Part III Fostering institutional change
  19. Index