Part I
Current thinking
Overview
Maddalena Taras
Part I provides an overview of the wider aspects of the student voice, current engagement, thinking and conceptualisations of feedback. Feedback is a much used and abused term, particularly in its adoption in the service of learning: exploring our collective conceptualisations and understandings of feedback is necessary at every level of education for future developments. This book serves such a function.
Government offices have developed questionnaires to measure and monitor the efficacy of institutional support of feedback to students in an attempt to link feedback processes with learning and 'customer' satisfaction.
In Section A ('The student voice'), Yorke, and Bols and Wicklow evaluate international and national surveys on aspects of assessment feedback. Yorke, at an international level, examines student surveys and the limited feedback data that nonetheless impact on major national and institutional decisions. Bols and Wicklow delve deeper at national level into student wishes on feedback.
More specifically, Yorke examines government-led surveys intended to enable accountability and comparability across higher education institutions. He highlights the politicisation of feedback in the national undergraduate student surveys in Australia and the UK and comes to two main conclusions. First, two or three questions on feedback in a survey cannot hope to provide meaningful data on such a complex process. Second, despite the paucity of these data, results often prompt stop-gap, short-term remedial measures by institutions, which bypass and neglect long-term planning that would support and sustain developmental strategies embedded in learning. This perceived official student voice, despite the limited data used to represent it, has substantial impact, particularly at institutional level.
Bols and Wicklow evaluate what students want in relation to feedback as reported in the NUS/HSBC UK Student Experience Research. Within a wider assessment and learning context, students seek feedback integrated into a developmental and communicative framework that supports both formative and summative assessments. Five essential things emerge that students want: effective feedback situated within support of learning, innovative assessment approaches, anonymous marking, learning-focused assessment, and accessible, legible submission methods.
In Section B ('The wider picture'), Taras examines theories of formative assessment across educational sectors, with the ensuing implications for feedback in a world where sectoral divisions have been rendered meaningless by the internationalisation of research communications and communities. Price et al. provide a detailed rationale for a new understanding of feedback, one that shaped the Osney Grange Group agenda and instigated this book.
Taras explores differences in theories of formative assessment across sectors through an evaluation of definitions of feedback. Feedback appears to be described either in its own terms, that is, unrelated to standards, outcomes and criteria (which are often implicit and covert understandings), or as feedback against a standard, outcomes and criteria (explicit). Across sectors, the roles of feedback and formative assessment are linked variously to learning and require differing roles of tutors and learners. This is problematic, given that the sectors share common terms and a common literature. Furthermore, five self-assessment models are identified and used to demonstrate degrees of learner-centredness and participation in dialogue. How feedback is arrived at affects its quality, its ethical dimension and the communicability of its consequences.
Price et al. outline changing epistemologies of feedback, from feedback as product to feedback as a dialogic process for the negotiation of meaning within assessment, learning and teaching. Mandatory peer- and self-assessment are among the contextual factors that support our new epistemological understandings of learning and the negotiation of meaning. Learners cannot be excluded from assessment (of which feedback is just one aspect), otherwise they are excluded from learning. All artefacts that are used to mediate and reflect quality need to be 'peer' assessed to clarify these elusive 'qualities'.
In Section C ('Principles and practices'), Sadler and Bloxham examine the details of new processes, practices and conceptualisations. Sadler focuses on peer feedback as a means of developing expertise on criteria, which learners need in order to become cognisant of evaluative processes, be initiated into the higher education assessment community and develop an understanding of standards. Similarly, Bloxham examines how students can be enabled to develop an understanding of standards comparable to that of their tutors, so as to help them with assignment production.
Section C provides two examples of supporting students' initiation into assessment cultures and understanding of standards. Sadler uses peer feedback in a discussion process to develop shared understandings of criteria, standards and quality. Students prepare, share and compare their work, as does the tutor. Through assessment practice and discussion, they turn implicit understandings of their own standards into explicit comparisons. Without initial explicit criteria or guidance as to what to do, students learn to use their past knowledge and experience, and so become clearer about their own perceptions and understandings as a basis for new shared and explicit learning. Bloxham uses research to clarify and support best practices in enabling students to understand the requirements of assessment, task compliance and knowledge of standards. The natural processes of learning, in which this knowledge and these skills are acquired through practice and experience, are the usual means by which lecturers are initiated into the assessment community. The same process is recommended as the most efficient and expedient way of inducting students into assessment processes, understandings and protocols, so that these align with those of their tutors.
Section A: The student voice
Chapter 1
Surveys of 'the student experience' and the politics of feedback
Mantz Yorke
The politics of feedback
Governments around the world acknowledge the importance of higher education for national economic performance. With varying levels of intensity, they also appear to view students as rational 'consumers' or 'customers' of higher education, asserting that an individual investment in higher education pays off in terms of better career prospects and other life-benefits. The idea of student choice has, as a consequence, become a significant element in political rhetoric. The consumerist perspective, however, underplays two major considerations: that higher education contributes significantly to the social good, and that students have to contribute their own energies to their development.
In Australia and the United Kingdom, the focal countries of this chapter, governments invest considerably in higher education, even though students are required to contribute substantially to the costs of their studies. National surveys of students' perceptions of their experience have been undertaken since 1993 in Australia (focusing on graduates a short time after they have completed their programmes, and based on the Course Experience Questionnaire [CEQ]) and from 2005 in the UK, though in this case focusing on students towards the end of their final year's studies. These surveys were introduced for various reasons relating to accountability, such as justifying the investment of public money and assuring the quality of provision to a range of stakeholders, as well as making a contribution to informing potential students regarding their choice of institution.
As part of the 'consumer choice' agenda, the data are made available to the public, although in differing amounts of detail: CEQ data are published annually by Graduate Careers Australia (e.g. GCA 2010), and annual data from the UK National Student Survey (NSS) (previously made available through the national 'Unistats' website) are integrated into 'Key Information Sets' pertaining to all courses.
These surveys are politically and economically important to institutions as they seek to position themselves in the market for students. It is unclear how much use students make of the survey data, but 'league tables' or rankings of institutions (which incorporate survey data with varying degrees of detail) may for many be treated as proxies for the quality of 'the student experience'. Feedback on students' work is a significant component of perceptions of the value obtained from the money invested in higher education programmes by students or their sponsors. Satisfaction with feedback is one of the 'measures' incorporated into the league table of universities published by the Guardian, and into that newspaper's disaggregated tables focusing on specific subject disciplines. In contrast, the league tables published by The Times and the Independent, along with their subject-specific disaggregations, use merely a global 'measure' of student satisfaction, to which perceptions relating to feedback contribute. In Australia, The Good Universities Guide (Hobsons 2010) indicates whether a field of study at a university is average, or better or worse than average, on teaching quality, its students' achievement of 'generic skills' and overall satisfaction, based on CEQ ratings.1
Institutional performance in general, as measured by national surveys, is palpably of national and international political and economic significance. There is also an intra-institutional micropolitical dimension to institutional performance in that differences between ratings given to academic organisational units can affect institutional strategy, with some units being privileged and others penalised.
This chapter focuses on surveys of first degree students in Australia and the UK, though surveys are also administered to postgraduate students in both countries.
Brief political histories
Australia
The National Inquiry into Education, Training and Employment raised the twin issues of the quality and efficiency of the Australian educational system, on which public expenditure had increased greatly (Williams 1979, Volume 1, para. 18.1). In the Williams Report there are hints of the interest in performance indicators that was to develop over the succeeding decade and a half: for example, Question (h) of a suggested checklist relating to tertiary education asked:
What arrangements are there to review methods of teaching and examining, and curricula, in the light of examination results, and comments from students, professional or para-professional associations, and employers?
Williams (1979, Volume 1, para R18.23)
In the succeeding years, political interest in evaluating the higher education system increased. Amongst the relevant policy documents were:
⢠reports from the Commonwealth Tertiary Education Commission (CTEC) for the 1979â81 and 1982â84 triennia, which stressed the need for improved evaluative practices within higher education;
⢠two studies supported by the CTEC â Linke et al. (1984) and Bourke (1986), the latter of which noted âthe absence of systematic and routine scrutiny of performance at the departmental levelâ (p.23);
⢠the Review of Efficiency and Effectiveness in Higher Education (Commonwealth Tertiary Education Commission 1986), which was charged, inter alia, with examining âmeasures to monitor performance and productivity in higher education institutions, to assist institutions to improve their efficiency and accountabilityâ (p. xv);
⢠the government White Paper, Higher Education: a policy statement (Dawkins 1988), which supported the development of a set of indicators that would include, inter alia, the quality of teaching (see pp.85â6);
⢠a response from the two bodies representative of the leaders of Australian universities and colleges (AVCC/ACDP 1988) in which a number of possible indicators were set out. Of relevance to this chapter is the suggestion that the quality of teaching should be evaluated through the use of a short student survey, though feedback was not explicitly mentioned in this broadly couched document (see pp.10â11).
This steadily intensifying policy interest in performance indicators led to the commissioning, by the Commonwealth's Department of Employment, Education and Training, of a feasibility study into a number of possible indicators. The outcome was the two-volume report Performance Indicators in Higher Education (Linke 1991).
One of the indicators whose utility was researched was the Course Experience Questionnaire, a 30-item instrument focusing upon students' perceptions of teaching quality in higher education. The Linke Report recommended that an indicator (initially along the lines of the trialled version of the CEQ) be included in any national system of performance indicators: this instrument could be incorporated in, or administered in conjunction with, the annual survey of graduates that was conducted by the (then) Graduate Careers Council of Australia (Linke 1991, Volume 1, pp.63, 65). The policy substrate to the CEQ was made apparent when its designer, Paul Ramsden, acknowledged that performance indicators entailed
the collection of data at different levels of aggregation to aid managerial judgements – judgements which may be made either within institutions, or at the level of the higher education system as a whole.
(Ramsden 1991, p.129)
A paper circulated in 1989 to clarify various issues relating to the CEQ during its trial period acknowledged the importance of the CEQ producing findings that would allow appropriate comparisons to be made across the higher education sector:
[The CEQ's] guiding design principle has been a requirement to produce, as economically as possible, quantitative data which permit ordinal ranking of units in different institutions, within comparable subject areas, in terms of perceived teaching quality.
(Linke 1991, Volume 2, p.81)
United Kingdom
The political desire for performance indicators relating to higher education in the UK can be traced back at least as far as the Jarratt Report (CVCP 1985), which recommended that the then University Grants Committee (UGC) and the Committee of Vice Chancellors and Principals (CVCP, the representative body of the universities of that time) should develop performance indicators. A joint working group of the two organisations produced two statements on performance indicators (CVCP and UGC 1986; 1987), and during the ensuing decade the statements were followed up with compilations of university management statistics and performance indicators.