Introduction
A recent search in scientific databases identified an increase of over 380% in research publications focusing on assessment between the 1950s and the 2020s. Despite an intense debate over the past seven decades, the distinction between formative and summative assessment has not resulted in a precise definition, and the boundary between the two remains blurry (Newton, 2007). On the contrary, additional terms have been introduced, such as learning-oriented assessment (Carless, 2007), emphasizing the development of learning elements of assessment; sustainable assessment (Boud, 2000), proposing the support of student learning beyond the formal learning setting; or stealth assessment (Shute et al., 2016), denoting assessments that take place in the background without the learner noticing.
More recently, technology-enhanced assessments have enriched standard or paper-based assessment approaches, some of which hold much promise for supporting learning (Webb et al., 2013; Webb & Ifenthaler, 2018b). While much effort in institutional and national systems is focused on harnessing the power of technology-enhanced approaches in order to reduce costs and increase efficiency (Bennett, 2015), a range of different technology-enhanced assessment scenarios have been the focus of educational research and development, albeit often at small scale (Stödberg, 2012).
For example, technology-enhanced assessments may involve a pedagogical agent for providing feedback during a learning process (Johnson & Lester, 2016). Other scenarios of technology-enhanced assessments include analyses of a learner's decisions and interactions during game-based learning (Bellotti et al., 2013; Ifenthaler et al., 2012; Kim & Ifenthaler, 2019), scaffolding for dynamic task selection including related feedback (Corbalan et al., 2009), remote asynchronous expert feedback on collaborative problem-solving tasks (Rissanen et al., 2008), or semantically rich and personalized feedback as well as adaptive prompts for reflection through data-driven assessments (Ifenthaler, 2012).
Accordingly, it is expected that technology-enhanced assessment systems meet a number of specific requirements, such as (a) adaptability to different subject domains, (b) flexibility for experimental as well as learning and teaching settings, (c) management of huge amounts of data, (d) rapid analysis of complex and unstructured data, (e) immediate feedback for learners and educators, as well as (f) generation of automated reports of results for educational decision making (Ifenthaler et al., 2010).
With the increased availability of vast and highly varied amounts of data from learners, teachers, learning environments, and administrative systems within educational settings, further opportunities arise for advancing pedagogical assessment practice (Ifenthaler et al., 2018). Analytics-enhanced assessment harnesses formative as well as summative data from learners and their contexts (e.g., learning environments) in order to facilitate learning processes in near real time and help decision makers to improve learning environments. Hence, analytics-enhanced assessment may provide multiple benefits for students, schools, and involved stakeholders. However, as noted by Ellis (2013), analytics currently fail to make full use of educational data for assessment.
This chapter critically reflects the current state of research in educational assessment and identifies ways to harness data and analytics for assessment. Further, a benefits matrix for analytics-enhanced assessment is suggested, followed by a framework for implementing assessment analytics.
Current State of Educational Assessment
Tracing the history of educational assessment practice is challenging, as there are a number of diverse concepts referring to the idea of assessment. Educational assessment is a systematic method of gathering information or artefacts about a learner and learning processes in order to draw inferences about the person's dispositions (Baker et al., 2016). Scriven (1967) is often referred to as the original source of the distinction between formative and summative assessment. However, formative and summative assessment are considered to be overlapping concepts, and the function depends on how the inferences are used (Black & Wiliam, 2018).
Newton (2007) notes that the distinction between formative and summative assessment hindered the development of sound assessment practices on a broader level. In this regard, Taras (2005) states that every assessment starts with the summative function of judgment, and by using this information for providing feedback for improvement, the function becomes formative. Bloom et al. (1971) were concerned with the long-lasting idea of assessment separating learners based on a summative perspective of knowledge and behavior: the assessment of learning. In addition, Bloom et al. (1971) supported the idea of developing the individual learner and supporting the learner and teacher towards mastery of a phenomenon: the assessment for learning.
Following this discourse, Sadler (1989) developed a theory of formative assessment and effective feedback. Formative assessment helps students to understand their current state of learning and guides them in taking action to achieve their learning goals. A similar line of argumentation can be found in Black (1998), in which three main types of assessment are defined: (a) formative assessment to aid learning; (b) summative assessment for review, transfer, and certification; and (c) summative assessment for accountability to the public. Pellegrino et al. (2001) extend these definitions with three main purposes of assessment: (a) assessment to assist learning (formative assessment); (b) assessment of individual student achievement (summative assessment); and (c) assessment to evaluate (evaluative assessment).
To facilitate learning through assessment, Carless (2007) emphasizes that assessment tasks should be learning tasks that are related to the defined learning outcomes and distributed across the learning and course period. Furthermore, self-assessments are a suitable means to foster learners' responsibility for learning (Bennett, 2011; Wanner & Palmer, 2018) and self-regulation (Panadero et al., 2017). In general, self-assessments involve students' judgment and decision making about their own work and comprise three steps: defining expectations, evaluating the work against those expectations, and revising the work (Andrade, 2010). Consequently, as Sadler (1989) argues, self-monitoring and external feedback are related to formative assessment, with the aim of evolving from relying on external feedback to self-monitoring in order to independently identify gaps for improvement. Hence, self-assessments enable learners to develop independence from external feedback (Andrade, 2010).
However, self-assessment demands but also fosters learners' evaluative judgment (Panadero et al., 2019; Tai et al., 2018). Thus, self-assessments might be particularly challenging for learners with lower levels of domain or procedural knowledge (Sitzmann et al., 2010). Hence, the feedback generated internally by learners could be complemented and further enhanced with external feedback (Butler & Winne, 1995). Such external feedback may help learners to adjust their self-monitoring (Sitzmann et al., 2010). Among other qualities, the feedback provided should clearly define expectations (i.e., criteria, standards, goals); be timely, sufficiently frequent, and detailed; address aspects that students can change; and indicate how to close the gap in a way learners can act upon (Gibbs & Simpson, 2005; Nicol & Macfarlane-Dick, 2006). Furthermore, assessment and feedback processes should actively include the learner as an agent in the process (Boud & Molloy, 2013). However, offering formative assessments and individual feedback is limited in many ways throughout higher education due to resource constraints (Broadbent et al., 2017; Gibbs & Simpson, 2005).
Assessment as learning is a concept that reflects a renewed focus on the nature of the integration of assessment and learning (Webb & Ifenthaler, 2018a). Key aspects of assessment as learning include the centrality of understanding the learning gap and the role of assessment in helping students and teachers explore and regulate this gap (Dann, 2014). Thus, feedback and the way students regulate their response to feedback is critical for assessment as learning, just as it is for assessment for learning (Perrenoud, 1998). Other active research areas focus on peer assessment (Lin et al., 2016; Wanner & Palmer, 2018). Especially the opportunities of technology-enhanced peer in...