The first four chapters in the book will provide essential background information about the literature underlying the study, its design, and the main participants in the project. These chapters set the stage for the results chapters that follow in the second section of the book.
What is formative assessment, and why might it serve as an important mechanism to support student learning? Why is formative assessment a challenging practice for teachers, and how might new representations of how student thinking develops over time—also called learning progressions (Corcoran, Mosher, & Rogat, 2009)—support teachers as they conduct formative assessment with their students? This chapter introduces the Elevate study and the literature on which the study was based.
Before getting started, though, it’s important to note that the foundations of the Elevate study were set years before the research described in this book was funded, growing out of prior research on the effects of formative assessment on student learning and motivation (Yin et al., 2008), an experimental study of middle school science teaching (Furtak & Kunter, 2012), and a pilot study (Furtak, 2012). As I begin this chapter, I’ll go back in time to revisit earlier studies I conducted that helped me understand, both by talking with other teachers and through my own teaching experience, how a representation of student ideas like a learning progression might help to support teachers as they listen and attend to student ideas during formative assessment.
Challenges in Listening and Attending to Student Thinking
A few years ago, as part of my post-doctoral research project, I had the opportunity to do some middle school teaching. The students weren’t mine—I had arranged to teach some local seventh graders as part of the research project—but because I was working in Berlin and wanted to conduct the study in English, I needed to step in as one of the instructors for the study.
The project we were doing was about self-determination theory (Deci & Ryan, 1985), specifically looking at the types of support teachers gave to their students (Stefanou et al., 2004) and the effects of this support on students’ learning and motivation. To our surprise, we found that students perceived the controlling, traditional instruction more positively and learned more in the conditions in which we more directly told them the answers. However, when the students took a follow-up test, the statistically significant differences between the conditions disappeared. In fact, the students who had more control over their learning, and who got supportive and informational feedback when they shared their ideas, actually ended up scoring higher on the follow-up test, whereas the students in the traditional conditions saw their scores drop (Furtak & Kunter, 2012).
When I was a classroom teacher, prior to attending graduate school, I tried to teach in ways that honored student ideas, taking up student thinking and giving students helpful feedback that would move them forward toward learning goals. This type of instruction invited them to work together in solving problems that were relevant to their lives as they learned important science concepts along the way. I’ll be honest: it was a demanding way to be a teacher, taking large amounts of time and energy to realize each day. I would go to bed at an unheard-of hour for someone in their early 20s; get up early and arrive at school when I could still see stars in the sky; and stay in the afternoons long after my more senior colleagues had left to grade papers, try out activities, and make plans for the next day. I only lasted for two years before graduate school drew me into educational research.
But in my post-doctoral study, due to the design of the project, I got to directly compare what it felt like to be the teacher I had been—one supportive of my students’ thinking—with what it felt like to be a more traditional teacher. For the traditional, controlling conditions, I wrote a script that involved giving students answers and evaluating their responses to the questions I asked. I remember vividly coming home to my Kreuzberg apartment every evening after teaching the controlling lessons full of energy, eager to get out and explore my adopted city.
In contrast, on the nights after I had taught the autonomy-supportive conditions in my post-doctoral study—the ones where I asked students open-ended questions, attended to their thinking, and gave them constructive feedback—I came home ready just to watch TV and let my brain take a break. I would watch TV in English, not in German; I didn’t have enough brainpower left for the cognitive load of understanding The Simpsons auf Deutsch.
So what was different? In the controlling conditions, I didn’t put any effort into understanding students’ thinking. I would ask questions, students would give an answer, and then I’d tell them if they were right or not. The excerpt of the transcript that follows shows how I responded to a student who was asking a question about the scale of the graph we were making and whether we would be talking about velocity in meters or centimeters per second.
Erin: [T]he amount of time is how long between each picture?
Mathias: 0.5 seconds.
Erin: 0.5 seconds. Okay. That’s correct. Okay. And so if it went this far in 0.5 seconds, what’s the difference between this and this? Where is it going faster? Here or over there? Where is the velocity greater? Franz?
Franz: I would say there [pointing].
Erin: Here?
Franz: Yeah.
Erin: Okay. That’s correct. Why do you say that?
Franz: Because when it hasn’t used so much energy because then it’s just landed and then it’s already bounced three times, so it’s losing the energy.
Erin: Okay. So we’re not talking about energy. Right now, we’re just talking about velocity.
In this exchange, I was driving the conversation in a particular direction and didn’t hesitate to tell students when they were right or wrong. I steered them away from ideas that, from other perspectives on teaching (such as responsive teaching, see Robertson et al., 2016), might have been quite fruitful (like Franz’s idea about losing energy); instead I focused them on calculations of velocity in the units that I wanted them to use. It all happened on my terms.
Contrast this type of exchange with the following transcript, which illustrates the way that, in the more supportive conditions, I had to listen carefully to students’ thinking and then give informational feedback that pushed them forward in their understanding. This meant attending to the substance of these students’ ideas (Coffey et al., 2011), making inferences about what they knew (Pellegrino, Chudowsky, & Glaser, 2001), and then carefully crafting a response that included both a diagnosis of the student’s idea as well as information designed to help the student improve his or her performance (Wiliam, 2007).
In the transcript excerpts that follow, I was working one-on-one with a student making a graph of distance versus time for balls traveling on flat and angled ramps. The student was clearly struggling to figure out what increments to use as he numbered the axes of a graph that would represent the distance traveled by a ball on a ramp over time, but in the cognitive autonomy-supportive condition, I was trying to walk him through figuring it out on his own rather than giving him the answer.
Jamil: I’m having problem.
Erin: Problem, okay.
Jamil: So should I do with the 20 or?
Erin: I think it’s good to use all this space, so yeah that might be a good idea. I mean I think because right now this is 60 and this space here is 30.
Jamil: So it likes 20, 60 and stuff.
Erin: Well if you did, you could do 20 and then if you were numbering it by 20s what would be the next number, 20 and then?
Jamil: Forty.
Erin: Forty. Okay. That’s good thinking. Okay, because I think you’re skipping again so let me show you a little trick. Okay? I think we’re okay. I can give her another pencil too, but thank you. Okay so we were saying that the largest number is three and so maybe put three like here, and then you can count how many spaces you have between, so it’s like one, two, three, four, five, six, seven, eight. Now you divide eight—divide three by eight and then I’ll tell you—
Jamil: Three by eight.
Erin: —how to make your spaces. Because the idea is just to have each of these have an equal value, so that’s kind of like a little trick to do it.
Jamil: But that’s like zero point three seven five.
Erin: Wow, that’s tricky. Okay. So maybe you might want to try rounding it up to like point five or something.
Jamil: But what is when I want to write my three here. Then I have like, the space or the line?
Erin: So you—it’s always on the line.
Jamil: So it’s one, two, three, four, five, six, seven, eight.
Erin: But if you do point fives then I guess it would be more like, it might not be exactly that. Because point three seven is kind of a hard number to work with.
Jamil: Or maybe I just like—I just like—like this and then in the middle of each I like draw line there.
Erin: You could try that, yeah.
Jamil: See and so how much should I do?
Erin: Until the highest measurement, so I’ll be right back [walks to check on another student].
When I read back through this transcript, I can see all the effort I was expending as I tried to help Jamil figure out, on his own, a method for numbering his graph. I was doing some heavy intellectual lifting as I listened intently to his and every other student’s ideas and responded in ways that would help them move their thinking forward themselves rather than telling them the answer. When this kind of cognitively demanding instruction is scaled to classrooms of 30 kids (or more), for a secondary science teacher working with five classes a day, it gets even more daunting. The classes might all be in the same content area, but more likely there are two or three different courses to prepare for (or more for teachers in smaller or rural schools). The scale of the cognitive load and preparation necessary to perform this type of instruction daily is staggering, so it’s no wonder that really asking students what they know, and then listening and attending to student thinking, poses challenges for teachers.
I had spent months preparing to teach these lessons, reading foundational literature on student thinking about uniform and nonuniform motion (e.g., Trowbridge & McDermott, 1981) and carefully designing the tasks for students to complete so that they included opportunities for students to make their thinking visible (Furtak, 2009). Despite all of this preparation, supporting students was not straightforward. When students shared their ideas, I still had to think on the fly about the kinds of feedback to give them.
These experiences bring to light what we know from other studies—teachers find formative assessment, or listening and responding to student ideas during the course of instruction, to be a challenging practice to realize in their classrooms. My experiences in this study, as well as in other studies, set me on a new research agenda—how might w...