As we delve deeper into the work that supports a tinkering mindset and a “maker” environment, we inevitably find ourselves circling back to the question of “assessment.” How do we know what students are learning? How do we evaluate student performance?
Perhaps the first paradigm shift is getting our heads around what learning looks like in a makerspace. If we don’t understand the learning terrain that underlies problem-based design challenges, it’s a long leap to a meaningful dialogue about what assessment might look like. When students are engaged in authentic challenges, learning opportunities spew like lava. As we “kidwatch” our students at work, here are just a few of the learning patterns we explore that push us to understand our learners and their levels of depth and complexity:
What is the level of learner independence? Who is working solo? Who is collaborating? What do student conversations around the activity sound like? What do student work configurations look like?
Who is continually engaging with a resource such as a notebook sketch, a bit.ly video resource or a critical peer? Who is engaging with the challenge, building background knowledge and probing to understand?
Who is innovating, using materials in a unique way to prototype a possible solution to the problem? Who watches the person next to them and replicates that model? Who reads the room for ideas and then makes a hybrid?
Who is “stuck” and when they are, what is their next move? Who can successfully get “unstuck?” Who continually tests and iterates? Who wants to be “done” and call it a day?
Who is working with a sense of urgency? Who appears to have a plan? Who is seemingly shooting in the dark? Is one method more successful than another? Who appears to be working within a fixed mindset? Who appears to be using a growth mindset?
The list goes on, and as teachers compiling notes on students’ learning, the anecdotal jots cover a vast landscape, one that doesn’t necessarily lend itself to a cohesive educational design. We might find that John can’t persist when the challenge level is difficult and he is working independently, that Risa is skillful at turning her 2D sketches into 3D prototypes, and that Sid is fantastic at offering his peers design feedback without leading them down the path toward a “set” solution. How, in the “time famine” of teaching, are we to draw patterns from our observations that help all students grow as learners? How do we meaningfully communicate this work to parents and administrators?
One of the most common ways that educators have addressed this problem is by crafting rubrics. In their book, Invent to Learn: Making, Tinkering, and Engineering in the Classroom, Sylvia Libow Martinez and Gary Stager offer the following thinking on rubrics:
The rubric is intended to provide firm guidelines about what the student is supposed to do, and the credit (grade) they will earn. It sets expectations so there is no mistake or misunderstanding about how the teacher expects the project to turn out.
However, there are reasons rubrics may be counterproductive:
A rubric imposes the teacher’s vision of what the student work should look like at the end.
A rubric becomes the checklist for the project. It is difficult to argue that students will be creative when the rubric is very clear about how many words, how many slides, or how many photos need to be included in the student’s work.
Rubrics reinforce student dependency on how a teacher defines their work. (p. 82)
Alfie Kohn has also written extensively on how rubrics inform learning versus inspire learners. He notes, “Children learn how to make good decisions by making decisions, not by following directions.” One of Kohn’s big tenets is that the current educational focus on assessment and achievement has sold learners short: students end up gaming the system to get the grade rather than engaging intrinsically in learning and problem-solving. Kohn comments on the importance of the social-emotional learning experience: “A preoccupation with achievement is not only different from, but often detrimental to, a focus on learning. Thoughts and emotions while performing an action are more important in determining subsequent engagement than the actual outcome of that action.” Our experience with rubrics over the years coincides with Kohn’s findings and the reflections of Martinez and Stager. Basically, if you guide learning with a rubric, you get what you ask for, but you never really see your students for who they are, nor are you able to glean their full potential.
If not the rubric, then what? Those of us who believe in creating tinkering opportunities for students somehow need to justify the work, make it credible to the administrators who fund it and accessible to parents who aren’t always convinced of its educational value. The work of Jay McTighe and Grant Wiggins may provide a tangible framework from which to begin. In their book, Understanding by Design, McTighe and Wiggins lay out a process for designing curriculum. The UbD model is based on seven key beliefs; two of these are particularly compelling when engaged in conversations about project-based learning:
Understanding is revealed when students autonomously make sense of and transfer their learning through authentic performance. Six facets of understanding—the capacity to explain, interpret, apply, shift perspective, empathize, and self-assess—can serve as indicators of understanding.
Effective curriculum is planned backward from long-term, desired results through a three-stage design process (Desired Results, Evidence, and Learning Plan). This process helps avoid the common problems of treating the textbook as the curriculum rather than a resource, and activity-oriented teaching in which no clear priorities and purposes are apparent.
The trick to synthesizing these two beliefs is that “effective curriculum” must be planned by teachers using the “six facets of understanding.” If we want our students to have the capacity to “explain, interpret, apply, shift perspective, empathize and self-assess,” then we must have curriculum that creates opportunities for all of these components to come into play. Design engineering challenges provide rich fodder for all of these learning options. If we create challenges that are buoyant enough to engage all learners, and open our lenses for assessment wide enough to embrace ideas such as shifting perspectives, empathy and self-assessment, we then have a rigorous curriculum model to build upon. Now the question becomes: how do we assess our students’ abilities to “explain, interpret, apply, shift perspective, empathize and self-assess?”
We have gained the most insight from the students themselves as they thoughtfully reflect on their design challenge experiences. Through experimentation over time, we’ve discovered that we can support student reflection by creating scaffolds that push their thinking and also help them to map their social-emotional experience throughout the challenges. We’ve also discovered, by analyzing student reflections over time and through conversations with students, that using a similar reflection tool time and again with some predictable elements (and one that is open-ended enough to leave room for serendipity and surprise) helps students develop a deeper understanding of themselves as learners. When they look at their reflections over a period of time, they can begin to see their personal learning patterns, build on their strengths, or set goals to grow out of patterns that don’t serve them as learners. This is still work we’re exploring deeply, often using our students as the guides. Alfie Kohn reminds us:
Assessment literally means to sit beside, and that’s just what our most thoughtful educators urge us to do. Yetta Goodman coined the compound noun “kidwatching” to describe reading with each child to gauge his or her proficiency. Marilyn Burns insists that one-on-one conversations tell us far more about students’ mathematical understanding than a test ever could — since all wrong answers aren’t alike. Of course this assumes that we’re really interested in kids’ understanding, not merely their level of phonemic awareness or ability to apply an algorithm.
In addition to self-reflection, one thing we have our students do consistently is evaluate challenges. We continually ask them to give us feedback on their experience: “What did you discover? Would you recommend we do this again with students? If so, what would you change?” Their thoughtful comments and analysis continually help us to frame our design challenges and shape our teaching moves.