Loss Aversion and Schools

Traditional economic models hold that we, being the rational beings we are, love getting things just as much as we hate losing them. Not actually the case, as Daniel Kahneman (Thinking, Fast and Slow) and others have pointed out. We are far more sensitive to actual and perceived losses than we are to gains.

What does this mean in practice? Well, it means all kinds of things, and in Kahneman’s words, “The concept of loss aversion is certainly the most significant contribution of psychology to behavioral economics” (300). If you compare the goodness (i.e., utility) of gaining two dollars with the badness (i.e., disutility) of losing two dollars, the loss carries greater weight in our minds. We feel the pain of loss more deeply than we celebrate gains. Some might react to tax cuts with a smile and relief, but they might then react to tax hikes with pitchforks and protests (tax cuts being seen as a gain, tax hikes as a loss). It’s an interesting asymmetry, with many consequences.

This loss aversion works at deep levels of our consciousness. Kahneman cites a study of golf putts by Devin Pope and Maurice Schweitzer at the University of Pennsylvania. They compared the success rates of golfers putting for a birdie with those of golfers putting to avoid a bogey. If you look at a birdie as a “gain” and a bogey as a “loss” (with par seen as breaking even), then you can see loss aversion at work here. After analyzing 2.5 million putts (!), Pope and Schweitzer found that golfers putting for par (i.e., avoiding the “loss” of a bogey) were 3.6% more successful than those putting for a birdie. Though it might seem small, that’s an enormous difference of more than 90,000 successful putts. It’s also one of the clearest, most interesting, most powerful illustrations of loss aversion that I’ve read.
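A quick back-of-envelope check of that figure, assuming (as a simplification) that the 3.6% success-rate gap applies across all 2.5 million analyzed putts:

```python
# Rough check of the numbers cited above. This assumes the 3.6% gap
# applies to the full 2.5 million putts; the study's actual split
# between birdie and par putts would change the exact count.
putts_analyzed = 2_500_000
success_gap = 0.036  # par putts made 3.6% more often than birdie putts

extra_putts_made = putts_analyzed * success_gap
print(round(extra_putts_made))  # 90000
```

That lines up with the “more than 90,000 successful putts” claim.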

How can the idea of loss aversion affect educational practice? One general way lies just in rethinking how we frame and conceive of things. Let’s say, for example, that your school has mandatory study hall, which students can liberate themselves from after one term of good grades. In that way, liberation from the study hall might be seen as a gain of more free time–a motivating incentive, right? But loss aversion would suggest that students might be more motivated to escape study hall in the first place (i.e., avoid losing their free time to study hall). Let’s say that only students with low grades after the first few weeks have to attend study hall, and you publicize that fact quite clearly. Research suggests that in this framework students will work much harder and seek more extra help to avoid the loss of free time that study hall represents.

I’ve also been playing around with how assessment practices could incorporate loss aversion. In such a framework assessment might end up looking something like scoring golf. First, we’d need clear rubrics for every type of assignment we give. Students would have to know exactly what the minimum expectations for an assignment should be, and the bar should be set high. Think of it as “par” for every assignment. If students meet all expectations, their grade stays the same. If they exceed expectations, their grade goes up one or two points. But if they fail to meet expectations, their grade decreases a bit, depending on how many standards they failed to meet.

In order for this to work, however, all students have to start with some grade that they can work to defend against loss. It’s up to the teacher, of course, what that number should be. In my mind I’ve been playing around with the number 90 (on a 0-100 scale), but maybe that just shows that I’m the product of a grade-inflated generation. Maybe it should be 85. Or lower. I don’t know–it’s open for discussion. It needs to be high enough for students to want to preserve it, but not so high that everyone walks away with A’s just for meeting expectations.
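Here’s a minimal sketch of how this “par”-based scheme might look in code. The starting grade of 90 and the two-point bonus come from the text; the one-point-per-missed-standard penalty is my assumption (the text only says the grade “decreases a bit”), and the function name is mine:

```python
STARTING_GRADE = 90  # the tentative "defendable" starting score
POINT_FLOOR, POINT_CEILING = 0, 100

def apply_assignment(grade, standards_missed, exceeded=False):
    """Update a student's grade after one assignment, golf-style.

    Meeting all expectations ("par") leaves the grade unchanged;
    exceeding them adds a small bonus; each missed standard costs
    a point (an assumed penalty size).
    """
    if standards_missed > 0:
        grade -= standards_missed  # loss: below par
    elif exceeded:
        grade += 2                 # gain: a "birdie" bonus
    # meeting expectations exactly changes nothing
    return max(POINT_FLOOR, min(POINT_CEILING, grade))

# One student's term: par, then a strong assignment, then two missed standards.
grade = STARTING_GRADE
grade = apply_assignment(grade, standards_missed=0)                 # stays 90
grade = apply_assignment(grade, standards_missed=0, exceeded=True)  # 92
grade = apply_assignment(grade, standards_missed=2)                 # back to 90
print(grade)  # 90
```

The key loss-aversion feature is that every student starts with something to defend, rather than building a grade up from zero.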

If the theory of loss aversion holds true, then such a grading system should be a powerful motivator for students. Students who need to get an A+ on everything have a way to aim high. Students who often struggle start the term with a high grade and a high incentive to keep working hard. This is not a fully formed idea, I concede, but I think it has some potential. I also recognize that this idea resides within a fairly traditional school framework, with grades, teacher-generated rubrics, etc. As with all my ideas, I wouldn’t be surprised to hear that many other people have already thought of it. If so, I’d love to hear from anyone doing something similar.

Works Cited

Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.


Grading for Skills, Not Scores

The usual way of categorizing things.

I take great pride in my gradebook. It’s a kinda-fancy, multi-colored collection of Excel spreadsheets that I made myself (with training and inspiration from my friend and former colleague Dan Lyons, @dan_lyons). I tinker with it year after year to get it just right. I can enter a test score in the test scores section, a quiz score in the quiz scores section, and some other grade in the miscellaneous section. All the formulas are there to spit out the students’ averages to however many decimal points I want. It’s precise, it’s easy, and it’s organized.

But is it organized the right way? The basic underlying structure is just an imitation of my understanding of what a gradebook should look like: tests count for X percent, quizzes count for Y percent, homework or class participation get factored in to some significant–but not too significant–extent.

During one of my classes at Teachers College last semester, a classmate told me about an idea he’d been playing around with: the skills-based gradebook. Why not keep track of the kinds of questions our students get wrong or the exact types of questions that keep tripping them up? For example, which is more useful for the math teacher and student: recording an 82% on the Unit 2 test, or recording that a student ran into trouble on the word problems section but did well with simplifying expressions? Of course, we might go over the test with the student afterwards and point out the difficulty with word problems, but it would likely be helpful to record the information and look for patterns throughout the year.
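One minimal way to sketch such a skills-based gradebook: record how a student did on each skill an assessment covered, then look for the patterns. The skill names, scores, and structure here are illustrative assumptions, not anything from an actual gradebook:

```python
from collections import defaultdict

# A minimal skills-based gradebook: instead of one number per test,
# record how the student did on each skill the test assessed.
# Skill names and scores below are made up for illustration.
gradebook = defaultdict(list)  # skill -> list of (assessment, fraction correct)

def record(skill, assessment, correct, total):
    gradebook[skill].append((assessment, correct / total))

record("simplifying expressions", "Unit 2 test", 9, 10)
record("word problems", "Unit 2 test", 3, 8)
record("word problems", "Quiz 5", 2, 6)

# Surface the pattern: which skills keep tripping this student up?
weak_skills = sorted(
    (sum(score for _, score in results) / len(results), skill)
    for skill, results in gradebook.items()
)
for avg, skill in weak_skills:
    print(f"{skill}: {avg:.0%} average")
```

Instead of a single 82% on the Unit 2 test, the record now shows that word problems are the recurring trouble spot, which is the information the teacher can actually act on.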

Paul Bambrick-Santoyo’s Driven by Data: A Practical Guide to Improve Instruction provides a helpful framework for reconsidering how we design our curricula and how we keep track of student performance. If one of our students scores a 75% on his test but misses most of his questions in the antonyms section, do we record that in our gradebooks, or do we simply record the score and tell him to study harder next time? By keeping careful track of specific areas where our students are having trouble, we can offer guided and highly specific instruction to help them improve their performance and achieve mastery going forward. Bambrick-Santoyo offers a highly developed approach to “teaching to the test,” whereby teachers home in on the areas of weakness on one test to prepare their students for the next one. Skills-based mastery is the name of the game.

I have no doubt that many teachers already record student performance this way. Skills-based assessment may be one of the many areas in which primary-school teachers have much to teach those of us who have older students. I also have no doubt that it requires more work to keep such careful records. But the benefit for the student–and for the teacher’s effectiveness–could be huge.

If we take this idea one step further, how could such thinking about assessment affect our report cards? What if students received joint comments from the English and history teachers about the students’ ability to write effective thesis statements and clear topic sentences? Or if the foreign language and English (and math!) teachers joined forces to assess the students’ understanding of grammatical and linguistic structure?

Addressing, assessing, and recording skill strengths instead of simply recording grades on quizzes and tests…I’m still playing around with this basic idea and how I can incorporate it into my teaching. But I know that before I return to the classroom next year, I have an awful lot of work to do on my gradebook. Just when I thought I had it right, time to rethink…