At least twice a semester, I try (operative word) to run an informal rubric-norming session with all the instructors of our online credit course. The purpose is to determine what "constitutes excellent, good, satisfactory, and poor papers" (Bean, 2011, p. 287).
Writing is often thought of as a subjective task, and evidence has shown that grading can be inconsistent in large courses where student work is graded by multiple teaching assistants. The purpose of norming a rubric is to try to improve both its validity and reliability.
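The reliability at stake here is inter-rater reliability: how closely different instructors' scores agree on the same papers. As a minimal sketch of how you might put a number on that before and after a norming session, here is Cohen's kappa (a standard chance-corrected agreement statistic) computed over hypothetical letter grades from two graders; the grader names and scores are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's grade distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric grades from two instructors on the same ten papers.
grader_1 = ["A", "B", "B", "C", "A", "D", "B", "C", "C", "A"]
grader_2 = ["A", "B", "C", "C", "A", "C", "B", "C", "B", "A"]
print(round(cohens_kappa(grader_1, grader_2), 2))  # prints 0.57
```

A kappa near 1 means near-perfect agreement; values rising from one session to the next would suggest the norming is working. Raw percent agreement is simpler but flatters raters who cluster their grades.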
While I don’t believe our individual instructors need to grade exactly the same way, the discussion helps us focus our priorities when grading these assignments. It can move instructors beyond shallow preferences, clarify our expectations, and sometimes even prompt us to revise established outcomes or the rubric itself.
Here are the quick-and-dirty steps I’ve Frankensteined from various sources (including rubric superstar Megan Oakleaf) over the past year or so (though goodness knows I usually run some stripped-down variant on this):
- Model the process with a think-aloud about how the rubric should be applied, using student samples.
- Distribute copies of two papers and informally discuss the qualities that make one better than the other. They should be fairly similar in order to force instructors to discuss and articulate important nuances.
- The facilitator lists these qualities on the board to prime instructors, matches them to rubric dimensions, or has participants vote on the top traits. In the most recent session, I also wrote out the assignment outcomes on the board to help us stay focused.
- Distribute copies of another paper for instructors to grade on their own using the rubric.
- Individually report out ratings.
- When differences inevitably emerge, the facilitator digs into why one person awarded a D on a dimension where another awarded an A. The goal is to reach a consensus, which can happen in a variety of ways: someone might realize they’ve misunderstood the rubric, or the group might realize the problem lies in how the rubric is worded.
- Repeat the process from the third step, with the revised rubric and a new sample, until there’s consistency in how everyone grades.
That being said, I never actually get to that last step. The process is time-consuming enough that we can repeat the exercise only a couple of times per session at most. If we can at least get people to award similar grades (i.e., a low A vs. a high B on the same dimension), I consider the session a success.
Even if it never ends up being as robust a process as it should be, it gets better each time. The subsequent discussion is always very stimulating as we resolve our differences or learn about the unique mindsets that we bring to grading, and it is one of the things that helps make our group of instructors a true community of practice.