Grading Pitfalls 

Reviewing open-ended questions or assignments takes a lot of time and is strenuous work. You want to do it as well as possible and, above all, reliably. Some factors can affect reliability imperceptibly and unintentionally, and it is good to be alert to them. Obvious ones are getting tired and losing concentration, or becoming irritated or frustrated; then a break is the best advice. But other, less obvious factors may also influence your scoring. Below is a list of common so-called "grading pitfalls". Simply being aware of them already helps.

Some suggestions to avoid the pitfalls:

  • Make sure you don't see students' names. This alone avoids many of the pitfalls below.
  • Take a break when you get tired or lose your concentration. You can set an alarm clock, for example, to make sure you take your break in time. Reward yourself for all your efforts: an extra tasty lunch, for example. You deserve it.
  • If possible, don't review students' work for a whole day at a time. Divide it over several days, so you don't get too bored and exhausted.
  • For written tests with open questions: assess the work per question rather than per test. This provides more focus and therefore often goes a bit faster. It is easier when the test is taken digitally, via Remindo for example, because you can then select by question.
  • If more assessors are involved: work together at the same time in the same room. Then you can discuss an answer when you are unsure, and you can take joint breaks for variety. It also helps to allocate dedicated time for reviewing, and it is probably more fun doing it together.
  • Use a well-developed answer model (with scoring and points for partially correct answers) or a rubric. This ensures that you remain consistent in your assessment and stay focused on the key assessment points. NB: if you change the answer model during the review process, check previously reviewed work to see whether points should be adjusted.
  • If you use a rubric, you can use a (digital) rubric form on which you indicate the assessment directly, in points or otherwise, together with some feedback. Students can later download their form via Canvas. [Example] NB: Canvas includes a tool for making rubrics.
  • If you give feedback, you can create "macros" (standard texts) for common errors. That is also a way to stay consistent.
  • If there are multiple assessors, compare the results. If the grades differ significantly, find out whether this is explicable, or whether some assessors have judged more strictly or more leniently than others. Also make notes about common mistakes; this is valuable for evaluation purposes.

Pitfalls

  • First impression: the tendency to rush to judgment.
    "One glance is enough to see whether this presentation has any real potential."
  • Halo effect: a favourable impression based on certain criteria generally results in a favourable assessment on other criteria.
    "Wow, what a great presentation, I'm sure the report is fine too."
  • Horn effect: an unfavourable impression based on certain criteria generally results in an unfavourable assessment on other criteria.
    "During the lectures, those students spent all their time chatting to one another; their work is bound to be rubbish."
  • Logical error: see halo/horn effect. If part A is right/wrong, then B will also be right/wrong.
    "This student can express himself very well, so his analysis will also be fine."
    "She didn't do the calculation right, so her answers to the questions checking insight won't be correct either."
  • Sympathy: giving a favourable assessment because you get on well with the student.
    "She really is keen and did her best; we should reward an attitude like that."
  • Antipathy: giving an unfavourable assessment because you do not get on well with the student.
    "These students never show up during the tutorials; their work will reflect this and can't be okay."
  • Projection / similar to me: ascribing your own (good or bad) characteristics to the student.
    "He chose a topic related to my research and is just as interested in this field. I'm sure it will be great work."
  • Stereotyping: attributing characteristics to students based on groups to which they belong.
    "He has a social science background; he can't be good at engineering topics."
  • Contamination effect: irrelevant student characteristics, such as handwriting, gender or ethnic background, influence the score, either positively or negatively.
    "This report looks very neat: good layout, well structured, lots of illustrations. The content will surely be good too."
  • Contrast effect: comparing a student against other students instead of against established standards. For instance, after seeing ten bad reports, you may score a slightly better one too high simply by comparison, or the other way round.
  • Criteria change or rater drift: over time, you unintentionally apply more criteria, interpret them differently or weight them differently.
  • Generosity, stinginess, central tendency: the tendency to always give above-average grades, below-average grades, or grades around the average.

* Part of this table was originally drawn up by and for the faculty BMS under the name Jobaids for testing. Used with permission.