Toolbox
Analyze assignment results
Just as for written tests, you can analyze the results of an assignment. This is highly recommended, preferably before the grades are communicated.
Based on the analysis, you can:
- Take measures, if needed, before the grades are given, for instance when it turns out that no one was able to earn any points for a certain criterion.
- Evaluate the quality of your assignment description. Was anything unclear, leading to unexpected or unwanted results?
- Assess the reliability of the assessment.
- Evaluate whether the learning objectives have been achieved.
- Evaluate your teaching and support process.
The analysis at a more detailed (criteria) level can be done in a holistic way and/or in a quantitative (psychometric) way, but you can start by looking at the overall results.
Analyze and reflect on the overall student results
First of all, look at and reflect upon the student results for the assignment as a whole. You can consider:
+ How many students passed/failed? Number/percentage?
+ What is the range and distribution of the grades? Frequencies?
+ What is the average grade?
+ What is the standard deviation/variance? (This tells you how spread out the grades are around the mean.)
To reflect: Are these results as expected? Can you account for them and explain what happened?
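For illustration, here is a minimal sketch of how these overall values could be calculated in Python (an Excel sheet works just as well). The grade list, the 1-10 scale and the pass mark of 5.5 are all hypothetical; replace them with your own data and grading scheme.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical final grades on a 1-10 scale; replace with your own data.
grades = [4.5, 6.0, 7.5, 5.0, 8.0, 6.5, 6.0, 9.0, 5.5, 7.0]
PASS_MARK = 5.5  # assumption: adjust to your own grading scheme

passed = [g for g in grades if g >= PASS_MARK]
failed = [g for g in grades if g < PASS_MARK]

print(f"Passed: {len(passed)} ({100 * len(passed) / len(grades):.0f}%)")
print(f"Failed: {len(failed)} ({100 * len(failed) / len(grades):.0f}%)")
print(f"Range: {min(grades)} to {max(grades)}")
print(f"Average grade: {mean(grades):.2f}")
print(f"Standard deviation: {stdev(grades):.2f}")

# Frequency distribution, rounded to whole grades
print("Distribution:", dict(sorted(Counter(round(g) for g in grades).items())))
```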
Analysis on criteria level
The next step is to look at the quality of the students' performance or deliverables in more detail. This can be done in a qualitative, holistic way and/or in a quantitative way. Based on the conclusions, measures can be taken right away (for instance, if the scoring or grading turns out to be unfair to the students) or, from an evaluative perspective, for the next time.
Analysis in a qualitative, holistic way
During your grading process, you get a sense of common mistakes that are made or parts of the assignment that were not well understood. Make notes on the things that strike you. What appears to be picked up well by the students? What do many students appear to have not yet mastered sufficiently? What types of mistakes are being made? What feedback often appears to be needed? If multiple assessors are involved, you can ask all of them to do the same and afterwards compare and discuss the notes.
Analysis in a quantitative way
If you used criteria and scores, whether based on a checklist with scores or an elaborate rubric, you can calculate some psychometric values at the criteria level, just as can be done at question level for written tests. For each criterion, you can interpret the values, draw conclusions and, if necessary, take action.
Be aware: the data provide signals, but you still have to check what happened. For instance, if hardly any student was able to meet a certain criterion, the assignment description may have been unclear. It may also be that this part was not taught well, or that students were assumed to be familiar with applying something when this was not the case. There can be many reasons for low values, and it might take some extra effort to find out what happened. But it will be worthwhile: you can use the insights to improve your teaching and assessment process for the next time.
What kind of values can you look for? And how can you calculate them?
In principle, if you have scores per criterion, you can calculate all the psychometric values that are possible for a written test with open questions; see Analyze test results (for written tests). By putting your data in an Excel file and applying formulas, you can easily obtain the values.
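For instance, here is a minimal sketch of such a calculation in Python rather than Excel. The criterion names, scores and maximum points are hypothetical, and the p-value of a criterion is taken here as the average score divided by the maximum score, analogous to the p-value of an open question.

```python
from statistics import mean

# Hypothetical data: points per criterion for each student, plus the maximum points.
max_points = {"structure": 10, "argumentation": 15, "referencing": 5}
scores = {
    "structure":     [8, 6, 9, 7, 5, 8],
    "argumentation": [6, 4, 10, 7, 3, 5],
    "referencing":   [5, 4, 5, 5, 2, 4],
}

for criterion, points in scores.items():
    avg = mean(points)
    p_value = avg / max_points[criterion]  # average score as a fraction of the maximum
    print(f"{criterion:15s} average = {avg:.1f}  p-value = {p_value:.2f}")
```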
A point of attention is that there may be only a limited number of criteria and perhaps only a limited number of (groups of) students. This makes the values less reliable and less meaningful.
In practice, it is good to look at least at the following for each criterion:
> Averages and P-values
> Highest/lowest scores per criterion
> Distribution of points / frequencies
> Standouts (e.g. out of the 5 points available, only one student scored higher than 3 points).
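As a sketch of the last three points, again in Python and with hypothetical scores for a single criterion worth 5 points:

```python
from collections import Counter

# Hypothetical scores for one criterion with a maximum of 5 points.
MAX_POINTS = 5
points = [3, 2, 4, 1, 3, 2, 0, 3, 2, 1]

print(f"Highest/lowest: {max(points)} / {min(points)}")
print("Frequencies:", dict(sorted(Counter(points).items())))

# Simple standout check: how many students scored more than 3 of the 5 points?
high_scorers = sum(1 for p in points if p > 3)
print(f"{high_scorers} of {len(points)} students scored higher than 3 of the {MAX_POINTS} points")
```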