Considerations for efficient assessment at the programme and course levels

Considerations at the programme level

While individual teachers may find opportunities to increase assessment efficiency at the course level, major benefits may be reaped by considering assessment practices holistically at the programme level. The considerations below aim to guide Programme Management Teams in reflecting critically on assessment in their programme to find opportunities for increased efficiency.
If you would like support in considering or working out these aspects, you can call on the CELT faculty educationalists within your faculty.

Preliminary considerations

  • Setting minimum standards

    While striving for greater efficiency in assessment, programmes and teachers should consider the valid and reliable assessment of ILOs as the essential condition that must always be met. Nonetheless, redundancy in ILOs at the curriculum and course level may be reduced to increase efficiency when the repeated assessment of an ILO is unintentional rather than a strategic choice (see the ILO-related considerations below). Additionally, assessment must always meet the quality criteria of validity, reliability, and transparency.

  • Setting efficiency goals and evaluating progress

    To make informed decisions about which changes to implement in assessment, we advise programmes and teachers to quantify the current assessment workload and compare it against the workload after the proposed changes. Consider the following:

    • Provide teachers with benchmarks: How much time should teachers spend on assessment tasks in your programme (including preparation, exam supervision, administrative work, time spent reviewing/grading, etc.)? How much time should students spend on assessment preparation (e.g., preparing the assessment product or revising for an exam) relative to the total ECTS credits of the course? If possible, provide your teachers with benchmarks or rough rules of thumb against which they can evaluate their current practices.
    • Quantify teacher workload for assessment-related tasks before and after proposed changes: Programmes should encourage teachers to estimate as precisely as possible how much time they currently spend on assessment and how much time a proposed change in assessment would save (see the worked example after this list).
    • Time spent by students on assessment: Is the time that students spend on assessment-related tasks (specifically, preparing assessment products or preparing for exams) justifiable and commensurate with the total workload of the course? A study workload calculator may help you gauge the magnitude of an assessment task.
    • Consider third parties’ workload: Teachers are invited to reflect on the workload that their assessment choices create for other parties (e.g., CES). For instance, if two possible options imply an equal workload for the teacher, but one imposes a substantially higher workload on a third party, this should be factored into the choice.
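
    A worked example with purely illustrative numbers (check the norms used in your own programme): in the Dutch interpretation of the ECTS, 1 EC usually corresponds to 28 hours of student work, so a 5 EC course represents roughly 140 hours. If writing the assessment report and revising for the exam are estimated at 50 hours, assessment preparation takes about 35% of the total course workload, which you can then judge against the weight of the assessed ILOs. On the teacher side, grading 120 individual reports at roughly 25 minutes each costs about 50 hours; switching to group reports for groups of four (30 submissions) would reduce this to roughly 12–15 hours, an estimated saving of some 35 hours per course run, to be weighed against the pedagogical consequences of the change.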

Design-related considerations: PILOs and assessment plan

  • Avoid curriculum overload

    An overloaded curriculum seriously hinders the well-being and functioning of students and teachers. Reducing the load frees up space to enhance knowledge retention or to provide more practice and effective intermediate feedback. This can also help reduce resits and repair assignments and make the teaching process more effective.

    Look at the Programme Intended Learning Outcomes (PILOs): Which are the most important PILOs and components? What is a need-to-have and what is a nice-to-have? Can the number of PILOs be reduced? What would the effect be?

    Look at the content and learning objectives of all the courses/modules in detail: Are there components that do not seem to be related to the PILOs? Are there any duplications? Repetition can be desirable to strengthen the knowledge and skills base, but over time unknown, unintentional, and unnecessary overlap may also have crept in.

  • Avoid the unwanted repeated assessment of PILOs in the curriculum

    One way to reduce the assessment workload is to reduce redundancy in the assessment of the same ILO across multiple courses in the same programme. Note that the corresponding element may still be introduced or practised in multiple courses without being assessed in each of them.

    It is advisable to have a critical look at the programme assessment plan. When and how often is each PILO assessed? Are all PILOs sufficiently assessed, and are some assessed more often than necessary?

    We recommend identifying and prioritising the 'high-stakes' objectives: the critical goals that have significant consequences for graduation and for the future career of the students, and that reflect the significance and character of the programme. Make sure these high-stakes objectives are emphasised in the assessment programme and are validly and reliably assessed.

    The example of skills

    This applies especially to skills. If students have to give presentations several times during their programme, you may not need to explicitly teach and summatively assess (grade) presentation skills as such each time, or you can place different emphases in different courses. So-called ‘learning lines’ can be established: in the development process, the skill in question is practised several times and (peer) feedback is provided, but it does not have to be graded every time. As a tip: a general rubric and general criteria can be created for these skills (an illustrative set of criteria is given below). This saves teachers time, as they do not have to create the rubric themselves and can build on what is done in other courses; it also provides students with clear expectations and helps them see their development over time.
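
    As a purely illustrative example (the actual criteria should of course be derived from your own PILOs), a general presentation rubric could cover criteria such as content and argumentation, structure, delivery (voice, pace, eye contact), visual support, interaction with the audience, and time management, each described at three or four performance levels. The same rubric can then be reused across courses, with individual courses adding or emphasising course-specific criteria.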

  • Consider cumulative assessment

    Assessments that build on previous learning can consolidate learning across successive courses instead of requiring separate assessments in each course. Deliverables and feedback can be gathered in portfolios. Digital portfolio systems can be used for presenting deliverables and for collecting and sharing feedback, which streamlines the grading process (especially when different assessors are involved) and reduces administrative workload.

  • Other efficiency opportunities in programme design

    Some other methods that programmes may employ to increase efficiency at a structural level include (a) carefully scrutinising the structure of modules in BSc programmes (in line with the current evaluation of the efficiency of the TOM model at UT); (b) sharing courses with other programmes (to accommodate both efficiency and personalisation, programmes may consider having general lectures combined with programme-specific tutorials, or programme-specific materials in general tutorials); and (c) cumulative assessment via portfolios (see the previous point).

Other considerations

  • Student dropouts, resits and efficiency

    One of the UT task groups working on solutions for the financial situation concluded that high dropout rates in some programmes are a major cause of budgetary inefficiencies. Similarly, every resit or repair assignment requires extra time. The original TOM model assumed "participation = pass". Another principle was that the first two modules should have a clear selective function: they should help students find out whether the study suits them and whether they can cope with the expected level of thinking and effort. These principles still have their value. If students do drop out, it is better if this happens early.

    Reducing the number of students taking part in resits
    A resit, when the student has also taken the first exam, means double work. Reducing the number of students taking a resit can save valuable time and help reduce teacher workload. Below are a few strategies to help reduce resits:

    In general:

    • If a lot of students have to resit, investigate what may have happened
      This is the most important recommendation. Possible sources of indications: student evaluations, panel discussions, checking what types of errors were made, an assessment screening, student attendance, and competing teaching activities in the same period.

    For assignments:

    • Provide clear expectations: explain the requirements and criteria
      If students understand what is expected, they can work more successfully.
    • Intermediate feedback
      This can also be done in class for everyone by way of self-assessment and peer assessment. It helps students realise whether they are on the right track or need to adjust their activities or products while there is still time.
    • Offering and discussing examples
      Showing and discussing products created in previous years gives students a good indication of what is expected.
    • Scaffolding assignments
      Break the assignment down into smaller parts. Students can submit portions of their work over time, allowing teachers to provide targeted feedback and reducing the chance of failure.
    • Build in checkpoints for more complex, open-ended projects
      Tutors can check whether students are making progress, and it gives students opportunities to adjust their approach based on feedback.
    • Provide students with a checklist to assess their work before handing it in
      This is especially useful for preventing students from overlooking requirements (word count, prescribed structure, etc.) and for proofreading (writing aspects).

    For written tests:

    • Check the quality of the exam; avoid unclear, confusing question types
      It is unfortunate for everyone if the quality of the exam contributes to poor results. Ask a colleague to check the questions for clear wording. Avoid ‘double statement’ questions or other types of questions that may confuse students unnecessarily. Avoid ‘none/all of the above’ options in MC questions. Consult the checklist for constructing good-quality questions.
    • Choose the right difficulty for the exam
      This is always a challenge. It is important to keep the questions aligned with the learning objectives and teaching activities. A specification table may help to ensure this.
      If the exam turned out to be more difficult than expected (even students who usually do well did not get high grades), check whether there are problems with specific questions (e.g., two answers turned out to be correct for an MC question, or the reviewer was a bit too strict). One may consider whether it would be justified to use the Cohen-Schotanus method for adjusting the grading (see "Compromise method" under Grading written tests, and the worked example after this list). Important: check with the Examination Board whether this is allowed, and make sure it is a well-justified action.
    • (More) Formative testing
      Giving students frequent opportunities in lectures, tutorials, or via Canvas to test their knowledge and progress, with no or limited consequences, helps them assess whether they understand the subject matter and are on track. ‘Limited consequences’ may, for instance, mean that students are expected to take (and hand in) a mock test, but that only ‘having done it’ counts. Students should be properly informed about the purpose of formative testing: it is meant to help them in the learning process.
    • Mock tests provided at an early stage
      Usually, a practice or mock test is offered and discussed just before the real test. This is logical, because only then have students covered all the subject matter, but it also gives them little time to eliminate deficiencies or adjust their study behaviour. If formative tests can be carried out earlier in the course, based on what has been covered so far, students can take action earlier if it turns out they are ‘falling behind’, or ask questions if they do not understand something properly.
    • Formative testing to identify problem areas
      Formative testing can also be used by teachers to identify areas that many students struggle with, or to discover common misconceptions. Extra support, resources, etc. can then be offered to students who experience problems.
    • Offering automated practice tools
      A quiz in Canvas can allow students to practise questions and receive instant feedback. This can work as a self-assessment tool. Some students may find it more pleasant, or less confrontational, to find out for themselves, without supervision or others around them, whether they have mastered the material, and to work on their mastery of the subject matter based on the feedback and advice.
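
    Worked example of the compromise method (referred to under ‘Choose the right difficulty for the exam’ above). The exact formulation to be used should be taken from the Grading written tests section and agreed with the Examination Board; in one common formulation, the pass mark is set relative to the performance of the best students rather than the maximum score: cut-off = guess score + 60% × (mean score of the top 5% of students − guess score). With purely illustrative numbers: for a 40-question MC exam with four answer options, the guess score is 10; if the mean score of the top 5% of students is 36, the cut-off becomes 10 + 0.6 × (36 − 10) ≈ 26 questions correct, instead of a fixed percentage of the maximum score.
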
  • Efficiency in graduation

    Consider efficiency measures for the graduation phase of the programme. How much time does the graduation phase take for students? If it takes more than the expected time: what are the reasons for exceeding it? What consequences does this have for the student? For the supervisor? For the programme? Are there standards for time spent on supervision? Are these realistic? Are they followed? What challenges do supervisors encounter? Can measures be taken to make the process more efficient, such as supervision in thesis groups; milestones with deadlines throughout the graduation phase (see the illustrative schedule below); or a repository of resources, including templates, examples, tips on common pitfalls, and FAQs, that students can consult so that supervisors do not have to re-teach topics or deal with too many repetitive questions?
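
    As a purely illustrative example of such milestones (the actual timing depends on the programme and the scope of the graduation project): weeks 1–2, research proposal approved; week 4, literature review and method outline discussed; mid-way, progress meeting on preliminary results; two to three weeks before the deadline, full draft submitted for feedback; final week, go/no-go decision and submission. Fixed moments like these make progress visible early and reduce last-minute, time-intensive repair work for supervisors.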


Thoughts for the future: Considering fundamental new approaches for assessment 

The whole way of assessing students during a degree programme can be fundamentally reconsidered. This is more of a long-term action. Principles from "programmatic assessment" (Dutch: programmatisch toetsen) might be considered. Many programmes at universities of applied sciences have chosen this approach. In the Handreiking Inrichting Programmatisch Toetsen of the Hogeschool Utrecht (2020), this is explained and described (translated from Dutch): "This involves designing an assessment programme in which students' development is monitored over longer periods, e.g. an educational period of six months, or a year. Decisions on awarding credits are based on a lot of information (data points) collected over this longer period. Within the education itself, the emphasis is on rich and frequent feedback (Van der Vleuten, Schut, and Heeneman, 2018)." Developing a coherent assessment programme and paying attention to the formative function of assessments and their feedback are relevant themes for this approach.

This overview has been prepared by Francesca Frittella and Helma Vlas (CELT; Oct. 2024). Based on insights from the faculties and CELT, this overview will be supplemented over time. We would therefore very much like to hear your ideas and examples so that we can add them!