
Key lessons learned using performance tests to measure digital skills

ySKILLS: Alexander van Deursen and Ester van Laar with their report on measuring digital skills

Most studies use self-assessments to measure digital skills, asking respondents to evaluate how well they perform. However, how well someone thinks they perform across a range of digital skills is not always an accurate estimate of actual ability. As such, self-assessments can have severe shortcomings in terms of validity. A more valid way to measure digital skills is the performance test, which gives participants the opportunity to actually demonstrate their skills. This method relies on the completion of tasks to measure skill levels.

Studies that apply performance tests are relatively scarce, as the method is costly and time-consuming. The performance tests that do exist focus mainly on dimensions such as technical and information skills; an extended perspective that assesses digital skills as a broader concept (covering information navigation and processing, communication and interaction, and content creation and production) is lacking. Beyond being costly and time-consuming, performance tests are context-specific, making them difficult to apply across situations and countries. Consequently, cross-country comparisons are largely missing.

To address these gaps in the literature, researchers working on the ySKILLS project have developed a task-based measurement instrument covering a broad range of digital skills, aimed at young people aged 12 to 17. After accounting for the lessons learned during the development and initial test phases, the final performance test was applied in six European countries (Estonia, Finland, Germany, Italy, Poland, and Portugal). Based on our cross-national experiences with performance testing, we have extracted key lessons on test development and execution procedures to improve the quality of such assessments.

Here are three important lessons we learned from conducting performance tests:

  • Involve children early in the design of performance tests and take their level of understanding and experience as the starting point. Additionally, the topics need to be suitable for children across a wide age range. Choosing universal themes (for example, climate change or COVID-19) makes search task topics applicable cross-nationally and across age groups.
  • Perform cognitive interviews in all countries involved when implementing performance tests, as each country brings unique experiences and perspectives to consider. Large disparities exist in the degree to which information is available online in the different languages of the countries under study. Furthermore, as performance testing is cognitively demanding, a mitigation strategy is to present tasks in an interactive way (for example, letting children watch words appear as they are being typed).
  • When analysing performance test tasks with a team of researchers, use a standardised coding scheme and provide training to make sure everyone is on the same page. Additionally, restrict the number of coders per country to one or two, and ensure all coders are instructed in the same manner. All in all, developing performance tests is an iterative process, with regular feedback and consultation from the research team and the children.

Interested in more lessons learned? Our report, including the final performance test, is available as part of the ySKILLS project. It discusses all considerations and lessons learned in developing performance tests and presents the final task-based measurement instrument used in six European countries.