Validating a new science process skills test (EST, PSY)

Mentors: Ard Lazonder, Noortje Janssen

Topic (focus on research)

Elementary science education generally starts around the age of five, often with playful introductions that seek to arouse young children’s innate curiosity about common scientific phenomena such as gravity and buoyancy. In subsequent years, children are gradually familiarized with the practice of good science, which usually involves designing experiments that allow for valid causal inferences. Children’s performance during hands-on inquiry activities is often assessed by their teacher in a somewhat subjective way (if at all…), mainly because a formal assessment appropriate for elementary school children does not yet exist.

One reason for this apparent lack of a valid scientific reasoning test for children is the complexity of the skills involved; another is the inherent difficulty of using common test formats consisting of open and closed questions with young learners. To address these difficulties, a new science process skills test has been developed. The test revolves around hands-on science activities that are performed either by the child or by the person who administers the test. Using a scripted dialogue, the test administrator guides the child through a series of activities, asking him or her to set up and conduct simple experiments, explain or predict outcomes, justify the choices made, evaluate the reliability of data, and so on. A first version of the test has recently been completed; the next step is its validation. And that’s what this thesis project is all about.

Method

Following a review of the literature and of existing scientific reasoning tests for older learners (e.g., the TIPS II), you design and conduct a validation study of the new science process skills test. The choice of a particular type of validation is yours. You could, for instance, determine discriminant validity by comparing performance across different age groups, or establish concurrent validity by examining children’s performance on the three parallel versions of the test (each of which covers a different topic); both routes are sketched below. Whatever you choose, the target group should be children aged 8 to 10 in Dutch upper elementary classrooms. (As the nature of the test requires one-on-one interaction with a child, you should be sufficiently fluent in Dutch.)
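
To make these two example routes concrete, here is a minimal analysis sketch in Python. Everything in it is an assumption for illustration only: the file name, the column layout (one total score per parallel topic version plus the child’s age), and the choice of a one-way ANOVA and Pearson correlations are not prescribed by the project.

```python
# Minimal validation-analysis sketch; file name and column names are hypothetical.
# Assumed data: one row per child with columns "age", "score_topic_a",
# "score_topic_b", "score_topic_c" (total scores on the three parallel versions).
import pandas as pd
from scipy import stats

df = pd.read_csv("process_skills_scores.csv")  # hypothetical file name

# Discriminant route: do total scores differ between age groups?
groups = [g["score_topic_a"] for _, g in df.groupby("age")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"One-way ANOVA across age groups: F = {f_stat:.2f}, p = {p_value:.3f}")

# Concurrent route: do scores on the three parallel topic versions correlate?
pairs = [("score_topic_a", "score_topic_b"),
         ("score_topic_a", "score_topic_c"),
         ("score_topic_b", "score_topic_c")]
for a, b in pairs:
    r, p = stats.pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")
```

Which statistics are actually appropriate depends on the validation design you choose and on the measurement level of the test scores; the sketch merely illustrates how the two example analyses could be set up.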

Literature

  • Burns, J. C., Okey, J. R., & Wise, K. C. (1985). Development of an integrated process skill test: TIPS II. Journal of Research in Science Teaching, 22, 167–177.
  • Fives, H., Huebner, W., Birnbaum, A. S., & Nicolich, M. (2014). Developing a measure of scientific literacy for middle school students. Science Education, 98, 549–580.
  • Hogan, K., & Fisherkeller, J. (2000). Dialogue as data: Assessing students' scientific reasoning with interactive protocols. In J. J. Mintzes, J. H. Wandersee, & J. D. Novak (Eds.), Assessing science understanding: A human constructivist view (pp. 95–127). San Diego: Academic Press.
  • Lazonder, A. W. (2014). Inquiry learning. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 453–464). New York: Springer Science+Business Media.

Keywords

Scientific reasoning, inquiry learning, assessment, children.