Closed-ended questions

(By: Hilde ter Horst (Zoëzi) and Riet Martens)

There are various types of closed-ended questions, the most common types being multiple-choice questions and true/false questions. Examples can be found in publications such as the CITO Manual.

We will now examine these in more detail:

- Advantages of closed-ended questions
- Disadvantages of closed-ended questions
- Composing tests as a whole
- Composing multiple-choice questions
- Checking test questions
- Scoring key and mandatory retention of written work

Advantages of closed-ended questions

- marking takes very little time;
- assessment is objective: there are no differences between examiners; moreover, a single examiner will assess all students in the same way;
- it does not matter whether students are good or bad at formulating their answers;
- a great range of knowledge can be assessed in a short period of time;
- they lend themselves to retrospective statistical analysis (degree of difficulty, reliability).

Disadvantages of closed-ended questions

- not all objectives (or learning objectives) can be assessed by means of closed-ended questions;
- such questions place considerable demands on students' reading skills (which can be a problem for some students);
- the smaller the number of alternative answers, the greater the probability that students will guess the correct answer;
- composing good questions is a difficult and time-consuming task.

Composing tests as a whole

A test is more than just a set of questions. The test as a whole must exhibit content validity. To this end, it is vital that there is a balanced emphasis on the relevant learning objectives throughout the test. For example, if “obtaining an understanding of ....” is an important objective, then a test consisting only of questions that require knowledge recall would certainly not be appropriate.

The best way to derive a representative test is to create a specification matrix (also known as a test blueprint). This is used to distribute the test questions across the main topics and across the requisite level (knowledge, understanding, skills). This specification table serves both as a guideline and as a source of inspiration when developing the questions.
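
By way of illustration, a specification matrix for a hypothetical forty-question test on three topics might look as follows (the topics and numbers are invented for this example, not taken from the source):

    Topic     Knowledge   Understanding   Skills   Total
    Topic A       4             6            2       12
    Topic B       6             6            4       16
    Topic C       4             4            4       12
    Total        14            16           10       40

Each cell states how many questions to write for that topic at that level, so gaps and over-represented topics become visible before any questions exist.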

There should be a sufficiently large number of questions and assignments to rule out lucky guesses (reliability). As a guide, a closed-ended test consisting of four-choice questions should contain at least forty questions to achieve an acceptable level of reliability (> 0.80). If the questions contain just three alternatives, at least sixty questions are required, while a test using two-choice questions must contain at least eighty questions. This is a general guideline; if a test relates to about 1500 pages of reading material, then forty four-choice questions will not usually be sufficient to give a good reflection of the total quantity of subject matter involved. In practice, the number of questions is usually determined by the time available for the test.

The larger the number of answers to each question, the smaller the chance of a lucky guess (and the more accurate the impression of a student's knowledge). However, it is not possible to think up an infinite number of plausible alternative answers. Another disadvantage of a large number of alternatives is that the reading time increases proportionally (which means that your test will contain fewer questions). Multiple-choice questions offering three alternative answers are recommended.
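
To make the guessing argument concrete, the following sketch (ours, not from the source; the 70% cut-off and function name are illustrative assumptions) uses the binomial distribution to compute the probability that a student who guesses every answer still reaches a given raw score:

    from math import ceil, comb

    def p_score_by_guessing(n_questions, n_alternatives, cut_off=0.70):
        # Probability of reaching at least cut_off (fraction correct) by
        # pure guessing; each guess is correct with probability 1/k.
        p = 1 / n_alternatives
        minimum_correct = ceil(n_questions * cut_off)
        return sum(comb(n_questions, i) * p ** i * (1 - p) ** (n_questions - i)
                   for i in range(minimum_correct, n_questions + 1))

    # The guideline pairs from the text, plus a deliberately short test:
    for n, k in [(80, 2), (60, 3), (40, 4), (10, 2)]:
        print(f"{n} questions with {k} alternatives: "
              f"P(70% by guessing) = {p_score_by_guessing(n, k):.6f}")

At the recommended test lengths a lucky 70% score is vanishingly unlikely, whereas a ten-question two-choice test can be "passed" by pure guessing roughly 17% of the time, which is why fewer alternatives per question must be compensated by more questions.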

Composing multiple-choice questions

When composing a multiple-choice question, it is best to use the following procedure:

1. First formulate the stem. The stem consists of a clear question or formulation of the problem, and sets out the context of the question.
2. Next, formulate the scoring key. This indicates the correct answer in the list of options.
3. Finally, formulate the distractors. These are the incorrect answers in the list of options.

Consider the following aspects when formulating a question:

- Ideally, the stem should be formulated in positive terms; if this is not possible, highlight the NEGATIVE parts of the question.
- Only formulate distractors that are sufficiently plausible to be selected by students who have not mastered the subject matter.
- Use the same style throughout when formulating the answers in the list of options, to ensure that their phrasing does not give a clue to the correct answer.
- Ensure that the options do not overlap.
- Ensure that the distractors are not too different from one another.

Sample questions, together with examples of dos and don'ts and of common mistakes, are listed in the CITO Manual (in Dutch).

Checking test questions

However much care is devoted to their compilation, experience shows that there is often room for improvement in the form and/or content of questions. If possible, ask a colleague to take the test that you have prepared for your students. A lack of consensus about the correct option indicates that the question needs to be discussed. To this end, you can use the checklist for compiling closed-ended questions.

If you have not been able to get a colleague to check the test, use this checklist yourself.

Scoring key and mandatory retention of written work

When all of the questions have been checked you can finalise the scoring key. This contains the correct answers and details of the number of points per question.
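
As a minimal sketch of what a scoring key amounts to in practice (the structure and names below are our own illustration, not a prescribed format), marking against a key reduces to a simple lookup per question:

    # Illustrative scoring key: maps each question number to the
    # correct option and the points awarded for it.
    scoring_key = {
        1: {"correct": "b", "points": 1},
        2: {"correct": "d", "points": 1},
        3: {"correct": "a", "points": 2},  # weighted question
    }

    def mark(answers):
        # Total score for one student's answer sheet.
        return sum(entry["points"]
                   for q, entry in scoring_key.items()
                   if answers.get(q) == entry["correct"])

    print(mark({1: "b", 2: "c", 3: "a"}))  # -> 3 (questions 1 and 3 correct)

This is also what makes the first advantage listed above possible: once the key is fixed, marking is mechanical and fast.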

TAKE NOTE: Students have the right to inspect their own past examination papers (from the previous two years) in preparation for an exam resit and/or "to ascertain which standards were applied in assessing said examination" (OER Art. 12.5). Accordingly, you are required to retain both the students' answer sheets and the scoring key for a period of two years. If so desired, you can store the scoring key on Blackboard in a location that is inaccessible to students.

Sources:

- CITO group: Tests with closed-ended questions. http://195.169.48.3/html/literatuur/geslotenvragen.pdf

- Berkel, H. van & Bax, A. (2006). Toetsen met gesloten vragen (Tests with closed-ended questions). In: Berkel, H. van & Bax, A. (eds.), Toetsen in het hoger onderwijs (Testing in higher education). Houten: Bohn Stafleu van Loghum.
