
Designing training for professionals based on subject matter experts and cognitive task analysis

Jan Maarten Schraagen, TNO Defence, Security and Safety

Introduction

Instructional Design (ID) is a field of both applied research and development activities that aims at formulating, executing, and testing theoretically sound solutions for instructional problems in real-life situations. ID focuses on the analysis and design phases that usually occur before the actual development/production and implementation of training systems. As such, ID is part of the more encompassing Instructional Systems Development (ISD) process. In a traditional ISD approach, every instructional design will incorporate a task or content analysis. The purpose of the task and content analyses is to organize the content to be taught by analyzing a job to be performed (task analysis) or the content domain that represents the information to be learned (content analysis) (Tennyson & Elmore, 1997). According to Jonassen, Tessmer and Hannum (1999, p. 3), “task analysis is probably the most important part of the instructional systems design (ISD) process, and it has been thought so for some time.”

Notwithstanding its alleged importance, Jonassen, Tessmer and Hannum also state that task analysis is the most often misconstrued, misinterpreted, poorly executed, or simply ignored component of the ID process. This complaint has been voiced not only in the development of training systems, but also in the design of computer systems to support human work (Diaper & Stanton, 2004). The reasons are manifold: task analysis requires a lot of time, effort, and expertise; it is more of an art than a science; and its usefulness is frequently doubted, which has resulted in a gulf between the outcomes of task analysis and systems design (Schraagen, Chipman, & Shalin, 2000; Schraagen, 2006). Designers are often not convinced that task analysis is worth the effort.

Hence, task analysis is viewed by some as the most important step in the design process, while others do not think it is worth the effort. Before being able to reach some kind of consensus between these camps, it is best to look into the reasons behind each position and to ask ourselves why task analysis, according to some, is such a useful tool, and why, according to others, we should not use it. After having reached a consensual view on the importance of task analysis, I will then proceed to a discussion of the use of domain experts in ID, and the way cognitive task analysis may be used here. Next, I will illustrate these theoretical notions with a case study on the design of a set of training courses in troubleshooting. This should give the reader an understanding of how particular methods of task analysis were applied, how they fit into the larger scheme of instructional design, and what the impact of this research effort has been.

The advantages of task analysis

Task analysis, viewed as a decomposition of a complex task into a set of constituent subtasks, provides an overview of the target task that has to be learned or supported, and therefore serves as a reminder for the designer to include every subtask in his or her design. So the first advantage of task analysis is to make sure we don’t forget anything in our training design. This function of task analysis is to inventory or describe tasks, rather than analyze them.

A second function of task analysis is to identify sources of actual or potential performance failure, or to identify training needs. As emphasized by Annett (2000), task analysis, as opposed to task description, should be a way of producing answers to questions. Task analysis should always be undertaken with some purpose in mind. If the purpose is to design instruction, then the analyst needs to select tasks for training, based on priorities assigned to training objectives. The priorities may be derived from observations of trainees before and after a training course. Observations before the training course provide an idea of what trainees already know (and hence need not be taught any more); observations after the training course provide an idea of what trainees have and have not learned from the training course (and hence what needs to be emphasized more during the course). The second advantage of task analysis, therefore, is to provide input to the instructional designer as to what content should be taught (specified at an abstract level) and what should not be taught.

A third function of task analysis is to list the knowledge, skills and attitudes that are required for performing each (sub)task. This function has varied the most over the course of history, and is dependent on the particular scientific, technological, economic, political, and cultural developments that together constitute the social constraints the task analyst is working with (Schraagen, 2006). Several “long waves” of economic development have been distinguished by Perez (2002):

1. The age of steel, electricity, and heavy engineering. Leading branches of the economy are electrical equipment, heavy engineering, heavy chemicals, and steel products. Railways, ships, and the telephone constitute the transport and communication infrastructure. Machines are manually controlled. This period, during which industrial psychology emerged (e.g., Viteles, 1932), lasted from approximately 1895 to 1940. Task analysis mainly consisted of time and motion study, in order to determine how manual labor could be carried out more efficiently.

2. The age of oil, automobiles, and mass production. Oil and gas allow massive motorization of transport, civil economy, and war. Leading branches of the economy are automobiles, aircraft, refineries, trucks, and tanks. Radio, motorways, airports, and airlines constitute the transport and communication infrastructure. A new mode of control emerged: supervisory control, characterized by monitoring displays that show the status of the machine being controlled. The “upswing” in this period lasted from 1941 until 1973 (Oil Crisis). The “downswing” of this era is still continuing. Task analysis techniques began to focus on the unobservable cognitive activities, such as state recognition, fault finding, and scheduling. Hierarchical Task Analysis (HTA), developed in the 1960s by Annett and Duncan, is a primary example of these task analysis techniques (Annett and Duncan, 1967).

3. The age of information and telecommunications. Computers, software, telecommunication equipment, and biotechnology are the leading branches of the economy. The internet has become the major communication infrastructure. Equipment is “cognitively” controlled, in the sense that users need to draw on extensive knowledge of the environment and the equipment. Automation gradually takes on the form of intelligent cooperation. This period started around 1970 with the emergence of “cognitive engineering,” and still continues. During the Age of Information Processing (1973–now), cognitive control, or rather the “joint cognitive system” (Hollnagel, 2003) is the predominant form of control and Cognitive Task Analysis the main form of task analysis (Schraagen, Chipman, & Shalin, 2000).

In conclusion, it is clear that this function of task analysis has differed over the years, as the definition of what constitutes a task has itself changed. We are rapidly moving towards a situation where computers take over more and more tasks that used to be carried out by humans. Humans are relegated to a supervisory role or no role at all, and their training for these roles needs to be dictated by the knowledge, skills and attitudes that are required to successfully fulfill these roles. The third advantage of task analysis is therefore to inform the instructional designer as to the details of what should be taught.

The disadvantages of task analysis

The main reasons why task analysis is not applied as widely as some believe it should be have to do with its perceived usefulness, its lack of rigor, its complexity, and its costliness in terms of time and effort. To start with the usability of the products of task analysis: Diaper (2001) has argued that since the beginning of the 1990s a gulf exists between task analysis and traditional software-engineering approaches. When designing systems, software engineers rarely use the task-analysis techniques advocated by psychologists. This may have to do with differences in background and training between software engineers and cognitive psychologists. The same may be the case with instructional design, although there seems to be more overlap here than in the field of human-computer interaction, both in the background and training of the analysts and in the content overlap between the outcomes of task analysis and the instructional products.

As long as task analysis defines itself as a collection of isolated techniques, largely derived from mainstream cognitive psychology, it is doomed to have little or no impact on instructional design practice, just as it has had little impact on the field of human-computer interaction. Mainstream cognitive psychology invents its techniques and tasks for a particular purpose, namely to test a theory on the nature of the representations and processes used by humans in particular situations. It is a mistake to isolate these techniques from their theoretical context and present them as “knowledge elicitation techniques” (Cooke, 1994, listed over 70 of them!) or as task analysis methods. Task analysis is an applied activity and its outcomes should be of direct utility to the practitioners that design training systems or interfaces or tests. This means that task analysis methods should deliver results in a format that is directed by the end users of the task analysis results.

The perceived lack of rigor of task analysis methods may also be more of a problem within the software-engineering field than in the instructional design field. Software engineers have their own formalisms that often have little resemblance to the outcomes of task analyses, although recently there has been some work on integrating task analysis with standard software-engineering methods such as the Unified Modeling Language (Diaper & Stanton, 2004, Part IV). Interestingly, with ID moving into computer-assisted instruction and intelligent tutoring systems, the distinction between ID and traditional software engineering becomes more and more blurred, and ID thereby risks inheriting software engineering’s gulf with task analysis. In other words, as ID becomes more formalized and computerized, task analysis will be seen as having less and less to offer. Again, it will be up to the task analysis community to adapt itself to the requirements of the instructional designers.

Task analysis is often viewed as complex, filled with uncertainty and ambiguity. However, task analysis does not seem to be any more complex than, say, learning the intricacies of the Unified Modeling Language (which usually takes a couple of hundred pages to explain). Task analysis may be filled with uncertainty and ambiguity, but this is more often than not a reflection of the state of the world it is trying to capture, and not an inherent feature of its method. At any rate, more formalized methods are also complex and have to deal with uncertainty and ambiguity.

Finally, a perceived disadvantage of task analysis is that it is very time-consuming and therefore expensive. Yes, it is cheaper, in the short term, to skip a task analysis altogether. And it may also be cheaper, even in the long term, to skip a task analysis if it does not yield any usable results. However, given that a task analysis yields usable results, the argument of cost should be countered by pointing to its tangible results. The design of computer programs has likewise evolved from the attitude of “I just sit there and type code” to a highly structured approach to programming based on extensive up-front analysis and specification. The latter approach is much more expensive than the former, but it pays dividends when programs have to be changed or debugged. Of course, this is all the more so as programs become more critical. A simple program that has little impact on society at large does not need a structured and documented approach to programming. By analogy, the more important the training system one has to design, the more important becomes the up-front task analysis on which it is based, no matter what its cost may be.

Task analysis for ISD: a consensual view

After having described the reasons behind the respective positions on the advantages and disadvantages of task analysis, I now proceed to a consensual view that strikes a compromise between both positions. If the advantages of task analysis as listed above are correct, then we may conclude that task analysis is indispensable in instructional design. This is because it provides an overview of all tasks to be trained, it provides input as to what should be taught and what should not be taught, and it specifies the details of what should be taught (in terms of, for instance, knowledge, skills and attitudes, or whatever one’s theoretical predilection may specify). On the other hand, task analysis is only indispensable if it delivers results in a format that is directed by the end users of the task analysis results. Therefore, rather than starting with reading books on task analysis methods for instructional design, the task analyst should first and foremost talk to the instructional designers themselves in order to find out what they need. It is then up to the analyst to deliver the format required by the instructional designer.

This is easier said than done and it will almost never be the final story. For instance, what happens when instructional designers base their learning objectives on existing documentation, as we observed in the case of troubleshooting that will be discussed later? The instructional designers based their training courses on existing factory documentation of the systems to be maintained. This is suboptimal, as it tends to promote an orientation toward hardware rather than functions; moreover, describing how a system is put together is fundamentally different from troubleshooting a system. In this case, we convinced the instructional designers to change the content of their training courses, while at the same time abiding by the general format they prescribed. In general, the format an instructional designer needs may leave open the details of what should be taught, and may therefore still allow room for innovation.

Cognitive task analysis and the use of subject-matter experts

In the early 1970s, the word “cognitive” became more acceptable in American academic psychology, though the basic idea had been established at least a decade earlier by George Miller and Jerome Bruner (see Gardner, 1985; Hoffman & Deffenbacher, 1992; Newell & Simon, 1972 for historical overviews). Neisser’s Cognitive psychology had appeared in 1967, and the scientific journal by the same name first appeared in 1970. It took one more decade for this approach to receive broader methodological justification and to find practical application.

Ericsson and Simon (1984) published Protocol analysis: Verbal reports as data. This book reintroduced the use of think-aloud problem-solving tasks, which had been relegated to the historical dustbin by behaviorism even though they had seen some decades of successful use in psychology laboratories in Germany and elsewhere in Europe up through about 1925. In 1983, Card, Moran, and Newell published The psychology of human–computer interaction. This book helped lay the foundation for the field of cognitive science and presented the GOMS model (Goals, Operators, Methods, and Selection rules), a family of analysis techniques and a form of task analysis that describes the procedural, how-to-do-it knowledge involved in a task (see Kieras, 2004, for a recent overview). Task analysis profited a lot from the developments in artificial intelligence, particularly in the early 1980s when expert systems became commercially interesting (Hayes-Roth, Waterman, & Lenat, 1983). Since these systems required a great deal of expert knowledge, acquiring or “eliciting” this knowledge became an important topic (see Hoffman & Lintern, 2006). Because of their reliance on unstructured interviews, system developers soon viewed “knowledge elicitation” as the bottleneck in expert-system development, and they turned to psychology for techniques that helped elicit that knowledge (Hoffman, 1987). As a result, a host of individual techniques was identified (see Cooke, 1994), but no single overall method for task analysis that would guide the practitioner in selecting the right technique for a given problem resulted from this effort.

However, the interest in the knowledge structures underlying expertise proved to be one of the approaches to what is now known as cognitive task analysis (Hoffman & Woods, 2000; see Hoffman & Lintern, 2006; Schraagen, Chipman, & Shalin, 2000). With artificial intelligence coming to be a widely used term in the 1970s, the first ideas arose about applying artificial intelligence to cockpit automation. As early as 1974, the concepts of adaptive aiding and dynamic function allocation emerged (Rouse, 1988). Researchers realized that as machines became more intelligent, they should be viewed as “equals” to humans. Instead of Taylor’s “designing the human to fit the machine,” or the human factors engineering’s “designing the machine to fit the human,” the maxim now became to design the joint human–machine system, or, more aptly phrased, the joint cognitive system (Hollnagel, 2003). Not only are cognitive tasks everywhere, but humans have lost their monopoly on conducting cognitive tasks, as noted by Hollnagel (2003, p. 6). Again, as in the past, changes in technological developments were followed by changes in task-analysis methods. In order to address the large role of cognition in modern work, new tools and techniques were required “to yield information about the knowledge, thought processes, and goal structures that underlie observable task performance” (Chipman, Schraagen, & Shalin, 2000, p. 3). Cognitive task analysis is not a single method or even a family of methods, as are Hierarchical Task Analysis or the Critical Incident Technique. Rather, the term denotes a large number of different techniques that may be grouped by, for instance, the type of knowledge they elicit (Seamster, Redding, & Kaempf, 1997) or the process of elicitation (Cooke, 1994; Hoffman, 1987). Typical techniques are observations, interviews, verbal reports, and conceptual techniques that focus on concepts and their relations.

Apart from the expert-systems thread, with its emphasis on knowledge elicitation, cognitive task analysis has also been influenced by the need to understand expert decision making in naturalistic, or field, settings. A widely cited technique is the Critical Decision Method developed by Klein and colleagues (Klein, Calderwood, & Macgregor, 1989; see Hoffman, Crandall, & Shadbolt, 1998, for a review, and see Hoffman & Lintern, 2006, Ross, Shafer, & Klein, 2006). The Critical Decision Method is a descendant of the Critical Incident Technique developed by Flanagan (1954). In the CDM procedure, domain experts are asked to recall an incident in detail by constructing a time line, assisted by the analyst. Next, the analyst asks a set of specific questions (so-called cognitive probes) about goals, cues, expectancies, and so forth. The resulting information may be used for training or system design, for instance, by training novices in recognizing critical perceptual cues.

ISD employs domain experts as a source of accurate information on how specific tasks should be conducted (Amirault & Branson, 2006). Put simply, the goal of instruction is to transform a novice into an expert. From the early 1980s onward, the emphasis was placed more and more on the underlying knowledge and strategies required for expert performance. For instance, in “intelligent tutoring systems”, a distinction was made between ‘student models’ and ‘expert models’, and the discrepancy between the two drove the instructional interventions. For a more detailed view on changes in military training during the past 25 years, and its relation to ID and ISD, see Van Merriënboer and Boot (this volume).

To take an example in the area of troubleshooting, consider the impressive work carried out by BBN Laboratories in the 1980s on the MACH-III tutoring system to support the training of novices in troubleshooting complex electronic devices (see the chapters by Tenney and Kurland, 1988; Kurland and Tenney, 1988; and Massey, De Bruin and Roberts, 1988). They started out with a cognitive task analysis of novices, intermediates, and experts who were asked to explain during an interview how a radar works. The participants’ explanations were corroborated with drawings of the radar that they made. This was an attempt to see how the mechanics’ views, or mental models, of the radar changed as they acquired experience. The results of this study showed that novices focused on power distribution and the physical layout of the radar, whereas experts had a better functional understanding of the flow of information. They concluded that an important aspect of training was to help the student to map the functional aspects of the radar onto the physical machinery (Tenney and Kurland, 1988).

In a second study, Tenney and Kurland looked at the troubleshooting behavior of the same participants. They found that the novice followed the Fault Isolation Manual literally, but needed lots of help with reasoning and procedures. They concluded that a tutoring system should teach the novice the kind of reasoning the expert goes through in deciding what faults are possible, what fault possibilities have been eliminated, and when all but one possibility has been ruled out. They demonstrated the close ties between a trainee’s mental model of the radar system and troubleshooting performance. More specifically, a functional model of the radar is a necessary condition for good troubleshooting.

Based on these expert-novice differences in troubleshooting, Kurland and Tenney (1988) concluded that the HAWK intelligent tutoring system should focus on both the representations and the strategic troubleshooting skills of novices. As far as representations are concerned, the tutoring system should provide summary views varying in detail and emphasis for each functional circuit. This would reduce the information overload novices have to cope with and would result in more structured troubleshooting. In addition, the system should include a simulation of the way an expert would reason about the radar, both to explain its working and predict its behavior, and to show how to troubleshoot the radar (Massey, De Bruin, & Roberts, 1988). Obviously, an intelligent tutoring system provides the possibility of extensive practice with practical problem solving, which remedies a problem frequently encountered in education: a lack of opportunity for hands-on experience.

An evaluation study of the MACH-III carried out by the US Department of the Army in 1990 compared two groups (Acchione-Noel, Saia, Willams, & Sarli, 1990). Both had equal amounts of hands-on training with the actual radar, but one group alternated training on the radar with training on the MACH-III, while the other alternated training on the radar with training using a paper troubleshooting method (which depended on asking questions of students who were working on the radar and waiting for their answers).

Acchione-Noel et al. (1990) found an advantage for students trained with the MACH-III in the speed with which they completed problems in a troubleshooting test on the actual radar. There was no difference in the number of solutions, just in speed of completion on actual radar problems, favoring the MACH-III training. As the investigators point out, the additional speed could have been due either to the MACH-III group's exposure to more problems during training (which might have allowed them to remember and use solutions on the radar test) or to their having learned more methodical troubleshooting strategies. The experiment did not differentiate between these two possibilities.

One thing that might have prevented stronger findings in favor of the MACH-III (for example, the investigators found that students trained on the MACH-III did not do better on written and oral tests of radar functionality) was that the students in the MACH-III group were not required to read the explanations provided by the tutor. They were trained in a mode where they could see a critique of what they did at the end, but there is no evidence that they paid attention to that, as opposed to moving right on to the next problem. When reading is not required, there is a tendency for students to play a game of "swap the parts and see if it solves the problem." Lajoie (this volume) describes another research effort aimed at enhancing troubleshooting skill by providing a practice environment for professionals to acquire knowledge in the context of realistic troubleshooting activity.

The MACH-III system is a prime example of how to use experts in the design of instruction. Note that the system was not focused on duplicating the observable behavior of experts (the physical behaviors associated with performing tests and replacing components), but rather on the cognitive processes that support effective troubleshooting. As such, it is also in line with numerous expert-novice studies carried out in other domains. These studies have all found that expertise involves functional, abstracted representations (see Feltovich, Prietula, and Anders Ericsson, 2006, for a review).

Training Structured Troubleshooting: A case study

I will now turn to a discussion of a study that my colleagues and I carried out during the 1990s for the Royal Netherlands Navy, in particular the Weapon Engineering Service. At the beginning of the 1990s, complaints started to emerge from the operational Dutch fleet concerning the speed and accuracy of weapon engineers. The Royal Netherlands Navy asked TNO Human Factors to look at these complaints and suggest possible solutions to the problem. One possible solution that we considered early on was the use of a knowledge-based system to support the technician (Schaafstal & Schraagen, 1991). As this turned out to be too daunting a task, and as it would have left the root cause, inadequate training, unaffected, we turned our attention to innovating the training courses given by the Weapon Engineering School.

We started by carrying out a cognitive task analysis consisting of a number of preliminary observational studies on troubleshooting, in which technicians with varying levels of expertise participated. The results of these observational studies were interpreted in the context of the literature and previous studies on expert-novice differences in the area of problem solving in general and troubleshooting in particular. Our observations of experts and novices troubleshooting faults planted in a radar system may be summarized as follows:

1. We observed a gap between theory and practice: a theory instructor for the radar course had great difficulties troubleshooting in practice.

2. There was not much functional thinking.

3. The training courses were component oriented instead of functionally oriented. Hence, we observed little transfer from one radar system to the other.

4. Beginners were very unsystematic in their approach to troubleshooting, partly as a result of the component-oriented nature of the training course, but partly also as a result of lack of practice in actual troubleshooting.

5. Some experts exhibited a general troubleshooting strategy in the absence of domain knowledge.

6. Problems were mainly solved by recognition of a similarity to a previous problem. This is a rather brittle basis for a training philosophy, as we would like trainees to be able to handle novel faults as well as previously encountered faults.

We were able to quantify these qualitative observations in later studies, with larger samples, ratings of think-aloud protocols, scores on a knowledge test, and number of problems solved (see Schaafstal, Schraagen, & Van Berlo, 2000, for a review). The results of these studies showed that on average (a) only 40% of the problems were solved accurately, (b) students were not very systematic in their reasoning process (2.6 on a scale from 1 to 5), and (c) they did not have a high degree of functional understanding (2.9 on a scale from 1 to 5). In addition, only a small correlation was found between the knowledge test and troubleshooting performance (Pearson r = .27). This confirmed the results of the radar study, showing a gap between theoretical knowledge and the application of this knowledge in real-life situations.

Research in the domain of papermaking (Schaafstal, 1991) and integrated circuit design (Ball, Evans, Dennis, & Ormerod, 1997) has shown that expert troubleshooters use a structured approach to troubleshooting consisting of a number of steps they take in a particular order, deviating only slightly from a normatively optimal top-down and breadth-first method. For the generic class of troubleshooting problems, we posited that the following steps constitute the systematic approach that experts follow: make a problem description (list both normal and abnormal cues); generate possible causes; test possible causes; repair the cause of the problem; evaluate. The function of this approach is primarily memory management. Particularly with novel faults (ones not previously encountered), the control strategy is essential for preventing working memory overload. Working memory overload occurs when students have lost track of where they were in the problem-solving process because they encountered unforeseen problems while troubleshooting. For example, in testing, students may encounter a problem in interpreting a signal, leaving them wondering whether or not they used the measurement tools correctly or measured at the right place. This may result, for example, in taking measurements at various places or recalibrating measurement tools to make sure the original measurement was correct. If this process takes some time, there is a fair chance that the students will have forgotten why and what they were measuring at all, resulting in having to back up in the troubleshooting process. Novice troubleshooters, lacking a good control strategy, do not see the forest for the trees. Based on our observations of expert troubleshooters, and the problems trainees have, we hypothesized that explicit training in a structured, systematic approach to troubleshooting would be beneficial for novices.
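For readers who prefer to see the control strategy written out, the sketch below renders the five steps as a small Python program. It is only an illustration, not part of the original study: the fault model, cue names, and candidate causes are invented, and the "test" is a stand-in for a real measurement.

```python
"""Minimal sketch of the five-step structured troubleshooting strategy.
All names and the toy fault model are invented for illustration."""

# Toy model: the abnormal cues each hypothetical cause would produce.
FAULT_MODEL = {
    "cause A": {"no output signal"},
    "cause B": {"no output signal", "indicator lamp off"},
    "cause C": {"indicator lamp off"},
}

def troubleshoot(abnormal_cues, true_cause):
    # Step 1: make a problem description, listing abnormal AND normal cues.
    all_cues = set().union(*FAULT_MODEL.values())
    normal_cues = all_cues - abnormal_cues

    # Step 2: generate possible causes; a cause that would also produce a
    # cue observed to be normal is pruned right at the start.
    candidates = [c for c, cues in FAULT_MODEL.items() if not (cues & normal_cues)]

    # Step 3: test possible causes, logging every test so that nothing has
    # to be kept in working memory (the "memory management" function).
    test_log = []
    while len(candidates) > 1:
        candidate = candidates[0]
        outcome = "faulty" if candidate == true_cause else "ok"  # stand-in for a measurement
        test_log.append((candidate, outcome))
        candidates = [candidate] if outcome == "faulty" else candidates[1:]

    # Step 4: repair the remaining cause (here we merely report it).
    diagnosis = candidates[0]

    # Step 5: evaluate - do the abnormal cues disappear after the repair?
    solved = diagnosis == true_cause
    return diagnosis, solved, test_log

if __name__ == "__main__":
    print(troubleshoot({"no output signal", "indicator lamp off"}, true_cause="cause B"))
```

The test log serves the same purpose as the troubleshooting form introduced later in this chapter: intermediate results live on paper (or in a variable) rather than in working memory.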

The systematic approach by itself will not be sufficient, however, in solving all problems. Schraagen (1993) has shown how highly experienced researchers may exhibit a structured approach in designing an experiment in an area they are unfamiliar with. However, the result may be as poor as that of a beginner. Experienced researchers know what it means to design an experiment, and they will start by trying to fill in the slots of their design schemata in a systematic way. When confronted with research questions outside their area of expertise, they lack the necessary domain knowledge to adequately fill in the slots. Beginners therefore need to draw upon structured domain knowledge. In the area of troubleshooting, almost all researchers have found that a functional representation of systems is a necessary condition for good troubleshooting. Where they differ is whether this is a sufficient condition. Kurland and Tenney (1988) seem to imply that it is: once novices are equipped with a functional model of a system, the structured approach to troubleshooting will follow automatically. We take a different approach and argue that strategic, or metacognitive, knowledge constitutes a separate layer and needs to be taught separately, yet integrated with functional domain knowledge (see Mettes, Pilot, and Roossink, 1981, for evidence of the effectiveness of teaching a structured approach to problem solving separately).

In an earlier experiment, described in detail in Schraagen and Schaafstal (1996), we tried to teach the strategy for troubleshooting independent of the context of a particular system, the rationale being that a context-free strategy may well be trained independently of a particular context. The results were somewhat disappointing: the students trained explicitly on a strategy for diagnosis did not outperform trainees trained in the regular way. In hindsight, differences might have been found if we had taken finer-grained measures, but we wanted to have a real impact on troubleshooting success itself. We concluded that our inconclusive findings were caused by the limited time we had available for practicing with the general strategy for troubleshooting (4 hours). Therefore, in consultation with the Royal Netherlands Navy, we decided to implement the interaction between the proposed strategy for troubleshooting and models of the system into a one-week training course, added on to the existing function course, as a first step. If this proved successful, the navy agreed to completely modify the existing course according to our principles, and the modified course would also be evaluated empirically. The interaction between the strategy for troubleshooting and system models will henceforth be referred to as Structured Troubleshooting (ST). I will now turn to a description of how the courses were modified, based on a thorough task analysis, what the instructional objectives were, what instructional materials we used, how practical exercises, written questions and tests were designed, and how we developed the curriculum.

Structured Troubleshooting as a one-week add-on

When developing the one-week add-on course, we adhered to the following principles:

1. Instruction needs to be developed based on a thorough task analysis. On the basis of the task analysis, one is able to identify which knowledge and skills are required for adequate task performance. This prevents instruction from becoming too detailed at some points while showing gaps at others. After the knowledge and skills have been identified, one determines what knowledge and skills are considered to be present at the start of the course. Finally, one needs to determine which parts are better dealt with in theory and which parts in practice sessions, and what the most suitable exercises are. This prevents instruction from being grounded too much in existing system documentation and the personal experiences of instructors. Such documentation is mostly written by system developers and is not always suitable as educational material. A trainee needs to troubleshoot only down to the Line Replaceable Unit (LRU) level, so system knowledge is required only up to a certain level of detail; technical documentation presents far more detail and goes well beyond the LRU level. This is not to say that technical documentation should play no part in the training courses: students need to learn how to use the documentation, as it is the primary reference source on board ships. But the practice that we encountered was that instructional materials were selected before the instructional goals were formulated, rather than the other way around. This is what we tried to alter.

2. Theory and practice need to be attuned to each other, both in content and in timing. Current practice within the Royal Netherlands Navy was to develop theory lessons supported by practical lessons. We advocate the opposite: practical lessons supported by theory lessons. What is learned in theory should be practiced as soon as possible afterwards. In order to attune theory and practice to each other, the methods for developing both types of lessons should be integrated. Preferably, both theory and practice should be taught by the same instructor, and not by different instructors, as was the case in the past.

3. Cognitive skills can only be acquired by practicing. This implies that students need to have the opportunity to troubleshoot themselves using the actual system or a simulation thereof. It used to be the case that practical lessons degraded into theory lessons in front of the equipment.

4. A systematic approach to troubleshooting needs to be acquired in the context of a specific system.

The one-week course was developed by a team of two TNO researchers and one engineer from the Weapon Engineering School. The TNO researchers delivered the task analysis, methods for the analysis of knowledge and skills, and the way instructional goals needed to be achieved, whereas the naval engineer provided the knowledge and skills required for the specific system that was being taught (a computer system). During a period of ten weeks the team met once a week for a full day to discuss progress.

It should come as no surprise that the first step, the task analysis, started out with the structured approach to troubleshooting characteristic of expert behavior. The standard task decomposition for corrective maintenance (1.1) was defined as follows:

1.1.1 Establishes problem description

1.1.2 Generates possible causes

1.1.3 Tests possible causes

1.1.4 Repairs possible cause

1.1.5 Checks for correct functioning

Subtask 1.1.1 was further decomposed into:

1.1.1.a: Establishes problem description at system level

1.1.1.b: Establishes problem description at functional level

1.1.1.c: Establishes problem description at <deeper> level

Subtask 1.1.1.a was further decomposed into:

1.1.1.a.1: Recognize normal behavior at system level

1.1.1.a.2: Identify abnormal behavior

At the lowest level of a subtask, we arrive at the skills and attitudes required for carrying out the subtask. A skill requires knowledge in order to be carried out, and therefore the knowledge elements are always represented below the skills. Knowledge elements that represent "normal behavior" are, for instance:

· 24V DC BAT LED on

· 115V / 60 Hz ACH-lamp burns

· Lamp X on for 10 seconds, then off

· TTW prints "application"

· CSR rewinds

Knowledge of normal behavior is essential, as it is the technician's point of departure for troubleshooting. Only when abnormal behavior is perceived does the engineer start troubleshooting. Moreover, knowledge of normal behavior also leads to conclusions about what still functions correctly. Hence, these functions do not need to be taken into account any more, which prunes the search space enormously right at the start.
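To give a concrete impression of what the decomposition looks like when written down, the fragment below records the hierarchy and the knowledge elements from the text as a nested Python dictionary. This is only an illustrative way of keeping the books, not the format used by the project team; branches not elaborated in the text are left empty.

```python
# Illustrative sketch of the task decomposition as a nested dictionary.
# Only subtask 1.1.1 is partially elaborated; the knowledge elements are the
# "normal behavior" examples given above. Empty dicts mark omitted branches.
CORRECTIVE_MAINTENANCE = {
    "1.1 Corrective maintenance": {
        "1.1.1 Establishes problem description": {
            "1.1.1.a At system level": {
                "1.1.1.a.1 Recognize normal behavior at system level": {
                    "knowledge": [
                        "24V DC BAT LED on",
                        "115V / 60 Hz ACH-lamp burns",
                        "Lamp X on for 10 seconds, then off",
                        "TTW prints 'application'",
                        "CSR rewinds",
                    ],
                },
                "1.1.1.a.2 Identify abnormal behavior": {},
            },
            "1.1.1.b At functional level": {},
            "1.1.1.c At <deeper> level": {},
        },
        "1.1.2 Generates possible causes": {},
        "1.1.3 Tests possible causes": {},
        "1.1.4 Repairs possible cause": {},
        "1.1.5 Checks for correct functioning": {},
    },
}

def knowledge_elements(node, path=()):
    """Walk the hierarchy and yield (subtask path, knowledge elements)."""
    for name, child in node.items():
        if name == "knowledge":
            yield path, child
        else:
            yield from knowledge_elements(child, path + (name,))

if __name__ == "__main__":
    for path, elements in knowledge_elements(CORRECTIVE_MAINTENANCE):
        print(" / ".join(path))
        for element in elements:
            print("   -", element)
```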

During the subtask of “Generates possible causes”, the trainee draws upon knowledge of system functioning at several levels of abstraction. For the computer system under consideration here, we developed the following three levels of abstraction:

1. At the highest level, the computer system is decomposed into four functional blocks:

a. Power supply
b. CPIO
c. Memory
d. Peripheral equipment

2A. The block "Power supply" is further decomposed into four blocks; the blocks CPIO, Memory, and Peripheral equipment are also further decomposed.

2B. Level "2 plus" contains no new functional blocks, but does add the electrical signals between blocks. This level is required for testing purposes.

3. Level 3 consists of the "electric principle schemata" that already exist in the system documentation. This level is required if one wants to know how the signals at level "2 plus" are generated.

Hence, for educational purposes, we developed levels 1, 2A, and 2B. Level 1 is primarily used in classroom exercises to teach students that troubleshooting can take place at different levels of abstraction, and that with particular faults at level 1 complete functional blocks may be eliminated (for instance, if the translator is defective, there is no connection with the peripheral equipment; the power supply is not defective, however, because the system can be turned on). Levels "2" and "2 plus" are actually used during troubleshooting. For instance, when a fault manifests itself in only one particular type of peripheral equipment (as it usually does), it becomes necessary to distinguish among the various types of peripheral equipment. This is what level "2" does.
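As a small illustration of how these levels support the subtask of generating possible causes, the sketch below encodes level 1 and eliminates complete functional blocks on the basis of observations, as in the translator example above. The level-2 sub-block names are hypothetical placeholders, since the text only states that each block is decomposed further.

```python
# Sketch of the level-1 / level-2 decomposition used when generating possible
# causes. The level-1 block names come from the text; the sub-block names are
# hypothetical placeholders.
FUNCTIONAL_BLOCKS = {
    "Power supply":         ["PS sub-block 1", "PS sub-block 2", "PS sub-block 3", "PS sub-block 4"],
    "CPIO":                 ["CPIO sub-block 1", "CPIO sub-block 2"],
    "Memory":               ["Memory sub-block 1", "Memory sub-block 2"],
    "Peripheral equipment": ["Peripheral sub-block 1", "Peripheral sub-block 2"],
}

def remaining_suspects(blocks_known_good):
    """Eliminate complete level-1 blocks shown to work, and return the
    level-2 sub-blocks that still have to be considered."""
    return {block: subs for block, subs in FUNCTIONAL_BLOCKS.items()
            if block not in blocks_known_good}

if __name__ == "__main__":
    # Example from the text: the system can be switched on, so the power
    # supply is not defective and that whole block drops out of the search.
    print(remaining_suspects({"Power supply"}))
```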

Space prohibits presenting the complete task decomposition with its constituent knowledge, skills and attitudes. I will therefore move on to the next step we took in developing the one-week add-on course: formulating instructional goals. The instructional goals should be derived from the task analysis and the analysis of the knowledge and skills required. Our philosophy in formulating instructional goals may be summarized in the following three points:

1. Structured troubleshooting should be taught in parts. Therefore, separate goals were formulated for establishing a problem description, generating causes, testing, repairing, and evaluating.

2. Instructional goals were established in which troubleshooting was connected to the different levels of abstraction. Separate goals were formulated in which the trainee had to troubleshoot faults at level 1 and at level 2. Goals involving level "2 plus" were related to the testing of possible causes.

3. Goals were established that explicitly referred to the skill of troubleshooting, e.g., "To be able to find a number of faults in the computer system and its peripheral equipment by applying the fault-finding method." These goals emphasize the importance of learning by doing.

The next step was to establish how these instructional goals should be achieved during instruction. The resulting methodology is meant as a manual for the instructor. The following didactic principles were followed:

1. The trainee needs to troubleshoot on his or her own. In the beginning, this will not always be successful, and the instructor will be tempted to assist. There is nothing wrong with that, as long as the instructor does not take over completely and put the trainee into a passive role. The instructor should ask a few questions and then give the initiative back to the trainee.

2. Faults should be ranked from easy (day 1) to difficult (day 4).

3. Faults should be adapted to the instructional goals. We developed faults to explain the different levels, and we developed faults to practice measurement skills.

4. Knowledge acquired on a particular day is tested at the end of the day with a number of "control questions." These questions serve to consolidate the knowledge. The answers to the control questions are discussed first thing the next morning with the instructor.

5. Troubleshooting does not need to take place in all cases with the real system present. The instructor can play the role of the system, and the trainee can put questions to the instructor acting as the system. This serves to remind the trainee that troubleshooting is more a matter of thinking than of measuring a lot of things. Generally, the instructional goals emphasizing knowledge should be taught in a classroom setting, whereas all skills and attitudes should be taught with a simulated or actual installation to practice on.

6. In order to force the students to follow the structured method of troubleshooting with each fault, a so-called "troubleshooting form" was developed (a sketch of such a record follows this list). The purpose of this form was to teach students a systematic approach to troubleshooting. It also serves as a memory aid, given that students often forget what they measured and why they measured it at all. The form follows the five main tasks in corrective maintenance and asks the student to write down during (not after!) troubleshooting:

a. The incorrect and correct attributes

b. What functional blocks function incorrectly, and what functional blocks still work correctly

c. What test is being used, what the expected results are, what the actual results are, and which blocks function correctly after testing

d. What LRU needs to be replaced, repaired, or adjusted

e. Whether the incorrect attributes have disappeared after the LRU has been replaced, repaired, or adjusted.
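The troubleshooting form itself was a paper form. Purely as an illustration of the five sections it contains, the record below mirrors points a through e; the class and field names are invented for this sketch.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative record mirroring the five sections of the paper
# "troubleshooting form"; the field names are invented for this sketch.

@dataclass
class TestStep:
    test_used: str
    expected_result: str
    actual_result: str
    blocks_shown_correct: List[str]      # blocks that this test clears

@dataclass
class TroubleshootingForm:
    # a. problem description
    incorrect_attributes: List[str] = field(default_factory=list)
    correct_attributes: List[str] = field(default_factory=list)
    # b. possible causes
    faulty_blocks: List[str] = field(default_factory=list)
    working_blocks: List[str] = field(default_factory=list)
    # c. testing
    tests: List[TestStep] = field(default_factory=list)
    # d. repair
    lru_to_replace_repair_or_adjust: str = ""
    # e. evaluation
    incorrect_attributes_gone: bool = False

if __name__ == "__main__":
    form = TroubleshootingForm(incorrect_attributes=["TTW prints nothing"])  # hypothetical entry
    form.correct_attributes.append("24V DC BAT LED on")
    print(form)
```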

The final step was to arrange the instructional goals across the five days. We chose to successively extend the skills of the students. Day 1 therefore dealt with the theme of problem descriptions and generating causes of some simple faults at level 1. Day 2 extended this to faults at level 2. On day 3, the transports were taught at the level of “2 plus” and the theme of “testing” was addressed. On day 4, faults at the “2 extra level” were presented and the student was required to more or less independently find these faults and repair them. Finally, day 5 consisted of a practice test, during which the students had to work completely independently on a number of faults.

To obtain an objective assessment of the effect of this training innovation compared with the previous training situation, we performed an experimental evaluation of the new course (for more details, see Schaafstal, Schraagen, & Van Berlo, 2000). The participants were 21 corporals, 10 of whom had passed the regular function course (control group). The remaining 11 participants took the one-week add-on course described above. Before starting troubleshooting, participants were asked to take a test to measure their theoretical knowledge. Next, participants were asked to diagnose four faults. They were asked to think aloud while troubleshooting. The verbal protocols were transcribed literally and rated blindly by two expert troubleshooters on three aspects: quality of solution (on a scale from 0 to 1 with five increments), systematicity of reasoning, and functional understanding of the system (both on a scale from 1 to 5). A problem was considered to be "solved" when the quality of the solution was .75 or higher. Finally, participants in the ST group were asked, after having finished the extra week but before the experiment started, to fill out an anonymous evaluation form about their experience with the extra week of training.

The results showed that the ST group solved 86% of the problems, while the control group solved 40% of the problems, a highly significant difference, F(1,19) = 26.07, p < .01. The ST group displayed higher systematicity of reasoning than the control group (4.64 versus 2.60; F(1,19) = 77.57, p < .01), and more functional understanding (4.59 versus 2.87; F(1,19) = 43.00, p < .01). The ST group did not score higher on the knowledge test than the control group (63% versus 55%, respectively; F(1,10) = 2.36, p = .14). The course was judged to be "useful" and "very good" by the ST group; they felt it was more practice-oriented and provided a better and faster troubleshooting method. Regarding the latter point of faster troubleshooting, we did not formally test this, as we had set a one-hour time limit on troubleshooting and participants in the control condition frequently had not solved their problem within one hour. Given that ST participants often solved at least the first three problems within 30 minutes, sometimes even within 5 to 10 minutes, we feel confident that a reduction of 50% in time to solution can be achieved when someone has been trained according to the ST principles.
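The group comparisons above are reported as F tests. As an aside for readers who wish to run this kind of comparison themselves, the fragment below shows the calculation with scipy, on made-up ratings rather than the study's data.

```python
# Minimal sketch of a two-group comparison reported as an F test (one-way
# ANOVA), run on made-up systematicity ratings rather than the study's data.
from scipy.stats import f_oneway

st_group = [4.5, 5.0, 4.8, 4.6, 4.2, 4.9, 4.7, 4.5, 4.6, 4.8, 4.5]   # hypothetical, n = 11
control  = [2.4, 2.8, 2.6, 2.2, 3.0, 2.5, 2.7, 2.6, 2.4, 2.8]        # hypothetical, n = 10

f_statistic, p_value = f_oneway(st_group, control)
df_within = len(st_group) + len(control) - 2
print(f"F(1,{df_within}) = {f_statistic:.2f}, p = {p_value:.4f}")
```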

Based on these successful results, the Royal Netherlands Navy asked us to completely modify the existing function course and come up with a new course fully grounded on ST principles. This would also address a possible objection to the impressive results obtained, namely that these results were due to the extra week of instruction, and not so much to the ST principles as such.

Modifying existing courses based on ST principles

The regular computer course lasted for six weeks. We modified this course according to the principles described above in the development of the one-week add-on course. The modification resulted in a course with a duration of four weeks instead of six weeks, a 33% reduction. This reduction was achieved in the following way. The theory no longer went beyond the level of line replaceable units. This implied that a lot of unnecessary detail was eliminated. The functional decomposition takes less time to convey to students than does component-oriented training, which may be because the former provides a better context for memorizing materials (more top-down and hierarchically structured, instead of the more list-oriented approach that characterized component-oriented training).

The shortened course was also evaluated, along the same lines as described above. This time, 95% of the malfunctions were correctly identified (score of .75 or higher on "quality of solution"), the systematicity of reasoning was 4.84, and the functional understanding was 4.77. These scores are all significantly higher than the control group scores; they do not differ significantly from the scores obtained by the group with 6+1 weeks of training, except for "quality of solution" (average of 0.86 for the 6+1 week group, and 0.97 for the 4-week group, F(1,20) = 4.63, p = .04). Scores on the knowledge test again did not differ among the three groups (control: 53%; 6+1 weeks: 63%; 4 weeks: 65%). A Spearman rank-order correlation between "quality of solution" and "score on knowledge test" was calculated for all three groups on all four problems. The correlation was .26, meaning that knowledge of the theory is hardly predictive of actual troubleshooting performance, a result frequently reported in the troubleshooting and process control literature (see Morris and Rouse, 1985, for a review).
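Similarly, the rank-order correlation reported here is straightforward to compute; a minimal sketch with made-up scores rather than the study's data:

```python
# Minimal sketch of the correlational analysis reported above, run on
# made-up scores rather than the study's data.
from scipy.stats import spearmanr

knowledge_test = [0.55, 0.60, 0.65, 0.50, 0.70, 0.58, 0.62, 0.66]       # hypothetical
quality_of_solution = [0.50, 0.75, 0.75, 0.25, 1.00, 0.50, 0.75, 0.75]  # hypothetical

rho, p_value = spearmanr(knowledge_test, quality_of_solution)
print(f"Spearman rank-order correlation: rho = {rho:.2f} (p = {p_value:.3f})")
```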

Conclusions

The practical implication of the present results for the training of troubleshooting in technical systems is that training designed and given according to the principles of structured troubleshooting results in an enormous performance improvement and faster troubleshooting. In his foreword to the volume "Intelligent tutoring systems: Lessons learned", Lt. Col. Kenneth H. Rose wrote:

We are not interested in applying new technology just because it is new technology. Benefits must be clear. They must be expressed in terms of effectiveness, material covered, time, and costs. Of course, a sure winner would be a system that offers more training that is more effective in less time for less cost (p. xxii).

We believe that with Structured Troubleshooting we have a sure winner. We offer more training in the sense that trainees finish the course with a more structured approach to troubleshooting and a deeper, more principled knowledge of the systems they have to maintain. It is likely that this knowledge will transfer to other systems as well. We offer more effective training, because at the end of their training students solve more than twice the number of problems they did in the traditional training, in about 50% of the time. We offer training in less time, because courses modified along the principles of Structured Troubleshooting are shortened on average by 25% to 35%. Finally, we offer training for less cost, because courses are shorter, demand less instructor time, and reduce the period during which trainee technicians are unavailable for operational service because of education. Effectiveness, material covered, time, and costs are not compensatory: increasing one does not lead to a decrement in the others. For this reason, the Naval Weapon Engineering School has taken this method as the basis for the design of all its function courses (including the technical management courses), resulting in a more practice-oriented and job-oriented training with less emphasis on the detailed functioning of an installation and its components and more emphasis on the actual skill of troubleshooting.

The instructional design approach we took to modifying the naval courses started with a cognitive task analysis using subject-matter experts. However, added value was provided by introducing a quasi-experimental manipulation, for example by using experts with different perspectives, backgrounds, or areas of expertise, and by contrasting expert with novice performance. In our radar observation study, for instance, we observed both theory and practice instructors as well as experts with varying familiarity with a particular radar system. This study yielded the important insights that possessing theoretical knowledge was not sufficient for good troubleshooting and that, presumably because of training practices, there was no such thing as "general radar knowledge." In this broad sense of task analysis, the analyst frequently gains more insight into the domain studied and the problems and opportunities present, and hence may use this insight to design novel instructional concepts.

In a more restricted sense, task analysis is required for the actual content of the course one designs. Our observations of expert troubleshooting behavior led us to a generic task structure that formed the basis for all our modified courses in corrective maintenance. The generic task structure is first of all used to derive the knowledge, skills and attitudes required for effective maintenance. Second, it is used to formulate and segment the instructional goals. For each component of the generic task structure, a particular instructional goal was formulated. Third, it inspired us to develop a support tool for the trainee, the "troubleshooting form". The form is an external representation of the structured approach to troubleshooting displayed by experts. We use it as a normative instructional tool to guide students through the complexities of their task, serving both as a memory aid and as a reminder of how the general task of troubleshooting should be carried out.

In the Introduction, I described the advantages and disadvantages of task analysis for ISD. I concluded that the outputs of a task analysis should be directed by the end users of the task analysis results. In the case study of Structured Troubleshooting, there was less of a distinction between the analysts and the instructional designers than I described in the Introduction. The project team we had formed at the time consisted of “believers” in the new approach to training troubleshooting. This team had the persuasive power, backed by empirical data, to convince management of the need to change the traditional approach to instructional design. In other words, the end users in some cases may be too conservative and need to be convinced, by task analysis or other means, that things need to change.

Of course, there are examples of task analyses that were less successful than the one reported here (see Schraagen, 2006, for an example in the area of pilotage). Determining when a particular task analysis method is cost effective, or just effective, depends on a host of factors. First, at the level of individual methods, Hoffman and Lintern (2006) claim that knowledge elicitation methods differ in their relative efficiency. The think-aloud problem solving task combined with protocol analysis has uses in the psychology laboratory but is relatively inefficient in the context of knowledge elicitation. Concept Mapping is arguably the most efficient method for the elicitation of domain knowledge (Hoffman, 2002). Yet, elicitation is rarely something that can be done easily or quickly. In eliciting weather-forecasting knowledge for just the Florida Gulf Coast region of the United States, about 150 Concept Maps were made (Hoffman & Lintern, 2006). Second, whether a task analysis method is effective, that is, delivers results that are useful for the end users, is determined to a large extent by knowing exactly what the end users want. The analyst therefore needs to be clear up front about the end users’ requirements and subsequently needs to attune his or her methods to these requirements. It pays to be somewhat opportunistic in one’s choice of methods, and it certainly pays to always rely on more than one single method. This is because the method depends on the exact circumstances, including the nature of the expertise to be elicited, the expert’s personality and communication skills, the time and budget available, and of course the desired end results. Third, the analyst needs to be aware of political issues involved. In carrying out a task analysis, one always needs to ask oneself: what is my role in the multitude of conflicting interests (experts, sponsors, academic versus commercial interests, etc.)? For instance, do sponsors really value the expert’s knowledge, or do they want to capture the knowledge and make the experts redundant? When developing instructional systems, is the primary goal to further academic knowledge, the students’ abilities to learn or the school management’s interests? In the end, these organizational and sometimes political issues may be the decisive factors that lead to successful or unsuccessful applications.

References

Acchione-Noel, S.C., Saia, F.E., Williams, L.A., & Sarli, G.G. (1990). Maintenance and Computer HAWK Intelligent Institutional Instructor Training Development Study. Final Report. Department of the Army, December 1990.

Amirault, R.J., & Branson, R.K. (2006). Educators and expertise: A brief history of theories and models. In K.A. Ericsson, N. Charness, P.J. Feltovich, & R.R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 69–86). New York: Cambridge University Press.

Annett, J. (2000). Theoretical and pragmatic influences on task analysis methods. In J.M. Schraagen, S.F. Chipman, & V.L. Shalin (Eds.), Cognitive task analysis (pp. 25–37). Mahwah, NJ: Lawrence Erlbaum Associates.

Annett, J., & Duncan, K.D. (1967). Task analysis and training design. Occupational Psychology, 41, 211–221.

Ball, L.J., Evans, J.St.B.T., Dennis, I., & Ormerod, T.C. (1997). Problem-solving strategies and expertise in engineering design. Thinking and Reasoning, 3, 247–270.

Card, S.K., Moran, T.P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Chipman, S.F., Schraagen, J.M., & Shalin, V.L. (2000). Introduction to cognitive task analysis. In J.M. Schraagen, S.F. Chipman, & V.L. Shalin (Eds.), Cognitive task analysis (pp. 3–23). Mahwah, NJ: Lawrence Erlbaum Associates.

Cooke, N.J. (1994). Varieties of knowledge elicitation techniques. International Journal of Human-Computer Studies, 41, 801–849.

Diaper, D. (2001). Task analysis for knowledge descriptions (TAKD): A requiem for a method. Behaviour & Information Technology, 20, 199–212.

Diaper, D., & Stanton, N. (Eds.). (2004). The handbook of task analysis for human-computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates.

Diaper, D., & Stanton, N. (2004). Wishing on a sTAr: The future of task analysis. In D. Diaper & N. Stanton (Eds.), The handbook of task analysis for human-computer interaction (pp. 603–619). Mahwah, NJ: Lawrence Erlbaum Associates.

Ericsson, K.A., & Simon, H.A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Feltovich, P.J., Prietula, M.J., & Ericsson, K.A. (2006). Studies of expertise from psychological perspectives. In K.A. Ericsson, N. Charness, P.J. Feltovich, & R.R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 41–67). New York: Cambridge University Press.

Flanagan, J.C. (1954). The critical incident technique. Psychological Bulletin, 51, 327–358.

Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution. New York: Basic Books.

Hayes-Roth, F., Waterman, D.A., & Lenat, D.B. (Eds.). (1983). Building expert systems. Reading, MA: Addison-Wesley Publishing Company.

Hoffman, R.R. (1987, Summer). The problem of extracting the knowledge of experts from the perspective of experimental psychology. AI Magazine, 8, 53–67.

Hoffman, R.R. (2002, September). An empirical comparison of methods for eliciting and modeling expert knowledge. In Proceedings of the 46th Meeting of the Human Factors and Ergonomics Society (pp. 482–486). Santa Monica, CA: Human Factors and Ergonomics Society.

Hoffman, R.R., & Deffenbacher, K. (1992). A brief history of applied cognitive psychology. Applied Cognitive Psychology, 6, 1–48.

Hoffman, R.R., Crandall, B.W., & Shadbolt, N.R. (1998). A case study in cognitive task analysis methodology: The critical decision method for elicitation of expert knowledge. Human Factors, 40, 254–276.

Hoffman, R.R., & Woods, D.D. (2000). Studying cognitive systems in context: Preface to the special section. Human Factors, 42, 1–7 (Special section on cognitive task analysis).

Hoffman, R.R., & Lintern, G. (2006). Eliciting and representing the knowledge of experts. In K.A. Ericsson, N. Charness, P.J. Feltovich, & R.R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 203–222). New York: Cambridge University Press.

Hollnagel, E. (2003). Prolegomenon to cognitive task design. In E. Hollnagel (Ed.), Handbook of cognitive task design (pp. 3–15). Mahwah, NJ: Lawrence Erlbaum Associates.

Jonassen, D.H., Tessmer, M., & Hannum, W.H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.

Kieras, D. (2004). GOMS models for task analysis. In D. Diaper & N.A. Stanton (Eds.), The handbook of task analysis for human-computer interaction (pp. 83–116). Mahwah, NJ: Lawrence Erlbaum Associates.

Klein, G.A., Calderwood, R., & Macgregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19, 462–472.

Kurland, L.C., & Tenney, Y.J. (1988). Issues in developing an intelligent tutor for a real-world domain: Training in radar mechanics. In J. Psotka, L. Dan Massey, & S.A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned (pp. 119–180). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lajoie, S.P. (this volume). Assessing Professional Competence: Cognitive Apprenticeship Models in Avionics and Medicine.

Massey, L. Dan, De Bruin, J., & Roberts, B. (1988). A training system for system maintenance. In J. Psotka, L. Dan Massey, & S.A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned (pp. 369–402). Hillsdale, NJ: Lawrence Erlbaum Associates.

Mettes, C.T.C.W., Pilot, A., & Roossink, H.J. (1981). Linking factual and procedural knowledge in solving science problems: A case study in a thermodynamics course. Instructional Science, 10, 333–361.

Morris, N.M., & Rouse, W.B. (1985). Review and evaluation of empirical research in troubleshooting. Human Factors, 27, 503–530.

Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Perez, C. (2002). Technological revolutions and financial capital: The dynamics of bubbles and golden ages. Cheltenham: Edward Elgar.

Ross, K.G., Shafer, J.L., & Klein, G. (2006). Professional judgments and “Naturalistic Decision Making.” In K.A. Ericsson, N. Charness, P.J. Feltovich, & R.R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 403–419). New York: Cambridge University Press.

Rouse, W.B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431–443.

Schaafstal, A.M. (1991). Diagnostic skill in process operation: A comparison between experts and novices. Unpublished doctoral dissertation, Groningen University, Netherlands.

Schaafstal, A.M., & Schraagen, J.M.C. (1991). Diagnosis in technical environments: A theoretical framework and a review of the literature (Tech. Rep. 1991 A–37). Soesterberg: TNO Institute for Perception.

Schaafstal, A., Schraagen, J.M., & Berlo, M. van (2000). Cognitive task analysis and innovation of training: The case of structured troubleshooting. Human Factors, 42, 75–86.

Schraagen, J.M. (1993). How experts solve a novel problem in experimental design. Cognitive Science, 17, 285–309.

Schraagen, J.M.C. (2006). Task analysis. In K.A. Ericsson, N. Charness, P.J. Feltovich, & R.R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 185–201). New York: Cambridge University Press.

Schraagen, J.M.C., Chipman, S.F., & Shalin, V.L. (Eds.) (2000). Cognitive task analysis. Mahwah, NJ: Lawrence Erlbaum Associates.

Schraagen, J.M.C., & Schaafstal, A.M. (1996). Training of systematic diagnosis: A case study in electronics troubleshooting. Le Travail Humain, 59, 5–21.

Seamster, T.L., Redding, R.E., & Kaempf, G.L. (1997). Applied cognitive task analysis in aviation. London: Ashgate.

Tenney, Y.J., & Kurland, L.C. (1988). The development of troubleshooting expertise in radar mechanics. In J. Psotka, L. Dan Massey, & S.A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned (pp. 59–83). Hillsdale, NJ: Lawrence Erlbaum Associates.

Tennyson, R.D., & Elmore, R.L. (1997). Learning theory foundations for instructional design. In R.D. Tennyson, F. Schott, N.M. Seel, & S. Dijkstra (Eds.), Instructional design: International perspectives (Vol. 1: Theory, research, and models) (pp. 55–78). Mahwah, NJ: Lawrence Erlbaum Associates.

Van Merriënboer, J.J.G., & Boot, E.W. (this volume). Research on Past and Current Training in the Armed Forces: The Need for a Paradigm Shift in Military Training.

Viteles, M.S. (1932). Industrial psychology. New York: W.W. Norton.