(cont. 3 of 4)
This view assumes that correct performance and systematic errors are two sides of the same coin. For example, automaticity (the delegation of control to low-level specialists) makes actions-not-as-planned inevitable.
The resource limitations of the conscious workspace (we will consider these under attention), while essential for focusing computationally powerful operators upon particular aspects of the world, contribute to information overload and data loss. A knowledge base that contains specialised theories rather than isolated facts preserves meaningfulness but renders us liable to confirmation bias. An extraordinarily rapid retrieval system, capable of locating relevant items within a virtually unlimited knowledge base, leads our interpretations of the present and anticipations of the future to be shaped too much by the matching regularities of the past. Considerations such as these make it clear that a broadly based analysis of recurrent error forms is essential to achieving a proper understanding of the largely hidden processes that govern human thought and action.
Many of these processing abilities will be discussed in the next part of the course when we consider the information processing characteristics of the human operator. By detailing these characteristics in terms of their strengths and weaknesses we can allocate function in a principled way.
Intuitively, the possibilities for error seem enormous. However, errors actually take a limited number of forms (Reason, 1990). Reason offers the example of boiling an egg: think of each step along the way, from getting the egg out of the egg box to boiling the water and so on, and consider how many of those steps could be messed up. The possibilities seem endless. However, Reason asserts that human error is neither as abundant nor as varied as its vast potential might suggest.
Errors appear in very similar guises across a wide range of mental activities. Thus it is possible to identify comparable error forms in action, speech, perception, recall, recognition, judgment, problem solving, decision making, concept formation and the like. However, the varied situations in which these errors come to light make them hard to predict. It is not necessarily variability and unreliability in the human operator that causes these errors.
Reason distinguishes between variable errors (random scatter about the target) and constant errors (a systematic bias in a consistent direction). Chapanis' target illustrates this distinction and shows how we can consider error prediction: see the attached target area, depicting variability in accuracy. Shots that cluster tightly but off-centre show a large constant error, which, once the bias is understood, can be predicted and corrected; widely scattered shots show a variable error that can only be described statistically.
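The distinction can be made concrete numerically. A minimal sketch in Python (the shot coordinates are invented purely for illustration): constant error is the offset of the mean impact point from the bull's-eye, while variable error is the scatter of shots about their own mean.

```python
import math

# Hypothetical shot coordinates (x, y) relative to a bull's-eye at (0, 0).
# These values are invented for illustration, not measured data.
shots = [(2.1, 1.8), (2.4, 2.2), (1.9, 2.0), (2.2, 1.9), (2.0, 2.1)]

n = len(shots)
mean_x = sum(x for x, _ in shots) / n
mean_y = sum(y for _, y in shots) / n

# Constant error: systematic bias -- the distance of the mean impact
# point from the target centre. Predictable, and correctable once known.
constant_error = math.hypot(mean_x, mean_y)

# Variable error: random scatter -- the RMS deviation of shots from
# their own mean impact point. Only statistically describable.
variable_error = math.sqrt(
    sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in shots) / n
)

print(f"constant error (bias):    {constant_error:.2f}")
print(f"variable error (scatter): {variable_error:.2f}")
```

For this invented cluster the constant error dominates the variable error: a tight group well off-centre, like the biased marksman in Chapanis' example.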
The accuracy of error prediction depends very largely on the extent to which the factors giving rise to the error are understood. This requires a theory that relates the three major elements in the production of an error: the nature of the task and its environmental circumstances, the mechanisms governing performance and the nature of the individual. An adequate theory therefore is one that enables us to forecast both the conditions under which an error will occur and the particular form it will take.
For most errors, our understanding of the complex interaction between these various causal factors is imperfect and incomplete. Most error predictions will therefore be probabilistic rather than precise. They are likely to take the form "Given this task to perform under these circumstances, this type of person will probably make errors at around this point, and they are likely to be of this variety", rather than "person X will make error Y at time Z in place A". Perceptual illusions are a notable exception: not only can we predict them with near certainty (given an intact sensory system), we can also forecast with considerable accuracy how their experience will vary with different experimental manipulations.
As a larger example, we can predict that next January the banks will return a large number of cheques with this year's date on them. We cannot necessarily predict the exact number of misdated cheques (although we could approximate this from previous years), nor can we say precisely who will make this error or on which day. But we do know that such strong habit intrusions are amongst the most common of all error forms; that dating a cheque, being a largely routine activity (at least with respect to the year), is particularly susceptible to absent-minded deviations of this kind; and that the early part of the year is the period in which these slips are most likely to happen. Such qualitative predictions may seem banal, but they are powerful. Arguably they are more powerful precisely because we know they are qualitative, and so we do not rely on them as we would on the more prescriptive predictions that error ergonomics advocates. Such regular occurrences are extremely revealing of the covert processes controlling practised activities.
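The shape of this qualitative prediction can be sketched with a toy frequency-bias model. Everything below is an invented assumption for illustration (the strengths, the decay rate, the daily rehearsal), not Reason's formalism and not data: the old year is treated as a strongly practised habit whose intrusion probability starts high in January and decays as the new year is rehearsed.

```python
# Toy strong-habit-intrusion model. "Habit strength" here is just a
# rehearsal count; the intrusion probability is the old habit's share
# of total strength. All numbers are illustrative assumptions.

old_year_strength = 365.0   # assume roughly daily practice last year
new_year_strength = 0.0

intrusions = []
for day in range(1, 91):    # first three months of the new year
    p_intrusion = old_year_strength / (old_year_strength + new_year_strength + 1)
    intrusions.append(p_intrusion)
    new_year_strength += 1          # one correct rehearsal per day
    old_year_strength *= 0.98       # assumed decay of the unused habit

print(f"day 1:  P(misdate) = {intrusions[0]:.2f}")
print(f"day 30: P(misdate) = {intrusions[29]:.2f}")
print(f"day 90: P(misdate) = {intrusions[89]:.2f}")
```

The model yields only the qualitative shape the text describes: misdating is most likely early in the year and falls away with practice. It says nothing about which individual will slip, or on which day.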
Intentions to act and errors are inseparable. People make slips every day. Reason gives the example of forgetting to put tea in a teapot. The point is that even highly skilled behaviour is particularly prone to these slips: experts generally make slips, while novices make mistakes.
Slips and lapses are errors that result from some failure in the execution and/or storage of an action sequence, regardless of whether or not the plan which guided them was adequate to achieve its objective.
Mistakes may be defined as deficiencies or failures in the judgments and/or inferential processes involved in the selection of an objective or in the specification of the means to achieve it, irrespective of whether or not the actions directed by the decision-scheme run according to plan.
That is:
Cognitive stage    Primary error type
Planning           Mistakes (failures of expertise or a lack of expertise)
Storage            Lapses
Execution          Slips
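The definitions above can be expressed as a small decision rule. This is a minimal sketch only: the function name and the three-boolean framing are my own illustrative simplification, not Reason's formalism.

```python
def classify_error(plan_adequate: bool, stored_correctly: bool,
                   executed_as_planned: bool) -> str:
    """Classify a failed action using the slip/lapse/mistake
    distinction. The boolean framing is an illustrative
    simplification of Reason's definitions."""
    if not plan_adequate:
        # Planning failure: wrong objective or wrong means chosen,
        # even if the actions then run exactly as intended.
        return "mistake"
    if not stored_correctly:
        # Storage failure: an adequate plan is forgotten, or an
        # intended step omitted (e.g. the tea left out of the teapot).
        return "lapse"
    if not executed_as_planned:
        # Execution failure: the right plan, derailed in the doing.
        return "slip"
    return "correct performance"

# Omitting the tea: the plan was fine, a step was lost from storage.
print(classify_error(True, False, True))   # -> lapse
# Writing last year's date: plan and storage fine, habit intrudes.
print(classify_error(True, True, False))   # -> slip
```

Note that the checks are ordered: a planning failure is a mistake regardless of how faithfully the flawed plan is then stored and executed, which mirrors the "irrespective of whether the actions run according to plan" clause in the definition.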
Examples: See Norman. For more, see Reason.
Error types and error forms
Whereas error types are conceptually tied to particular cognitive stages or mechanisms, error forms are recurrent varieties of fallibility that appear in all kinds of cognitive activity. Thus they are evident in mistakes, lapses and slips alike. Error forms are so widespread that it is extremely unlikely that their occurrence is linked to the failure of any single cognitive entity. They seem instead to be rooted in universal cognitive processes, particularly the mechanisms involved in knowledge retrieval. Two such examples are the similarity and frequency biases.
Allocation of functions requires an understanding of the user.
Human error is a complex subject to consider. This is for a number of reasons: