Copyright 1998. David Gilmore, Elizabeth Churchill, & Frank Ritter

Why do we need Human Factors?

(cont. 2 of 4)

Attitudes to error

There are a number of accidents that illustrate the need for human factors investigation. As I have pointed out before, there are very different views of how we should see error. In the two examples that follow, I would like to highlight the differences between these views.

In 1966, D.R. Davis published in Ergonomics a review of the causes of trains passing red signals. Almost all of his analyses consisted of examining the state of the driver at the time of the near-accident (from Brown and Martin):

"There was one case in which the error could be attributed to a panic reaction. Case 25 was driving a diesel into a station where he was due to stop. A bird flew into the window of the cab. The driver and the fireman raised their hands to protect themselves. They all laughed at this. Lowering his hands again, the driver began to apply the brake at the usual time and place. Gradually he realised that the train wasn't slowing and was still travelling at around 60 mph. He knew he was going to overrun the station, where shunting frequently occurred. The signals remained red and the train didn't slow. Then he noticed his brake vacuum gauge showed normal (20 lb/sq. in) and then he realised that he had been applying the engine air brake and not the vacuum brake - the two controls being side by side".

Davis concludes: "This is an excellent example of inappropriate behaviour in an emergency ... that the driver was of an anxious personality is regarded as a significant factor".

Similarly, consider the Kegworth air crash, which we saw on the video: here, a plane crashed onto the M1 as a result of one engine failing. The pilots switched off the "healthy" engine, and the plane crashed when the defective engine gave out. Pilot error was the official verdict. In fact, a number of factors contributed to the crash.

Thus, very often the 'cause' of accidents is considered to be "user error". This is a judgment that carries a number of associated assumptions. There are many examples of such a view: recent surveys of problems with anaesthesia suggested that 70-75% of the problems could be attributed to human error (Chopra et al, 1992), and Boeing (1993) states that 70% of crashes were due to "crew error".

The underlying assumption here is that the human element is separate from the system in question: the problems that arise lie either in the technology or in the human. This view carries the built-in idea that it does not matter whether the operator or the machine carries out a task; whichever element is the more reliable should simply do it. This leads us towards automation, since in our technocentric view we feel machines to be more reliable than humans (but see the Airbus incident where the on-board computers failed). Such a view also leads us to enforce work practices and work rules, exile culprits, police practitioners and generally make judgments about human performance, with those judgments attributed to the last person to touch the system (as the pilot in the Kegworth video says). Allocation of function between human and device thus becomes arbitrary, when instead we should take into account the processing characteristics of the operator and optimise the relationship between the two.

However, what "user-centred" human factors asks is "Why did it happen?". In the first example, we would want to consider why the brake handles were side by side, inviting confusion. Redesigning the cab might possibly have made a difference here.

Compare this description from a review of aircraft accidents by J.M. Rolfe in the 1970s:

"A single engined aircraft was engaged in crop spraying. At the end of a run and during a low steep turn to the left, it lost height and the port wing struck the ground. The aircraft crashed, caught fire and the pilot was killed. The accident investigators reported that it seemed likely that the pilot lost control of his aircraft during a manoeuvre which left no margins for small errors of judgment. But why did the pilot lose control? The manoeuvre was without doubt a demanding one , but the pilot was also very experienced. A possible explanation arose from the operating manual prepared by the operator for the pilot to follow. The manual stated that in the turn "airspeed must be checked, the engine rpm, oil pressure, spray pressure and spray content noted. When safely level in the turn immediate reference should be made to the ground marker and if possible the spray drift noted". The aircraft involved in the accident was a high-winged monoplane and to keep the ground marker in sight during a turn it was necessary to look upwards through the transparent roof. So, if the pilot followed the instructions he had in turn to look at his instruments and then transfer his gaze upwards. This would almost certainly have involved head movements, which is in any direction other than that in which the aircraft is turning will lead to severe disorientation. Was the writer of the operations manual aware of this severe limitation of the human visual system?"

What this example shows is a shift in perspective on the human operator: away from simple 'mistakes' and 'misjudgements' and towards appreciating that human operators have certain characteristics, and that designs must be made in conjunction with a consideration of those characteristics. These characteristics are NOT limitations when we design to optimise the relationship between the user and the device. Yet we are not encouraged to consider the limitations of devices like aircraft when an accident occurs; rather, we expect the human operator to make up for all the foibles of poor system design.

Studying error

Most of the examples we discuss take the form of case studies, but error analysis also occurs through simulator studies, laboratory studies, questionnaire studies and corpus gathering (for example, the error ergonomics approach).

Case studies detailing the etiology of accidents show that disasters are seldom the product of a single monumental error. Usually they involve the concatenation of several, often quite minor, actions committed by one person or by a number of people. The final action may well have been a mistake, lapse or slip, but each individual action would normally have had negligible consequences on its own; it is the accumulation of their effects that matters. Each action compounds the problems left by its predecessor, and in retrospect the whole series seems to move inexorably toward the final conclusion.

Rarely can any mishap be blamed on one person. Research shows that errors have their roots in the backgrounds of the participants, the dynamics of the group and the environment in which the activity occurs. Failure in interpersonal communication characterises many accidents attributed to human error. For example, reports from Crew Resource Management (an approach developed in the United States which emphasises communication and co-ordination) show that:

* a distracted crew failed to complete a safety checklist and did not confirm that the aeroplane's flaps were extended, causing the plane to crash on take-off.

* a co-pilot failed to get the captain's attention about his concern that take-off thrust was not properly set, causing the aircraft to crash into a river. Why? Because the co-pilot felt he could not tell his "boss" what to do.

* a communications breakdown between the captain, the co-pilot and air traffic control about the amount of fuel in the plane led to a crash when the fuel ran out.

Ascriptions of error entail 'blame': who touched the system last? But think about blame. We know that fatigue affects cognitive performance, so can we really blame a worker who is required to stay awake for long shifts (e.g. house doctors on wards for 36 hours) when they make a decision that turns out to be an "error"? We know that design layout affects performance: consider the nature of the task, use consistency, use tactile cues, use forcing functions. If we do not use appropriate designs, can we blame users? If we do not provide adequate or correct feedback, can we blame users? We know that past experience affects current performance: can we blame operators for importing actions learnt before if we have not trained them? We know that the human operator has certain sensory characteristics: can we blame someone who cannot perceive (visually or auditorily) a signal?

All of these factors were present in some of the crashes we will talk about.

Useful concepts when considering errors and accidents

Latent errors: failures that lie dormant in a system until a later event brings out their consequences. For example, the Kemeny Commission (1979) identified an error during maintenance that resulted in the emergency feedwater system being unavailable during the Three Mile Island incident.

Another example is the Space Shuttle Challenger disaster: the decision to launch in cold weather was the initiating event that activated the consequences of the latent failure, a highly vulnerable booster rocket seal design.

The latent failure model points out that many factors contribute to incidents and disasters. Which of these many factors we focus on is a product of human processes of causal attribution. What we identify as causes depends on who we are, on who we are communicating with, on the assumed contrast cases or causal background for that exchange, and on the purposes for which knowledge of the outcome is being used.

Sharp end and blunt end: Most of what we will consider are the cognitive factors present at the sharp end, where practitioners interact directly with the system. But remember that cognition relies on information, and factors at the blunt end (the organisational and regulatory context) often affect what information is available at the sharp end.

Hindsight bias is the tendency for people to consistently exaggerate what could have been anticipated in foresight (Fischhoff, 1975). Studies have consistently shown that people tend to judge the quality of a process by its outcome (Baron & Hershey, 1988): information about the outcome biases their evaluation of the process that was followed. Decisions and actions followed by a negative outcome will be judged more harshly than if the same decisions had resulted in a neutral or positive outcome. Indeed, this effect is present even when those making the judgments have been warned about the phenomenon and advised to guard against it.

The hindsight bias leads us to "construct a map that shows only those forks in the road that we decided to take, where we see the view from one side of the fork in the road, looking back" (Lubar, 1993).

Given knowledge of the outcome, reviewers will tend to simplify the problem-solving situation that was actually faced by the practitioner. The dilemmas, the uncertainties, the tradeoffs, the demands on attention, and the double binds faced by practitioners may be missed or under-emphasised when an accident is viewed in hindsight.

Lecture 2 Continued...

Return to Contents...