Copyright 1998. David Gilmore, Elizabeth Churchill, & Frank Ritter

These lecture notes were not written as a course handout, but as a resource for lectures. Therefore, references and comments will not always be complete.

Aim: Define, describe and locate human factors practice

(part 4 of 5)

When do you need Human Factors?


We will deal with the design cycle and how Human Factors is used within the design of a single artifact in a later lecture.

In the meantime, let's consider when in the life of an artifact's production Human Factors gets applied.

Often it is when something becomes available to more than a few people. When a device is going to be used by many users who cannot be expected to be experts, the remit of the human factors expert is very different from that of designing something for use by a few experts.

Historically, innovation is the starting point for any system.

i.e. a single expert user, so the need for HF studies is not high.

e.g. computers began as highly specialised devices, used by only a few people. In those days, the users were largely programmers who were highly technologically literate and who had probably built the machines themselves. From this developed batch processing, in which the programme was embodied in punched cards fed into a reader. This was a highly error-prone activity, which led to the development of the teletype so that one could see what was being entered. As the market grew, so did the importance of usability issues.

e.g. the Wright brothers and synchromesh. No-one would claim that the Wright brothers' plane was usable, nor that pre-synchromesh cars were easy to learn to drive. In both cases there was no other affordable means of building that system. Also, in both cases, the systems were usable provided they were used by experts. This leads to three potential contributions from psychology within the design of an artifact and how it is used:

1. select the people who can use the system: through psychometric testing, job samples, qualifications and so on

2. train the people into being right for the system

3. redesign the system.

Of course, as a system gets used by more people, we need to consider how to make it more usable by training people or by redesigning.

All three areas of research are relatively modern (post WW II) and therefore, perhaps not surprisingly, a certain amount of conflict (rather than co-operation) exists between them. Nowadays, a consultant occupational psychologist (called in to explain why some aspect of an organisation was failing) would be expected to cover all three possibilities when giving advice and making recommendations. Even so, larger organisations (e.g. Civil Service, MOD, etc.) still divide up these three functions, though the psychologists in each usually know those in the others.

One needs to consider who the users will be and what the range of users is, e.g. a child's tricycle is designed on the assumption that most children are small; similarly, door handles in a primary school are designed to stop children leaving by themselves.

Also, we need to consider what users want to do, e.g. door handles may be designed with safety in mind, placed high up in a primary school -- "forcing functions".

Human Factors as a frame of mind (approaches)

The role and involvement of the human factors expert varies in design. The involvement depends on the ethos of the design setting (the relative importance of usability issues and the degree of focus on supporting the user). Clearly the complexity of the object being designed is also a factor.

Classical ergonomics. Also called "interface ergonomics". The interface referred to is the person/machine interface of controls and displays, and the principal contribution of the human factors expert is the improved design of dials and meters, control knobs and panel layout. The human factors expert's concerns can extend beyond this to the design of chairs, benches and machinery and, to a limited extent, the specification of the optimum ambient working environment.

The 'classical approach' began primarily with the design of military equipment, but now considers the design of items and workspaces in civilian contexts as well. Since this approach often takes a consultancy mode, advice is usually given in the form of principles, guidelines and standards. Such guidelines and prescriptions for design activity are limited by their lack of context-specific advice: a design problem can only be answered with the "best" solution according to the experimental results, without being able to predict either the likely consequences of deviating from that solution or how it would change under other conditions. Classical ergonomists also face organisational problems: they are relegated to advising on the final product rather than being full members of the design team, and there are often communication barriers between ergonomists and developers. Historically, this approach has run into difficulties because people see the ergonomist as having roots in the efficiency and time-and-motion studies of Taylor and the Gilbreths.

Error ergonomics. This is the study and explanation of human error in systems. The "zero defects" approach assumes that human error is the result of inadequate motivation, cf. the examples of accidents and error attribution. This approach tends to result in campaigns of safety propaganda, e.g. on oil rigs. These drives attempt to raise awareness and incentives for the workers. However, even in WW I, where there was a highly motivated work force, fatigue was a major problem.

Similarly, the "error data store" approach accepts that human error is inevitable. This approach produces data banks of error probabilities for a variety of tasks executed under various conditions. Proposed solutions take the form of ways of designing systems so as to minimise the occurrence and effects of errors. It is therefore necessary to predict the incidence and consequences of human errors in any given situation, as in the sketch below.
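As an illustration only, here is a minimal sketch of how such a data bank might be used: hypothetical per-step human error probabilities are combined into an estimate of overall task failure, assuming the steps fail independently. The step names and figures are invented for the example, not drawn from any real error data store.

```python
# A minimal sketch of the "error data store" idea: combine hypothetical
# per-step human error probabilities (HEPs) into an overall task failure
# estimate, assuming the steps fail independently. All names and numbers
# below are invented for illustration only.

def task_failure_probability(step_heps):
    """Probability that at least one step of the task is performed in error."""
    p_all_ok = 1.0
    for hep in step_heps:
        p_all_ok *= (1.0 - hep)   # probability this step is performed correctly
    return 1.0 - p_all_ok

# Hypothetical HEPs, e.g. as might be looked up for "read dial", "set valve", ...
steps = {"read dial": 0.003, "set valve": 0.01, "log reading": 0.005}

if __name__ == "__main__":
    p_fail = task_failure_probability(steps.values())
    print(f"Estimated probability of at least one error: {p_fail:.4f}")
```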

Systems ergonomics. This approach developed in the USA in the 1950s and takes a more holistic view of the user-system dyad. The user and the system are seen as a single interacting system that is placed within a work context. Within this approach, system design involves parallel development of hardware and personnel issues, with training and selection issues considered. The ergonomist acts as a full member of the design team, working throughout the design cycle and involved in both early and late design guidance. Therefore, in addition to the anthropometric, behavioural and cognitive considerations of the finished product itself, the human factors expert is involved in: (a) determining the required task functions (by activity and task analysis in conjunction with consideration of the task requirements) and allocating the functions between the user and the system, (b) the design of personnel subsystems, and (c) the design of job descriptions and job support materials (e.g. manuals and training schemes).

The approach differs from user-centred design in that the designers and human factors experts still view the user as just one part of the system, whereas user-centred design focuses more on the user's needs and perspective than on those of the system, tasks and activities per se. In computer system development, for example, a systems approach would consider the task from a "logical", "syntactic" perspective and then the computer system implementation issues, with a view to allocating function between the user and the computer system. A user-centred approach would consider the processing capabilities of the human user and analyse tasks from the perspective of the user.

User centred design (cf. Norman, 1988) involves focusing on the user's needs, carrying out an activity/task analysis as well as a general requirements analysis, carrying out early testing and evaluation, and designing iteratively. As in the systems approach, this has a broader focus than the other approaches, but here there is a greater focus on the user.

Human Centred design

(a) introduction of a new system should focus on the organisational changes, user needs and demands together with the technological requirements.

(b) the boundaries between what is defined as a "technical" issue and what is defined as an "organisational" one are not fixed and need to be negotiated.

(c) new applications of technology should be seen as the development of permanent support systems and not one-off products whose development finishes with implementation (i.e. the way in which technological change alters the organisation needs to be considered).

(d) humans should be seen as the most important facets of an information system and should be 'designed in'.

(e) the people context of information systems must be studied and understood for it is clear that dimensions such as gender, race, class, power affect people's behaviour with respect to technologies.

(f) design by doing, user participative design

Socio-technical systems design. See Clegg.

Conclude: a lot of work and many perspectives - in this course, we will take a user centred approach, concentrating on user centred analysis and evaluation of tasks and artifacts, and user-centred design suggestions and practices.

Concepts in Human Factors (with special emphasis on interactive systems)

The ultimate goal of quality must be that of fitness for purpose, although the criteria for determining whether this is achieved must be problem dependent, domain dependent and context dependent.

Functionality, usability and learnability.

Often functionality is the first thing to be considered, with usability issues sometimes tacked on at the end. This can lead to poorly designed artifacts which are hard to use.

Usability issues need to be considered with respect to the task that is being carried out and the human operator's capabilities. This kind of analysis leads us to reassess what is meant by "human error" and why accidents occur.

Considering the quality of a software design: there are many factors, especially reliability, efficiency, maintainability, usability and learnability, and also testability, portability and reusability.

Functionality. What something does.

Efficiency of a system: can be measured through the use of resources such as processor time, memory, network access, system facilities, disk space and so on. It is often the factor most cited by programmers, and for good reason, because it ensures that systems work fast and don't frustrate users. But don't be mistaken: the concern with efficiency is more often about the ego of the programmer than about the poor end users.

It is a relative concept in that one system can be evaluated as more efficient than another in terms of some parameter such as processor use, but there is no absolute scale on which to specify an optimum efficiency as soon as multiple criteria are allowed, which is nearly always the case in the real world. In the early days of computers, when programs were small and computer time was relatively expensive, efficiency of computer time was considered to be of paramount importance, and it probably was. With the better machines of today, the designer needs to consider the effects of choices upon all resources. The HF professional most often defends the user's resources, including their motivation.

Maintainability. As systems get larger and more costly, the need for a long time in service increases in parallel. To help achieve this, designs must allow for future modification. Designers need to provide future maintainers with mental models of the system so they can gain a clear understanding (Littman et al., 1987). Development of modular designs....BUT....

Reliability is concerned with the dynamic properties of the eventual system and involves the designer in making predictions about behavioural issues. We need to know if the system is going to be complete (in the sense that it will be able to handle all combinations of events and system states), consistent (in that its behaviour will be as expected and repeatable, regardless of the overall system loading at any time) and robust when faced with component failure or some similar conflict. For example, if the printer used for logging data in a chemical process-control plant fails for some reason, this should not be able to 'hang' the whole system but should be handled according to the philosophy summed up in the term graceful degradation, as sketched below.
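As a rough illustration of graceful degradation, the following sketch assumes a hypothetical control loop in which logging to a printer is a secondary function; the printer failure is simulated and all names are invented for the example.

```python
# A minimal sketch of graceful degradation, assuming a hypothetical
# process-control loop in which logging to a printer is a secondary
# function. If the printer fails, the loop keeps controlling the plant
# and simply notes that logging is unavailable.

class PrinterError(Exception):
    """Raised by the (hypothetical) printer driver when printing fails."""

def log_to_printer(reading):
    raise PrinterError("printer offline")   # simulate a failed peripheral

def control_step(reading):
    # The safety-relevant control action happens regardless of logging.
    print(f"controlling plant with reading {reading}")

def control_loop(readings):
    for reading in readings:
        control_step(reading)
        try:
            log_to_printer(reading)
        except PrinterError as err:
            # Degrade gracefully: report the problem, carry on controlling.
            print(f"warning: logging skipped ({err})")

if __name__ == "__main__":
    control_loop([21.3, 21.7, 22.1])
```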

As systems get larger and more complex, the problems of ensuring reliability also escalate. For safety-critical systems where this factor is paramount, various techniques have been developed to help overcome limitations in design and implementation techniques. For example, in a fly-by-wire aircraft, in which the control surfaces are managed by computer links rather than by direct hydraulic controls, the implementation will be by means of multiple computers, each programmed by a separate development team and tested independently. Any operational request to the control system is then processed in parallel by all the computers, and the requested operation is performed only if they concur.
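Here is a minimal sketch of that redundancy-with-voting idea, using trivial stand-in channel computations rather than real flight-control code; the agreement threshold is a parameter of the illustration, not of any particular aircraft system.

```python
# A minimal sketch of redundant channels with voting: several
# independently developed channels compute the same control output and
# the request is carried out only if enough of them concur. The channel
# functions here are trivial stand-ins, invented for illustration.

from collections import Counter

def channel_a(request):
    return request * 2       # stand-in computation

def channel_b(request):
    return request * 2

def channel_c(request):
    return request * 2

def voted_output(request, channels, required_agreement=2):
    """Return the output agreed by enough channels, else refuse to act."""
    outputs = [ch(request) for ch in channels]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes >= required_agreement:
        return value
    raise RuntimeError("channels disagree: request rejected")

if __name__ == "__main__":
    print(voted_output(5, [channel_a, channel_b, channel_c]))
```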

Usability: but what is it?

Ravden et al. specify the following usability criteria:

Eason: Usability is not determined by just one or two constituents, but is influenced by a number of factors. These factors do not simply and directly affect usability, but interact with one another in sometimes complex ways. Eason (1984) has suggested a series of concepts that explain what these variables may be: system function-task match, task characteristics and user characteristics are independent variables which lead to the dependent variables of user reaction and scope of use (restricted, partial, distant, constant).

Eason offers the following definition of usability: the "major indicator of usability is whether a system or facility is used".

However, this is patently not the case, as many devices which are used are hard to use. A more operational definition of usability is that of the ISO (International Standards Organisation): "the usability of a product is the degree to which specific users can achieve specific goals within a particular environment; effectively, efficiently, comfortably and in an acceptable manner."
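To show how a definition like this can be turned into measurements, here is a small sketch that computes effectiveness (completion rate), efficiency (time on completed tasks) and a subjective comfort rating from invented session data; what counts as acceptable remains problem, domain and context dependent.

```python
# A minimal sketch of making an ISO-style usability definition operational:
# effectiveness as task completion rate, efficiency as mean time per
# completed task, and comfort/acceptability as a subjective rating.
# The session data below is invented for illustration.

sessions = [
    {"completed": True,  "seconds": 95,  "rating": 4},   # rating on a 1-5 scale
    {"completed": True,  "seconds": 120, "rating": 3},
    {"completed": False, "seconds": 240, "rating": 2},
]

def usability_measures(sessions):
    completed = [s for s in sessions if s["completed"]]
    effectiveness = len(completed) / len(sessions)
    efficiency = sum(s["seconds"] for s in completed) / len(completed)
    satisfaction = sum(s["rating"] for s in sessions) / len(sessions)
    return effectiveness, efficiency, satisfaction

if __name__ == "__main__":
    eff, secs, rating = usability_measures(sessions)
    print(f"effectiveness {eff:.0%}, mean time {secs:.0f} s, rating {rating:.1f}/5")
```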

The ETSI (European Telecommunications Standards Institute) considers two kinds of usability dimensions, those linked to performance and those related to attitude, where performance is measured objectively and attitude represents subjective dimensions (ETSI, 1991).

Shackel (1991) maintains the distinction between performance and attitudinal dimensions, but defines four distinguishable and quantifiable dimensions which may sensibly assume varying degrees of importance in different systems: effectiveness, learnability, flexibility and attitude. These dimensions are not mutually exclusive, in the sense that measures of, for example, effectiveness can at the same time also give some indication of system learnability. However, they provide a good starting point.

Booth (1989) lists usefulness, effectiveness (ease of use), learnability and attitude (likeability). A useful system is one that helps users to achieve their goals.

Most important concepts: FUNCTIONALITY, USABILITY, LEARNABILITY
