These lecture notes were not written as a course handout, but as a resource for lectures. Therefore, references and comments will not always be complete.
Task analysis covers a multitude of techniques for representing what people do when they perform a particular task.
There are many different techniques, focussing on different aspects of tasks:
The goals of task analysis vary too, covering such things as:
However task is a very nebulous term and needs to be set in the context of many other things that people do, such as:
Most TA techniques ignore the work and activities (remember activity theory?) part of people's lives and focus on tasks, operations and actions. But few would argue that this was a good thing -- all TA needs to be done with an understanding of the general context in relation to work and activities.
For example, consider coffee making. As an isolated task it may appear very straightforward, but when one considers the wider activity context (the spatial layout, who shares the physical space, what other tasks occur concurrently, and so on) the analysis can become much more complex. The activity context for coffee making may be making breakfast, getting the kids ready for school, and not being late for work, or it may be a quite different one of welcoming important business visitors and making a good impression.
It is especially common in work places to discover that the activity context produces changes in the way people do tasks from how they would ideally do them, or from the way they are supposed to do them.
Another important distinction is between the focus of different analyses. We can study
Many would argue that a good task analysis would include analyses of each of these.
Time studies (Taylor) and motion studies (Gilbreth) came first (1910s-1920s) -- studying patterns of behaviour as tasks were executed. (Back to lecture 1 on the historic roots of HF.)
Given the nature of psychological knowledge at the time, the absence of concern with users' cognition is not surprising: psychology was primarily focussed upon observable behaviour. Most of the systems initially studied were simple enough that observed behaviour alone was an adequate basis for understanding how to improve them.
But time and motion looked purely at sequence of behaviour. This level of analysis breaks down when people juggle multiple tasks or are interrupted. Furthermore, time and motion studies do not address the fact that even sequences of behaviour have underlying structures.
This led to a focus on the goals and subgoals of behaviour (hierarchical task analysis) at the expense of sequencing information. This is still the dominant form of task analysis.
More recently people have been trying to move beyond goals towards understanding "activities" in which sequencing, multiple concurrent tasks, interruptions and other aspects are all taken into account -- but no formal methods as yet.
Task analysis has two key roles in human factors
Furthermore, we can take two types of task analysis and look at their relationship -- for example, is the relationship between goals/subgoals and the chronology of actions simple or complex? We can assume that where there is a complex relationship between goal structure and action structure, an interface may be hard to use.
The major problems of task analysis are
Much different terminology is used, and much of it in ways that seem designed to confuse the novice.
For example, goal and task are used interchangeably by some and to mean importantly different things by others. Here's a glossary that you can use:
These distinctions are in part influenced by experience, since expertise can change the way we perceive the changes we can effect in the world (i.e. our goals). Expertise can also make tasks become simple actions. So, even a Goal-Task-Action analysis would need to be performed with other assumptions, such as the skill-level of users in mind.
Although TA assumes that tasks may be performable in different ways and tries to represent these choices, the more complex mapping between goals and tasks is never clearly addressed (users may have multiple, conflicting goals, and may not be certain whether a given task will help them achieve their goal).
Involves decomposing tasks into subtasks and representing the order and structure through structure diagrams.
Tasks / subtasks are represented down the page and sequencing is represented across from left to right.
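A hierarchical decomposition of this kind can be sketched as a nested data structure. The following Python sketch is purely illustrative (the coffee-making decomposition and all names are invented, not taken from any published HTA); it prints subtasks indented down the page and numbered in left-to-right execution order:

```python
# A minimal, illustrative HTA node: (task name, [subtask nodes]).
# The decomposition below is invented for illustration only.
hta = ("make coffee", [
    ("boil water", [
        ("fill kettle", []),
        ("switch kettle on", []),
    ]),
    ("put coffee in cup", []),
    ("pour water into cup", []),
])

def hta_lines(node, prefix="", depth=0):
    """Render an HTA as numbered, indented lines (depth = subtask level)."""
    name, subtasks = node
    label = prefix if prefix else "0"
    lines = ["  " * depth + f"{label}. {name}"]
    for i, sub in enumerate(subtasks, start=1):
        child = f"{prefix}.{i}" if prefix else str(i)
        lines += hta_lines(sub, child, depth + 1)
    return lines

print("\n".join(hta_lines(hta)))
```

A structure diagram adds plans (e.g. "do 1, then 2, then 3") alongside the numbering; the flat listing above captures only the decomposition and ordering.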
Performing HTA is a time-consuming process with many choices. It is fussy, and takes effort when you could be having fun coding or chatting to users.
In essence it is a representational device rather than a technique.
Read p. 416 of Preece for more details of the technique.
Cognitive task analysis focuses on what the user knows and frequently is concerned with the quality of the mapping between a representation of the user's knowledge and that required by the system.
There are few established techniques or representations, though many claim to pursue this approach.
Sometimes one wants to analyse what the human-computer system as a whole must do, not just what the user must do, or just what the computer must do.
This can be called a logical task analysis (though it is also called conceptual design).
This can be especially useful if one is designing a new way to do a familiar task, since the logical description should apply equally to the before and after systems.
This can enable one to look at skill and work requirements. A logical representation can be overlaid with indicators of which bits the person does and which bits the computer does -- a comparison of the before and after indicates changes in work practices as a result of the technology.
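One way to make such an overlay concrete is to tag each step of the logical task description with who performs it in the before and after systems; the steps whose allocation changes indicate the change in work practices. A small Python sketch (the task steps and allocations are invented for illustration):

```python
# Steps of a logical task description, each tagged with who performs it
# in the old (before) and new (after) system. All steps are illustrative.
steps = [
    ("capture order details", "person", "person"),
    ("check stock level",     "person", "computer"),
    ("calculate price",       "person", "computer"),
    ("confirm with customer", "person", "person"),
]

def reallocated(steps):
    """Return the steps whose allocation changes with the new technology."""
    return [name for name, before, after in steps if before != after]

for name, before, after in steps:
    marker = "*" if before != after else " "
    print(f"{marker} {name:22s} {before:8s} -> {after}")
```

Because the logical description is the same before and after, the comparison isolates what the technology changes about the work.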
Goals, Operators, Methods and Selection rules (Card, Moran & Newell, 1980; 1983)
There is a whole family of GOMS techniques (NGOMSL, CPM-GOMS, KLMs) -- most are far from easy to apply and use. Whereas HTA represents task structure and sequence, GOMS represents cognitive structure and sequence.
Cognitive structure is represented in the concept of starting from Goals, not tasks and also from the explicit use of selection rules to indicate how methods are chosen for the goals.
Furthermore, the direct mapping from goal to method avoids some of the issues in the goal-task mapping mentioned earlier.
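The role of selection rules can be sketched as a mapping from a goal and its context to one of several methods. The Python fragment below is a toy illustration (the goal, methods, and rule condition are all invented; GOMS prescribes no particular notation):

```python
# A toy GOMS fragment: two methods for the goal "delete word",
# chosen by an explicit selection rule over the current context.
methods = {
    "delete-via-mouse":    ["point at word", "double-click", "press DELETE"],
    "delete-via-keyboard": ["move cursor to word", "press Ctrl+DELETE"],
}

def select_method(goal, context):
    """Selection rules: pick a method for the goal given the context."""
    if goal == "delete word":
        # Rule: if the hand is already on the mouse, use the mouse method.
        if context.get("hand_on_mouse"):
            return "delete-via-mouse"
        return "delete-via-keyboard"
    raise ValueError(f"no method known for goal: {goal}")

print(select_method("delete word", {"hand_on_mouse": True}))
print(methods[select_method("delete word", {"hand_on_mouse": False})])
```

Note that the rule consults only the current task context -- exactly the restriction criticised below.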
Commentary
In essence, a GOMS model is no different from any other hierarchical task analysis examining Goal/Subgoal analysis. The main difference is that it has formalised its components, and it only claims to be able to describe expert, error-free behaviour. Thus, for example, GOMS analyses presume that methods are known beforehand, and not worked out during performance.
Since there is no such thing as expert, error-free performance, many people have questioned the utility of such analyses. However, even if flawed, they are better than no task analysis at all.
One problem is that there is no clear specification of what can be used in selection rules -- the implication is that it should be the current task context, but real behaviour undoubtedly allows selection based on, for example, previous selections.
Keystroke-level modelling
When operators are analysed down to elemental perceptual, motor and cognitive actions (e.g. keystrokes), then by classifying keystrokes it is possible to make time predictions for expert, error-free performance.
The execution of a unit-task requires physical operators of (basically) 4 types: K (keystroking), P (pointing with a device such as a mouse), H (homing the hands between devices), and D (drawing line segments).
To these should be added some number of mental operators (M, 1.35 s each) and, if it limits the user's task performance, some estimate (R) of the system's response time.
The number of mental operators comes from a set of rules -- basically one between every pair of operators, except those linked through knowledge or skill (e.g. the keystrokes within a single word, or a point followed by a mouse click).
Where there are selection rules governing the choice of methods then it is up to the analyst to decide whether to go for best or worst case time predictions.
Example
Method
Time Predictions
Texecute = [24tK + 8tP + 5tH] + 7tM = [24(0.15) + 8(1.03) + 5(0.57)] + 7(1.35) = 14.69 + 9.45 = 24.1 s
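The prediction above can be reproduced mechanically. A small Python sketch using the operator times from the worked example (0.15 s per keystroke for this typist, 1.03 s per point, 0.57 s per homing, 1.35 s per mental operator):

```python
# KLM time prediction for expert, error-free execution of one method.
# Operator unit times (seconds) as used in the worked example above.
OPERATOR_TIMES = {"K": 0.15, "P": 1.03, "H": 0.57, "M": 1.35}

def klm_predict(counts):
    """Sum operator counts weighted by their unit times."""
    return sum(n * OPERATOR_TIMES[op] for op, n in counts.items())

t = klm_predict({"K": 24, "P": 8, "H": 5, "M": 7})
print(f"Texecute = {t:.1f} s")  # prints "Texecute = 24.1 s"
```

Changing the counts (or the per-operator times, e.g. for a slower typist) immediately yields a new prediction, which is why KLMs are attractive for comparing design alternatives.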
Commentary
Although closely related to GOMS, note how keystroke-level modelling is really closer to time-and-motion (chronological) analysis than goal/subgoal analysis. It assumes a concentration on one task at a time, no interleaving of goals, no interruptions, a single method, and so on.
Indeed, KLMs can be achieved without worrying about goals and selection rules.
Quite a considerable effort has gone into trying to make them more usable, particularly by building computer tools to apply them (Nichols & Ritter, 1995; Beard, Smith, & Denelsbeck, 1996). The tools themselves, however, may be like the cobbler's children: poorly shod.
One of these potential tools is Soar, an implementation of a problem-solving architecture. You write the Goals, Operators, Methods and Selection rules in a production-rule syntax, and the Soar architecture runs the model, learns skills and can provide timing predictions.
Within their limitations, these approaches attempt to support usability testing without real users. They do not (yet) readily offer up information for formative evaluations: they will not spot systems that encourage errors, and they will not evaluate visibility/feedback differences between systems (users are presumed to have all the required knowledge). But they are the way forward.
Dismal, example applications and tool for applying the KLM model
Reference to Soar Frequently Asked Questions (FAQ) list
Kieras's notes on the KLM in the library.
Richard Young works in this area as well, and has an interesting web site.
Beard, D. V., Smith, D. K., & Denelsbeck, K. M. (1996). Quick and dirty GOMS: A case study of computed tomography interpretation. Human-Computer Interaction, 11, 157-180.
Card, S. K., Moran, T. P., & Newell, A. (1980). The keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23(7), 396-410.
Card, S., Moran, T., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance. Human-Computer Interaction, 8(3), 237-309.
John, B. E., Vera, A. H., & Newell, A. (1994). Towards real-time GOMS: A model of expert behavior in a highly interactive task. Behaviour & Information Technology, 13, 255-267.
John, B. E., & Kieras, D. E. (1996). Using GOMS for user interface design and evaluation: Which technique? ACM Transactions on Computer-Human Interaction, 3(4), 287-319.
Kieras, D. E. (1988). Towards a practical GOMS model methodology for user interface design. In M. Helander (Ed.), Handbook of Human-Computer Interaction. North-Holland: Elsevier Science.
Nichols, S., & Ritter, F. E. (1995). A theoretically motivated tool for automatically generating command aliases. In CHI '95, Human Factors in Computer Systems. 393-400. New York, NY: ACM.
Olson, J. R., & Olson, G. M. (1990). The growth of cognitive modeling in human-computer interaction since GOMS. Human-Computer Interaction, 5(2-3), 221-265.