These lecture notes were not written as a course handout, but as a resource for lectures. Therefore, references and comments will not always be complete.
So far I have presented a number of different techniques for assisting in human factors analyses:
Now I want to present a fourth possibility, known as "cognitive dimensions". The essential idea is that one way to understand something is to reduce it to a few fundamental dimensions. These dimensions can then provide a common language for discussing usability problems, as well as a means of comparing aspects of interface and system design.
The goal of Cognitive Dimensions is to provide a small set of labelled dimensions that describe critical ways in which interfaces / systems / environments can vary from the perspective of usability.
In this lecture I'll outline some of the dimensions proposed, with examples of systems. However, the complete set of dimensions does not yet exist -- this is still a research issue. Current effort runs in two directions: finding ways of formalising, and perhaps measuring, the dimensions proposed so far; and exploring the completeness and redundancy of the current set.
Hidden Dependencies relates to the number and direction of relationships that can be seen at any one time between objects. For example, spreadsheets show you formulae in one direction only; variable declarations in Pascal show you which variable type a structure is made from, but not the other way around; Microsoft Word Style Sheets show you the parent of a style but not the children.
In all three examples there are two invisible types of dependency -- ancestors and children; only the parents are visible. Finding ancestors means following a chain of parents; finding children means checking every entity for its parent (which also creates a high memory and work load).
The implication of considering 'hidden dependencies' is that all dependencies that may be of relevance to the user's tasks should be represented -- or tools should be provided that enable them to be represented.
For example, spreadsheets would be easier to use (for certain tasks anyway) if they could show forward relationships (e.g. this cell's value is used in all these other places). Recognising this fact (the need for visibility) is quite separate from designing a solution to the problem -- one option would be to use colour coding (e.g. children cells could be shown in the same colour; ancestral cells could be shown in ever paler shades of the same colour).
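As a minimal sketch of the kind of tool this suggests, the following Python fragment inverts the visible parent relation of a toy spreadsheet to recover the hidden children relation (which cells use this cell's value). The cell names, formulas and the reference-matching pattern are invented for illustration and do not correspond to any particular spreadsheet's format:

    import re
    from collections import defaultdict

    # A toy spreadsheet fragment: each cell maps to its formula (or a constant).
    cells = {
        "A1": "10",
        "A2": "20",
        "B1": "=A1+A2",
        "C1": "=B1*2",
        "C2": "=B1+A2",
    }

    def parents(formula):
        """Cells referenced by a formula -- the direction spreadsheets already show."""
        return re.findall(r"[A-Z]+[0-9]+", formula) if formula.startswith("=") else []

    # Invert the visible parent relation to recover the hidden one:
    # for each cell, which other cells use its value?
    children = defaultdict(list)
    for cell, formula in cells.items():
        for p in parents(formula):
            children[p].append(cell)

    print(dict(children))
    # {'A1': ['B1'], 'A2': ['B1', 'C2'], 'B1': ['C1', 'C2']}

A display built on such a map could then apply the colour coding suggested above.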
There would be two things here to be user-tested; one is the interface technique and the other is the interface function. These cannot necessarily be separated, which can make interpretation of user-testing results difficult.
Another, much richer form of hidden dependency can occur in many modern operating systems, where numerous files are generated and used. Other applications may well depend on these files, but the dependencies are not visible -- deleting files therefore becomes hazardous, and people's file stores fill up with fossils that may or may not still be in use.
A viscous system is resistant to change -- even small changes can require substantial effort.
A classic example would be a word-processed document in which the numbers assigned to the figures have been typed explicitly by hand. If you decide to introduce another figure early in the paper, then all the existing figure numbers need to be changed -- this is not too difficult for the figures themselves, but ensuring that all the references to all the figures are updated accurately can be extremely difficult. If you do this several times you may start postponing the renumbering, and eventually forget to do it at all. (Scribe and LaTeX avoid this problem by renumbering automatically, and I avoid it in Word by referring to figures by name for as long as I can.)
Viscosity is an intriguing dimension in relation to usability, since in some circumstances viscosity can be for the user's benefit -- encouraging reflective action, rather than evolutionary hacking. If it is too easy to make small changes then many small, unnecessary changes may be made. This problem, however, is relatively rare.
It is possible to start to describe different sorts of viscosity, according to the reasons for viscosity:
Repetitive viscosity -- where the small change requires lots of small, repetitive actions (e.g. many forms of global search-and-replace: Word can replace <digit><space> with <x><tab>, but not with the same digit followed by a tab; see the sketch after this list).
Knock-on viscosity -- where side-effects occur that require further small changes (e.g. adding a sentence at the beginning of a document and having to redo all the work involved in ensuring that appropriate page breaks occur).
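The Word limitation mentioned under repetitive viscosity is exactly the kind of work a slightly richer replace tool removes. A minimal sketch in Python, with invented sample text -- the backreference keeps whichever digit was matched, so a single command does what would otherwise be one manual edit per occurrence:

    import re

    text = "1 Introduction 2 Methods 3 Results"   # invented sample text

    # The backreference \1 re-inserts whichever digit was matched.
    result = re.sub(r"(\d) ", r"\1\t", text)
    print(repr(result))   # '1\tIntroduction 2\tMethods 3\tResults'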
Solutions to the problems of viscosity can involve either redesigning the system or providing support tools to manage the difficulties (where viscosity may be desirable, this may be a very good solution).
In the lecture on language we commented upon the need to infer the other person's goal and plan in successful communication. "Role-expressiveness" is the term used to reflect the extent to which a system reveals the goals of the author/designer to the reader/user.
A classic example would be a button designed such that it is not recognisable as a functional button, leading to complaints like "But how was I meant to know that it would do something if I clicked there!"
Similarly, a piece of program code with good variable names can be very role-expressive -- the goals of the programmer in each statement can be clearly apparent to the reader.
Classic problems of role-expressiveness occur where two similar-looking features achieve different functions, or where two different-looking features achieve similar effects (e.g. "Border" functions in Microsoft Word appear in the same dialogue box, reached from three different places, with different parts available to the user, and with differing effects). The effect is to leave users who fail to distinguish between the different roles (e.g. paragraph borders, cell borders) very confused, because Word appears to behave in random ways.
In this example one can see an interesting paradox, which is that role-expressiveness is partially the opposite of consistency. Designing a system to be role-expressive will usually mean using a richer vocabulary, with less (apparent) uniformity. The resolution of this paradox can be found in a clear and effective analysis of the users' tasks.
Systems and environments vary in respect of the point at which commitment is made to certain properties. For example, early desk-top publishing programs often required that you specify the number of pages and the layout of those pages before importing the text. Many database systems still require that you plan the record structures, and the size limits on them, before entering any data or actually using the system.
These are examples of Premature Commitment. Computer programming environments often require premature commitment -- variable declarations, compilation or developing the system in sequential order, etc.
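As a small illustration, here is a sketch of premature commitment using Python's standard sqlite3 module (the table and field names are invented): the record structure has to be fixed before any data can be entered, and discovering a missing field later means restructuring rather than simply typing the extra information in.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # The record structure must be committed to before any data can be entered.
    conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
    conn.execute("INSERT INTO contacts VALUES ('Ada', '555-0100')")

    # Discovering later that an email field is needed means restructuring the
    # table, not simply typing the extra information into a record.
    conn.execute("ALTER TABLE contacts ADD COLUMN email TEXT")
    conn.execute("UPDATE contacts SET email = 'ada@example.org' WHERE name = 'Ada'")
    conn.commit()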
Premature commitment can contribute to viscosity, because the effect of the commitment is to make future changes very hard to make.
The solution to the problems of premature commitment usually lies in providing not only freedom, but also support, for performing tasks in many orders (e.g. outliners in many word-processors).
A final dimension on which systems vary is the number of hard mental operations involved. In GOMS, KLM modelling and several other task analysis methods, the mental operations are all assumed to be equivalent. In fact, psychology and human factors work shows us that some kinds of problems (and operations) are substantially harder than others.
For example, multiple negative information can be very hard to disentangle. The concepts of pointers and indirection in C and other programming languages prove very difficult -- usually one level of indirection is fine, but multiple levels are tricky. Another type of hard mental operation is the boundary problem, for example counting fenceposts.
The important thing about these kinds of mental operations is that they are easy computationally, but especially troublesome for people.
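Two tiny Python illustrations of that gap (the flag names and quantities are invented): logically equivalent conditions can differ sharply in readability, and boundary counts invite off-by-one errors even though the arithmetic is trivial.

    # Logically equivalent tests can differ sharply in how hard they are to read.
    def can_edit(is_locked, is_readonly):
        # "it is not the case that it is locked or read-only" -- two negations to untangle
        return not (is_locked or is_readonly)

    def can_edit_clearer(is_locked, is_readonly):
        # Identical by De Morgan's law, but each condition reads one at a time
        return (not is_locked) and (not is_readonly)

    assert can_edit(False, True) == can_edit_clearer(False, True)

    # Boundary ("fencepost") problems are similar: a 10-metre run of fence with
    # a post every metre needs 11 posts, not 10 -- trivial for the machine,
    # error-prone for people.
    metres, spacing = 10, 1
    posts = metres // spacing + 1   # 11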
Again, these problems can be addressed either by avoiding them in the first place (by understanding the relative difficulty of operations) or by providing tools to assist with them (e.g. graphical displays of indirection in C programs).
In the context of the dimensions I have presented (others might include visibility, which clearly overlaps with some already mentioned; consistency; and error susceptibility), cognitive dimensions can become an early, formative evaluation methodology. The designer then asks questions like the following:
They address primarily the cognitive usability problems, not the anthropometric, behavioural and social difficulties.
They do not directly address quality of feedback and support for error recovery.
They do not currently address issues relating to user acceptability (rather than usability).
A critical point about cognitive dimensions is the argument that if we had a good vocabulary of sources of poor usability, then the methodology would be staring us in the face -- the dimensions currently described may not be complete, and may not be an appropriate set, but they do provide an easily understood set of concepts and evaluation questions.