
First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this informational bottleneck.

Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. George Miller (1956) p. 96-97, (emphasis added).

This continues the discussion about the differences between CSE (meaning processing) and more classical HF (information processing) approaches to human performance summarized in the table below. This post will focus on the second line in the table - shifting emphasis from human limitations to include greater consideration of human capabilities.

Anybody who has taken an intro class in Psychology or Human Factors is familiar with Miller's famous number 7 plus or minus 2 chunks that specifies the capacity of working memory. This is one of the few numbers that we can confidently provide to system designers as a spec of the human operator that needs to be respected. However, this number has little practical value for design unless you can also specify what constitutes a 'chunk.'

Although people know Miller's number, few appreciate the important points that Miller makes in the second half of the paper about the power of 'recoding' and the implications for the functional capacity of working memory. As noted in the opening quote to this blog - people have the ability to 'stretch' memory capacity through chunking. The intro texts emphasize the "limitation," but much less attention has been paid to the recoding "capability" that allows experts to extend their functional memory capacity to deal with large amounts of information (e.g., that allows an expert chess player to recall all the pieces on a chess board based on a very short glance).
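Miller's own illustration of recoding was the renaming of binary digits as octal digits: a sequence that exceeds the span as raw bits becomes a handful of chunks once recoded. Below is a minimal sketch of that idea (the function and example string are mine, not Miller's).

    # A minimal sketch (my own, not Miller's program) of the recoding idea:
    # a string of binary digits that would swamp a ~7-chunk span is renamed
    # as octal digits, so the same information fits in far fewer chunks.

    def recode_binary_to_octal(bits, group=3):
        """Group the binary digits and rename each group with one octal symbol."""
        assert len(bits) % group == 0, "pad the input to a multiple of the group size"
        groups = [bits[i:i + group] for i in range(0, len(bits), group)]
        return [format(int(g, 2), 'o') for g in groups]

    bits = "101000100111001110"          # 18 binary digits -- roughly 18 chunks for a novice
    chunks = recode_binary_to_octal(bits)
    print(chunks)                        # ['5', '0', '4', '7', '1', '6'] -- 6 chunks
    print(len(bits), "items recoded into", len(chunks), "chunks")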

Cataloguing visual illusions has been a major thrust of research in perception. Similarly, cataloguing biases has been a major thrust of research in decision making. However, people such as James Gibson have argued that these collections of illusions do not add up to a satisfying theory of how perception works to guide action (e.g., in the control of locomotion). In a similar vein, people such as Gerd Gigerenzer have argued that collections of biases (the dark side of heuristics) do not add up to a satisfying theory of decision making in everyday life and work. One reason is that everyday life often presents missing or ambiguous data and incommensurate variables that make it difficult or impossible to apply more normative, algorithmic approaches.

One result of Tversky and Kahneman's focus on decision errors is that the term 'heuristic' is often treated as if it were synonymous with 'bias.' Thus, heuristics illustrate the 'weakness' of human cognition - the bounds of rationality. However, Herbert Simon's early work in artificial intelligence treated heuristics as the signature of human intelligence. A computer program was only considered intelligent if it took advantage of heuristics that creatively used problem constraints to find shortcuts to solutions - as opposed to applying mathematical algorithms that solved problems by mechanical brute force.
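To make the contrast concrete, consider a small traveling-salesman problem (a hypothetical example of my own, not one of Simon's programs): brute force enumerates every tour, while a nearest-neighbor heuristic exploits the constraint that good tours tend to link nearby cities to reach a good answer cheaply.

    # A hypothetical illustration of the contrast (not Simon's actual programs):
    # brute force examines every tour of a small traveling-salesman problem,
    # while a nearest-neighbor heuristic exploits a problem constraint (good
    # tours tend to link nearby cities) to reach a good answer cheaply.

    from itertools import permutations
    from math import dist

    cities = [(0, 0), (2, 1), (5, 0), (6, 4), (1, 5)]

    def tour_length(order):
        order = list(order)
        return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:] + order[:1]))

    # Brute force: guaranteed optimal, but the cost grows factorially with problem size.
    best = min(permutations(range(len(cities))), key=tour_length)

    # Heuristic: start anywhere and always visit the nearest unvisited city.
    route, unvisited = [0], set(range(1, len(cities)))
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[route[-1]], cities[c]))
        route.append(nxt)
        unvisited.remove(nxt)

    print("brute force tour length:", round(tour_length(best), 2))
    print("heuristic tour length:  ", round(tour_length(route), 2))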

This emphasis on illusions and biases tends to support the MYTH that humans are the weak link in any sociotechnical system and it leads many to seek ways to replace the human with supposedly more reliable 'automated' systems. For example, recent initiatives to introduce autonomous cars often cite 'reducing human errors and making the system safer' as a motivation for pursuing automatic control solutions.

Thus, the general concept of rationality tends to be idealized around the things that computers do well (use precise data to solve complex algorithms and logical puzzles) and it tends to underplay those aspects of rationality where humans excel (detecting patterns and deviations and creatively adapting to surprise/novelty/ambiguity).

CSE recognizes the importance of respecting the bounds of rationality for humans when designing systems - but also appreciates the limitations of automation (and the fact that when the automation fails - it will typically fall to the human to fill the gap). Further, CSE starts with the premise that 'human errors' are best understood in relation to human abilities. On one hand, a catalogue of human errors will never add up to a satisfying understanding of human thinking. On the other hand, a deeper understanding may be possible if the 'errors' are seen as 'bounds' on abilities.  In other words, CSE assumes that a theory of human rationality must start with a consideration of how thinking 'works,' in order to give coherence to the collection of errors that emerge at the margins.

The implication is that CSE tends to be framed in terms of designing to maximize human engagement and to fully leverage human capabilities, as opposed to more classical human factors that tends to emphasize the need to protect systems against human errors and limitations. This does not need to be framed as human versus machine, rather it should be framed in terms of human-machine collaboration. The ultimate design goal is to leverage the strengths of both humans and technologies to create sociotechnical systems that extend the bounds of rationality beyond the range of either of the components.

A rather detailed account of the nineteenth-century history of the steam engine with governor may help the reader to understand both the circuits and the blindness of the inventors. Some sort of governor was added to the early steam engine, but the engineers ran into difficulties. They came to Clerk Maxwell with the complaint that they could not draw a blueprint for an engine with a governor. They had no theoretical base from which to predict how the machine that they had drawn would behave when built and running.

There were several possible sorts of behavior: Some machines went into runaway, exponentially maximizing their speed until they broke or slowing down until they stopped. Others oscillated and seemed unable to settle to any mean. Others - still worse - embarked on sequences of behavior in which the amplitude of their oscillation would itself oscillate or would become greater and greater.

Maxwell examined the problem. He wrote out formal equations for relations between the variables at each successive step around the circuit. He found, as the engineers had found, that combining this set of equations would not solve the problem. Finally, he found that the engineers were at fault in not considering time. Every given system embodied relations to time, that is, was characterized by time constants determined by the given whole. These constants were not determined by the equations of relationship between successive parts but were emergent properties of the system.

... a subtle change has occurred in the subject of discourse .... It is a difference between talking in a language which a physicist might use to describe how one variable acts upon another and talking in another language about the circuit as a whole which reduces or increases difference. When we say that the system exhibits "steady state" (i.e., that in spite of variation, it retains a median value), we are talking about the circuit as a whole, not about the variations within it. Similarly the question which the engineers brought up to Clerk Maxwell was about the circuit as a whole: How can we plan it to achieve a steady state? They expected the answer to be in terms of relations between the individual variables. What was needed and supplied by Maxwell was an answer in terms of time constants of the total circuit. This was the bridge between the two levels of discourse. Gregory Bateson (2002), pp. 99-101.
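Maxwell's point can be made concrete with a toy simulation (my own construction, not Maxwell's equations): the same governor relation yields a steady state, a sustained oscillation, or a runaway depending on the gain and lag of the loop as a whole.

    # A toy simulation (my own construction, not Maxwell's analysis) of a governed
    # engine: speed is corrected toward a setpoint, but the correction acts on a
    # measurement that lags by 'delay' steps. Whether the loop settles, oscillates,
    # or runs away depends on gain and lag together -- properties of the circuit as
    # a whole rather than of any single component.

    def run_governor(gain, delay, steps=60, setpoint=100.0):
        speeds = [0.0] * (delay + 1)               # initial history of measured speed
        for _ in range(steps):
            error = setpoint - speeds[-1 - delay]  # the governor sees a delayed measurement
            speeds.append(speeds[-1] + gain * error)
        return [round(s, 1) for s in speeds[-6:]]

    print("settles:    ", run_governor(gain=0.2, delay=1))
    print("oscillates: ", run_governor(gain=1.0, delay=1))
    print("runs away:  ", run_governor(gain=1.2, delay=2))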

This is a continuation of the previous posting and the discussion of what is unique to a CSE approach relative to more traditional human factors. This post will address the first line in the table below that is repeated from the prior post.

Norbert Wiener's (1948) classic introducing the Cybernetic Hypothesis was subtitled "Control and Communication in the Animal and the Machine." The ideas in this book about control systems (machines that were designed to achieve and maintain a goal state) and communication (the idea that information can be quantified) had a significant impact on the framing of research in psychology. It helped shift the focus from behavior to cognition (e.g., Miller, Galanter, & Pribram, 1960).

However, though psychologists began to include feedback in their images of cognitive systems, the early program of research tended to be dominated by the image of an open-loop communication system, and the research program tended to focus on identifying stimulus-response associations or transfer functions (e.g., bandwidth) for each component in a series of discrete information processing stages. A major thrust of this research program was to identify the limitations of each subsystem in terms of storage capacity (the 7 ± 2 chunk capacity of working memory) and information processing rates (e.g., the Hick-Hyman Law, Fitts' Law).
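For reference, the two rate laws mentioned above take simple forms; the sketch below uses placeholder coefficients for illustration, not normative values.

    # Standard textbook forms of the two rate laws mentioned above. The intercepts
    # and slopes below are placeholders for illustration, not normative data.

    from math import log2

    def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
        """Choice reaction time: RT = a + b * log2(N) for N equally likely alternatives."""
        return a + b * log2(n_alternatives)

    def fitts_mt(distance, width, a=0.1, b=0.1):
        """Movement time: MT = a + b * log2(2D / W)."""
        return a + b * log2(2 * distance / width)

    print(round(hick_hyman_rt(8), 3))                    # reaction time for 8 choices
    print(round(fitts_mt(distance=200, width=20), 3))    # 20 mm target, 200 mm away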

Thus, the Cybernetic Hypothesis inspired psychology to consider "intentions" and other internal aspects associated with thinking. However, it did not free psychology from using the language of the physicist in describing the causal interaction between one variable and another, rather than thinking in terms of properties of the circuit as a whole (i.e., appreciating the emergent properties that arise from the coupling of perception and action). The language of information processing psychology was framed in terms of a dyadic semiotic system for processing symbols, as illustrated below.

In contrast, CSE was framed from the start in the language of control theory. This reflected an interest in the role of humans in closing the loop as pilots of aircraft and supervisors of energy production processes. From a control theoretic perspective it was natural to frame the problems of meaning processing as a triadic semiotic system, where the function of cognition was to achieve stable equilibrium with a problem ecology. Note that the triadic semiotic model emerged as a result of the work of functional psychologists (e.g., James & Dewey) and pragmatic philosophers (Peirce), who were most interested in 'mind' as a means for adapting to the pragmatic demands of everyday living.  Dewey's (1896) classic paper on the reflex arc examines the implications of Maxwell's insights (described in the opening quote from Bateson) for psychology:

The discussion up to this point may be summarized by saying that the reflex arc idea, as commonly employed, is defective in that it assumes sensory stimulus and motor response as distinct psychical existences, while in reality they are always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination; and (secondly) in assuming that the quale of experience which precedes the 'motor' phase and that which succeeds it are two different states, instead of the last being always the first reconstituted, the motor phase coming in only for the sake of such mediation. The result is that the reflex arc idea leaves us with a disjointed psychology, whether viewed from the standpoint of development in the individual or in the race, or from that of the analysis of the mature consciousness. As to the former, in its failure to see that the arc of which it talks is virtually a circuit, a continual reconstitution, it breaks continuity and leaves us nothing but a series of jerks, the origin of each jerk to be sought outside the process of experience itself, in either an external pressure of 'environment,' or else in an unaccountable spontaneous variation from within the 'soul' or the 'organism.'  As to the latter, failing to see unity of activity, no matter how much it may prate of unity, it still leaves us with sensation or peripheral stimulus; idea, or central process (the equivalent of attention); and motor response, or act, as three disconnected existences, having to be somehow adjusted to each other, whether through the intervention of an extra experimental soul, or by mechanical push and pull.

Many cognitive scientists and many human factors engineers continue to speak in the language associated with causal, stimulus-response interactions (i.e., jerks) without an appreciation for the larger system in which perception and action are coupled. They are still hoping to concatenate these isolated pieces into a more complete picture of cognition. In contrast, CSE starts with a view of the whole - of the coupling of perception and action through an ecology - as a necessary context from which to appreciate variations at more elemental levels.

In the early 1960s, we realized from analyses of industrial accidents the need for an integrated approach to the design of human-machine systems. However, we very rapidly encountered great difficulties in our efforts to bridge the gap between the methodology and concepts of control engineering and those from various branches of psychology. Because of its kinship to classical experimental psychology and its behavioristic claim for exclusive use of objective data representing overt activity, the traditional human factors field had very little to offer (Rasmussen, 1986, p. ix).


Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. (Norman, p. 31, 1986).


The growth of computer applications has radically changed the nature of the man-machine interface. First, through increased automation, the nature of the human’s task has shifted from an emphasis on perceptual-motor skills to an emphasis on cognitive activities, e.g. problem solving and decision making…. Second, through the increasing sophistication of computer applications, the man-machine interface is gradually becoming the interaction of two cognitive systems. (Hollnagel & Woods, 1999, p. 340).

As reflected in the above quotes, through the 1980s and 90s there was a growing sense that the nature of human-machine systems was changing, and that this change was creating the demand for a new approach to analysis and design. This new approach was given the label of Cognitive Engineering (CE) or Cognitive Systems Engineering (CSE). Table 1 illustrates one way to characterize the changing role of the human factor in sociotechnical systems. As technologies became more powerful with respect to information processing capabilities, the role of the humans increasingly involved supervision and fault detection. Thus, there was increasing demand for humans to relate the activities of the automated processes to functional goals and values (i.e., pragmatic meaning) and to intervene when circumstances arose (e.g., faults or unexpected situations) such that the automated processes were no longer serving the design goals for the system. Under these unexpected circumstances, it was often necessary for the humans to create or invent new procedures on the fly, in order to avoid potential hazards or to take advantage of potential opportunities. This demand to relate activities to functional goals and values and to improvise new procedures required meaning processing.

In the following series of posts I will elaborate on the differences between information processing and meaning processing as outlined in Table 1. However, before I get into the specific contrasts, it is important to emphasize that relative to early human factors, CSE is an evolutionary change, NOT a revolutionary change. That is, the concerns about information processing that motivated earlier human factors efforts were not wrong, and they continue to be important with regard to design. The point of CSE is not that an information processing approach is wrong, but rather that it is insufficient.


The point of CSE is that design should not simply be about avoiding overloading the limited information capacities of humans, but should also seek to leverage the unique meaning processing capabilities of humans. These meaning processing capabilities reflect people's ability to make sense of the world relative to their values and intentions, to adapt to surprises, and to improvise in order to take advantage of new opportunities or to avoid new threats. In following posts I will make the case that the overall vision of a meaning processing approach is more expansive than an information processing approach. This broader context will sometimes change the significance of specific information limitations. It will also provide a wider range of options for bypassing these limitations and for innovating to improve the range and quality of performance of sociotechnical systems.

In other words, I will argue for a systems perspective - where the information processing limitations must be interpreted in light of the larger meaning processing context. I will argue that constructs such as 'expertise' can only be fully understood in terms of qualities that emerge at the level of meaning processing.

Hollnagel, E. & Woods, D.D. (1999). Cognitive Systems Engineering: New wine in new bottles. International Journal of Human-Computer Studies, 51, 339-356.

Norman, D.A. (1986). Cognitive Engineering. In D.A. Norman & S.W. Draper (Eds.), User-Centered System Design (pp. 31-61). Hillsdale, NJ: Erlbaum.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: North Holland.

Another giant in the field of Cognitive Systems Engineering has been lost. Jens Rasmussen created one of the most comprehensive and integrated foundations for understanding the dynamics of sociotechnical systems available today. Drawing from the fields of semiotics (Eco), control engineering, and human performance, his framework was based on a triadic semiotic dynamic, which he parsed in terms of three overlapping perspectives. The Abstraction Hierarchy (AH) provided a way to characterize the system relative to a problem ecology and the consequences and possibilities for action. He introduced the constructs of Skill-, Rule-, and Knowledge-based behavior (SRK) as a way to emphasize the constraints on the system from the perspective of the observers-actors. Finally, he introduced the construct of Ecological Interface Design (EID) as a way to emphasize the constraints on the system from the perspective of representations.

Jens had a comprehensive vision of the sociotechnical system, and we have only begun to plumb the depths of this framework and to fully appreciate its value as both a basic theory of systems and as a pragmatic guide for the engineering and design of safer, more efficient systems.

Jens' death is a very personal loss for me. He was a valued mentor who saw potential in a very naive, young researcher long before it was evident or deserved. He opened doors for me and created opportunities that proved to be essential steps in my education and professional development. Although I may never realize the potential that Jens envisioned, he set me on a path that has proved to be both challenging and satisfying. I am a better man for having had him as a friend and mentor.

A website has been created to allow future researchers to benefit from Jens' work.

http://www.jensrasmussen.org/


The fields of psychology, human factors, and cognitive systems engineering have lost another leader who did much to shape the fields that we know today. I was very fortunate to overlap with Neville Moray on the faculty at the University of Illinois. To a large extent, my ideals for what it means to be a professor were shaped by Neville.  From his examples, I learned that curiosity does not stop at the threshold of the experimental laboratory and that training graduate students requires engagement beyond the laboratory and classroom. Neville was able to bridge the gulfs between basic and applied psychology, between science and art, and between work and play - in a way that made us all question why these gulfs existed at all.

More commentary on Neville's life and the impact he had on many in our field can be found on the HFES website.


I just finished Simon Sinek's (2009) "Start with Why." I was struck by the similarities between Sinek's 'Golden Circle' and Jens Rasmussen's 'Abstraction Hierarchy' (see figure). Both parse systems in terms of  a hierarchical association between why, what, and how.

For Rasmussen, 'what' represented the level of the system being attended to; 'why' represented a higher level of abstraction that reflected the significance of the 'what' relative to the whole system, with the highest level of abstraction reflecting the ultimate WHY of a system - its purpose. In Rasmussen's system, 'how' represented a more concrete description of significant components of the level being attended to (e.g., the component processes serving the 'what' level above).

Sinek's 'why' corresponds with the pinnacle of Rasmussen's Abstraction Hierarchy. It represents the ultimate purpose of the system. However, Sinek reverses Rasmussen's 'what' and 'how.' For Sinek, 'how' represents the processes serving the 'why,' and 'what' represents the products of these processes.
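The two partitionings can be laid out side by side; the coffee-maker example below is my own schematic, not an illustration used by either author.

    # A schematic of the two partitionings, using a made-up example (a home coffee
    # maker) rather than an illustration used by either author.

    rasmussen = {                 # Abstraction Hierarchy: why sits above, how sits below
        "why (purpose)":   "provide a hot, palatable drink on demand",
        "what (function)": "brew coffee by passing hot water through grounds",
        "how (process)":   "heating element, pump, filter basket, carafe",
    }

    sinek = {                     # Golden Circle: why -> how -> what, from the inside out
        "why":  "provide a hot, palatable drink on demand",
        "how":  "pass hot water through ground coffee",
        "what": "the pot of coffee that the machine produces",
    }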

Although I have been teaching Rasmussen's approach to Cognitive Systems Engineering (CSE) for over 30 years, I think that Sinek's WHY-HOW-WHAT partitioning conforms more naturally with common usage of the terms 'how' and 'what.' So, I think this is a pedagogical improvement on Rasmussen's framework.

However, I found that the overall gist of Sinek's "Start with Why" reinforced many of the central themes of CSE. That is, for a 'cognitive system' the purpose (i.e., the WHY) sets the ultimate context for parsing the system (e.g., processes and objects) into meaningful components. This is an important contrast to classical (objective) scientific approaches to physical systems. Classical scientific approaches have dismissed the 'why' as subjective! The 'why' reflected the 'biases' of observers. But for a cognitive science, the observers are the objects of study!

Thus, cognitive science and cognitive engineering must always start with WHY!


The Big Data Problem and Visualization

The digitization of healthcare data using Electronic Healthcare Record (EHR) systems is a great boon to medical researchers. Prior to EHR systems, researchers were responsible for collecting and archiving the patient data necessary to build models for guiding healthcare decisions (e.g., the Framingham Study of Cardiovascular Health). However, with EHR systems, the job of collecting and archiving patient data is off-loaded from the researchers, freeing them to focus on the BIG DATA PROBLEM. Thus, there is a lot of excitement in the healthcare community about the coming BIG DATA REVOLUTION and computer scientists are enthusiastically embracing the challenge of providing tools for BIG DATA VISUALIZATION.

It is very likely that the availability of data and the application of advanced visualization tools will stimulate significant advances in the science of healthcare. However, will these advances translate into better patient care? Recent experiences with EHR systems suggest that the answer is "NO! Not unless we also solve the LITTLE DATA PROBLEM."

The Little Data Problem in Healthcare

Compared to the excitement about embracing the BIG DATA PROBLEM, healthcare technologists and in particular EHR developers have paid relatively little attention to visualization problems on the front end of EHR systems. The EHR interfaces to the frontline healthcare workers consist almost exclusively of text, dialog boxes, and pull-down menus. These interfaces are designed for ‘data input-output.’ They do very little to help physicians to make sense of the data relative to judging risk and making treatment decisions. For example, the current EHR interfaces do little to help physicians to ‘see’ what the data ‘mean’ relative to the risk of a cardiac event; or to ‘see’ the recommended treatment options for a specific patient.

The LITTLE DATA PROBLEM for healthcare involves creative design of interfaces to help physicians to visualize the data for a specific patient in light of the current medical research. The goal is for the interface representations to support the physician in making well-informed treatment decisions and for communicating those decisions to patients. For example, the interface representations should allow a physician to ‘see’ patient data relative to risk models (e.g., Framingham model) and relative to published standards of care (e.g., Adult Treatment Panel IV), so that the decisions made are informed by the evidence-base. In addition, the representation should facilitate discussions with patients to explain and recommend treatment options, to engender trust, and ultimately to increase the likelihood of compliance.

Thus, while EHRs are making things better for medical research, they are making the everyday work of healthcare more difficult. The benefits with respect to the ‘Big Data Problem’ are coming at the expense of increased burden on frontline healthcare workers who have to enter the data and access it through clumsy interfaces. In many cases, the technology is becoming a barrier to communication with patients, because time spent interacting with the technology is reducing the time available for interacting directly with patients (Arndt et al., 2017).

At Mile Two, we are bringing Cognitive Systems Engineering (CSE), UX Design, and Agile Development processes together to tackle the LITTLE DATA PROBLEM. Follow this link to see an example of a direct manipulation interface that illustrates how interfaces to EHR systems might better serve the needs of both frontline healthcare workers and patients: CVDi.

Conclusion

The major point is that advances resulting from the BIG DATA REVOLUTION will have little impact on the quality of everyday healthcare if we don't also solve the LITTLE DATA PROBLEM associated with EHR systems.


New study finds that physicians spend twice as much time interacting with EHR systems as interacting directly with patients.

http://www.beckershospitalreview.com/ehrs/primary-care-physicians-spend-close-to-6-hours-performing-ehr-tasks-study-finds.html

http://www.annfammed.org/content/15/5/419.full

This is a classic example of clumsy automation, that is, automation that disrupts the normal flow of work rather than facilitating it. It is unfortunate that healthcare is far behind other industries when it comes to understanding how to use IT to enhance the quality of everyday work. While the healthcare industry promotes the potential wonders of "big data," the needs of everyday clinical physicians have been largely overlooked.

EHR systems have been designed around the problem of 'data management' and the problems of 'healthcare management' have been largely unrecognized or unappreciated by the designers of EHR systems.

In solving the 'data' problem, the healthcare IT industry has actually made the 'meaning' problem more difficult for clinical physicians.

This should be a great opportunity for Cognitive Systems Engineering innovations, IF anyone in the healthcare industry is willing to listen.

The team at Mile Two recently created an App (CVDi) to help people to make sense of clinical values associated with cardiovascular health. The App is a direct manipulation interface that allows people to enter and change clinical values and to get immediate feedback about the impact on overall health and treatment options.

The feedback about overall health is provided in the form of three Risk Models from published research on cardiovascular health. Each model is based on longitudinal studies that have tracked the statistical relations between various clinical measures (e.g., age, total cholesterol, blood pressure) and incidents of cardiovascular disease (e.g., heart attacks or strokes). However, the three models each use different subsets of that data to predict risk, and thus the risk estimates can vary considerably.

A number of people who have reviewed the CVDi App have suggested that this variation among the models might be a source of confusion to users or it might lead people to cherry-pick the value that fits their preconceptions (e.g., someone who is skeptical about medicine might take the best value as justification for not going to the doctor; while a hypochondriac might take the worst value as justification for his fears). In essence, the suggestion is that the variability among the risk estimates is NOISE that will reduce the likelihood that people will make good decisions. These people suggest that we pick one (e.g., the 'best') model and drop the other two.

We have an alternative hypothesis. We believe that the variation among the models is INFORMATION that provides the potential for deeper insight into the complex problem of cardiovascular health. Our hypothesis is that the variation will lead people to consider the basis for each model (e.g., whether it is based on lipids or BMI, and whether C-reactive protein is included). Our interface is designed so that it is easy to SEE the contribution of each of these variables to each of the models. For example, a big difference in risk estimates between the lipid-based models and the BMI-based model might signify the degree to which weight or lipids are contributing to the risk. We believe this is useful information for selecting an appropriate treatment option (e.g., statins or diet).
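As a schematic illustration only (the functions below are invented toys, not the CVDi models or any published risk equations), consider two made-up scores, one driven by lipids and one by BMI:

    # A schematic illustration only: these are invented toy functions, not the CVDi
    # models and not any published risk equations. Two made-up scores, one driven by
    # lipids and one by BMI, diverge for the same patient, and the gap itself points
    # to what is driving the risk -- variation as information rather than noise.

    def lipid_based_risk(age, total_chol, hdl):
        """Toy score loosely shaped like a lipid-based model; coefficients are invented."""
        return 0.1 * (age - 40) + 0.05 * (total_chol - 200) - 0.08 * (hdl - 50)

    def bmi_based_risk(age, bmi):
        """Toy score loosely shaped like a BMI-based model; coefficients are invented."""
        return 0.1 * (age - 40) + 0.4 * (bmi - 25)

    patient = dict(age=55, total_chol=280, hdl=38, bmi=24)

    print("lipid-based estimate:", round(lipid_based_risk(patient["age"], patient["total_chol"], patient["hdl"]), 1))
    print("BMI-based estimate:  ", round(bmi_based_risk(patient["age"], patient["bmi"]), 1))

For this hypothetical patient the lipid-based score is elevated while the BMI-based score sits near baseline; seen side by side, the gap points toward lipids (and options such as statins or diet) rather than weight.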

The larger question here is the function of MODELS in cognitive systems or decision support systems. Should the function of models be to give people THE ANSWER, or should it be to provide insight into the complexity so that people are well-informed about the problem and better able to muddle through to discover a satisfying answer?

Although there is great awareness that human rationality is bounded, there is less appreciation of the fact that all computational models are bounded. While we tend to be skeptical about human judgment, there is a tendency to take the output of computational models as the answer or as the truth. I believe this tendency is dangerous! I believe it is unwise to think that there is a single correct answer to a complex problem!

As I have argued in previous posts, I believe that muddling through is the best approach to complex problems. And thus, the purpose of modeling should be to guide the muddling process, NOT to short-circuit the muddling process with THE ANSWER. The purpose of the model is to enhance situation awareness, helping people to muddle well and increasing the likelihood that they will make well-informed choices.

Long ago we made the case that for supporting complex decision making, models should be used to suggest a variety of alternatives - to provide deeper insight into possible solutions - rather than to provide answers:

Brill, E.D. Jr., Flach, J.M., Hopkins, L.D., & Ranjithan, S. (1990). MGA: A decision support system for complex, incompletely defined problems. IEEE Transactions on Systems, Man, and Cybernetics, 20(4), 745-757.

Link to the CVDi interface: CVDi