The term “Cognitive Engineering” was first suggested by Don Norman; it is the title of a chapter in User-Centered System Design (1986), a book that he co-edited with Stephen Draper. Norman (1986) writes:

Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. It is a type of applied Cognitive Science, trying to apply what is known from science to the design and construction of machines. It is a surprising business. On the one hand, there actually is a lot known in Cognitive Science that can be applied. On the other hand, our lack of knowledge is appalling. On the one hand, computers are ridiculously difficult to use. On the other hand, many devices are difficult to use - the problem is not restricted to computers, there are fundamental difficulties in understanding and using most complex devices. So the goal of Cognitive Engineering is to come to understand the issues, to show how to make better choices when they exist, and to show what the tradeoffs are when, as is the usual case, an improvement in one domain leads to deficits in another (p. 31).

Norman (1986) goes on to specify two major goals that he had as a Cognitive Systems Engineer:

  1. To understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles in design.
  2. To devise systems that are pleasant to use - the goal is neither efficiency nor ease nor power, although these are all to be desired, but rather systems that are pleasant, even fun: to produce what Laurel calls “pleasurable engagement” (p. 32).

Rasmussen (1986) was less interested in pleasurable engagement and more interested in safety - noting the accidents at Three Mile Island and Bhopal as important motivations for different ways to think about human performance and work. As a controls engineer Rasmussen was concerned that the increased utilization of centralized, automatic control systems in many industries (particularly nuclear power) was changing the role of humans in those systems. He noted that the increased use of automation was moving humans “from the immediate control of system operation to higher-level supervisory tasks and to long-term maintenance and planning tasks” (p. 1). Because of his background in controls engineering, Rasmussen understood the limitations of the automated control systems and he recognized that these systems would eventually face situations that their designers had not anticipated (i.e., situations for which the ‘rules’ or ‘programs’ embedded in these systems were inadequate). He knew that it would be up to the human supervisors to detect and diagnose the problems that would thus ensue and to intervene (e.g., creating new rules on the fly) to avert potential catastrophes.

The challenge that he saw for CSE was to improve the interfaces between the humans and the automated control systems in order to support supervisory control. He wrote:

Use of computer-based information technology to support decision making in supervisory systems control necessarily implies an attempt to match the information processes of the computer to the mental decision processes of an operator. This approach does not imply that computers should process information in the same way as humans would. On the contrary, the processes used by computers and humans will have to match different resource characteristics. However, to support human decision making and supervisory control, the results of computer processing must be communicated at appropriate steps of the decision sequence and in a form that is compatible with the human decision strategy. Therefore, the designer has to predict, in one way or another, which decision strategy an operator will choose. If the designer succeeds in this prediction, a very effective human-machine cooperation may result; if not, the operator may be worse off with the new support than he or she was in the traditional system … (p. 2).

Note that the information technologies that were just beginning to change the nature of work in the nuclear power industry in the 1980s when Rasmussen made these observations have now become significant parts of almost every aspect of modern life - from preparing a meal (e.g., Chef Watson), to maintaining personal social networks (e.g., Facebook and Instagram), to healthcare (e.g., electronic health record systems), to manufacturing (e.g., flexible, just-in-time systems), to shaping the political dialog (e.g., President Trump’s use of Twitter). Today, most of us have more computing power in our pockets (our smart phones) than was available for even the most modern nuclear power plants in the 1980s. In particular, the display technology of the 1980s was extremely primitive relative to the interactive graphics that are available today on smart phones and tablets.

A major source of confusion that has arisen in defining this relatively new field of CSE has been described in a blog post by Erik Hollnagel (2017):

The dilemma can be illustrated by considering two ways of parsing CSE. One parsing is as C(SE), meaning cognitive (systems engineering) or systems engineering from a cognitive point of view. The other is (CS)E, meaning the engineering of (cognitive systems), or the design and building of joint (cognitive) systems.

From the earliest beginnings of CSE, Hollnagel and Woods (1982; 1999) were very clear about what they thought was the appropriate parsing. Here is their description of a cognitive system:

A cognitive system produces “intelligent action,” that is, its behavior is goal oriented, based on symbol manipulation, and uses knowledge of the world (heuristic knowledge) for guidance. Furthermore, a cognitive system is adaptive and able to view a problem in more than one way. A cognitive system operates using knowledge about itself and the environment, in the sense that it is able to plan and modify its actions on the basis of that knowledge. It is thus not only data driven, but also concept driven. Man is obviously a cognitive system. Machines are potentially, if not actually, cognitive systems. An MMS [Man-Machine System] regarded as a whole is definitely a cognitive system (p. 345).

Unfortunately, there are still many who don’t quite fully appreciate the significance of treating the whole sociotechnical system as a unified system where the cognitive functions are emergent properties that depend on coordination among the components. Many have been so entrained in classical reductionistic approaches that they can’t resist the temptation to break the larger sociotechnical system into components. For these people, CSE is simply systems engineering techniques applied to cognitive components within the larger sociotechnical system. This approach fits with the classical disciplinary divisions in universities where the social sciences and the physical sciences are separate domains of research and knowledge. People generally recognize the significant role of humans in many sociotechnical systems (most notably as a source of error); and they typically advocate that designers account for the ‘human factor’ in the design of technologies. However, they fail to appreciate the self-organizing dynamics that emerge when smart people and smart technologies work together. They fail to recognize that what matters with respect to successful cognitive functioning of this system are emergent properties that cannot be discovered in any of the components. Just as a sports team is not simply a collection of people, a sociotechnical system is not simply a collection of things (e.g., humans and automation).

The ultimate challenge of CSE as formulated by Rasmussen, Norman, Hollnagel, Woods and others is to develop a new framework where the ‘cognitive system’ is a fundamental unit of analysis. It is not a collection of people and machines - rather it is an adapting organism with a life of its own (i.e., dynamic properties that arise from relations among the components). CSE reflects a desire to understand these dynamic properties and to use that understanding to design systems that are increasingly safe, efficient, and pleasant to use.

It is a bit of a jolt for me to realize that it has now been more than 25 years since the publication of Rasmussen, Pejtersen and Goodstein’s (1994) important book “Cognitive Systems Engineering.” It is partly a jolt because it is a reminder of how old I am getting. I used early, pre-publication drafts of that book in a graduate course on Engineering Psychology that I taught as a young assistant professor at the University of Illinois. And even before that, I used Rasmussen’s (1986) previous book “Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering” in the same course. Yes, I have been learning about and talking about Cognitive Systems Engineering (CSE) for a long time.

It is also a jolt when I realize that many people who are actively involved in the design of complex sociotechnical systems have very little understanding of what CSE is or why it is valuable. For example, they might ask how CSE differs from Human Factors. It seems that, despite my efforts and those of many others, the message has not reached many of the communities who are involved in shaping the future of sociotechnical systems (e.g., computer scientists, human factors engineers, designers, chief technology officers).

It is also a jolt when I consider how technology has changed over the past 25 years. A common theme in Rasmussen’s arguments for why CSE was necessary was the fast pace of change due to advances in modern information technologies. Boy did he get that right. Twenty-five years ago the Internet was in its infancy. Google was a research project, not formally incorporated as a company till 1998. There was no Facebook, which wasn’t launched until 2004. No YouTube - the first video was uploaded in 2005. No Twitter - the first tweet wasn’t sent until 2006. And perhaps most significantly, there were no smart phones. The first iPhone wasn’t released until 2007.

I’m not sure if Jens (or anyone else) fully anticipated the impact of the rapid advances of technology on work and life today. However, I do feel that most of his ideas about the implications of these advances for doing research on and for designing sociotechnical systems are still relevant. In fact, the relevance has only grown with the changes that information technologies have fostered. Despite the emergence of other terms for new ways to think about work, such as UX design and Resilience Engineering, I contend that an understanding of CSE remains essential for anyone interested in studying or designing sociotechnical systems.

Kurt Lewin's quote that "nothing is more practical than a good theory" has been repeated so often that it has become trite. However, few appreciate the complementary implication of this truism: that "the strongest test of a theory is design." In other words, the ultimate test of a theory is whether it can be put to practical use. In fact, Pragmatists such as William James, C.S. Peirce, and John Dewey might have argued that 'practice' is the ultimate test of 'truth.'

William James was always skeptical about what he called "brass instrument" psychology (à la Wundt and others). In experimental science, the experiment is often 'biased' by the same assumptions that motivated the theory being tested. The result is that most experiments turn out to be demonstrations of the plausibility of a theory, NOT tests of the theory. That is, in deciding what variables to control, what variables to vary, and what variables to measure, the scientist plays a significant role in shaping the ultimate results. For example, in testing the hypothesis that humans are information processors, experiments often put people into situations (e.g., choice reaction time tasks) where successfully doing the task requires that the human behave like an information processing system. Thus, in experiments, hypotheses are tested against the reality as imagined by the scientist. The experiment rarely tests the limits of that imagination - because the scientist creates the experiment.

However, in design the hypothesis runs up against a reality that is beyond the imagination of the designer. A design works well, or it doesn't. It changes things in a positive way, or it doesn't. When a design is implemented in practice, the designer is often 'surprised' to discover that in framing her hypothesis she didn't consider an important dimension of the problem. Sometimes these surprises result in failures (i.e., products that do not meet the functional goals of the designers). But sometimes these surprises result in innovations (i.e., products that turn out to be useful in ways that the designer hadn't anticipated). Texting on smart phones is a classic example. Who would have imagined, before the smart phone, that people would prefer texting to speaking over a phone?

Experiments are typically designed to minimize the possibilities for surprise. Design tends to do the opposite. Design challenges tend to generate surprises. In fact, I would define 'design innovation' as simply a pleasant surprise!

So, I suggest a variation on Yogi Berra's quote, "If you don't know where you're going, you might not get there":

If you don't know where you're going, you might be headed for a pleasant surprise (design innovation).

And if you don't reach a pleasant surprise on this iteration, simply keep going (iterating) until you do!

As of May 1st, I have retired from Wright State University. I accepted an early retirement incentive that was offered due to severe economic conditions at the university. I am not at all ready to retire, but I am eager for a change from WSU. I hope I still have things to offer, and I know there is still much for me to learn.

It was great to see many of my former students at a research celebration that the Department of Psychology hosted in my honor on May 7th. It is amazing to see the work that these former students are doing. Clearly, I didn't do too much damage!

I am looking forward to the next adventure!  Just waiting for the right door to open.

Taleri Hammack, Jehengar Cooper, John M. Flach & Joseph Houpt

ABSTRACT

This paper explores the ‘hot hand illusion’ from the perspective of ecological rationality. Monte Carlo simulations were used to test the sensitivity of typical tests for randomness (e.g., the Wald-Wolfowitz runs test) to plausible constraints on sequences of binary events (e.g., basketball shots). Most of the constraints were detected when sample sizes were large. However, when the range of improvement was limited to reflect natural performance bounds, these tests did not detect a success-dependent learning process. In addition, a series of experiments assessed people’s ability to discriminate between random and constrained sequences of binary events. The results showed that in all cases human performance was better than chance, even for the constraints that were missed by the standard tests. The case is made that, as with perception, it is important to ground research on human cognition in the demands of adaptively responding to ecological constraints. In this context, it is suggested that a ‘bias’ or ‘default’ that assumes that nature is ‘structured’ or ‘constrained’ is a very rational approach for an adaptive system whose survival depends on assembling smart mechanisms to solve complex problems.
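
To make the method concrete, here is a minimal sketch of the kind of Monte Carlo check described above (not the paper's actual code - the dependency strength, sample sizes, and the |z| > 1.96 criterion are my illustrative assumptions). It generates binary sequences with a mild success dependency and asks how often a Wald-Wolfowitz runs test flags them as non-random:

    import math
    import random

    def hot_hand_sequence(n, base_p=0.5, boost=0.08):
        """Binary sequence in which a success slightly raises the next success probability."""
        seq, p = [], base_p
        for _ in range(n):
            hit = random.random() < p
            seq.append(1 if hit else 0)
            p = base_p + boost if hit else base_p
        return seq

    def runs_test_z(seq):
        """Wald-Wolfowitz runs test: z-score for the observed number of runs."""
        n1, n2 = seq.count(1), seq.count(0)
        n = n1 + n2
        runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))
        mu = 2 * n1 * n2 / n + 1
        var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
        return (runs - mu) / math.sqrt(var)

    # Power check: how often is the dependent process flagged at |z| > 1.96?
    for n in (100, 1000):
        rejections = sum(abs(runs_test_z(hot_hand_sequence(n))) > 1.96 for _ in range(2000))
        print(f"n={n}: rejection rate = {rejections / 2000:.2f}")

The pattern to look for matches the abstract: a modest, fixed dependency is easy for the test to miss at small sample sizes and becomes detectable only as the samples grow large.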

Abstract

An alternative to conventional models that treat decisions as open-loop, independent choices is presented. The alternative model is based on observations of work situations such as healthcare, where decision making is more typically a closed-loop, dynamic, problem-solving process. The article suggests five important distinctions between the processes assumed by conventional models and the reality of decision making in practice. It is suggested that the logic of abduction, in the form of an adaptive, muddling-through process, is more consistent with the realities of practice in domains such as healthcare. The practical implication is that the design goal should not be to improve consistency with normative models of rationality, but to tune the representations guiding the muddling process to increase functional perspicacity.

This paper has been accepted for publication in Applied Ergonomics.

FIGURE: The muddling dynamic is modeled as two coupled loops. The inner loop reflects the active control, driven by the current assumptions about the problem. The outer loop is monitoring performance on the inner loop for 'surprises' that might indicate that the current assumptions do not fit the situation.
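
As a concrete illustration of the figure, here is a minimal sketch of the two coupled loops (my own toy example, not from the article - the scalar process, the gains, and the surprise threshold are all assumptions). The inner loop steers the process toward a goal using the currently assumed process gain; the outer loop watches the prediction error and flips that assumption when a surprise signals that the current frame does not fit:

    def muddle(true_gain=-1.0, assumed_gain=1.0, goal=10.0, steps=12):
        """Two coupled loops: inner-loop control under an assumed model,
        outer-loop monitoring for surprises that trigger a reframing."""
        x = 0.0
        for t in range(steps):
            # Inner loop: choose an action based on the current assumption.
            action = 0.5 * (goal - x) / assumed_gain
            predicted = x + assumed_gain * action
            x = x + true_gain * action  # the world answers with the true gain
            # Outer loop: a large prediction error means the frame misfits.
            surprise = abs(x - predicted)
            if surprise > 1.0:
                assumed_gain = -assumed_gain  # revise the assumption; keep muddling
            print(f"t={t:2d}  x={x:6.2f}  surprise={surprise:5.2f}  assumed gain={assumed_gain:+.0f}")

    muddle()

Run as written, the very first action produces a large surprise, the frame flips, and the inner loop then converges smoothly on the goal - muddling through rather than optimizing from a correct model.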

In Zen and the Art of Motorcycle Maintenance Robert Pirsig wrestled with the contradictions and alienation that have resulted from the separations between mind and matter - subjective and objective, rationality and emotion, values and morality, classical and romantic attitudes, form and function, science and art - that are embedded in Western culture. The Metaphysics of Quality was his solution for reuniting the dichotomies underlying Western thought into a unified ontology of human experience.

In Lila he describes the delicate balance between static and dynamic quality that is necessary to achieve stability in life (if not satisfaction). Static quality is the ratchet (essential friction1) that protects us from collapsing into instability, and dynamic quality is the impetus to innovate in order to adapt to change and to discover more satisfying solutions for living.

Robert Pirsig's books, Zen and the Art of Motorcycle Maintenance and Lila, were an important source of inspiration for our recent book What Matters, in which we explore the implications of Pirsig's Metaphysics of Quality for applied cognitive science and design. The major theme - that 'experience' is ontologically basic, and that 'mind' and 'matter', or 'subjective' and 'objective', are derivative - aligns well with William James' philosophy of Radical Empiricism and C.S. Peirce's triadic model of Semiotics.

Farewell Robert! You taught us the meaning of areté through example. Thank you for allowing us to share part of the journey toward quality through your books.

  1. Åkerman, Nordal (ed.) (1998). The necessity of friction. Boulder, CO: Westview Press.

Links to more information about Robert Pirsig:

https://www.nytimes.com/2017/04/24/books/robert-pirsig-dead-wrote-zen-and-the-art-of-motorcycle-maintenance.html

https://www.theguardian.com/books/2017/apr/25/robert-pirsig-obituary

http://www.npr.org/sections/thetwo-way/2017/04/24/525443040/-zen-and-the-art-of-motorcycle-maintenance-author-robert-m-pirsig-dies-at-88
