
Pirsig "Zen and the Art of Motorcycle Maintenance"

The development and standardization of metrics was critical to the emergence of science. Standard metrics provided "objective" standards for describing events and experiments, ensuring that they could be replicated and generalized appropriately. Without objective standards of measurement there could be no science.

Development of objective, observer-independent standards of measurement was essential to the success of the physical sciences.

However, the great error of Western Science was to take the description of the world in terms of these metrics as an objective reality - in opposition to a subjective reality! The implication is that the objective distance in meters is true, while functional relations such as graspable, reachable, near, or far are 'subjective.' This implies that the variability associated with individual differences along such dimensions is "noise" with regard to the "true" reality. And there is an implication that this "noise" must somehow be filtered and added to in order to construct a mental model of the objective truth - in relation to the standard metrics (e.g., the size in meters).

One implication is that, since people and animals are not well calibrated to the standard metrics, their perceptions of the world must be 'indirect,' and it is therefore necessary for them to reconstruct the true world (recover the correct standard) in order to act appropriately.

Another implication is that the many relations that directly shape how people make judgments about graspability (e.g., their own hand size), reachability (their arm length or height), or closeness (e.g., available modes of transportation) are less real - less basic - or derivative. But of course, these relations are every bit as 'real' and every bit as specifiable as the elements comprising them.

These relations are part of a "whole" that cannot be discovered in the components. These relations are 'emergent properties' of the whole. A central premise of ecological psychology is that these emergent properties are 'essential and fundamental' elements for a science that hopes to describe how people adapt to their ecologies. Ecological psychology argues that the size of an object relative to a hand, or the distance to a cliff relative to your height, is every bit as objective as the size relative to a meter stick.

Further, ecological psychology argues that these functional relations exist in the world to be discovered and perceived directly, and that there is information (e.g., structure in optical arrays) that specifies these emergent properties. Thus, there is no need for internal processing to construct or reconstruct these relations. These are NOT mental constructions - they are functional properties of the coupling of an animal with its ecology - they are properties of the umwelt. They are affordances that can be directly experienced.

'Too close' as dependent on height and specified by a visual angle.
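
To make this concrete: for a point on a flat ground plane, ecological optics tells us that its distance is specified in units of the observer's eye height by the angle of declination below the horizon. The sketch below (in Python; the three-eye-heights threshold is illustrative, not from any study) shows how 'too close' can be defined as a body-scaled relation rather than as a distance in meters.

```python
import math

def distance_in_eye_heights(declination_rad: float) -> float:
    """Distance to a point on a flat ground plane, in units of the
    observer's eye height.  The geometry gives d = h / tan(theta),
    so d / h depends only on the visual angle."""
    return 1.0 / math.tan(declination_rad)

def too_close(declination_rad: float, threshold_eye_heights: float = 3.0) -> bool:
    """'Too close' as a body-scaled relation: true when the edge lies
    within some number of eye heights (the threshold is illustrative)."""
    return distance_in_eye_heights(declination_rad) < threshold_eye_heights

# A tall and a short observer see different declination angles to the
# same ground point, but a given angle always specifies the same
# distance *in eye heights* -- observer-scaled, yet fully objective.
theta = math.radians(18.4)                # declination below the horizon
print(distance_in_eye_heights(theta))     # ~3.0 eye heights
print(too_close(theta))                   # False (right at the boundary)
```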

The mistake that Western Science has made is that it has taken the arbitrary metrics created to aid formal scientific enterprises as 'fundamental' and it has taken the relations that emerge from the functional interactions of people with their ecology to be 'derivative.' However, I think there is little doubt that the experiences of graspable, reachable, near or far are fundamental primitives of the human-ecology system. These pragmatic/functional relations are the raw primitives of experience. They are REAL! The metrics of objective science are also real - but they are the wrong level of description for exploring how people adapt to the functional demands of everyday living.

As Protagoras claimed: Man is the measure of all things.

In our everyday lives we directly experience the ecology in terms of the REAL properties that emerge as a function of the perception-action coupling with our ecology! We will never construct a satisfying understanding of human performance if we start by denying the reality of these essential emergent properties. Thus, the claim is that a science of human performance must be built using different bricks than those used to construct an 'objective' physical science. These bricks, these essential elements are different from those used by physicists, but they are no less real.

The essential elements for building a science of human experience are different from those that have been used successfully in building a science of an observer-independent physical world. However, these elements are no less real.

The irony of using different bricks, or working at different levels of description, is that this may be the path that allows us to escape from a collection of little sciences to a single, unified science that spans the field of possibilities reflecting the joint constraints of mind and matter.

See What Matters for an exploration of the implications of these ideas for cognitive science and experience design.

Although I have used the term "wicked problems" in my writing, I only recently read Rittel & Webber's (1973) original description of this concept along with an editorial by Churchman (1967) commenting on his hearing Rittel talk about this construct.

I have little to add to the original formulation and encourage others to access and read both papers.

Rittel, H.W.J. & Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Churchman, C.W. (1967). Wicked problems. Management Science, 14(4), B141-B142.

Rittel and Webber list ten attributes of wicked problems, which I reproduce here, but I encourage readers to go to the original source for further explication.

  1. There is no definitive formulation of a wicked problem.
  2. Wicked problems have no stopping rule.
  3. Solutions to wicked problems are not true-or-false, but good-or-bad.
  4. There is no immediate and no ultimate test of a solution to a wicked problem.
  5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
  7. Every wicked problem is essentially unique.
  8. Every wicked problem can be considered to be a symptom of another problem.
  9. The existence of a discrepancy representing a wicked problem can be explained numerous ways. The choice of explanation determines the nature of the problem's solution.
  10. The planner has no right to be wrong.

From the Churchman article:

... the term "wicked problem" refers to that class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing. The adjective "wicked" is supposed to describe the mischievous and even evil quality of these problems, where proposed "solutions" often turn out to be worse than the symptoms.

p. B141

Churchman raises some ethical issues, in the context of operations research (OR), associated with approaching wicked problems piecemeal that I think apply far more broadly than to OR alone:

A better way of describing the OR solution might be to say that it tames the growl of the wicked problem: the wicked problem no longer shows its teeth before it bites.

Such a remark naturally hints at deception: the taming of the growl may deceive the innocent into believing that the wicked problem is completely tamed. Deception, in turn, suggests morality: the morality of deceiving people into thinking something is so when it is not. Deception becomes an especially strong moral issue when one deceives people into thinking that something is safe when it is highly dangerous.

The moral principle is this: whoever attempts to tame a part of a wicked problem, but not the whole, is morally wrong.

pp. B141-B142

A consequence of an increasingly networked world is that our problems are becoming ever more wicked. These two papers should be required reading for anyone who is involved in management or design.

Fred Voorhorst has created a poster to help us organize our thoughts with respect to the design of representations that help smart people skillfully muddle through wicked problems. In the case of wicked problems there is no recipe that will guarantee success - but there are things we can do to improve our muddling skill and to shape our thinking in more productive directions.


“Successful innovation demands more than a good strategic plan; it requires creative improvisation. Much of the “serious play” that leads to breakthrough innovations is increasingly linked to experiments with models, prototypes, and simulations. As digital technology makes prototyping more cost-effective, serious play will soon lie at the heart of all innovation strategies, influencing how businesses define themselves and their markets.”

“Serious play turns out to be not an ideal but a core competence. Boosting the bandwidth for improvisation turns out to be an invitation to innovation. Treating prototypes as conversation pieces turns out to be the best way to discover what you yourself are really trying to accomplish.”

Michael Schrage (1999)

“… generative design research [is] an approach to bring the people we serve through design directly into the design process in order to ensure that we can meet their needs and dreams for the future. Generative design research gives people a language with which they can imagine and express their ideas and dreams for future experience. The ideas and dreams can, in turn, inform and inspire stakeholders in the design and development process.”

(Sanders & Stappers, 2012, p. 8)

The concept of "Design Thinking" is very much in vogue these days and I share the associated optimism that there is much that everyone can learn from engaging with the experiences of designers. However, my own experiences with design suggest that the label 'thinking' is misleading. For me the ultimate lesson from design experiences is the importance of coupling perception (analysis and evaluation) with action (creation of various artifacts). For me Schrage's concept of "Serious Play" and Sanders and Stapper's concept of "Co-Creation" provide more accurate descriptions of the power of design experiences for helping people to be more productive and innovative in solving complex problems. The key idea is that thinking does not happen in a disembodied head or brain, but rather, through physically and socially engaging with the world.

A number of years ago, I was part of a brief chat with Arthur Iberall (who designed one of the first suits for astronauts) in which he was asked how he approached design. His response: "I just build it. Then I throw it against the wall and build it again. Till eventually I can't see the wall. At that point I am beginning to get a good understanding of the problem."

The experience of building artifacts is where designers have an advantage over most of us. Building artifacts and interacting with them is an essential part of the learning and discovery process. Literally grasping and interacting with concrete objects and situations is a prerequisite for mentally grasping them. Trying to build and use something provides an essential test of assumptions and design hypotheses. In fact, I would argue that the process of creating artifacts can be a stronger test of an idea than more classical experiments. The reason is that the same wrong assumptions that led to the idea being tested often also inform the design of the experiment to test it.

Thus, an important step in assessing an idea is to get it out of your head and into some kind of physical or social artifact (e.g., a storyboard, a persona, a wireframe, a scenario, an MVP, a simulation).

As a Cognitive Systems Engineer, I am strongly convinced of the value of Cognitive Work Analysis (CWA) as described by Kim Vicente (1999) and others (Naikar, 2013; Stanton et al., 2018). However, although not necessarily intended by Kim or others, people often treat CWA as a prerequisite for design. That is, there is an implication that a thorough CWA must be completed prior to building anything. However, I have found that it is impossible to do a thorough CWA without building things along the way. In my experience, it is best to think of CWA as a co-requisite with design in an iterative process, as illustrated in the Figure below.

The figure illustrates my experiences with the development of the Cardiac Consultant App, which is designed to help Family Practice Physicians assess cardiovascular health. The first phase of this development was Tim McEwen's dissertation work at Wright State University. Tim and I did an extensive evaluation of the literature on cardiovascular health as part of our CWA. I particularly remember us trying to decompose the Framingham Risk equations. Discovering a graphical representation for the Cox hazard function underlying this model was a key to our early representation concept. With Randy Green's help, we were able to code a Minimum Viable Product (MVP) that Tim could evaluate as part of his dissertation work. Note that MVP does not mean minimal functionality. Rather, the goal is to get sufficient functionality for empirical evaluation in a realistic context with MINIMAL development effort. The point of the MVP is to create an artifact for testing design assumptions and for learning about the problem.
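
For readers curious about the structure we were decomposing: Framingham-style scores are built on a Cox proportional-hazards model, in which a weighted sum of risk factors is passed through a baseline survival function. The sketch below (Python) shows the general form only; the coefficients, population means, and baseline survival are illustrative placeholders, not the published Framingham values.

```python
import math

def cox_risk(values, means, betas, baseline_survival):
    """General form of a Framingham-style 10-year risk estimate:
       risk = 1 - S0 ** exp(sum(b_i * x_i) - sum(b_i * xbar_i))
    where S0 is the baseline survival at 10 years.  All numbers passed
    in below are illustrative placeholders, NOT published coefficients."""
    linear = sum(b * x for b, x in zip(betas, values))
    linear_mean = sum(b * m for b, m in zip(betas, means))
    return 1.0 - baseline_survival ** math.exp(linear - linear_mean)

# Hypothetical patient: age, total cholesterol, HDL, systolic blood
# pressure (log transforms are typical in these models).
values = [math.log(55), math.log(213), math.log(50), math.log(140)]
means  = [math.log(49), math.log(200), math.log(52), math.log(125)]
betas  = [3.1, 1.2, -0.9, 1.9]        # placeholder weights
print(f"10-year risk: {cox_risk(values, means, betas, 0.95):.1%}")
```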

I was able to take the MVP that we generated in connection with Tim's dissertation to Mile Two, where the UX designers and developers were able to refine it into a fully functioning web App (CVDi). I have to admit that I was quite surprised by how much the concept was improved through working with the UX designers at Mile Two. This involved completely abandoning the central graphic of the MVP, which had provided us with important insights into the Framingham model. We lost the graphic, but carried the insights forward, and the new design allowed us to incorporate additional risk models into the representation (e.g., the Reynolds Risk Model).

Despite the improvements, there was a major obstacle to implementing the design in a healthcare setting. The stand-alone App required a physician to manually enter data from the patient's electronic health record (EHR) into the App. This is where Asymmetric came into the picture. Asymmetric had extensive experience with the FHIR API, and they offered to collaborate with Mile Two to link our interface directly to the EHR system - eliminating the need for physicians to manually enter data. In the course of building the FHIR backend, the UX group at Asymmetric offered suggestions for additional improvements to the interface representation, leading to the Cardiac Consultant. Again, I was pleasantly surprised by the value added by these changes.
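
To give a flavor of what linking to the EHR involves: FHIR exposes clinical data through a standard REST interface, so values such as cholesterol can be pulled as Observation resources instead of being typed in by hand. Here is a rough sketch of such a query in Python; the server URL and patient id are hypothetical.

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical FHIR server
LOINC = {"total_cholesterol": "2093-2", "systolic_bp": "8480-6"}

def latest_observation(patient_id: str, loinc_code: str):
    """Fetch the most recent Observation for one clinical value.
    Standard FHIR search: filter by patient and LOINC code, sort by
    date descending, and take a single result."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            "code": f"http://loinc.org|{loinc_code}",
            "_sort": "-date",
            "_count": 1,
        },
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    if not entries:
        return None
    quantity = entries[0]["resource"]["valueQuantity"]
    return quantity["value"], quantity.get("unit")

# e.g., populate the risk display without manual entry:
# chol = latest_observation("12345", LOINC["total_cholesterol"])
```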

So, the ultimate point of this story is to illustrate a Serious Play process that involves iteratively creating artifacts and then using the artifacts to elicit feedback in the analysis and discovery process. The artifacts are critical for pragmatically grounding assumptions and hypotheses. Further, the artifacts provide a concrete context for engaging a wide range of participants (e.g., domain experts and technologists) in the discovery process (participatory design).

I have found that it is impossible to do a thorough CWA without building things along the way. In my experience, it is best to think of CWA as a co-requisite with design in an iterative process, rather than a prerequisite.

At the end of the day, Design Thinking is more about doing (creating artifacts) than about 'thinking' in the conventional sense.

Works Cited

Naikar, N. (2013). Work Domain Analysis. Boca Raton, FL: CRC Press.

Sanders, E.B.-N. & Stappers, P.J. (2012). Convivial Toolbox. Amsterdam: BIS Publishers.

Schrage, M. (1999). Serious Play: How the World’s Best Companies Simulate to Innovate. Cambridge, MA: Harvard Business School Press.

Stanton, N.A., Salmon, P.M., Walker, G.H. & Jenkins, D.P. (2018). Cognitive Work Analysis. Boca Raton, FL: CRC Press.

Vicente, K.J. (1999). Cognitive Work Analysis. Mahwah, NJ: Erlbaum.


What does an Ecological Interface Design (or an EID) look like?

As one of the people who has contributed to the development of the EID approach to interface design, I often get a variation of this question (Bennett & Flach, 2011). Unfortunately, the question is impossible to answer because it is based on a misconception of what EID is all about. EID does not refer to a particular form of interface or representation; rather, it refers to a process for exploring work domains in the hope of discovering representations that support productive thinking about complex problems.

Consider the four interfaces displayed below.  Do you see a common form? All four of these interfaces were developed using an EID approach. Yet, the forms of representation appear to be very different.

What makes these interfaces "ecological?"

The most important aspect of the EID approach is a commitment to doing a thorough Cognitive Work Analysis (CWA) with the goal of uncovering the deep structures of the work domain (i.e., the significant ecological constraints) and to designing representations in which these constraints provide a background context for evaluating complex situations.

  • In the DURESS interface, Vicente (e.g., 1999) organized the information to reflect fundamental properties of thermodynamic processes related to mass and energy balances.
  • The TERP interface, designed by Amelink and colleagues (2005), was inspired by innovations in the design of autopilots based on energy parameters (potential and kinetic energy). The addition of energy parameters helped to disambiguate the relative roles of the throttle and stick for regulating the landing path (a sketch of the energy idea follows this list).
  • In the CVD interface (McEwen et al., 2014), published models of cardiovascular risk (e.g., the Framingham and Reynolds Risk Models) became the background for evaluating combinations of clinical values (e.g., cholesterol levels, blood pressure) and for making treatment recommendations.
  • In the RAPTOR interface, Bennett and colleagues (2008) included a Force Ratio graphic to provide a high-level view of the overall state of a conflict (e.g., who is winning).
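
As promised above, here is a minimal sketch of the energy insight behind the TERP (in Python; the flight numbers are illustrative): total specific energy is the sum of a potential and a kinetic term, and a pure stick input largely trades one for the other while leaving the total nearly unchanged - which is why the throttle, not the stick, is the control for total energy.

```python
G = 9.81  # m/s^2

def specific_energy(height_m: float, speed_ms: float) -> float:
    """Total specific energy (energy per unit weight, in meters):
    potential (height) plus kinetic (v^2 / 2g)."""
    return height_m + speed_ms**2 / (2 * G)

# The insight behind energy-based displays (numbers illustrative):
# the throttle changes TOTAL energy, while the stick mostly trades
# one form for the other at roughly constant total.
before = specific_energy(height_m=300.0, speed_ms=70.0)

# Pull the stick: climb 50 m while the airspeed bleeds off ...
after_stick = specific_energy(height_m=350.0, speed_ms=62.6)

print(f"total energy before: {before:7.1f} m")
print(f"after stick input:   {after_stick:7.1f} m  (nearly unchanged)")
# Displaying total energy and its distribution separately lets a pilot
# see which control (throttle vs. stick) the situation calls for.
```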

Although interviews with operators can be a valuable part of any CWA, they are typically not sufficient. With EID the goal is not to match the operators' mental models, but rather to shape those models. For example, the Energy Path in the TERP interface was not inspired by interviews with pilots. In fact, most pilots were very skeptical about whether the TERP would help. The TERP was inspired by insights from aeronautical engineers who discovered that control systems using energy states as feedback produced more stable automatic control solutions.

With EID the goal is not to match the operators' mental models, but rather to shape the mental models toward more productive ways of thinking.

A second common aspect of representations designed from an EID perspective is a configural organization. Earlier research on interface design was often framed in terms of an either/or contrast between integral and separable representations. This suggested that you could EITHER support high-level relational perspectives (integral representations) OR provide low-level details (separable representations), but not both. The EID process is committed to a BOTH/AND perspective, which assumes that it is desirable (even necessary) to provide access BOTH to detailed data AND to higher-order relations among the variables. In a configural representation the detailed data are organized in a way that makes BOTH the details AND the more global, relational constraints salient.

For example, in the CVD interface, all of the clinical values that contribute to the cardiovascular risk models are displayed, and in addition to presenting a risk estimate (an integral function of multiple variables), the relative contribution of each variable is also shown. This allows physicians not only to see the total level of risk, but also to see how much each of the different values contributes to it.
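
To make the both/and idea concrete: in a weighted-sum risk model, the same numbers that produce the integral value (the total score) can be unpacked into separable details (each variable's signed contribution). A minimal sketch, with illustrative weights and values:

```python
# A configural decomposition: the same weighted-sum model yields BOTH
# an integral value (total score) AND separable details (each
# variable's signed contribution).  Weights and values are illustrative.
weights = {"systolic_bp": 0.030, "total_chol": 0.012, "hdl": -0.045}
patient = {"systolic_bp": 150, "total_chol": 230, "hdl": 38}
baseline = {"systolic_bp": 120, "total_chol": 190, "hdl": 50}

contributions = {
    name: weights[name] * (patient[name] - baseline[name])
    for name in weights
}
total = sum(contributions.values())

for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>12}: {c:+.2f}")       # the separable details
print(f"{'total':>12}: {total:+.2f}")     # the integral relation
```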

In configural representations a goal is to leverage the powerful human ability to recognize patterns that reflect higher-order relations, while simultaneously allowing access to specific data as salient details nested within the larger patterns.

The EID process is committed to a both/and perspective, where it is assumed that it is desirable (even necessary) to provide access both to detailed data (the particular) and to higher-order relations among the variables (the general).

A third feature of the EID process is the emphasis on supporting adaptive problem solving. The EID approach is based on the belief that there is no single best way or universal procedure that will lead to a satisfying solution in all cases. Thus, rather than designing for procedural compliance, EID representations are designed to help people explore a full range of options so that they can creatively adapt to situations that (in some cases) could not have been anticipated in advance. Accordingly, representations designed from an EID perspective typically function as analog simulations that support direct manipulation. By visualizing global constraints (e.g., thermodynamic models or medical risk models), the representations help people anticipate the consequences of actions. These representations typically allow people to test and evaluate hypotheses by manipulating features of the representation before committing to a particular course of action or, at least, before going too far down an unproductive or dangerous path.

Rather than designing for procedural compliance, EID representations are designed to help people to explore a full range of options so that it is possible for them to creatively adapt to situations that (in some cases) could not have been anticipated in advance.

It should not be too surprising that the forms of representations designed from the EID perspective may be very different. This is because the domains that they are representing can be very different.  The EID approach does not reflect a commitment to any particular form of representation. Rather it is a commitment to providing representations that reflect the deep structure of work domains, including both detailed data and more global, relational constraints. The goal is to provide the kind of insights (i.e., situation awareness) that will allow people to creatively respond to the surprising situations that inevitably arise in complex work domains.

Works Cited

Amelink, H.J.M., Mulder, M., van Paassen, M.M. & Flach, J.M. (2005). Theoretical foundations for total energy-based perspective flight-path displays for aircraft guidance. International Journal of Aviation Psychology, 15, 205-231.

Bennett, K.B., and Flach, J.M. (2011). Display and Interface Design: Subtle Science, Exact Art. London: Taylor & Francis.

Bennett, K.B., Posey, S.M. & Shattuck, L.G. (2008). Ecological interface design for military command and control. Journal of Cognitive Engineering and Decision Making, 2(4), 349-385.

McEwen, T., Flach, J.M. & Elder, N. (2014). Interfaces to medical information systems: Supporting evidence-based practice. Proceedings of the IEEE Systems, Man, & Cybernetics Annual Meeting, 341-346. San Diego, CA (Oct. 5-8).

Vicente, K.J. (1999). Cognitive Work Analysis. Mahwah, NJ: Erlbaum.


Human Error (HE) has long been offered as an explanation for why accidents happen in complex systems. In fact, it is often suggested to be the leading cause of accidents in domains such as aviation and healthcare. As Jens Rasmussen has noted, the human is in a very unfortunate position with respect to common explanations for system failures. This is because when you trace backwards in time along any trajectory of events leading to an accident, you will almost always find some act that someone did that contributed to the accident, or an act that they failed to do that might have prevented it. This act, or failure to act, is typically labeled a human error, and it is typically credited as the CAUSE of the accident.

Note that the behavior is only noticed for consideration in hindsight (if there is an accident); otherwise it is typically just unremarkable work behavior.

However, many people (e.g., Dekker, Hollnagel, Rasmussen, Woods) now understand that this explanation trivializes the complexities of work and that blaming humans rarely leads to safety improvements. In a previous blog I noted the parallels between stamping out forest fires and stamping out human error. Stamping out forest fires does not necessarily lead to healthy forests; and stamping out human error does not necessarily lead to safer systems. And in fact, such approaches may actually set the conditions for catastrophic accidents (due to fuel building up in forests, and due to failure to disclose near misses and to learn from experience in complex systems).

While I fully appreciate this intellectually, I had a recent experience on a trip to France that reminded me how powerful the illusion of Human Error can be.

Shortly after my arrival at Charles de Gaulle Airport, as I navigated the trains into Paris, my wallet was stolen. It had all my cash and all my credit cards. I was penniless in a country where I didn't know the language. It was quite an experience. The important thing relative to this post was my powerful feeling that I was at fault. Why wasn't I more careful? Why didn't I move my wallet from my back pocket, where I normally carry it (and where it is most comfortable), to my front pocket (as I normally do when I am in dangerous areas)? Why did I have all my money and credit cards in the same place? What a fool I am! It's all my fault!

The illusion is powerful! I guess this reflects a need to believe that I am in control. I know intellectually that this is an illusion. I know that life is a delicate balancing act where a small perturbation can knock us off our feet. I know that when things work, it is not a simple function of my control actions, but the result of an extensive network of social and cultural supports. And I should know that when things don't work, it is typically the result of a cascade of small perturbations in this network of support (e.g., the loss of a nail).

The human error illusion is the flip side of the illusion that we are in control. It is an illusion that trivializes complexity - minimizing the risks of failure and exaggerating the power of control.

Fortunately, I got by with a lot of help from my friends, and my trip to France was not ruined by this event. It turned out to be a great trip and a valuable learning experience.


This is the sixth and final post in a series of blogs examining some of the implications of a CSE perspective on sociotechnical systems. The table below summarizes some of the ways that CSE has expanded our vision of humans. In this blog the focus will be on design implications.

One of the well known mantras of the Human Factors profession has been:

Know thy user.

This has typically meant that a fundamental role for human factors has been to make sure that system designers are aware of computational limitations (e.g., perceptual thresholds, working memory capacity, potential violations of classical logic due to reliance on heuristics) and expectations (e.g., population stereotypes, mental models) that bound human performance.

It is important to note that these limitations have generally been validated with a wealth of scientific research. Thus, it is important that these limitations be considered by designers. It is important to design information systems so that relevant information is perceptually salient, so that working memory is not over-taxed, and so that expectations and population stereotypes are not violated.

The emphasis on the bounds of human rationality, however, tends to put human factors at the back of the innovation parade. While others are touting the opportunities of emerging technologies, HF is apologizing for the weaknesses of the humans. This feeds into a narrative in which automation becomes the 'hero' and humans are pushed into the background as the weakest link - a source of error and an obstacle to innovation. From the perspective of the technologists - the world would be so much better if we could simply engineer the humans out of the system (e.g., get human drivers off the road in order to increase highway safety).

But of course, we know that this is a false narrative. Bounded rationality is not unique to humans - all technical systems are bounded (e.g., by the assumptions of their designers or in the case of neural nets by the bounds of their training/experience). It is important to understand that the bounds of rationality are a function of the complexity or requisite variety of nature. It is the high dimensionality and interconnectedness of the natural world that creates the bounds on any information processing system (human or robot/automaton) that is challenged to cope in this natural world. In nature there are always potential sources of information that will be beyond the limits of any computational system.
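
Ashby's law of requisite variety makes this point quantitative: a regulator can reduce the variety of outcomes only by as much variety as it can itself deploy. A toy calculation in Python (the counts are arbitrary):

```python
import math

# Ashby's law of requisite variety (toy illustration, arbitrary counts):
# in its entropy form, H(outcome) >= H(disturbance) - H(regulator), so
# the best a regulator can do is cut outcome variety by its own variety.
disturbance_states = 64   # variety the environment can throw at us
regulator_states = 8      # distinct responses the controller can make

min_outcome_states = disturbance_states / regulator_states
print(f"H(disturbance) = {math.log2(disturbance_states):.0f} bits")
print(f"H(regulator)   = {math.log2(regulator_states):.0f} bits")
print(f"H(outcome)    >= {math.log2(min_outcome_states):.0f} bits")
# No controller -- human or automated -- with only 3 bits of response
# variety can hold outcomes here below 3 bits of residual variety.
```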

The implication for designing sociotechnical systems is that designers need to take advantage of whatever resources are available to cope with this requisite variety of nature. For CSE, the creative problem-solving abilities of humans and human social systems are among the resources that designers should be leveraging. Thus, the muddling of humans (i.e., incrementalism) described by Lindblom is NOT considered to be a weakness, but rather a strength.

Most critics of incrementalism believe that doing better usually means turning away from incrementalism. Incrementalists believe that for complex problem solving it usually means practicing incrementalism more skillfully and turning away from it only rarely. (C.E. Lindblom, 1979, p. 517)

Thus, for designing systems for coping with complex natural problems (e.g., healthcare, economics, security) it is important to appreciate the information limitations of all systems involved (human and otherwise). However, this is not enough. It is also important to consider the capabilities of all systems involved. One of these capabilities is the creative, problem solving capacity of smart humans and human social systems. A goal for design needs to be to support this creative capacity by helping humans to tune into the deep structure of natural problems so that they can skillfully muddle through with the potential of discovering smart solutions to problems that even the designers could not have anticipated.

In order to 'know thy user' it is not sufficient to simply catalog all the limitations. Knowing thy users also entails appreciating the capabilities that users offer with respect to coping with the complexities of nature.

This often involves constructing interface representations that shape human mental models or expectations toward smarter, more productive ways of thinking. In other words, the goal of interface design is to provide insights into the deep structure or meaningful dimensions of a problem, so that humans can learn from mistakes and eventually discover clever strategies for coping with the unavoidable complexities of the natural world.

Lindblom, C.E. (1979). Still muddling, not yet through. Public Administration Review, 39(6), 517-526.

“An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself.” — Herbert Simon

In some respects, it is debatable whether psychology completely escaped the strictures of a Behaviorist perspective with the dawning of the information processing age. Although the computer and information processing metaphors legitimized the study of mental phenomena, these phenomena tended to be visualized as activities or mental behaviors (e.g., encoding, remembering, planning, deciding, etc.). Thus, for human factors, cognitive task analysis tended to focus on specifying the mental activities associated with work.

However, as Simon's parable of the ant suggests, this might lead to the appearance or inference that the cognition is very complex, when in reality the ultimate source of that complexity may lie not in the cognitive system but in the work situations that provide the context for the activity (i.e., the beach). Thus, Simon's parable is the motivation for work analysis as a necessary complement to task analysis. The focus of work analysis is to describe the functional constraints (e.g., goals, affordances, regulatory constraints, social constraints, etc.) that shape the physical and mental activities of workers - to describe work situations as a necessary context for evaluating both awareness (rationality) and behavior.
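
The parable is easy to simulate. In the toy sketch below (the grid size and obstacle counts are arbitrary), the agent's rule never changes - step toward the goal, sidestep when blocked - yet the complexity of its recorded path tracks the clutter of the 'beach':

```python
import random

# Simon's ant as a toy simulation: the agent's rule is trivial, yet
# its path looks complex in proportion to the clutter of the "beach".
random.seed(0)

def walk(n_obstacles: int, width: int = 30) -> str:
    obstacles = {(random.randrange(width), random.randrange(width))
                 for _ in range(n_obstacles)}
    x, y, path = 0, 0, []
    while (x, y) != (width - 1, width - 1):
        nxt = (x + 1, y) if x < width - 1 else (x, y + 1)
        if nxt in obstacles:   # blocked: sidestep (or push through at the edge)
            nxt = (x, y + 1) if y < width - 1 else (x + 1, y)
        path.append("R" if nxt[0] > x else "U")
        x, y = nxt
    return "".join(path)

print(walk(n_obstacles=0))     # a clean beach: RRRR...UUUU (simple)
print(walk(n_obstacles=250))   # a cluttered beach: a tangled R/U mix
```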

While classical task analysis describes what people do, it provides little insight into why they do something. This is the primary value of work analysis, to provide insight into the functional context shaping work behaviors.

A common experience upon watching activities in an unfamiliar work domain is puzzlement at activities that seem somewhat irrational. Why did they do that? However, as one becomes more familiar with the domain, one often discovers that these puzzling activities are actually smart responses to constraints that the experienced workers were attuned to - but that were invisible to an outsider.

Thus, for those of us who see humans as incredibly adaptive systems - it is natural to look to the ecology that the humans are adapting to as the first source for hypotheses to explain that behavior. And for those of us who hope to design information technologies that enhance that adaptive capacity - it is critical that the tools not simply be tuned to human capabilities, but that these tools are also tuned to the demands of the work ecology. For example, a shovel must not only fit the human hands, but it must also fit well with the materials that are to be manipulated.

Thus, work analysis is essential for Cognitive Systems Engineering. It reflects the belief that understanding situations is a prerequisite for understanding awareness.

This is the fourth in a series of entries to explore the differences between an Information Processing Approach to Human Factors and a Meaning Processing Approach to Cognitive Systems Engineering. The table below lists some contrasts between these two perspectives. This entry will focus on the third contrast in the table - the shift from a focus on 'workload' to a focus on 'situation awareness.'

The concept of information developed in this theory at first seems disappointing and bizarre - disappointing because it has nothing to do with meaning, and bizarre because it deals not with a single message but rather with the statistical character of a whole ensemble of messages, bizarre also because in these statistical terms the two words information and uncertainty find themselves to be partners. Warren Weaver (1963, p. 27)

The construct of 'workload' is a natural focus for an approach that emphasizes describing and quantifying internal constraints of the human and that assumes that these constraints are independent of the particulars of any specific situation or work context. This fits well with the engineering perspective on quantifying information and for specifying the capacity of fixed information channels as developed by Shannon.  However, the downside of this perspective is that in making the construct of workload independent of 'context,' it thus becomes independent of 'meaning' as suggested in the above quote from Warren Weaver.

Those interested in the impact of context on human cognition became dissatisfied with a framework that focused only on internal constraints (e.g., bandwidth, resources, modality) without consideration for how those constraints interacted with situations. Thus, the construct of Situation Awareness (SA) evolved as an alternative to workload. Unfortunately, many who have been steeped in the information processing tradition have framed SA in terms of internal constraints (e.g., treating levels of SA as components internal to the processing system).

However, others have taken the construct of SA as an opportunity to consider the dynamic couplings of humans and work domains (or situations).  For them, the construct of SA reflects a need to 'situate' cognition within a work ecology and to consider how constraints in that ecology create demands and opportunities for cognitive systems. In this framework, it is assumed that cognitive systems can intelligently adapt to the constraints of situations - utilizing structure in situations to 'chunk' information and as the basis for smart heuristics that reduce the computational burden, allowing people to deal effectively with situations that would overwhelm the channel capacity of a system not tuned to these structural constraints (see aiming off example).
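
For readers unfamiliar with it, 'aiming off' is the orienteering heuristic of deliberately aiming to one known side of a target that sits on a linear feature (a river, a road), so that on arrival you always know which way to turn. A small simulation sketch in Python (the error model is illustrative):

```python
import random

def reaches_target(aim_offset_m: float, heading_error_m: float) -> bool:
    """One trial of navigating toward a point target on a river (a
    linear feature).  The walker arrives somewhere on the river,
    displaced from the aim point by a random error.  Aiming off by
    more than the worst-case error guarantees the target is always
    in one known direction.  The error model is illustrative."""
    arrival = aim_offset_m + random.uniform(-heading_error_m, heading_error_m)
    if aim_offset_m == 0:
        # Aimed straight at the target: on arrival the navigator does
        # not know which way along the river the target lies -- guess.
        search_direction = random.choice([-1, 1])
    else:
        search_direction = -1 if aim_offset_m > 0 else 1
    # Success if the target (at 0) lies in the chosen search direction.
    return (arrival * search_direction) <= 0

random.seed(1)
for offset in (0.0, 200.0):    # aim straight vs. aim 200 m upstream
    hits = sum(reaches_target(offset, heading_error_m=100.0)
               for _ in range(10_000))
    print(f"aim offset {offset:>5.0f} m: found target {hits / 100:.0f}% of trials")
```

The structure of the situation (the river is linear, and the offset side is known) absorbs the navigator's noisy heading - no extra channel capacity required.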

There is no question that humans have limited working memory capacity as suggested by the workload construct. However, CSE recognizes the ability of people to discover and use situated constraints (e.g., patterns) in ways that allow them to do complex work (e.g., play chess, pilot aircraft, drive in dense traffic, play a musical instrument) despite these internal constraints. It is this capacity to attune to structure associated with specific work domains that leads to expert performance.

The design implication of an approach that focuses on workload is to protect the system against human limitations (e.g., bottlenecks) by either distributing the work among multiple people or by replacing humans with automated systems with higher bandwidth. The key is to make sure that people are not overwhelmed by too much data!

The design implication of an approach that focuses on SA is to make the meaningful work domain constraints salient in order to facilitate attunement processes. This can be done through the design of interfaces or through training. The result is to heighten human engagement with the domain structure to facilitate skill and expertise. The key is to make sure that people are well-tuned to the meaningful aspects of work (e.g., constraints and patterns) that allow them to 'see' what needs to be done.