
Symbols help us make tangible that which is intangible. And the only reason symbols have meaning is because we infuse them with meaning. That meaning lives in our minds, not in the item itself. Only when the purpose, cause or belief is clear can the symbol command great power. (Sinek, 2009, p. 160)

As this quote from Sinek suggests, symbols (e.g., alphabets, flags, icons) are created by humans. Thus, the 'meaning' of the symbols will typically reflect the intentions or purposes motivating their creation. For example, as a symbol, a country's flag might represent the abstract principles on which the country is founded (e.g., liberty and freedom for all). However, it would be a mistake to conclude from this (as many cognitive scientists have) that all 'meaning' lives in our minds. While symbols may be a creation of humans, meaning is NOT.

Let me state this again for emphasis:

Meaning is NOT a product of mind!

As the triadic model of a semiotic system illustrated in the figure below emphasizes, meaning emerges from the functional coupling between agents and situations. Further, as Rasmussen (1986) emphasized, this coupling involves not only symbols, but also signs and signals.

Signs (as used by Rasmussen) are different from 'symbols' in that they are grounded in social conventions. So, the choice of a color to represent safe or dangerous, or of an icon to represent 'save' or 'delete,' has its origins in the head of a designer. At some point, someone chose 'red' to represent 'danger,' or chose a 'floppy disk' image to represent 'save.' However, over time this 'choice' of the designer can become established as a social convention. At that point, the meaning of the color or the icon is no longer arbitrary. It is no longer in the head of the individual observer. It has a grounding in the social world - it is established as a social convention or as a cultural expectation. People outside the culture may not pick up the correct meaning, but the meaning is not arbitrary.

Rasmussen used the term sign to differentiate this role in a semiotic system from that of 'symbols,' whose meaning is open to interpretation by an observer. The meaning of a sign is not in the head of an observer; for a sign the meaning has been established by a priori rules (social or cultural conventions).

for a sign the meaning has been established by a priori rules (social or cultural conventions)

Signals (as used by Rasmussen) are different from both 'symbols' and 'signs' in that they are directly grounded in the perception-action coupling with the world. So, the information bases for braking your automobile to avoid a potential collision, or for catching a fly ball, or for piloting an aircraft to a safe touchdown on a runway are NOT in our minds! For example, structures in optical flow fields (e.g., angle, angular rate, tau, horizon ratio) provide the state information that allows people to skillfully move through the environment. The optical flow field and the objects and events specified by its invariant structures are NOT in the mind of the observer. These relations are available to all animals with eyes and can be leveraged in automatic control systems with optical sensors. These signals are every bit as meaningful as any symbol or sign, yet they are not human inventions. Humans and other animals can discover the meanings of these relations through interaction with the world, and they can utilize these meanings to achieve satisfying interactions with the world (e.g., avoiding collisions, catching balls, landing aircraft), but the human does not 'create' the meaning in these cases.

for a signal the meaning emerges naturally from the coupling of perception and action in a triadic semiotic system. It is not an invention of the mind, but it can be discovered by a mind.
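To make the idea of a signal concrete, here is a minimal sketch of my own (not taken from Rasmussen or Gibson) of one of the optical variables mentioned above: tau, the ratio of an object's optical angle to its rate of expansion, which under a constant closing velocity specifies time-to-contact without any knowledge of distance or speed. The numbers are purely hypothetical.

```python
def time_to_contact(theta, theta_dot):
    """Lee's tau: the optical angle subtended by an object (radians) divided by
    its rate of expansion (radians/second). Under a constant closing velocity
    this ratio specifies the time remaining until contact."""
    return theta / theta_dot

# Hypothetical values: an object subtending 0.05 rad and expanding at 0.01 rad/s
# specifies roughly 5 seconds until contact.
print(f"estimated time to contact: {time_to_contact(0.05, 0.01):.1f} s")
```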

In the field of cognitive science, debates have often been cast in terms of whether humans are 'symbol processors,' such that meaning is constructed through mental computations, or whether humans are capable of 'direct perception,' such that meaning is 'picked up' through interaction with the ecology. One side places meaning exclusively in the mind, ignoring or at least minimizing the role of structure in the ecology. The other side places meaning in the ecology, minimizing the creative computational powers of mind.

This framing of the question in either/or terms has proven to be an obstacle to progress in cognitive science. Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.



Early American Functionalist Psychologists, such as William James and John Dewey, viewed cognition through a Pragmatic lens. Thus, for them cognition involved making sense of the world in terms of its functional significance: What can be done? What will the consequences be? More recently, James Gibson (1979) introduced the word “Affordance” to reflect this functionalist perspective, where the term affordance describes an object relative to the possible actions that can be performed on or with it and the possible consequences of those actions. Don Norman (1988) introduced the concept of affordance to designers, who have found it to be a useful concept for thinking about how a design is experienced by people.

Formalizing Functional Structure

This Functionalist view of the world has been formalized by the philosopher William Wimsatt (1972), in terms of seven dimensions or attributes for characterizing any object, and by the cognitive systems engineer Jens Rasmussen (1986), in terms of an Abstraction-Decomposition Space. Figure 1 illustrates some of the parallels between these two methods for characterizing the functional properties of an object. The vertical dimension of the Abstraction-Decomposition Space reflects five levels of abstraction that are coupled in terms of a nesting of means-ends constraints. The top level, Functional Purpose, specifies the value constraints on the functional system – what is the ultimate value that is achievable, or what is the intended goal or purpose? As you move to lower levels in this hierarchy, the focus is successively narrowed down to the specific, physical properties of objects at the lowest level of abstraction, Physical Form.

Figure 1. An illustration of how Wimsatt’s functional attributes map into Rasmussen’s Abstraction-Decomposition Space.

An important inspiration for creating the Abstraction-Decomposition Space was Rasmussen’s observations of the reasoning processes of people doing trouble-shooting or fault diagnosis. He observed that the reasoning tended to move along the diagonal in this space. People tended to consider holistic properties of a system at high levels of abstraction (e.g., the primary function of an electronic device) in order to make sense of relations at lower levels of abstraction (e.g., the arrangements of parts). In essence, higher levels of abstraction tended to provide the context for understanding WHY the parts were configured in a certain way. People tended to consider lower levels of abstraction to understand how the arrangements of parts served the higher-level purposes. In essence, lower levels of abstraction provided clues to HOW a particular function would be achieved.

Rasmussen found that in the process of trouble-shooting an electronic system, the reasoning tended to move up and down the diagonal of the Abstraction-Decomposition Space. Moving up in abstraction tended to broaden the perspective and to suggest dimensions for selecting properties at lower levels. In essence, the higher level was a kind of filter that determined significance at the lower levels. This filter effectively guided attention and determined how to chunk information and which attributes should be salient at the lower levels. Thus, in the process of diagnosing a fault, experts tended to shift attention across different levels of abstraction until eventually zeroing in on the specific location of a fault (e.g., finding the break in the circuit or the failed part).

Wimsatt’s formalism for characterizing an object or item in functional terms is summarized in the following statement:

According to theory T, a function of item i, in producing behaviour B, in system S in environment E relative to purpose P is to bring about consequence C.

Figure 1 suggests how Wimsatt’s seven functional attributes of an object might fit within Rasmussen’s Abstraction-Decomposition Space. The object or item (i) as a physical entity corresponds with the lowest level of abstraction and the most specific level of decomposition. The purpose (P) corresponds with the highest level of abstraction at a more global level of decomposition. The Theory (T) and System (S) attributes introduce additional constraints for making sense of the relation between the object and the purpose. Theory (T) provides the kind of holonomic constraints (e.g., physical laws) that Rasmussen considered at the Abstract Function level. These constraints set limits on how a purpose might be achieved (e.g., the laws of aerodynamics set constraints on how airplanes or wings can serve the purpose of safe travel). The System (S) attributes provide the kind of organizational constraints that Rasmussen considered at the General Function level. These constraints describe the object’s role in relation to other parts of a system in order to serve the higher-level Purpose (P) (e.g., a general function of the wing is to generate lift). The Behavior (B) attribute fits with Rasmussen’s Physical Function level, which describes the physical constraints relative to the object’s role as a part of the organization (e.g., here the distinction between fixed and rotary wings comes into play). The Environment (E) attribute crosses levels of abstraction as a way of providing the ‘context of use’ for the object. Finally, the Consequence (C) attribute provides the specific effect that the object produces relative to achieving the purpose (e.g., the specific lift coefficient for a wing of a certain size and shape).
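For readers who think in terms of data structures, the sketch below is one way (my own illustration, and only one of many possible encodings) to represent the mapping suggested in Figure 1, using the wing example from the text. The level and attribute labels follow Rasmussen and Wimsatt; the field values are invented for illustration, not taken from the figure.

```python
# One possible encoding of the Figure 1 mapping: each of Rasmussen's levels of
# abstraction holds the Wimsatt attributes that the text associates with it.
wing_description = {
    "Functional Purpose": {"Purpose (P)": "safe, efficient travel"},
    "Abstract Function":  {"Theory (T)": "laws of aerodynamics (lift, drag)"},
    "General Function":   {"System (S)": "generate lift as part of the airframe"},
    "Physical Function":  {"Behavior (B)": "fixed wing producing lift at cruise speed"},
    "Physical Form":      {"Item (i)": "a wing of a specific size and shape",
                           "Consequence (C)": "the lift coefficient for that geometry"},
}
# The Environment (E) attribute cuts across the levels as the context of use.
environment = "commercial flight operations"

for level, attributes in wing_description.items():
    for attribute, value in attributes.items():
        print(f"{level:18} | {attribute:16} | {value}")
```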

While the details of the mapping in Figure 1 might be debated, there seems to be little doubt that the formalisms suggested by Wimsatt and Rasmussen reflect very similar intuitions: making sense of the world is rooted in a functionalist perspective in which ‘meaning’ is grounded in a network of means-ends relations that associates objects with the higher-level purposes and values they might serve. This connection between ‘meaning’ and higher levels of abstraction has also been recognized by S.I. Hayakawa in his formalism of the Abstraction Ladder.

Hayakawa used the case of Bessie the Cow to illustrate how higher levels of abstraction provide a broader context for understanding the meaning of a specific object in relation to progressively broader systems of associations (See Figure 2).

Figure 2. An illustration of how Hayakawa’s Abstraction Ladder maps into Rasmussen’s Abstraction-Decomposition Space.

Figure 2 illustrates how the distinctions that Hayakawa introduced with his Abstraction Ladder might map to Rasmussen’s Abstraction-Decomposition Space. It has been noted by Hayakawa and others that building engaging narratives involves moving up and down the Abstraction Ladder (or, equivalently, moving along the diagonal in the Abstraction-Decomposition Space). This is consistent with Rasmussen’s observations about trouble-shooting. Thus, the common intuition is that the process of sensemaking is intimately associated with unpacking the different layers of relations between an object and the larger functional networks or contexts in which it is nested.

The Nature of Expertise

The parallels between the expert trouble-shooting and fault-diagnosis behaviors observed by Rasmussen and observations about the implications of Hayakawa’s Abstraction Ladder for constructing interesting narratives might help to explain why case-based learning (Bransford, Brown & Cocking, 2000) is particularly effective for communicating expertise and why narrative approaches to knowledge elicitation (e.g., Klein, 2003; Kurtz & Snowden, 2003) are so effective for uncovering expertise. Even more significantly, perhaps the ‘intuitions’ or ‘gut feel’ of experts reflects a higher degree of attunement with constraints at higher levels of abstraction. That is, while journeymen may know what to do and how to do it, they may not have the deeper understanding of why one way is better than another (e.g., Sinek, 2009) that differentiates the true experts in a field. In other words, the ‘gut feel’ might reflect the ability of experts to appreciate the coupling of objects and actions with ultimate values and higher-level purposes. Further, this link to value and purpose may have an important emotional component (e.g., Damasio, 1999). This suggests that expertise is not simply a function of knowing more; it may also require caring more.

Conclusions

As Weick (1995) noted, an important aspect of sensemaking is what Schön (1983) called problem setting. Weick wrote:

When we set the problem, we select what we will treat as “things” of the situation, we set the boundaries of our attention to it, and we impose upon it a coherence which allows us to say what is wrong and in what directions the situation needs to be changed. Problem setting is a process in which, interactively, we name the things to which we will attend and frame the context in which we will attend to them (Weick, 1995, p. 9).

The fundamental point is that the construct of function as reflected in the formalisms of Wimsatt, Rasmussen, and Hayakawa may provide important clues into the nature of how people set the problem as part of a sensemaking process. In particular, the diagonal in Rasmussen’s Abstraction-Decomposition space may provide clues for how people parse the details of complex situations using filters at different layers of abstraction to ultimately make sense relative to higher functional values and purposes.

Thus, here are some important implications:

  • A functionalist perspective provides important insights into the sensemaking process.
  • This is the common intuition underlying the formalisms introduced by Gibson, Wimsatt, Rasmussen, and Hayakawa.
  • Sensemaking involves navigating across levels of abstraction and levels of detail to identify functional or means-ends relations within a larger network of associations between objects (parts) and contexts (wholes).
  • Links between higher levels of abstraction (values, purposes) and lower levels of abstraction (general functions, components and behaviors) may reflect the significance of couplings between emotions, knowledge, and skill.
  • The various formalisms described here provide important frameworks for understanding any sensemaking process (e.g., fault diagnosis, storytelling, or intel analysis) and have important implications for both eliciting knowledge from experts and for representing information to facilitate the development of expertise through training and interface design. 

Key Sources

  1. Bransford, J.D., Brown, A.L. & Cocking, R. (2000). How People Learn. Washington, DC: National Academy Press.
  2. Damasio, A. (1999). The Feeling of What Happens: Body and emotion in the making of consciousness. Orlando, FL: Harcourt.
  3. Flach, J.M. & Voorhorst, F.A. (2016). What Matters: Putting common sense to work. Dayton, OH: Wright State Library.
  4. Gibson, J.J. (1979). The Ecological Approach to Visual Perception. New York: Houghton Mifflin.
  5. Hayakawa, S.I. (1990). Language in Thought and Action (5th ed.). New York: Houghton Mifflin Harcourt.
  6. Klein, G. (2003). Intuition at Work. New York: Doubleday.
  7. Kurtz, C.F. & Snowden, D.J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42, 462-483.
  8. Norman, D.A. (1988). The Psychology of Everyday Things. New York: Basic Books.
  9. Rasmussen, J. (1986). Information Processing and Human-Machine Interaction. New York: North-Holland.
  10. Schön, D.A. (1983). The Reflective Practitioner. New York: Basic Books.
  11. Sinek, S. (2009). Start with Why: How great leaders inspire everyone to take action. New York: Penguin.
  12. Weick, K.E. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage.
  13. Wimsatt, W.C. (1972). Teleology and the logical structure of function statements. Studies in History and Philosophy of Science, 3(1), 1-80.

 


The CVDi display for evaluating heart health has been updated. The new version includes an option for SI units.  Also, some of the interaction dynamics have been updated. This is still a work in progress, so we welcome feedback and suggestions for how to improve and expand this interface.

https://mile-two.gitlab.io/CVDI/

 

 


Introduction

It has long been recognized that in complex work domains such as management and healthcare, the decision-making behavior of experts often deviates from the prescriptions of analytic or normative logic.  The observed behaviors have been characterized as intuitive, muddling through, fuzzy, heuristic, situated, or recognition-primed. While there is broad consensus on what people typically do when faced with complex problems, the interesting debate, relative to training decision-making or facilitating the development of expertise, is not about what people do, but rather about what people ought to do.

On the one hand, many have suggested that training should focus on increasing conformity with the normative prescriptions.  Thus, the training should be designed to alert people to the generic biases that have been identified (e.g., representativeness heuristic, availability heuristic, overconfidence, confirmatory bias, illusory correlation), to warn people about the potential dangers (i.e., errors) associated with these biases, and to increase knowledge and appreciation of the analytical norms. In short, the focus of training clinical decision making should be on reducing (opportunities for) errors in the form of deviations from logical rationality.

On the other hand, we (and others) have suggested that the heuristics and intuitions of experts actually reflect smart adaptations to the complexities of specific work domains. This reflects the view that heuristics take advantage of domain constraints leading to efficient ways to manage the complexities of complex (ill-structured) problems, such as those in healthcare. As Eva & Norman [2005] suggest, “successful heuristics should be embraced rather than overcome” (p. 871). Thus, to support clinical decision making, training should not focus on circumventing the use of heuristics but should focus on increasing the perspicacity of heuristic decision making, that is, on tuning the (recognition) processes that underlie the adaptive selection and use of heuristics in the domain of interest.

Common versus Worst Things in the ED

In his field study of decision-making in the ED, Feufel [2009] observed that the choices of physicians were shaped by two heuristics: 1) Common things are common; and 2) Worst case. Figure 1 illustrates these two heuristics as two loops in an adaptive control system. The Common Thing heuristic aligns well with classical Bayesian norms for evaluating the state of the world. It suggests that the hypotheses guiding treatment should reflect a judgment about what is most likely based on the prior odds and the current observations (i.e., what is most common given the symptoms). Note that this heuristic biases physicians toward a ‘confirmatory’ search process, as their observations are guided by beliefs about what might be the common thing. Thus, tests and interventions tend to be directed toward confirming and treating the common thing.

Figure 1. The decision-making process illustrated as an adaptive control system guided by two complementary heuristics: Common Thing and Worst Thing.

The Worst Case heuristic shifts the focus from ‘likelihood’ to the potential consequences associated with different conditions.  Goldberg, Kuhn, Andrew and Thomas [2002] begin their article on “Coping with Medical Mistakes” with the following example:

 “While moonlighting in an emergency room, a resident physician evaluated a 35-year-old woman who was 6 months pregnant and complaining of a headache. The physician diagnosed a ‘mixed-tension sinus headache.’ The patient returned to the ER 3 days later with an intracerebral bleed, presumably related to eclampsia, and died (p. 289)”

This illustrates an ED physician’s worst nightmare – that a condition that ultimately leads to serious harm to a patient will be overlooked. The Worst Case heuristic is designed to help guard against this type of error. While considering the common thing, ED physicians are also trained to simultaneously be alert to and rule out potential conditions that might lead to serious consequences (i.e., worst cases). Note that the Worst Case heuristic biases physicians toward a disconfirming search strategy as they attempt to rule out a possible worst thing – often while simultaneously treating the more likely common thing. While either heuristic alone reflects a bounded rationality, the coupling of the two as illustrated in Figure 1 tends to result in a rationality that can be very well tuned to the demands of emergency medicine.
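The sketch below is a minimal, hypothetical illustration of how the two heuristics might operate together on a single presenting complaint. The conditions, priors, likelihoods, severities, and threshold are all invented for illustration; they are not clinical values and are not taken from Feufel's study.

```python
# Hypothetical conditions for a presenting complaint of 'headache'.
priors = {"tension headache": 0.90, "intracranial bleed": 0.01, "migraine": 0.09}
likelihood_given_symptoms = {"tension headache": 0.8, "intracranial bleed": 0.7, "migraine": 0.9}
severity = {"tension headache": 1, "intracranial bleed": 100, "migraine": 2}

# Common Thing heuristic: rank conditions by posterior probability (Bayes' rule).
posterior = {c: priors[c] * likelihood_given_symptoms[c] for c in priors}
total = sum(posterior.values())
posterior = {c: p / total for c, p in posterior.items()}
common_thing = max(posterior, key=posterior.get)

# Worst Case heuristic: flag any condition whose expected harm (posterior x severity)
# is high enough that it should be actively ruled out rather than dismissed.
RULE_OUT_THRESHOLD = 0.5  # arbitrary, for illustration only
worst_cases = [c for c in posterior
               if c != common_thing and posterior[c] * severity[c] > RULE_OUT_THRESHOLD]

print("treat as the common thing:", common_thing)
print("rule out before discharge:", worst_cases)
```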

Ill-defined Problems

In contrast to the logical puzzles that have typically been used in laboratory research on human decision-making, the problems faced by ED physicians are ‘ill-defined’ or ‘messy.’ Lopes [1982] suggested that the normative logic (e.g., deductive and inductive logic) that works for comparatively simple logical puzzles will not work for the kinds of ill-defined problems faced by ED physicians. She suggests that ill-defined problems are essentially problems of pulling out the ‘signal’ (i.e., the patient’s actual condition) from a noisy background (i.e., all the potential conditions that a patient might have). Thus, the theory of signal detection (or observer theory) illustrated in Figures 2 & 3 provides a more appropriate context for evaluating performance.

Figure 2. The logic of signal detection theory is used to illustrate the challenge of discriminating a worst case from a common thing.

Figure 2 uses a signal detection metaphor to illustrate the potential ambiguities associated with discriminating the Worst Cases from the Common Things in the form of two overlapping distributions of signals. The degree of overlap between the distributions represents the potential similarity between the symptoms associated with the alternatives. The more overlap, the harder it will be to discriminate between potential conditions. The key parameter with respect to clinical judgment is the line labeled Decision Criterion. The placement of this line reflects the criterion that is used to decide whether to focus treatment on the common thing (moving the criterion to the right to reduce false alarms) or the worst thing (moving the criterion to the left to reduce misses). Note that there is no possibility for perfect (i.e., error-free) performance. Rather, the decision criterion will determine the trade-off between two types of errors: 1) false alarms – expending resources to rule out the worst case when the patient’s condition is consistent with the common thing; or 2) misses – treating the common thing when the worst case is present.
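The trade-off in Figure 2 can be made concrete with a small sketch. Assuming the textbook signal detection setup of two unit-variance normal distributions separated by a discriminability d' (a modeling assumption of mine, not a claim about actual clinical data), shifting the criterion simply converts misses into false alarms and vice versa.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def error_rates(criterion, d_prime=1.5):
    """Miss and false-alarm rates for two unit-variance normal distributions:
    'common thing' centered at 0, 'worst case' centered at d_prime.
    Evidence above the criterion is treated as a worst case."""
    miss = normal_cdf(criterion - d_prime)     # worst case falls below the criterion
    false_alarm = 1.0 - normal_cdf(criterion)  # common thing falls above the criterion
    return miss, false_alarm

# Shifting the criterion to the right reduces false alarms but increases misses.
for c in (0.25, 0.75, 1.25):
    m, fa = error_rates(c)
    print(f"criterion={c:4.2f}  miss={m:.3f}  false_alarm={fa:.3f}")
```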

In order to address the question of what is the ‘ideal’ or at least ‘satisfactory’ criterion for discriminating between treating the common thing and treating the worst thing, it is necessary to consider the potential values associated with the treatments and potential consequences, as illustrated in the pay-off matrix in Figure 3. Thus, the decision is not simply a function of finding ‘truth.’ Rather, the decision involves a consideration of values: What costs are associated with the tests that would be required to conclusively rule out a worst case? How severe would the health consequences of missing a potential worst case be? Missing some things can have far more drastic consequences than missing other things.

Figure 3. The payoff matrix is used to illustrate the values associated with potential errors (i.e., consequences of misses and false alarms).

The key implication of Figures 2 and 3 is that eliminating all errors is not possible. Given enough time, every ED physician will experience both misses and false alarms. That is, there will be cases where they miss a worst case and other cases where they pursue a worst case only to discover that it was the common thing. While perfect performance (zero error) is an unattainable goal, the number of errors can be reduced by increasing the ability to discriminate between potential patient states (e.g., recognizing the patterns, choosing the tests that are most diagnostic). This would effectively reduce the overlap between the distributions in Figure 2. The long-range or overall consequences of any remaining errors can be reduced by setting the decision criterion to reflect the value trade-offs illustrated in Figure 3. In cases where expensive tests are necessary to conclusively rule out potential worst cases, difficult ethical questions arise that involve weighing the cost of missing a worst case against the expense of additional tests that in many cases will prove unnecessary.
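Continuing the same toy assumptions (and restating the helper so the sketch stands alone), the code below shows how a payoff matrix shifts the criterion that minimizes expected cost. The costs and the prior probability of the worst case are invented for illustration; they are not clinical guidance.

```python
from math import erf, sqrt

normal_cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical payoff structure (illustrative values only):
COST_MISS = 100.0        # harm from missing a worst case
COST_FALSE_ALARM = 5.0   # expensive tests that prove unnecessary
P_WORST = 0.05           # prior probability that the worst case is present
D_PRIME = 1.5            # discriminability of the two conditions

def expected_cost(criterion):
    miss = normal_cdf(criterion - D_PRIME)     # P(evidence below criterion | worst case)
    false_alarm = 1.0 - normal_cdf(criterion)  # P(evidence above criterion | common thing)
    return P_WORST * miss * COST_MISS + (1 - P_WORST) * false_alarm * COST_FALSE_ALARM

# Scan candidate criteria: the minimum-cost setting shifts as the payoffs change.
cost, best_criterion = min((expected_cost(c), c) for c in [i / 20 for i in range(-40, 61)])
print(f"lowest expected cost {cost:.2f} at criterion {best_criterion:.2f}")
```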

Conclusion

The problems faced by ED physicians are better characterized in terms of the theory of signal detection, rather than in terms of more classical models of logic that fail to take into account the perceptual dynamics of selecting and interpreting information. In this context, heuristics that are tuned to the particulars of a domain (such as common things and worst cases) are intelligent adaptations to the situation dynamics (rather than compromises resulting from internal information processing limitations). While each of these heuristics is bounded with respect to rationality, the combination tends to provide a very intelligent response to the situation dynamics of the ED. The quality of this adaptation will ultimately depend on how well these heuristics are tuned to the value system (payoff matrix) for a specific context.

Note that while signal detection theory is typically applied to single, discrete observations, the ED is a dynamic situation, as illustrated in Figure 1, where multiple samples are collected over time. Thus, a more appropriate model is Observer Theory, which extends the logic of signal detection to dynamic situations, where judgment can be adjusted as a function of multiple observations relevant to competing hypotheses [see Flach and Voorhorst, 2016; or Jagacinski & Flach, 2003 for discussions of Observer Theory]. However, the implication is the same - skilled muddling involves weighing evidence in order to pull the 'signal' out from a complex, 'noisy' situation.
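As a rough illustration of the observer idea (a sequential-updating sketch of my own, not the formal treatment in Flach & Voorhorst or Jagacinski & Flach), the code below accumulates a log-likelihood ratio over repeated noisy test results until one hypothesis is well enough supported. All probabilities and thresholds are hypothetical.

```python
import random
from math import log

def observe(worst_case_present, p_positive_if_worst=0.7, p_positive_if_common=0.2):
    """One noisy test result (True = 'positive finding'). Probabilities are invented."""
    p = p_positive_if_worst if worst_case_present else p_positive_if_common
    return random.random() < p

def diagnose(worst_case_present, threshold=2.2, max_tests=20):
    """Update the log-likelihood ratio for 'worst case' vs. 'common thing' after
    each observation; stop when either hypothesis is sufficiently supported."""
    llr = 0.0
    for n in range(1, max_tests + 1):
        if observe(worst_case_present):
            llr += log(0.7 / 0.2)   # positive finding favors the worst case
        else:
            llr += log(0.3 / 0.8)   # negative finding favors the common thing
        if abs(llr) >= threshold:
            return ("worst case" if llr > 0 else "common thing"), n
    return "undecided", max_tests

random.seed(1)
print(diagnose(worst_case_present=False))  # likely converges on 'common thing'
print(diagnose(worst_case_present=True))   # likely converges on 'worst case'
```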

Finally, it is important to appreciate that, with respect to the two heuristics, it is not a case of 'either-or'; rather, it is a 'both-and' proposition. That is, the heuristics are typically operating concurrently - with the physician often treating the common thing while awaiting test results to rule out a possible worst case. The challenge is in allocating resources to the concurrent heuristics while taking into account the associated costs and benefits as reflected in a value system (payoff matrix).


Abduction

Peirce introduced Abduction (or Hypothesis) as an alternative to the classical forms of rationality (induction, deduction). I contend that this alternative is more typical of everyday reasoning or common sense, and, further, that it is a form of rationality that is particularly well suited to both the dynamics of circles and the challenges of complexity. However, my understanding of Abduction may not be representative of how many philosophers or logicians think about it.

In my view, what Peirce was describing is what in more contemporary terms would be called an adaptive control system, as illustrated in the following figure. The figure represents medical treatment/diagnosis as an adaptive control system with two coupled loops.

Figure: Medical treatment/diagnosis as an adaptive control system with two coupled loops.

The Lower or Inner Loop - Assimilation

The lower loop is akin to what Piaget described as assimilation, or what control theorists would describe as a feedback control system. This system begins by treating the patient based on existing schemas (e.g., internal models of typical conditions, or standard procedures). If the consequences of those actions are as expected, then the physician will continue to follow the standard procedures until the 'problem' is resolved. However, if the consequences of following the standard procedures are 'surprising' or 'unexpected' and the standard approaches are not leading to the desired outcomes, then the second loop becomes important.

The Upper or Outer Loop - Accommodation

The upper loop is  akin to what Piaget described as accommodation and this is what makes the loop 'adaptive' from the perspective of control theory. Other terms for this loop from cognitive psychology are 'metacognition' and 'situation awareness.'

The primary function of the upper loop is to monitor the performance of the lower loop for deviations from expectations. Basically, the function is to evaluate whether the hypotheses guiding actions are appropriate to the situation. Are the physician's internal model or expectations consistent with the patient's actual condition? In other words, is the patient's condition consistent with the expectations underlying the standard procedures?

If the answer is no, then the function of the upper loop is to alter the hypotheses or hypothesis set to find one that is a better match to the patient's actual condition. In other words, the function of the upper loop is to come up with an alternative to the standard treatment plan. In Piaget's terms, the function is to alter the internal schema guiding action.

Muddling Through

The dynamic of the abductive system as illustrated here is very much like what Lindblom described as 'muddling through' or 'incrementalism.' In other words, the logic of this system is trial and error. In facing a situation, decisions and actions are typically guided by generalization from past successes in similar situations (i.e., the initial hypothesis or schema, or standard procedure). If the consequences are as expected, then the schema guiding behavior is confirmed, and the experience of the physician is not one of decision making or problem solving; rather, it is "just doing my job."

If the consequences of the initial trials are not as expected, then skepticism is raised with respect to the underlying schemas and alternatives will be considered. The physician experiences this as problem solving or decision making - "What is going on here? What do I try next?" This process is continued iteratively until a schema or hypothesis leads to a satisfying outcome.

This dynamic is also akin to natural selection. In this context the upper loop is the source of variation and the lower loop provides the fitness test. The variations (i.e., hypotheses) that lead to success (i.e., good fits), will be retained and will provide the basis for generalizing to future situations. When the ecology changes, then new variations (e.g., new  hypotheses or schema) may gain a selective advantage.

Lindblom's term 'incrementalism' reflects the intuition that the process of adjusting the hypothesis set should be somewhat conservative. That is, the adjustments to the hypothesis set should typically be small. In other terms, the system will tend to anchor on hypotheses that have led to success in the past. From a control theoretic perspective this would be a very smart strategy for avoiding instability, especially in risky or highly uncertain environments.
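For readers who prefer code to block diagrams, here is a minimal sketch of the two coupled loops described above: an inner assimilation loop that keeps applying the current schema while it works, and an outer accommodation loop that makes a conservative, incremental revision when the outcome is surprising. The schemas and success probabilities are invented purely for illustration.

```python
import random

random.seed(3)
schemas = ["standard treatment A", "standard treatment B", "alternative C"]
success_probability = {"standard treatment A": 0.1,   # the situation has changed,
                       "standard treatment B": 0.6,   # so the usual schema now tends to fail
                       "alternative C": 0.9}

def act(schema):
    """Inner loop: apply the current schema and observe the consequence."""
    return random.random() < success_probability[schema]

def muddle_through(max_trials=10):
    current = schemas[0]  # anchor on what worked in the past
    for trial in range(1, max_trials + 1):
        if act(current):
            print(f"trial {trial}: '{current}' worked -> keep assimilating")
            return current
        # Outer loop (accommodation): the outcome was surprising, so make an
        # incremental adjustment to the hypothesis rather than a radical one.
        index = schemas.index(current)
        current = schemas[min(index + 1, len(schemas) - 1)]
        print(f"trial {trial}: surprise -> revise schema to '{current}'")
    return current

muddle_through()
```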


In his 1985 book Surely You're Joking, Mr. Feynman!, Richard Feynman describes what he calls 'Cargo Cult' Science:

In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas - he's the controller - and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land. (p. 310 - 311)

I often worry that academic psychology is becoming a Cargo Cult Science. Psychologists have mastered the arts of experimental design and statistical inference. They do everything right. The form is perfect. But I don't see many airplanes landing. That is, I see lots of publications of clever paradigmatic experiments, but have difficulty extracting much value from this literature for understanding human experience, particularly in the context of complex work - such as clinical medicine. This vast scientific literature does not seem to generalize in ways that suggest practical ways to improve the quality of human experience.

On the surface, these papers appear to be addressing practical issues associated with cognition (e.g., decision making, trust, teamwork), but when I dig a bit deeper I am often disappointed, finding that these phenomena have been trivialized in ways that make it impossible for me to recognize anything that aligns with my life experiences. Thus, I become quite skeptical that the experiments will generalize in any interesting way to more natural contexts. Often the experiments are clever variations on previous research. The experimental designs provide tight control over variables and minimize confounds. The statistical models are often quite elegant. Yet, ultimately, the questions asked are simply uninteresting, with no obvious implications for practical applications.

Not everyone seems to be caught in this cult. However, those that choose to explore human performance in more natural settings that are more representative of the realities of everyday cognition are often marginalized within the academy and their work is typically dismissed as applied. For all practical purposes, when an academic psychologist says 'applied science' s/he generally means 'not science at all.'

Perhaps, I have simply gotten old and cynical. But I worry that in the pursuit of getting the form of the experiments to be perfect, the academic field of psychology may have lost sight of the phenomenon of human experience.