
In 1978, Rich Jagacinski hired me as a graduate research assistant on a project comparing people's ability to track a target based on kinesthetic feedback with their ability to track the same target based on visual feedback (1). I was totally unprepared at the time, and even after six years of graduate school, I was only beginning to understand the technical aspects of 'control theory.' However, I was eventually able to co-author a text with Rich to help introduce the technical aspects of control theory to other social scientists (2). But this was only the beginning of a long journey to explore the dynamics of complex couplings of humans, technologies, and ecologies.

Eventually, I have come to the conclusion that control theory has absolutely nothing to do with 'control.' Further, the metaphor of the 'steersman' associated with cybernetics is completely misleading as a framework for understanding human performance. This metaphor suggests that humans determine the behavior of an organization, when in fact, many other factors contribute to shaping the ultimate performance of any organization. The myth that humans are 'in control' leads to humans getting too much credit when things work well (the mythical hero leader) and too much blame when things don't work well (the myth that human error is the 'cause' of most accidents).

On the other hand, behaviorism is the myth that human behavior is 'controlled' by the situation (i.e., the stimulus). Even though the transfer function for the human tracker must change as a function of the dynamics of the plant being controlled (as reflected in McRuer's Crossover Model), and even though 'context matters,' it is erroneous to think that the stimulus or the context determines or controls behavior.
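For readers unfamiliar with the Crossover Model, its standard statement is that, near the crossover frequency, the human operator adapts so that the combined open-loop dynamics of the human and the controlled element approximate an integrator with an effective time delay, roughly:

Y_p(j\omega)\, Y_c(j\omega) \approx \frac{\omega_c\, e^{-j\omega\tau_e}}{j\omega}

where Y_p is the human operator's describing function, Y_c is the controlled element (the plant), \omega_c is the crossover frequency, and \tau_e is the effective time delay. The notation follows common usage in the manual control literature. The point is that the human's own transfer function changes with the plant so that the coupled loop keeps this form.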

So if nothing is 'in control,' how can we possibly understand or explain the performance of even simple organizations? If nothing is in control, does anything 'determine' performance? What matters? (3) The answer lies in the dynamics of coupling or of networks. Ultimately, performance reflects the demands of stability. The behaviors that persist are the behaviors that lead to network stability. Ultimately, control theory and dynamical systems theory are NOT about 'control,' but about 'stability.' Further, stability is an emergent or relational property of a network. It cannot be localized in any of the elements of the network. This is why any theory that tries to attribute 'control' or 'cause' to any component will simply be wrong!
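A toy simulation can make this concrete. In the sketch below (Python; the gain, delay, and other numbers are chosen purely for illustration), the same proportional 'agent' tracking a fixed target is stable when coupled to a simple integrating plant with no transport delay, but becomes unstable when a single step of delay is added to the loop:

import numpy as np

def track(gain, delay_steps, n_steps=200, dt=1.0, target=1.0):
    # A simple discrete-time tracking loop: the 'agent' applies a
    # proportional correction to the error, and the 'plant' integrates
    # that command after a transport delay.
    y = np.zeros(n_steps)   # plant output
    u = np.zeros(n_steps)   # agent commands
    for k in range(n_steps - 1):
        u[k] = gain * (target - y[k])
        delayed = u[k - delay_steps] if k >= delay_steps else 0.0
        y[k + 1] = y[k] + dt * delayed
    return y

# Same agent, same gain; only the coupling (the delay) changes.
for delay in (0, 1):
    y = track(gain=1.5, delay_steps=delay)
    verdict = "stable" if np.all(np.abs(y) < 10) else "unstable"
    print(f"delay = {delay} step(s): {verdict}")

Nothing about the agent changed between the two runs. 'Stable' or 'unstable' is not a property of the agent or of the plant alone; it is a property of the coupled loop.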

A necessary step toward understanding performance is to abandon the illusion of 'control' (or causality) and to begin framing questions to explore the properties that contribute to the emergence of stability. It is important to shift our focus from the elements in an organism or organization to include the relations across the elements.

(1). Jagacinski, R.J., Flach, J.M., & Gilson, R.D. (1983). A comparison of visual and kinesthetic tactual displays for compensatory tracking. IEEE Transactions on Systems, Man, and Cybernetics, 13(6), 1103-1112.

(2). Jagacinski, R.J. & Flach, J.M. (2003). Control Theory for Humans: Quantitative approaches to modeling performance. Mahwah, NJ: Erlbaum. ISBN-13: 978-0805822939

(3). Flach, J.M. & Voorhorst, F.A. (2020). A Meaning Processing Approach to Cognition: What Matters? New York: Routledge.

 

I recently listened to a talk by Jamer Hunt on "The Anxious Space between Design and Ethnography." In his talk, he creates a two-dimensional space (four quadrants) reflecting the famous quote from Donald Rumsfeld on "Unknown-Unknowns." He uses this space to illustrate the overlapping territory explored by anthropology and design.

The talk stimulated me to think more generally about relations between basic laboratory research, field research, and design. I was trained in human experimental psychology in a way that emphasized experimental design and controlled laboratory research. This training was motivated by a clear bias that doing controlled experiments was the way to do "real" science. However, I was also involved in the development of the aviation psychology laboratory at Ohio State University and was exposed to applied Engineering Psychology in the tradition of Paul Fitts. And much of my career has been focused on applying cognitive psychology in domains such as aviation, driving, and healthcare.

In light of these experiences, I have come to see most laboratory research as situated in the domain of Known-Knowns. Much of the published experimental literature consists of demonstrations of what we already know. For example, consider all the replications of Fitts' Law, all the variations on visual and memory search, or more recently the replications of 'blind sight' experiments. Laboratory research does sometimes open up insights into and fill gaps in the Known-Unknown territory, but surprise is extremely rare!

In my experience, field research pushes us further into the region of Unknown-Unknowns, increasing the potential for surprise and the potential to learn something new.

And design pushes us still further into the region of Unknown-Unknowns, increasing the potential for surprise and the potential for learning and discovery. Also note that as we move deeper into the region of Unknown-Unknowns, we also move deeper into the region of Unknown-Knowns. This is the region for reflection, meta-analysis, and metaphysics. As we move into the Unknown, it becomes more important to reconsider and reflect on foundational assumptions and to build theory to connect the dots and integrate across empirical experiments.

I have come to the conclusion that a mature science depends on a healthy coupling between laboratory research, field research, and design.  I believe that the ultimate test of any hypothesis or theory is its ability to motivate solutions to practical problems. I believe that paradigm shifts emerge from the coupling of research and design.

Certainly, experimental research serves a valuable function. However, if you are serious about learning and discovery, then it is important to explore beyond the laboratory, to get out of the territory of the known - to test your hypotheses and theories in practice, and to increase the potential for surprise.


Symbols help us make tangible that which is intangible. And the only reason symbols have meaning is because we infuse them with meaning. That meaning lives in our minds, not in the item itself. Only when the purpose, cause or belief is clear can the symbol command great power (Sinek, 2009, p. 160).

As this quote from Sinek suggests, symbols (e.g., alphabets, flags, icons) are created by humans. Thus, the 'meaning' of the symbols will typically reflect the intentions or purposes motivating their creation. For example, as a symbol, a country's flag might represent the abstract principles on which the country is founded (e.g., liberty and freedom for all). However, it would be a mistake to conclude from this (as many cognitive scientists have) that all 'meaning' lives in our minds. While symbols may be a creation of humans - meaning is NOT.

Let me state this again for emphasis:

Meaning is NOT a product of mind!

As the triadic model of a semiotic system illustrated in the figure below emphasizes, meaning emerges from the functional coupling between agents and situations. Further, as Rasmussen (1986) has emphasized, this coupling involves not only symbols, but also signs and signals.

Signs (as used by Rasmussen) are different from 'symbols' in that they are grounded in social conventions. So, the choice of a color to represent safe or dangerous, or of an icon to represent 'save' or 'delete,' has its origins in the head of a designer. At some point, someone chose 'red' to represent 'danger,' or chose a 'floppy disk' image to represent 'save.' However, over time this 'choice' of the designer can become established as a social convention. At that point, the meaning of the color or the icon is no longer arbitrary. It is no longer in the head of the individual observer. It has a grounding in the social world - it is established as a social convention or as a cultural expectation. People outside the culture may not 'pick up' the correct meaning, but the meaning is not arbitrary.

Rasmussen used the term sign to differentiate this role in a semiotic system from that of 'symbols,' whose meaning is open to interpretation by an observer. The meaning of a sign is not in the head of an observer; for a sign, the meaning has been established by a priori rules (social or cultural conventions).


Signals (as used by Rasmussen) are different from both 'symbols' and 'signs' in that they are directly grounded in the perception-action coupling with the world. So, the information bases for braking your automobile to avoid a potential collision, for catching a fly ball, or for piloting an aircraft to a safe touchdown on a runway are NOT in our minds! For example, structures in optical flow fields (e.g., angle, angular rate, tau, horizon ratio) provide the state information that allows people to skillfully move through the environment. The optical flow field and the objects and events specified by its invariant structures are NOT in the mind of the observer. These relations are available to all animals with eyes and can be leveraged in automatic control systems with optical sensors. These signals are every bit as meaningful as any symbol or sign, yet they are not human inventions. Humans and other animals can discover the meanings of these relations through interaction with the world, and they can utilize these meanings to achieve satisfying interactions with the world (e.g., avoiding collisions, catching balls, landing aircraft), but the human does not 'create' the meaning in these cases.

For a signal, the meaning emerges naturally from the coupling of perception and action in a triadic semiotic system. It is not an invention of the mind, but it can be discovered by a mind.
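The variable tau mentioned above offers a concrete illustration. The ratio of the optical angle subtended by an approaching object to its rate of expansion approximates time-to-contact without any independent knowledge of distance or speed. The sketch below (Python, with made-up numbers for a constant-speed approach) illustrates the geometry only; it is not a model of how any particular animal picks up this information.

import numpy as np

# Hypothetical constant-speed approach toward an object (illustrative numbers).
initial_distance = 50.0   # meters
speed = 10.0              # meters per second (closing speed)
size = 2.0                # meters (object width)
dt = 0.1                  # seconds between samples

t = np.arange(0.0, 3.0, dt)
distance = initial_distance - speed * t
theta = 2 * np.arctan(size / (2 * distance))   # optical angle subtended by the object

# Tau: optical angle divided by its rate of expansion approximates time-to-contact,
# with no need to know distance or speed separately.
theta_dot = np.gradient(theta, dt)
tau = theta / theta_dot

i = 20   # the sample at t = 2.0 s
print(f"estimated tau: {tau[i]:.2f} s,  true time-to-contact: {distance[i] / speed:.2f} s")

The estimate tracks the true time-to-contact closely, even though nothing in the computation 'knows' the distance or the speed. The meaningful relation is in the structure of the optical array itself.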

In the field of cognitive science, debates have often been cast in terms of whether humans are 'symbol processors,' such that meaning is constructed through mental computations, or whether humans are capable of 'direct perception,' such that meaning is 'picked up' through interaction with the ecology. One side places meaning exclusively in the mind, ignoring or at least minimizing the role of structure in the ecology. The other side places meaning in the ecology, minimizing the creative computational powers of mind.

This framing of the question in either/or terms has proven to be an obstacle to progress in cognitive science. Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.



Early American Functionalist Psychologists, such as William James and John Dewey, viewed cognition through a Pragmatic lens. Thus, for them cognition involved making sense of the world in terms of its functional significance: What can be done? What will the consequences be? More recently, James Gibson (1979) introduced the word “Affordance” to reflect this functionalist perspective, using the term to describe an object relative to the possible actions that can be performed on or with it and the possible consequences of those actions. Don Norman (1988) introduced the concept of affordance to designers, who have found it to be a useful concept for thinking about how a design is experienced by people.

Formalizing Functional Structure

This Functionalist view of the world has been formalized by the philosopher, William Wimsatt (1972), in terms of seven dimensions or attributes for characterizing any object; and by the cognitive systems engineer, Jens Rasmussen (1986), in terms of an Abstraction-Decomposition Space. Figure 1 illustrates some of the parallels between these two methods for characterizing functional properties of an object. The vertical dimension of the Abstraction-Decomposition Space reflects five levels of abstraction that are coupled in terms of a nesting of means-ends constraints. The top level, Functional Purpose, specifies the value constraints on the functional system – what is the ultimate value that is achievable or what is the intended goal or purpose? As you move to lower levels in this hierarchy the focus is successively narrowed down to the specific, physical properties of objects at the lowest Physical Form level of abstraction.

Figure 1. An illustration of how Wimsatt’s functional attributes map into Rasmussen’s Abstraction-Decomposition Space.

An important inspiration for creating the Abstraction-Decomposition Space was Rasmussen’s observations of the reasoning processes of people doing trouble-shooting or fault diagnosis. He observed that the reasoning tended to move along the diagonal in this space. People tended to consider holistic properties of a system at high levels of abstraction (e.g., the primary function of an electronic device) in order to make sense of relations at lower levels of abstraction (e.g., the arrangements of parts). In essence, higher levels of abstraction tended to provide the context for understanding WHY the parts were configured in a certain way. People tended to consider lower levels of abstraction to understand how the arrangements or parts served the higher-level purposes. In essence, lower levels of abstraction provided clues to HOW a particular function would be achieved.

Rasmussen found that in the process of trouble shooting an electronic system, the reasoning tended to move up and down the diagonal of the Abstraction-Decomposition Space. Moving up in abstraction tended to broaden the perspective and to suggest dimensions for selecting properties at lower levels. In essence, the higher level was a kind of filter that determined significance at the lower levels. This filter effectively guided attention and determined how to chunk information and what attributes should be salient at the lower levels. Thus, in the process of diagnosing a fault, experts tended to shift attention across different levels of abstraction until eventually zeroing-in on the specific location of a fault (e.g., finding the break in the circuit or the failed part).

Wimsatt’s formalism for characterizing an object or item in functional terms is summarized in the following statement:

According to theory T, a function of item i, in producing behaviour B, in system S in environment E relative to purpose P is to bring about consequence C.

Figure 1 suggests how Wimsatt’s seven functional attributes of an object might fit within Rasmussen’s Abstraction-Decomposition Space. The object or item (i) as a physical entity corresponds with the lowest level of abstraction and the most specific level of decomposition. The purpose (P) corresponds with the highest level of abstraction at a more global level of decomposition. The Theory (T) and System (S) attributes introduce additional constraints for making sense of the relation between the object and the purpose. Theory (T) provides the kind of holonomic constraints (e.g., physical laws) that Rasmussen considered at the Abstract Function level. These constraints set limits on how a purpose might be achieved (e.g., the laws of aerodynamics set constraints on how airplanes or wings can serve the purpose of safe travel). The System (S) attributes provide the kind of organizational constraints that Rasmussen considered at the General Function level. These constraints describe the object’s role in relation to other parts of a system in order to serve the higher-level Purpose (P) (e.g., a general function of the wing is to generate lift). The Behavior (B) attribute fits with Rasmussen’s Physical Function level, which describes the physical constraints relative to the object’s role as a part of the organization (e.g., here the distinction between fixed and rotary wings comes into play). The Environment (E) attribute crosses levels of abstraction as a way of providing the ‘context of use’ for the object. Finally, the Consequence (C) attribute provides the specific effect that the object produces relative to achieving the purpose (e.g., the specific lift coefficient for a wing of a certain size and shape).
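For readers who prefer to see the correspondence compactly, the mapping described above can be written as a simple lookup table (a sketch in Python; the placements, particularly of E and C, follow the description of Figure 1 and are open to debate, as noted below):

# Wimsatt's functional attributes mapped onto Rasmussen's levels of abstraction
# (as suggested by Figure 1; the placement of E and C is a judgment call).
wimsatt_to_rasmussen = {
    "Purpose (P)":     "Functional Purpose",
    "Theory (T)":      "Abstract Function",
    "System (S)":      "General Function",
    "Behavior (B)":    "Physical Function",
    "Item (i)":        "Physical Form",
    "Environment (E)": "crosses levels of abstraction (the context of use)",
    "Consequence (C)": "the specific effect produced relative to the purpose",
}

for attribute, level in wimsatt_to_rasmussen.items():
    print(f"{attribute:16} -> {level}")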

While the details of the mapping in Figure 1 might be debated, there seems to be little doubt that the formalisms suggested by Wimsatt and Rasmussen reflect very similar intuitions: the process of making sense of the world is rooted in a functionalist perspective in which ‘meaning’ is grounded in a network of means-ends relations that associates objects with the higher-level purposes and values they might serve. This connection between ‘meaning’ and higher levels of abstraction has also been recognized by S.I. Hayakawa with his formalism of the Abstraction Ladder.

Hayakawa used the case of Bessie the Cow to illustrate how higher levels of abstraction provide a broader context for understanding the meaning of a specific object in relation to progressively broader systems of associations (See Figure 2).

Figure 2. An illustration of how Hayakawa’s Abstraction Ladder maps into Rasmussen’s Abstraction-Decomposition Space.

Figure 2 illustrates how the distinctions that Hayakawa introduced with his Abstraction Ladder might map to Rasmussen’s Abstraction-Decomposition Space. It has been noted by Hayakawa and others that building engaging narratives involves moving up and down the Abstraction Ladder (or equivalently moving along the diagonal in the Abstraction Decomposition Space). This is consistent with Rasmussen’s observations about trouble-shooting. Thus, the common intuition is that the process of sensemaking is intimately associated with unpacking the different layers of relations between an object and the larger functional networks or contexts in which it is nested.

The Nature of Expertise

The parallels between Rasmussen’s observations of expert trouble-shooting and fault diagnosis and the implications of Hayakawa’s Abstraction Ladder for constructing interesting narratives might help to explain why case-based learning (Bransford, Brown & Cocking, 2000) is particularly effective for communicating expertise and why narrative approaches to knowledge elicitation (e.g., Klein, 2003; Kurtz & Snowden, 2003) are so effective for uncovering it. Even more significantly, the ‘intuitions’ or ‘gut feel’ of experts may reflect a higher degree of attunement to constraints at higher levels of abstraction. That is, while journeymen may know what to do and how to do it, they may not have the deeper understanding of why one way is better than another (e.g., Sinek, 2009) that differentiates the true experts in a field. In other words, ‘gut feel’ might reflect the ability of experts to appreciate the coupling of objects and actions with ultimate values and higher-level purposes. Further, this link to value and purpose may have an important emotional component (e.g., Damasio, 1999). This suggests that expertise is not simply a function of knowing more; it may also require caring more.

Conclusions

As Weick (1995) noted, an important aspect of sensemaking is what Schön (1983) called problem setting. Weick wrote:

When we set the problem, we select what we will treat as “things” of the situation, we set the boundaries of our attention to it, and we impose upon it a coherence which allows us to say what is wrong and in what directions the situation needs to be changed. Problem setting is a process in which, interactively, we name the things to which we will attend and frame the context in which we will attend to them (Weick, 1995, p. 9).

The fundamental point is that the construct of function as reflected in the formalisms of Wimsatt, Rasmussen, and Hayakawa may provide important clues into the nature of how people set the problem as part of a sensemaking process. In particular, the diagonal in Rasmussen’s Abstraction-Decomposition space may provide clues for how people parse the details of complex situations using filters at different layers of abstraction to ultimately make sense relative to higher functional values and purposes.

Thus, here are some important implications:

  • A functionalist perspective provides important insights into the sensemaking process.
  • This is the common intuition underlying the formalisms introduced by Gibson, Wimsatt, Rasmussen, and Hayakawa.
  • Sensemaking involves navigating across levels of abstraction and levels of detail to identify functional or means-ends relations within a larger network of associations between objects (parts) and contexts (wholes).
  • Links between higher levels of abstraction (values, purposes) and lower levels of abstraction (general functions, components and behaviors) may reflect the significance of couplings between emotions, knowledge, and skill.
  • The various formalisms described here provide important frameworks for understanding any sensemaking process (e.g., fault diagnosis, storytelling, or intel analysis) and have important implications for both eliciting knowledge from experts and for representing information to facilitate the development of expertise through training and interface design. 

Key Sources

  1. Bransford, J. D., Brown, A. L., and Cocking, R. (2000). How People Learn, National Academy Press, Washington, DC.
  2. Damasio, A. (1999). The Feeling of What Happens: Body and emotion in the making of consciousness. Orlando, FL: Harcourt.
  3. Flach, J.M. & Voorhorst, F.A. (2016). What Matters: Putting common sense to work. Dayton, OH: Wright State Library.
  4. Gibson, J.J. (1979). The Ecological Approach to Visual Perception. New York: Houghton Mifflin.
  5. Hayakawa, S.I. (1990). Language in Thought and Action (5th ed.). New York: Houghton Mifflin Harcourt.
  6. Klein, G. (2003). Intuition at Work. New York: Doubleday.
  7. Kurtz, C.F. & Snowden, D.J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42, 462-483.
  8. Norman, D.A. (1988). The Psychology of Everyday Things. New York: Basic Books.
  9. Rasmussen, J. (1986). Information Processing and Human-Machine Interaction. New York: North-Holland.
  10. Schön, D.A. (1983). The Reflective Practitioner. New York: Basic Books.
  11. Sinek, S. (2009). Start with Why: How great leaders inspire everyone to take action. New York: Penguin.
  12. Weick, K.E. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage.
  13. Wimsatt, W.C. (1972). Teleology and the logical structure of function statements. Hist. Phil. Sci., 3, no. 1, 1-80.

 


Cognitive Systems Engineering (CSE) emerged from Human Factors as researchers began to realize that in order to fully understand human-computer interaction it was necessary to understand the 'work to be done' on the other side of the computer. They began to realize that for an interface to be effective, it had to map onto both a 'mind' and a 'problem domain.' They began to realize that a representation only leads to productive thinking if it makes the 'deep structure' of the work domain salient. Thus, the design of the representation had to be motivated by a deep understanding of the domain (as well as a deep understanding of the mind).

User-eXperience Design (UXD) emerged from Product Design as designers began to realize that they were not simply creating 'objects.' They were creating experiences. They began to realize that products were embedded in a larger context, and that the ultimate measure of the quality of their design was the impact on this larger context - on the user experience. They began to realize that the quality of their designs did not simply lie in the object, but rather in the impact that the object had on the larger experience that it engendered. Designers began to realize that they were not simply shaping objects, but they were shaping experiences. Thus, the design of the object had to be motivated by a deep understanding of the context of use (as well as a deep understanding of the materials or technologies).

The common ground is the user experience. CSE and UXD are both about designing experiences. They both require that designers deal with minds, objects, and contexts or ecologies. The motivating contexts have been different, with CSE emerging largely from experiences with safety-critical systems (e.g., aviation, nuclear power) and UXD emerging largely from experiences with consumer products (e.g., toothbrushes, doors). However, the common realization is that 'context matters.' The common realization is that the constraints of the 'mind' and the constraints of the 'object' can only be fully understood in relation to a 'context of use.' The common realization is that 'functions matter,' and that 'functions' are relations between agents, tools, and ecologies.

The CSE and UXD communities have both come to realize that the qualities that matter are not in either the mind or the object, but rather in the experience. They have discovered that the proof of the pudding is in the eating.


Over the last 20 years or so, the vision of how to help organizations improve safety has been changing from a focus on 'stamping out errors' to a focus on 'managing the quality of work.'

This change reflects a similar evolution in how the Forest Service manages fire safety. There was a period when the focus was on 'stamping out forest fires,' and the poster child for these efforts was Smokey the Bear ("Only you can prevent forest fires"). However, the Forest Service has learned that a side-effect of an approach that focuses exclusively on preventing fires is the build-up of fuel on the forest floor. Because of this build-up, when a fire inevitably occurs it can burn at levels that are catastrophic for forest health, and the forest will not naturally recover from the burn.

Smokey the Bear Effect

The Forest Service now understands that low-intensity fires can be integral to the long-term health of a forest. These low-intensity fires help to prevent the build-up of fuel and can also promote germination of seeds and new growth.

The alternative to 'stamping out fires' is to manage forest health. This sometimes involves introducing controlled burns or letting low-intensity fires burn themselves out.

The general implication is that safety programs should be guided by a vision of health or quality, rather than being simply a reaction to errors. With respect to improving safety, programs focused on health/quality will have greater impact than programs designed to 'stamp out errors.' Programs designed to stamp out errors tend to also end up stamping out the information (e.g., feedback) that is essential for systems to learn from mistakes and to tune to complex, dynamic situations. Like low-intensity fires, learning from mistakes and near misses actually contributes to the overall health of a high-reliability organization.

This new perspective is beautifully illustrated in Sidney Dekker's new movie that can be viewed on YouTube:

Safety Differently

The CVDi display for evaluating heart health has been updated. The new version includes an option for SI units.  Also, some of the interaction dynamics have been updated. This is still a work in progress, so we welcome feedback and suggestions for how to improve and expand this interface.

https://mile-two.gitlab.io/CVDI/