
This is the fourth in a series of entries to explore the differences between an Information Processing Approach to Human Factors and a Meaning Processing Approach to Cognitive Systems Engineering. The table below lists some contrasts between these two perspectives. This entry will focus on the third contrast in the table - the shift from a focus on 'workload' to a focus on 'situation awareness.'

The concept of information developed in this theory at first seems disappointing and bizarre - disappointing because it has nothing to do with meaning, and bizarre because it deals not with a single message but rather with the statistical character of a whole ensemble of messages, bizarre also because in these statistical terms the two words information and uncertainty find themselves to be partners. Warren Weaver (1963, p. 27)

The construct of 'workload' is a natural focus for an approach that emphasizes describing and quantifying the internal constraints of the human and that assumes these constraints are independent of the particulars of any specific situation or work context. This fits well with the engineering perspective for quantifying information and specifying the capacity of fixed information channels developed by Shannon. However, the downside of this perspective is that in making the construct of workload independent of 'context,' it also becomes independent of 'meaning,' as suggested in the quote from Warren Weaver above.
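To make Weaver's point concrete, here is a minimal Python sketch (the message probabilities are invented for illustration) of Shannon's measure: the entropy of an ensemble of messages depends only on its statistical character, not on what any message means.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) of a message ensemble, in bits.
    It quantifies statistical uncertainty over the ensemble and says nothing
    about the meaning of any individual message."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Two hypothetical four-message ensembles: same messages, different statistics.
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.00 bits (maximum uncertainty)
print(entropy_bits([0.70, 0.10, 0.10, 0.10]))  # ~1.36 bits (more predictable source)
```

On this view, a channel's capacity is a fixed upper bound on such a quantity, which is exactly why it maps so naturally onto a context-free notion of workload.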

Those interested in the impact of context on human cognition became dissatisfied with a framework that focused only on internal constraints (e.g., bandwidth, resources, modality) without consideration for how those constraints interacted with situations. Thus, the construct of Situation Awareness (SA) evolved as an alternative to workload. Unfortunately, many who have been steeped in the information processing tradition have framed SA in terms of internal constraints (e.g., treating levels of SA as components internal to the processing system).

However, others have taken the construct of SA as an opportunity to consider the dynamic couplings of humans and work domains (or situations).  For them, the construct of SA reflects a need to 'situate' cognition within a work ecology and to consider how constraints in that ecology create demands and opportunities for cognitive systems. In this framework, it is assumed that cognitive systems can intelligently adapt to the constraints of situations - utilizing structure in situations to 'chunk' information and as the basis for smart heuristics that reduce the computational burden, allowing people to deal effectively with situations that would overwhelm the channel capacity of a system not tuned to these structural constraints (see aiming off example).

There is no question that humans have limited working memory capacity as suggested by the workload construct. However, CSE recognizes the ability of people to discover and use situated constraints (e.g., patterns) in ways that allow them to do complex work (e.g., play chess, pilot aircraft, drive in dense traffic, play a musical instrument) despite these internal constraints. It is this capacity to attune to structure associated with specific work domains that leads to expert performance.

The design implication of an approach that focuses on workload is to protect the system against human limitations (e.g., bottlenecks) by either distributing the work among multiple people or by replacing humans with automated systems with higher bandwidth. The key is to make sure that people are not overwhelmed by too much data!

The design implication of an approach that focuses on SA is to make the meaningful work domain constraints salient in order to facilitate attunement processes. This can be done through the design of interfaces or through training. The result is to heighten human engagement with the domain structure to facilitate skill and expertise. The key is to make sure that people are well-tuned to the meaningful aspects of work (e.g., constraints and patterns) that allow them to 'see' what needs to be done.

 

First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this informational bottleneck.

Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. George Miller (1956) p. 96-97, (emphasis added).

This continues the discussion about the differences between CSE (meaning processing) and more classical HF (information processing) approaches to human performance summarized in the table below. This post will focus on the second line in the table - shifting emphasis from human limitations to include greater consideration of human capabilities.

Anybody who has taken an intro class in Psychology or Human Factors is familiar with Miller's famous number 7 plus or minus 2 chunks that specifies the capacity of working memory. This is one of the few numbers that we can confidently provide to system designers as a spec of the human operator that needs to be respected. However, this number has little practical value for design unless you can also specify what constitutes a 'chunk.'

Although people know Miller's number, few appreciate the important points that Miller makes in the second half of the paper about the power of 'recoding' and the implications for the functional capacity of working memory. As noted in the opening quote to this blog - people have the ability to 'stretch' memory capacity through chunking. The intro texts emphasize the "limitation," but much less attention has been paid to the recoding "capability" that allows experts to extend their functional memory capacity to deal with large amounts of information (e.g., that allows an expert chess player to recall all the pieces on a chess board based on a very short glance).
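As a toy illustration of what recoding buys (inspired by the binary-to-octal recoding scheme Miller discusses; the particular digit string and function below are simply illustrative), consider this short Python sketch:

```python
def recode_binary_to_octal(bits: str) -> list[str]:
    """Recode a binary string into octal 'chunks': every 3 binary digits
    become a single, richer item to hold in memory."""
    padded = bits.zfill(-(-len(bits) // 3) * 3)  # pad to a multiple of 3
    return [format(int(padded[i:i + 3], 2), "o") for i in range(0, len(padded), 3)]

raw = "101000100111001110"               # 18 items, well beyond 7 plus or minus 2
chunks = recode_binary_to_octal(raw)     # ['5', '0', '4', '7', '1', '6']
print(len(raw), "digits recoded into", len(chunks), "chunks")
```

The capacity limit in chunks has not changed; what has changed is how much information each chunk carries, which is exactly the 'stretching' of the bottleneck that Miller describes.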

Cataloguing visual illusions has been a major thrust of research in perception. Similarly, cataloguing biases has been a major thrust of research in decision making. However, people such as James Gibson have argued that these collections of illusions do not add up to a satisfying theory of how perception works to guide action (e.g., in the control of locomotion). In a similar vein, people such as Gerd Gigerenzer have argued that collections of biases (the dark side of heuristics) do not add up to a satisfying theory of decision making in everyday life and work. One reason is that in everyday life there are often missing and ambiguous data and incommensurate variables that make it difficult or impossible to apply more normative algorithmic approaches.

One result of the work of Tversky and Kahneman in focusing on decision errors is that the term 'heuristic' is often treated as if it is synonymous with 'bias.' Thus, heuristics illustrate the 'weakness' of human cognition - the bounds of rationality. However, Herbert Simon's early work in artificial intelligence treated heuristics as the signature of human intelligence. A computer program was only considered intelligent if it took advantage of heuristics that creatively used problem constraints to find short cuts to solutions - as opposed to applying mathematical algorithms that solved problems by mechanical brute force.
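To illustrate Simon's sense of the term with a generic stand-in (this is not one of Simon's actual programs; binary search over a sorted list is simply a familiar case of exploiting a problem constraint), compare a brute-force scan with a search that uses the known structure of the problem space:

```python
def brute_force_search(items, target):
    """Mechanical brute force: examine every candidate in turn."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps
    return None, steps

def constraint_exploiting_search(items, target):
    """Exploit problem structure in Simon's spirit: because the list is
    sorted, half of the remaining space is ruled out at every step."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        lo, hi = (mid + 1, hi) if items[mid] < target else (lo, mid - 1)
    return None, steps

data = list(range(0, 100_000, 3))                   # a structured (sorted) problem space
print(brute_force_search(data, 99_999))             # tens of thousands of steps
print(constraint_exploiting_search(data, 99_999))   # on the order of 15 steps
```

The shortcut works only because the solver is attuned to structure in the problem; strip that structure away and the 'smart' strategy has no advantage.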

This emphasis on illusions and biases tends to support the MYTH that humans are the weak link in any sociotechnical system and it leads many to seek ways to replace the human with supposedly more reliable 'automated' systems. For example, recent initiatives to introduce autonomous cars often cite 'reducing human errors and making the system safer' as a motivation for pursuing automatic control solutions.

Thus, the general concept of rationality tends to be idealized around the things that computers do well (use precise data to solve complex algorithms and logical puzzles) and it tends to underplay those aspects of rationality where humans excel (detecting patterns and deviations and creatively adapting to surprise/novelty/ambiguity).

CSE recognizes the importance of respecting the bounds of rationality for humans when designing systems - but also appreciates the limitations of automation (and the fact that when the automation fails - it will typically fall to the human to fill the gap). Further, CSE starts with the premise that 'human errors' are best understood in relation to human abilities. On one hand, a catalogue of human errors will never add up to a satisfying understanding of human thinking. On the other hand, a deeper understanding may be possible if the 'errors' are seen as 'bounds' on abilities.  In other words, CSE assumes that a theory of human rationality must start with a consideration of how thinking 'works,' in order to give coherence to the collection of errors that emerge at the margins.

The implication is that CSE tends to be framed in terms of designing to maximize human engagement and to fully leverage human capabilities, as opposed to more classical human factors that tends to emphasize the need to protect systems against human errors and limitations. This does not need to be framed as human versus machine, rather it should be framed in terms of human-machine collaboration. The ultimate design goal is to leverage the strengths of both humans and technologies to create sociotechnical systems that extend the bounds of rationality beyond the range of either of the components.

A rather detailed account of the nineteenth-century history of the steam engine with governor may help the reader to understand both the circuits and the blindness of the inventors. Some sort of governor was added to the early steam engine, but the engineers ran into difficulties. They came to Clerk Maxwell with the complaint that they could not draw a blueprint for an engine with a governor. They had no theoretical base from which to predict how the machine that they had drawn would behave when built and running.

There were several possible sorts of behavior: Some machines went into runaway, exponentially maximizing their speed until they broke or slowing down until they stopped. Others oscillated and seemed unable to settle to any mean. Others - still worse - embarked on sequences of behavior in which the amplitude of their oscillation would itself oscillate or would become greater and greater.

Maxwell examined the problem. He wrote out formal equations for relations between the variables at each successive step around the circuit. He found, as the engineers had found, that combining this set of equations would not solve the problem. Finally, he found that the engineers were at fault in not considering time. Every given system embodied relations to time, that is, was characterized by time constants determined by the given whole. These constants were not determined by the equations of relationship between successive parts but were emergent properties of the system.

... a subtle change has occurred in the subject of discourse .... It is a difference between talking in a language which a physicist might use to describe how one variable acts upon another and talking in another language about the circuit as a whole which reduces or increases difference. When we say that the system exhibits "steady state" (i.e., that in spite of variation, it retains a median value), we are talking about the circuit as a whole, not about the variations within it. Similarly the question which the engineers brought up to Clerk Maxwell was about the circuit as a whole: How can we plan it to achieve a steady state? They expected the answer to be in terms of relations between the individual variables. What was needed and supplied by Maxwell was an answer in terms of time constants of the total circuit. This was the bridge between the two levels of discourse. Gregory Bateson (2002) pp. 99-101.
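As a rough illustration of Maxwell's insight (a toy discrete-time model in Python with invented parameters, not a reconstruction of the engineers' actual machines), the very same governed engine can settle, oscillate, or run away depending only on the gain and lag of the loop taken as a whole:

```python
def simulate_governor(gain, delay_steps, steps=300, dt=0.1, setpoint=1.0):
    """Toy loop: engine speed integrates a corrective torque proportional to a
    *delayed* speed error. Stability is a property of the whole circuit (gain
    plus lag), not of any individual component."""
    speed = [0.0] * (delay_steps + 1)
    for _ in range(steps):
        error = setpoint - speed[-1 - delay_steps]   # the governor sees a lagged speed
        speed.append(speed[-1] + dt * gain * error)
    return speed

# Illustrative parameter choices and the behavior they produce:
settling    = simulate_governor(gain=1.0,  delay_steps=2)   # settles near the setpoint
oscillating = simulate_governor(gain=10.0, delay_steps=2)   # oscillation of growing amplitude
runaway     = simulate_governor(gain=-1.0, delay_steps=2)   # exponential runaway
print(settling[-1])
```

Nothing about the equations of the individual parts changes across the three runs; only the relation of gain to the time constants of the circuit does, which is the shift in level of discourse Bateson describes.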

This is a continuation of the previous posting and the discussion of what is unique to a CSE approach relative to more traditional human factors. This post will address the first line in the table below that is repeated from the prior post.

Norbert Wiener's (1948) classic, introducing the Cybernetic Hypothesis, was subtitled "Or Control and Communication in the Animal and the Machine." The ideas in this book about control systems (machines that were designed to achieve and maintain a goal state) and communication (the idea that information can be quantified) had a significant impact on the framing of research in psychology. It helped shift the focus from behavior to cognition (e.g., Miller, Galanter, & Pribram, 1960).

However, though psychologists began to include feedback in their images of cognitive systems, the early program of research tended to be dominated by the image of an open-loop communication system, focusing on identifying stimulus-response associations or transfer functions (e.g., bandwidth) for each component in a series of discrete information processing stages. A major thrust of this research program was to identify the limitations of each subsystem in terms of storage capacity (the 7 plus-or-minus 2 chunk capacity of working memory) and information processing rates (e.g., the Hick-Hyman Law, Fitts' Law).
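For reference, these rate laws are commonly written as follows (the intercept a and slope b are empirical constants fitted for a given task, with b interpreted as the reciprocal of an information-processing rate in bits per second):

$$ \text{Hick-Hyman: } RT = a + b \log_2 N \qquad \text{Fitts: } MT = a + b \log_2\!\left(\frac{2D}{W}\right) $$

where N is the number of equally likely stimulus-response alternatives, D is the distance to a target, and W is the target's width, so in both cases response time grows with the number of bits the task is assumed to require.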

Thus, the Cybernetic Hypothesis inspired psychology to consider "intentions" and other internal aspects associated with thinking; however, it did not free psychology from using the language of the physicist in describing the causal interaction between one variable and another, rather than thinking in terms of properties of the circuit as a whole (i.e., appreciating the emergent properties that arise from the coupling of perception and action). The language of information processing psychology was framed in terms of a dyadic semiotic system for processing symbols as illustrated below.

In contrast, CSE was framed from the start in the language of control theory. This reflected an interest in the role of humans in closing the loop as pilots of aircraft and supervisors of energy production processes. From a control theoretic perspective it was natural to frame the problems of meaning processing as a triadic semiotic system, where the function of cognition was to achieve stable equilibrium with a problem ecology. Note that the triadic semiotic model emerged as a result of the work of functional psychologists (e.g., James & Dewey) and pragmatic philosophers (Peirce), who were most interested in 'mind' as a means for adapting to the pragmatic demands of everyday living.  Dewey's (1896) classic paper on the reflex arc examines the implications of Maxwell's insights (described in the opening quote from Bateson) for psychology:

The discussion up to this point may be summarized by saying that the reflex arc idea, as commonly employed, is defective in that it assumes sensory stimulus and motor response as distinct psychical existences, while in reality they are always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination; and (secondly) in assuming that the quale of experience which precedes the 'motor' phase and that which succeeds it are two different states, instead of the last being always the first reconstituted, the motor phase coming in only for the sake of such mediation. The result is that the reflex arc idea leaves us with a disjointed psychology, whether viewed from the standpoint of development in the individual or in the race, or from that of the analysis of the mature consciousness. As to the former, in its failure to see that the arc of which it talks is virtually a circuit, a continual reconstitution, it breaks continuity and leaves us nothing but a series of jerks, the origin of each jerk to be sought outside the process of experience itself, in either an external pressure of 'environment,' or else in an unaccountable spontaneous variation from within the 'soul' or the 'organism.'  As to the latter, failing to see unity of activity, no matter how much it may prate of unity, it still leaves us with sensation or peripheral stimulus; idea, or central process (the equivalent of attention); and motor response, or act, as three disconnected existences, having to be somehow adjusted to each other, whether through the intervention of an extra experimental soul, or by mechanical push and pull.

Many cognitive scientists and many human factors engineers continue to speak in the language associated with causal, stimulus-response interactions (i.e., jerks) without an appreciation for the larger system in which perception and action are coupled. They are still hoping to concatenate these isolated pieces into a more complete picture of cognition. In contrast, CSE starts with a view of the whole - of the coupling of perception and action through an ecology - as a necessary context from which to appreciate variations at more elemental levels.

In the early 1960s, we realized from analyses of industrial accidents the need for an integrated approach to the design of human-machine systems. However, we very rapidly encountered great difficulties in our efforts to bridge the gap between the methodology and concepts of control engineering and those from various branches of psychology. Because of its kinship to classical experimental psychology and its behavioristic claim for exclusive use of objective data representing overt activity, the traditional human factors field had very little to offer (Rasmussen, p. ix, 1986).

 

Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. (Norman, p. 31, 1986).

 

The growth of computer applications has radically changed the nature of the man-machine interface. First, through increased automation, the nature of the human’s task has shifted from an emphasis on perceptual-motor skills to an emphasis on cognitive activities, e.g. problem solving and decision making…. Second, through the increasing sophistication of computer applications, the man-machine interface is gradually becoming the interaction of two cognitive systems. (Hollnagel & Woods, p. 340, 1999).

As reflected in the above quotes, through the 1980s and 90s there was a growing sense that the nature of human-machine systems was changing, and that this change was creating the demand for a new approach to analysis and design. This new approach was given the label of Cognitive Engineering (CE) or Cognitive Systems Engineering (CSE). Table 1 illustrates one way to characterize the changing role of the human factor in sociotechnical systems. As technologies became more powerful with respect to information processing capabilities, the role of humans increasingly involved supervision and fault detection. Thus, there was increasing demand for humans to relate the activities of the automated processes to functional goals and values (i.e., pragmatic meaning) and to intervene when circumstances arose (e.g., faults or unexpected situations) such that the automated processes were no longer serving the design goals for the system. Under these unexpected circumstances, it was often necessary for the humans to create or invent new procedures on the fly, in order to avoid potential hazards or to take advantage of potential opportunities. This demand to relate activities to functional goals and values and to improvise new procedures required meaning processing.

In a following series of posts I will elaborate the differences between information processing and meaning processing as outlined in Table 1. However, before I get into the specific contrasts, it is important to emphasize that relative to early human factors, CSE is an evolutionary change, NOT a revolutionary change. That is, the concerns about information processing that motivated earlier human factors efforts were not wrong and they continue to be important with regards to design. The point of CSE is not that an information processing approach is wrong, but rather that it is insufficient.

The point of CSE is not that an information processing approach is wrong, but rather that it is insufficient.

The point of CSE is that design should not simply be about avoiding overloading the limited information capacities of humans, but it should also seek to leverage the unique meaning processing capabilities of humans. These meaning processing capabilities reflect people's ability to make sense of the world relative to their values and intentions, to adapt to surprises, and to improvise in order to take advantage of new opportunities or to avoid new threats. In following posts I will make the case that the overall vision of a meaning processing approach is more expansive than an information processing approach. This broader context will sometimes change the significance of specific information limitations. It will also provide a wider range of options for bypassing these limitations and for innovating to improve the range and quality of performance of sociotechnical systems.

In other words, I will argue for a systems perspective - where the information processing limitations must be interpreted in light of the larger meaning processing context. I will argue that constructs such as 'expertise' can only be fully understood in terms of qualities that emerge at the level of meaning processing.

Hollnagel, E. & Woods, D.D. (1999). Cognitive Systems Engineering: New wine in new bottles. International Journal of Human-Computer Studies, 51, 339-356.

Norman, D.A. (1986). Cognitive Engineering. In D.A. Norman & S.W. Draper (Eds.), User-Centered System Design (pp. 31-61). Hillsdale, NJ: Erlbaum.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: North Holland.

Another giant in the field of Cognitive Systems Engineering has been lost. Jens Rasmussen created one of the most comprehensive and integrated foundations for understanding the dynamics of sociotechnical systems available today. Drawing from the fields of semiotics (Eco), control engineering, and human performance, his framework was based on a triadic semiotic dynamic, which he parsed in terms of three overlapping perspectives. The Abstraction Hierarchy (AH) provided a way to characterize the system relative to a problem ecology and the consequences and possibilities for action. He introduced the constructs of Skills, Rules, and Knowledge (SRK) as a way to emphasize the constraints on the system from the perspective of the observers-actors. Finally, he introduced the construct of Ecological Interface Design (EID) as a way to emphasize the constraints on the system from the perspective of representations.

Jens had a comprehensive vision of the sociotechnical system and we have only begun to plumb the depths of this framework and to fully appreciate its value as both a basic theory of systems and as a pragmatic guide for the engineering and design of safer more efficient systems.

Jens' death is a very personal loss for me. He was a valued mentor who saw potential in a very naive, young researcher long before it was evident or deserved. He opened doors for me and created opportunities that proved to be essential steps in my education and professional development. Although I may never realize the potential that Jens envisioned, he set me on a path that has proved to be both challenging and satisfying. I am a better man for having had him as a friend and mentor.

A website has been created to allow future researchers to benefit from Jens' work.

http://www.jensrasmussen.org/


The fields of psychology, human factors, and cognitive systems engineering have lost another leader who did much to shape the fields that we know today. I was very fortunate to overlap with Neville Moray on the faculty at the University of Illinois. To a large extent, my ideals for what it means to be a professor were shaped by Neville.  From his examples, I learned that curiosity does not stop at the threshold of the experimental laboratory and that training graduate students requires engagement beyond the laboratory and classroom. Neville was able to bridge the gulfs between basic and applied psychology, between science and art, and between work and play - in a way that made us all question why these gulfs existed at all.

More commentary on Neville's life and the impact he had on many in our field can be found on the HFES website

 


Symbols help us make tangible that which is intangible. And the only reason symbols have meaning is because we infuse them with meaning. That meaning lives in our minds, not in the item itself. Only when the purpose, cause or belief is clear can the symbol command great power   (Sinek, 2009, p. 160)

As this quote from Sinek suggests, symbols (e.g., alphabets, flags, icons) are created by humans. Thus, the 'meaning' of the symbols will typically reflect the intentions or purposes motivating their creation. For example, as a symbol, a country's flag might represent the abstract principles on which the country is founded (e.g., liberty and freedom for all). However, it would be a mistake to conclude from this (as many cognitive scientists have) that all 'meaning' lives in our minds. While symbols may be a creation of humans - meaning is NOT.

Let me state this again for emphasis:

Meaning is NOT a product of mind!

As the triadic model of a semiotic system illustrated in the figure below emphasizes, meaning emerges from the functional coupling between agents and situations. Further, as Rasmussen (1986) has emphasized, this coupling involves not only symbols, but also signs and signals.

Signs (as used by Rasmussen) are different from 'symbols' in that they are grounded in social conventions. So, the choice of a color to represent safe or dangerous, or of an icon to represent 'save' or 'delete,' has its origins in the head of a designer. At some point, someone chose 'red' to represent 'danger,' or chose a 'floppy disk' image to represent save. However, over time this 'choice' of the designer can become established as a social convention. At that point, the meaning of the color or the icon is no longer arbitrary. It is no longer in the head of the individual observer. It has a grounding in the social world - it is established as a social convention or as a cultural expectation. People outside the culture may not 'pick up' the correct meaning, but the meaning is not arbitrary.

Rasmussen used the term sign to differentiate this role in a semiotic system from that of 'symbols,' whose meaning is open to interpretation by an observer. The meaning of a sign is not in the head of an observer; for a sign, the meaning has been established by a priori rules (social or cultural conventions).

for a sign the meaning has been established by a priori rules (social or cultural conventions)

Signals (as used by Rasmussen) are different from both 'symbols' and 'signs' in that they are directly grounded in the perception-action coupling with the world. So, the information bases for braking your automobile to avoid a potential collision, for catching a fly ball, or for piloting an aircraft to a safe touchdown on a runway are NOT in our minds! For example, structures in optical flow fields (e.g., angle, angular rate, tau, horizon ratio) provide the state information that allows people to skillfully move through the environment. The optical flow field and the objects and events specified by its invariant structures are NOT in the mind of the observer. These relations are available to all animals with eyes and can be leveraged in automatic control systems with optical sensors. These signals are every bit as meaningful as any symbol or sign, yet they are not human inventions. Humans and other animals can discover the meanings of these relations through interaction with the world, and they can utilize these meanings to achieve satisfying interactions with the world (e.g., avoiding collisions, catching balls, landing aircraft), but the human does not 'create' the meaning in these cases.

for a signal the meaning emerges naturally from the coupling of perception and action in a triadic semiotic system. It is not an invention of the mind, but it can be discovered by a mind.
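As one concrete example of such a signal (a minimal Python sketch; the approach scenario and numbers are invented for illustration), the optical variable tau - the visual angle an approached object subtends divided by its rate of expansion - specifies time-to-contact directly, without the observer ever computing distance or speed:

```python
import math

def time_to_contact(distance_m, closing_speed_mps, object_width_m):
    """Tau: the visual angle divided by its rate of expansion approximates
    time-to-contact. Both quantities are available in the optic array itself,
    to any eye or camera, without separate knowledge of distance or speed."""
    theta = 2 * math.atan(object_width_m / (2 * distance_m))                # visual angle (rad)
    theta_dot = (object_width_m * closing_speed_mps /
                 (distance_m ** 2 + (object_width_m / 2) ** 2))             # expansion rate (rad/s)
    return theta / theta_dot

# A 1.8 m wide vehicle 50 m ahead, closing at 10 m/s: tau is about 5 s,
# matching the actual time-to-contact (50 m / 10 m/s).
print(round(time_to_contact(50.0, 10.0, 1.8), 2))
```

Here distance and speed appear only because the simulation has to generate the optics; the braking-relevant quantity itself is the ratio of angle to expansion rate, which is the sense in which the meaning lives in the coupling rather than in the head.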

In the field of cognitive science, debates have often been cast in terms of whether humans are 'symbol processors,' such that meaning is constructed through mental computations, or whether humans are capable of 'direct perception,' such that meaning is 'picked up' through interaction with the ecology. One side places meaning exclusively in the mind, ignoring or at least minimizing the role of structure in the ecology. The other side places meaning in the ecology, minimizing the creative computational powers of mind.

This framing of the question in either/or terms has proven to be an obstacle to progress in cognitive science. Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.

Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.


I just finished Simon Sinek's (2009) "Start with Why." I was struck by the similarities between Sinek's 'Golden Circle' and Jens Rasmussen's 'Abstraction Hierarchy' (see figure). Both parse systems in terms of  a hierarchical association between why, what, and how.

For Rasmussen, 'what' represented the level of the system being attended; 'why' represented a higher level of abstraction that reflected the significance of the 'what' relative to the whole system, with the highest level of abstraction reflecting the ultimate WHY of a system - its purpose. In Rasmussen's system, 'how' represented a more concrete description of significant components of the level being attended (e.g., the component processes serving the 'what' level above).
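A toy sketch of this relation (the level names are Rasmussen's standard abstraction-hierarchy levels; the code itself is just an illustrative way to express the why-what-how shift, not anything from Rasmussen or Sinek):

```python
# Rasmussen's canonical abstraction-hierarchy levels, from purpose down to form.
AH_LEVELS = [
    "Functional Purpose",    # the ultimate WHY of the system
    "Abstract Function",
    "Generalized Function",
    "Physical Function",
    "Physical Form",
]

def what_why_how(attended_level):
    """For whichever level is attended (the 'what'), the level above answers
    'why' (significance for the whole) and the level below answers 'how'
    (means of implementation)."""
    i = AH_LEVELS.index(attended_level)
    return {
        "why": AH_LEVELS[i - 1] if i > 0 else None,
        "what": attended_level,
        "how": AH_LEVELS[i + 1] if i + 1 < len(AH_LEVELS) else None,
    }

print(what_why_how("Generalized Function"))
# {'why': 'Abstract Function', 'what': 'Generalized Function', 'how': 'Physical Function'}
```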

Sinek's 'why' corresponds with the pinnacle of Rasmussen's Abstraction Hierarchy. It represents the ultimate purpose of the system. However, Sinek reverses Rasmussen's 'what' and 'how.' For Sinek, 'how' represents the processes serving the 'why,' and the 'what' represents the products of these processes.

Although I have been teaching Rasmussen's approach to Cognitive Systems Engineering (CSE) for over 30 years, I think that Sinek's WHY-HOW-WHAT partitioning conforms more naturally with common usage of the terms 'how' and 'what.' So, I think this is a pedagogical improvement on Rasmussen's framework.

However, I found that the overall gist of Sinek's "Start with Why" reinforced many of the central themes of CSE. That is, for a 'cognitive system' the purpose (i.e., the WHY) sets the ultimate context for parsing the system (e.g., processes and objects) into meaningful components. This is an important contrast to classical (objective) scientific approaches to physical systems. Classical scientific approaches have dismissed the 'why' as subjective! The 'why' reflected the 'biases' of observers. But for a cognitive science, the observers are the objects of study!

Thus, cognitive science and cognitive engineering must always start with WHY!