
What must be admitted is that the definite images of traditional psychology form but the very smallest part of our minds as they actually live. The traditional psychology talks like one who should say a river consists of nothing but pailsful, spoonsful, quartpotsful, barrelsful and other moulded forms of water. Even were the pails and the pots all actually standing in the stream, still between them the free water would continue to flow. It is just this free water of consciousness that psychologists resolutely overlook. Every definite image in the mind is steeped and dyed in the free water that flows around it. With it goes the sense of its relations, near and remote, the dying echo of whence it came to us, the dawning sense of whither it is to lead. The significance, the value, of the image is all in this halo or penumbra that surrounds and escorts it, or rather that is fused into one with it and has become bone of its bone and flesh of its flesh; leaving it, it is true, an image of the same thing it was before, but making it an image of that thing newly taken and freshly understood. (William James, 1890, p. 255)

When talking about cognitive experiences, people often refer to a sense of 'flow.' This concept does not fit easily into conventional cause-effect narratives based on billiard-ball collision (or domino chain) metaphors. It suggests the need for a new narrative. I wonder whether it would be possible to follow the lead of physics and consider a new narrative where, instead of talking about causes, we talk about constraints; and instead of talking about effects, we talk about possibilities.

The physicist John Wheeler describes the motivation that led physicists to adopt a field narrative:

It is to Aristotle, working in the fourth century B.C., that we owe the popular maxim that ‘nature abhors a vacuum’. It is more accurate to say that people abhor a vacuum. Newton called it an absurdity. Scientists ever since have developed our picture of nature in terms of what I may call ‘local action’, to distinguish it from ‘action at a distance’. The idea of local action rests on the existence of ‘fields’ that transmit action from one place to another. The Sun, for instance, can be said to create a gravitational field, which spreads outward through space, its intensity diminishing as the inverse square of the distance from the Sun. Earth ‘feels’ this gravitational field locally – right where Earth is – and reacts to it by accelerating toward the Sun. The Sun, according to this description, sends its attractive message to Earth via a field rather than reaching out to influence Earth at a distance through empty space. Earth doesn’t have to ‘know’ that there is a sun out there, 93 million miles distant. It only ‘knows’ that there is a gravitational field at its own location. The field, although nearly as ethereal as the ether itself, can be said to have physical reality. It occupies space. It contains energy. Its presence eliminates a true vacuum. We must then be content to define the vacuum of everyday discourse as a region free of matter, but not free of field.
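As a concrete aside (my own numerical sketch, not Wheeler's), the inverse-square field he describes is easy to compute: Earth needs only the field value at its own location. The constants below are the standard values for G, the Sun's mass, and the Earth-Sun distance.

```python
# Wheeler's point in miniature: Earth responds only to the field value
# at its own location, not to the Sun "at a distance".

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
AU = 1.496e11        # Earth-Sun distance in meters (~93 million miles)

def field_strength(r_m):
    """Gravitational field intensity (m/s^2) at distance r_m from the Sun."""
    return G * M_SUN / r_m ** 2

# Doubling the distance quarters the intensity (inverse-square law).
print(round(field_strength(AU) / field_strength(2 * AU), 2))  # -> 4.0
```

At one astronomical unit the local field works out to roughly 0.006 m/s^2, which is all Earth ever 'knows' about the Sun.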

Richard Feynman also explains the power of the field construct for tracking dynamic relations that extend over space and time:

It [the field construct] would be trivial, just another way of writing the same thing, if the laws of force were simple, but the laws of force are so complicated that it turns out that fields have a reality that is almost independent of the objects which create them. One can do something like shake a charge and produce an effect, a field, at a distance; if one then stops moving the charge, the field keeps track of all the past, because the interaction between two particles is not instantaneous. It is desirable to have some way to remember what happened previously. If the force upon some charge depends upon where another charge was yesterday, which it does, then we need machinery to keep track of what went on yesterday, and that is the character of a field. So when the forces get more complicated, the field becomes more and more real, and this technique becomes less and less of an artificial separation.
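Feynman's point that "the field keeps track of all the past" can be sketched with a toy delay model (my own illustration; the unit signal speed and discrete time steps are assumptions for simplicity):

```python
# The history buffer below IS the "machinery to keep track of what went
# on yesterday": the field felt at distance d now depends on what the
# charge did d/C ago, not on what it is doing now.

C = 1.0  # signal speed in arbitrary units (assumption for illustration)

history = []  # charge position recorded once per time step

def record(position):
    history.append(position)

def field_at(distance, now):
    """Field 'message' arriving at `distance` at time `now`: it reflects
    the charge's position at the retarded time now - distance/C."""
    retarded_time = int(now - distance / C)
    if retarded_time < 0:
        return None  # no signal has reached this point yet
    return history[retarded_time]

# Shake the charge at t=1, then hold it still.
for position in [0.0, 1.0, 0.0, 0.0, 0.0]:
    record(position)

# At time 3, a point at distance 2 still 'sees' the shake from time 1,
# even though the charge has long since stopped moving.
print(field_at(2, 3))  # -> 1.0
```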

Would it be useful to frame the coupling of perception and action in terms of fields of constraint - and to consider how the constraints on information (e.g., Gibson's optical flow fields) specify the possibilities for action (e.g., the safe field of travel) [see Gibson and Crooks, 1938]?

Would it be useful to think about event trajectories in terms of fields of possibilities analogous to Minkowski's light cones? However, in thinking of the possibilities for a cognitive system one must consider both the constraints going forward from the present AND the constraints extending backward from a goal or end - in order to show the possible paths (or means) to an end. Thus, an event might look something like this, where the constraints on perception and action limit the paths from where you are (now) to where you are striving to reach (intention).
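To make the two-cones idea concrete, here is a toy sketch (my own construction, with a hypothetical one-dimensional 'position' and unit speed): the forward cone is everything reachable from the present, the backward cone is everything from which the goal can still be reached, and the field of possibilities is their intersection.

```python
def cone(center, steps, speed=1):
    """Positions reachable from `center` within `steps` steps at +/- `speed`."""
    return set(range(center - steps * speed, center + steps * speed + 1))

def field_of_possibilities(now, goal, total_steps, t):
    """States at time t that lie on SOME path from `now` (t=0) to `goal`."""
    forward = cone(now, t)                  # constraint from the present
    backward = cone(goal, total_steps - t)  # constraint from the intention
    return sorted(forward & backward)

# Start at 0, intend to be at 4 after 6 steps; midway (t=3) only a narrow
# band of positions keeps both the past and the goal within reach.
print(field_of_possibilities(0, 4, 6, 3))  # -> [1, 2, 3]
```

The narrowing of the intersection as the deadline approaches is one way to picture how constraints, rather than causes, shape the possible means to an end.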

What do you think?  What would a field theory of cognition look like? Is it possible for cognitive science to escape from classical Newtonian narratives and to follow physics into a dynamic world that flows? Would it be a step forward?

For a deeper dive into this see: Flach, Dekker, & Stappers (2007). Playing twenty questions with nature (the surprise version): reflections on the dynamics of experience. Theoretical Issues in Ergonomics Science, 9(2), 125-154.


As the classical story about the blind men and the elephant illustrates - the complexity of natural systems generally exceeds our grasp.  That is, from any one perspective we can only grasp a part of the elephant. Thus, to get a sense of the whole elephant it is necessary to walk around to explore each of the parts and then to mentally stitch the parts together to get a sense of the whole elephant. The different possible positions of the blind men are analogous to the different disciplines in science. Each discipline owns a part of the elephant and the challenge is to combine the various perspectives into a coherent understanding of the elephant.

However, for those who can see, there is another approach for getting a sense of the whole elephant.  We can move away, increasing our distance from the elephant. As we move farther away, we can see less detail, but we can now see relations among the parts that were not visible from up close.  Still, even when the whole elephant is visible, we can only see one side at a time. So, there is no single perspective that allows us to see the whole elephant. There remains the need for some cognitive work to stitch the different perspectives together into a complete understanding of the whole.

Thus, there are two ways that we can change perspectives in exploring the elephant. One way, illustrated by the blind men, is through aggregation of the parts. The second way, available to those with eyes, is to change perspective by increasing distance from the elephant. This is a metaphor for abstraction.

One of the key insights of Jens Rasmussen is that to make sense of complex organizations or complex work domains it is necessary to explore both through decomposition/aggregation and through abstraction. Further, he suggests, based on observations of experts troubleshooting faults in complex technologies, that some regions in the abstraction/aggregation space are privileged for thinking productively about complex systems. In particular, he suggests that at high levels of abstraction the details become less important. For example, there is limited return from reducing a functional purpose into goals, and then further reducing them into sub-goals, which can be further divided into sub-sub-goals. On the other hand, at low levels of abstraction details become more and more important. For example, the shapes of the different components have to fit together in the space available. The threads of one part have to mesh with the threads of another part. The diagrams below are intended to illustrate a space for exploring complex domains jointly through abstraction and decomposition/aggregation.

The key point is that to better understand the whole elephant we have to use multiple senses and take multiple perspectives.  John Boyd uses the analogy of building a snowmobile to illustrate the importance of combining an analysis mindset (e.g., exploring the components of different vehicles - motors, steering mechanisms, traction), and a synthesis mindset (e.g., recombining those components to satisfy a different purpose - driving over snow). Decomposition/aggregation reflects an analysis mindset, and abstraction reflects a synthesis mindset. The hypothesis is that to think productively about complex problems one needs to be able to move along the diagonal in this abstraction/decomposition space.

I am still struggling to find the right way to represent the multiple layers of constraint that shape the dynamics of organizations. The image above is framed in terms of a nuclear power plant. As with previous depictions, I include three layers of constraint - recognizing that there may be other layers within layers. However, I think these three layers represent important distinctions that are common across many different organizations.

In this representation - the intention is to emphasize that the layers are nested - such that outer layers set the context or the degrees of freedom for operations at inner layers. Each layer is monitoring operator actions and plant performance - but they are filtering the information to reflect different spheres of action, different degrees of granularity, and different time spans.

At the Industry Management level the focus is on standards associated with regulation, investment, and training. At the Plant Management level the focus is on anticipating events, planning, and supervision. At the Operator level, the focus is on operating, monitoring, and maintaining the equipment.

A key aspect of the coupling across the layers is the degrees of constraint. That is, how loosely or tightly the inner loops are constrained by outer loops. The stability or success of an organization will be a function of finding the right balance of constraint relative to the demands of the work environment (e.g., the pace of change, the degree of complexity/uncertainty). The balance is reflected in attributes such as subsidiarity and trust - which describe whether organizations give more or less discretion (degrees of freedom, decision authority) to inner layers.

A general principle is that when the pace of change or the degree of uncertainty is high, organizations that give more discretion/flexibility to people in the inner layers to directly adapt to situations will be more stable. This is largely due to the longer time constants required for sensemaking at the outer layers. In other words, when events on the ground are rapidly changing in surprising ways, there is no time to wait for direction from upper levels of management. The shifting of military organizations such as the US Marines toward more Mission Command styles of leadership is motivated by a desire to be more effective in a highly dynamic, uncertain domain.

On the other hand, when the work domains are relatively stable, the broader perspective and longer integration periods (i.e., broader experience) of outer layers are less likely to lead the organization to over-react to transient changes (e.g., chase fads). Thus, more top-down control or regulation leads to stable, more efficient performance. For example, commercial aviation and nuclear power domains typically benefit from more top-down regulation and constraint that reflect extensive experience in relatively stable domains of operation.

It is important to recognize that the effectiveness of a polycentric control system (or any control system) can only be judged relative to the functional demands of specific work contexts. So, be skeptical of any models that propose to set the 'ideal' or 'optimal' general recipe for success. Solutions that may have been well tuned to the past, may become unstable when the demands of the work domains change.


In the previous post - I offered a model for a cognitive system in which I attempted to integrate the three dimensions of the Strategic Game described by Boyd (Moral, Mental, and Physical) as nested layers in a polycentric control system. A unique and critical aspect of this model is that each layer functions effectively as an OODA loop closed through the ecology. That is - each layer is involved in directly interacting with the ecology. The difference between layers is the time span over which they are integrating information. The outer layer in the model is integrating across events - to derive general principles that have broad implications for stability. The middle layer is integrating over a specific event - typically taking into account patterns that allow it to recognize possibilities, to apply heuristics, to plan, and to steer activities toward a satisfying outcome. Finally, the inner layer is controlling actions and reacting to stimuli and situations in the context of the beliefs and intentions specified by the outer layers.

However, I think this is difficult for many to grok, because it is difficult to get past conventional assumptions about information processing. Information processing models have typically been formulated as a series of stages with a precedence relation, such that later stages in the process only "see" the products of earlier stages. Thus - the deeper stages are required to utilize inference to construct an 'internal model' of the actual ecology from the 'cues' provided by the earlier stages. In other words - there is typically assumed to be a gulf (blanket or filter) that separates our mental experiences from our direct physical interactions with the ecology. Many people can't imagine that abstract principles or beliefs such as "do unto others as you would have them do unto you" could actually be tested directly through the consequences of actions in the ecology.

However, I hypothesize that this cognitive interplay between moral, mental, and physical dimensions is scale independent and that it applies equally to individuals and to organizations as cognitive systems. Thus, the figure below frames exactly the same model - but in terms of organizational dynamics. The hope is that the organizational perspective will be a little easier for people to grok, because the interplay of the different levels is more observable in an organization than in an individual organism.

In this diagram the inner loop (green) represents the front line workers who are directly acting in the ecology. For example, these might be the first responders (fire, police, and medics) who are directly involved in managing the emergency response.

The next outer layer (orange) represents the incident command center. This includes the incident commanders and managers who are monitoring the situation on the ground, who are managing the logistics, and who are developing plans to allocate responsibility and coordinate the actions of the first responders on the ground. Note that the incident command is getting information from the responders on the ground, but they also have access to information about the ecology (e.g., the availability of external resources) that is not available to the people who are immersed in the details of local situations.

The outermost layer (blue) represents the organizational homes of the various first responders (e.g., the fire department, the police department, and the medical system). This outer system is less active in dealing with the specific immediate emergency event - but this loop will be responsible for after-action analysis of the event, where it will have information about the ecology that might not have been available to either of the inner layers during the event. This layer is responsible for evaluating the response in relation to other events (e.g., after-action reviews and historical analyses of prior events) and for extracting the lessons and general principles that will potentially improve responses to future events. These lessons will then be integrated into training processes that will shape how planning and emergency responses are conducted in future events.

A key aspect of the dynamic is that the beliefs, intentions, and actions at the different levels are shaping and being shaped by each of the other layers - though the changes are happening at different time scales. Each layer is operating at a different tempo or bandwidth than the other layers, and most importantly this means that each layer will be able to 'see' or 'pick up' patterns in the ecology that cannot be seen by the other layers. It is important to emphasize that these patterns are not 'in the head' but are emergent properties of the direct physical interaction with the ecology. Each layer is an abductive system in that it is forming hypotheses that are then directly tested by future actions in the ecology (not by inferences on a mental model). Every loop is 'tested' against the practical consequences it has toward achieving stable (i.e., satisfying) outcomes in the actual ecology.

In sum, I am proposing that the classical partitioning of cognitive systems into a sequence of processing stages is wrong. Yes - information is being processed at multiple different levels - but each level functions much like an OODA loop and the stability of the complete system is dependent on whether the various loops shape interactions with the ecology in productive or satisfying directions.

The figure below is an attempt to illustrate three important dimensions of any cognitive system as layers in a polycentric control system. The three layers are designed to reflect John Boyd’s dimensions of the strategic game: Physical, Mental, and Moral. Note that all three layers are closed loops (effectively OODA loops) that get feedback as a result of the consequences of acting on the ecology. Further, the outer layers set up constraints or determine the degrees of freedom available to inner layers. Finally, it is important to recognize that each layer is working continuously and simultaneously with the other layers, but at a different time scale (i.e., these are not sequential operations).
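As a rough sketch of how outer layers can set the frame for inner layers without sequential stage-wise processing, consider this hypothetical two-layer loop (the gains, update periods, and the drifting target are my own illustrative assumptions, not part of the model above):

```python
# A two-layer sketch of 'outer layers set the degrees of freedom for inner
# layers': an outer loop revises the intention (setpoint) every 20 ticks,
# while an inner loop corrects toward that intention on every tick.

def run(world_target, ticks=100, outer_period=20):
    state = 0.0       # the 'ecology' variable being acted on
    intention = 0.0   # constraint handed down by the outer layer
    for t in range(ticks):
        if t % outer_period == 0:
            # Outer (slower) loop: re-orient on the larger pattern and
            # reset the frame within which the inner loop acts.
            intention = world_target(t)
        # Inner (faster) loop: act directly on the ecology, closed through
        # feedback of consequences, within the current intention.
        state += 0.5 * (intention - state)
    return state, intention

# A world whose target drifts slowly; the inner loop settles near the most
# recently formed intention, not the instantaneous world state.
final_state, final_intention = run(lambda t: t / 10.0)
print(abs(final_state - final_intention) < 0.01)  # -> True
```

Both loops are closed through the same 'ecology' variable; they differ only in tempo, which is the point of the layered model.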

The innermost layer (green) in this diagram represents the direct coupling with the ecology that reflects perceptual-motor skills. This "represents the world of matter-energy-information all of us are a part of, live in and feed upon" (Osinga, 2007, p. 210). This type of coupling was the focus of J.J. Gibson's research, and constructs such as affordance and optical invariant play important roles here. Much of the action at this level is 'automatic,' requiring little conscious awareness.

The next outer layer (orange) in this diagram reflects conscious thinking related to problem solving and decision making. At this level an intention is formed that will frame subsequent actions in the layer below. This "represents the emotional/intellectual activity we generate to adjust to, or cope with, that physical world" (Osinga, 2007, p. 210). The functioning of this layer is the focus of researchers such as Gary Klein (Recognition-Primed Decision Making, Naturalistic Decision Making) and Gerd Gigerenzer (Ecological Rationality). Important constructs to consider at this level include abduction and heuristics. The intention provides the framing for attention, recognition, and the possible actions at the skill-based level.

The outermost layer (blue) in this diagram reflects the influence of long-term experience and learning. It "represents the cultural codes of conduct or standards of behavior that constrain, as well as sustain and focus, our emotional/intellectual responses" (Osinga, 2007, p. 210). This layer reflects the values, principles, assumptions, and beliefs that shape performance at the other layers. It reflects the largely 'implicit knowledge' that shapes analysis and synthesis at the conscious thought level below.

This system is a multilayered, adaptive control system that is continually tuning to achieve a stable relation with the ecology. This tuning is happening simultaneously at all three levels, but at different time scales. Ultimately, stability depends on coordinated interaction across the three levels. The general notion is consistent with C.S. Peirce's logic of abduction, Piaget's constructs of assimilation and accommodation, EJ Gibson's construct of attunement, and Friston's Free Energy Principle. Thus, the dynamic is essentially one of minimizing "surprise" (i.e., reducing uncertainty) relative to the continuously changing Ecology.

A unique feature of this model is that most other models envision the interactions between levels as sequential. That is, there is a precedence relation between the three layers, so that the outer layers need to infer the 'ecology' based on cues provided by the inner layer. In contrast, the model presented here suggests that all three levels have direct feedback of the consequences of action - though this feedback is integrated over different time constants. In essence - each layer is tuned to a different bandwidth (in terms of the patterns that matter most).

Three paramedics are driving along a rural road in central Ohio when a truck in front of them suddenly seems to go out of control, causing several other cars to crash, hitting some pedestrians who were skating along the road, and eventually hitting a tree. The paramedics get out to survey the scene. Two of them each go to a different victim and begin treating them; the third paramedic surveys the situation and heads back to the car, where he remains throughout the event.

What is he doing in the car?

In fact, after the event was over, the other two paramedics were puzzled and asked him why he did not immediately begin treating the other victims. His response was "which other victims?" It turns out that neither of the other two paramedics had realized that there were more than three victims.

What was the third paramedic doing in the car? He was taking incident command. He realized that for some of the victims to survive it would be essential to get them to trauma hospitals, where they could get more extensive treatment, within the "golden hour." He realized that to ensure that all the victims were saved they would need to get ambulances and a care flight helicopter to the scene as quickly as possible. He was on the radio calling for support and providing directions so that responders could reach the rural scene as quickly as possible. This included identifying a potential landing site for the care flight helicopter.

While he was in the car, a number of other people stopped to offer help, including a nurse. He was able to direct these volunteers, asking the nurse to attend to one of the injured and directing another person to attend to the truck driver, who was wandering from his truck in a daze.

After the incident, this paramedic explained to the other two, "Yes, I could have started to treat one of the victims, but I wanted to make sure that all the victims survived. And I realized that to do that, we needed more resources and somebody would have to take command and coordinate these resources." 

This story illustrates the functions related to three different layers in a polycentric control system.

  • At the mutual adjustment level, two of the paramedics, the nurse, and other volunteers were directly acting to address the immediate demands of the situation.
  • At the planning level, the third paramedic was functioning as an incident commander. He wasn't directly treating patients, but he was attending to larger patterns and trends in order to anticipate needs and to coordinate the resources that would ultimately be critical for achieving a satisfying outcome.
  • And the reason this paramedic took this role was because he had been trained in the National Incident Management System, which had been developed based on generations of firefighting experience with large forest fires. This has become a national standard, and many fire and police departments are required to have NIMS training. This establishment and training of a standard reflects contributions at the Standardization or Knowledge-based layer of the emergency organizations. Additionally, the idea of a 'Golden Hour' is a principle based on extensive experience in emergency medicine. The Standardization Layer reflects a capacity to integrate across an extensive history of past events to pull out principles that provide guideposts (structure, constraints) for organizing operations at the other levels.

The figure below offers an alternative to the layered diagram used in prior essays for representing a polycentric control system. This representation was developed based on observations and interviews with emergency response personnel. Here the layers are illustrated as nested control loops. These loops are linked through communications (the thin lines) but also through the propagation of constraints (the block arrows), in which outer loops shape or structure the possibilities within the inner loops (e.g., standard practices, command intent, a plan, distributing resources). The coordination between layers is critical to achieving satisfactory results, and this coordination depends both on communications within and between layers and on the propagation of constraints (sometimes articulated as common ground, shared expectations, shared mental models, or culture).

Note that neither the previous representation showing layers nor the present representation showing nested control loops is complete. Each representation makes some aspects of the dynamic salient - while hiding other aspects of the dynamic and perhaps carrying entailments that are misleading with respect to the actual nature of the dynamic. An assumption of general systems thinking is that there is no perfect representation or model. Thus, it is essential to take multiple perspectives on nature in order to discover the invariants that matter - to distinguish the signal from the noise.

Three important points:

  • First - the power of a polycentric control system relative to addressing complex situations is that each layer has access to potentially essential information that would be difficult or impossible to access at other layers. However, without effective coordination between the layers some of that information will be impotent.
  • Second - it is easy for people operating within a layer to have tunnel vision, to take the functions of the other layers for granted, and to underestimate the value that the other layers contribute. For example, it is easy for me and Adam to take Fred's artwork for granted. However, when Fred is replaced by someone less skilled or with a different style - suddenly the gap becomes clearly evident.
  • Third - be careful not to get trapped in any single perspective, model, or metaphor. Be careful that your models don't become an iron box that you force the natural world to fit into. Be careful not to fall prey to the illusion that any single model will provide the requisite variety that you need to regulate nature and reach a satisfying outcome. 

Now, the rest of the story: All the victims from the accident above survived. 


Only variety can destroy variety

This quote is a concise statement of Ashby's Law of Requisite Variety. Variety in this context refers to the complexity of a problem, typically indexed using information statistics. The important implication of Ashby's law for organizations is that to deal with complexity effectively, an organization must have flexibility of thought and/or action that is comparable to the complexity of the problem. If an organization is less flexible than the variety of the problem, then there will be opportunities that it cannot realize or threats that it cannot avoid. This concept is fairly intuitive in the context of competitive sports - for example, the player who has the broader arsenal of shots (the greater variety) will consistently defeat a player with fewer capabilities. This is because the player with the broader arsenal will eventually find shots that the opponent can't counter.
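Ashby's point can be illustrated with a toy matching game (my own construction, not from Ashby): a disturbance takes one of six values, and a regulator can only 'absorb' a disturbance for which it has a matching response. A regulator with fewer responses than the disturbance has values cannot hold the outcome steady.

```python
def best_case_outcome_variety(n_disturbances, regulator_moves):
    """Minimum number of distinct outcomes the regulator can force,
    choosing its best response to each disturbance."""
    outcomes = set()
    for d in range(n_disturbances):
        # Outcome is 0 ('absorbed') only if the regulator has a matching move;
        # otherwise the disturbance leaks through to the outcome.
        outcomes.add(0 if d in regulator_moves else d)
    return len(outcomes)

# A regulator with full variety (6 moves) can hold the outcome to one value...
print(best_case_outcome_variety(6, set(range(6))))  # -> 1
# ...but a regulator with only 3 moves cannot: 3 disturbances get through.
print(best_case_outcome_variety(6, {0, 1, 2}))      # -> 4
```

Only the regulator's variety can 'destroy' the variety of the disturbance; any shortfall shows up as residual variety in the outcome.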

Following up on the idea of a control system with multiple layers with different time constants - one of the key implications of the polycentric control model is that none of the layers has the capacity to handle the requisite variety of any complex situation or operational domain. Thus, skill in any complex domain depends on the coupling and coordination between the layers. Each layer has its lane of expertise that allows it to deal with aspects of situations that cannot be adequately dealt with by the other layers. But no lane is completely independent - and activities in one lane shape the possibilities within each of the other lanes. The consequences of the between layer interactions can be that activities in each lane can facilitate or inhibit possibilities within the other layers. 

It is probably naive to think that there are only three layers, but as noted in the previous essay, the three layers shown in this figure parse the system in a way that aligns with numerous observations about the dynamics of cognitive systems and organizations from a wide range of disciplines (e.g., psychology, economics, sociology, control theory, and military science). Also - while I find the idea of filters tuned to different ranges of frequencies to be a useful analog for thinking about the layers - this is a loose analogy and not intended to be taken too literally (or quantitatively). I hope that this won't be too big a distraction to those who are not familiar with thinking in terms of the frequency domain. The goal for this essay is to introduce these three layers, and then following essays will spend more time considering the coupling or interactions across layers. 

The Mutual Adjustment or Skill-based Layer

The bottom layer in the diagram above will typically be sensitive to high-frequency bands (i.e., rapidly or suddenly changing events). This represents the capacity to respond and to adapt quickly to sudden changes and surprises. In an individual this might be the motor control system; in a large organization this might be the front-line workers who are most directly engaged in carrying out operations (in military terms, this is the tactical level); and on an American football team this layer would reflect the actions of the players during play. Activity at this level typically involves implementing automatic processes or standard procedures to carry out the plans formulated at higher levels. However, this layer will also typically be required to improvise to deal with unexpected variability that was not anticipated in the planning or in the development of the automatic processes, heuristics, and standard procedures. For highly dynamic, rapidly changing domains of operation, the capacity to improvise at this level will be critical to resilience. Consider, for example, the ability of players on a football team to react intelligently during a 'broken play,' when the quarterback is forced from the pocket by a defensive surprise. The biggest threat to stability at this level is the potential to chase the noise - that is, to waste energy following fads or spurious changes that are distractions and that do not lead to improvements or long-term success.

The Planning or Rule-based Layer

The middle layer in the diagram above will typically be tuned to intermediate frequencies - that is, it integrates information over broader ranges of space and time than is possible at the lower level. This layer will be able to pick up event patterns and trends that require a broader perspective than is possible at a lower layer. In an individual, this would represent intentions and conscious heuristics, rules, and plans. In organizations, this is typically the function of managers, and in emergency response and military organizations this would reflect the activities in an incident command center. For example, in an air operations center they might be generating plans for the next 6-10 hours of operations. In an American football team this layer would reflect the formulation of a game plan and then the communications between coaches (on the field and in the press boxes) and the quarterback to call a play or series of plays. It will also include the replanning (e.g., at half-time) when deficiencies in the original game plan are discovered. This is the layer where it is possible to gain "top-sight" on a complex situation. However, it takes time to integrate all the information required for top-sight - too long for this layer to respond effectively to sudden surprises or momentary changes. For example, the receivers on a football team can't wait for the coaches to plan a new pass route when the original play breaks down and the quarterback is forced from the pocket. The other challenge to planning is that assumptions and expectations that are valid at the time a plan is formulated may not remain valid over the time it actually takes to implement the plan.

The Standardization or Knowledge-based Layer 

The top layer in the diagram above will typically be tuned to very low frequencies, which provides the capacity to integrate over broad spans of time to identify recurring patterns, very slow trends, and principles (or invariants) that apply across many different events or situations. This layer reflects experience passed down from generation to generation through oral traditions, literature, and culture. For example, for the military domain this layer is informed by a long history that stretches back to Sun-Tzu and beyond. This layer is typically responsible for developing standard practices and expectations - specifying operational principles that can be applied successfully across a broad range of situations and then inculcating those practices into the organization through training (e.g., deliberate practice). In an individual, this layer would typically be associated with a person's value system and the enduring patterns that are typically identified as their personality. This system does not directly control action - but it shapes the planning and actions of an individual in general ways that will be apparent across broadly different situations. In an organization, this layer reflects what is often referred to as its culture. In operation, this layer typically involves the top-level leaders in an organization who set the general goals and expectations for the organization and might also specify the command intent as a guide to a specific operation. 

Interactions 

To push the filtering analogy a bit - a linking assumption is that the requisite variety of a complex problem domain will involve signals that are spread across the full range of frequencies. So, a filter that is tuned to a certain bandwidth will miss signals outside that bandwidth. A key factor that will limit the bandwidth of any filter is the effective time constant or the lags associated with feedback. So, the simple fact that the time to collect, process, and act on information will necessarily differ across the layers of a polycentric control system suggests that the bandwidth limits for each layer will be different. But of course, there are many other factors that will impact what signals can be seen at each layer (e.g., trust). The main point is that people at the top layer of the polycentric control system (e.g., the C-suite) will have access to information not available to the other layers. But similarly, each of the other layers will have access to information that is not available to the C-suite or to the other layers. Thus, coordination between layers is necessary to meet the demands of Ashby's law. The variety of the whole is always greater than the variety of any single layer. Or more simply - each layer offers a unique and potentially valuable perspective towards addressing the operational demands. 
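To make the filtering analogy concrete, here is a minimal sketch in Python. The "operational signal," the three time constants, and the layer labels are all invented for illustration; the point is only that the same stream of events looks very different through filters with different bandwidths - the fast filter tracks sudden changes (and the noise), while the slow filter recovers only the long-term trend.

```python
import math

def ema(signal, alpha):
    """Exponential moving average: a simple one-pole low-pass filter.
    Larger alpha -> wider bandwidth (faster response, shorter memory)."""
    out, y = [], signal[0]
    for x in signal:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# A toy "operational signal": slow trend + mid-frequency cycle + fast fluctuation.
n = 2000
signal = [0.001 * t                           # slow trend
          + math.sin(2 * math.pi * t / 200)   # mid-frequency pattern
          + math.sin(2 * math.pi * t / 8)     # fast fluctuation
          for t in range(n)]

# Hypothetical time constants for the three layers (illustrative only).
skill    = ema(signal, alpha=0.5)    # short lag: tracks fast changes closely
planning = ema(signal, alpha=0.05)   # medium lag: smooths out the fast noise
culture  = ema(signal, alpha=0.002)  # long lag: only the slow trend survives
```

Running this, the fast filter has the smallest tracking error but the roughest output, and the slow filter the reverse - no single bandwidth captures the whole signal, which is the point of Ashby's law applied across layers.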

Success ultimately depends on coupling and coordination across the three layers. Each layer has the capacity to complement the other layers and fill in information gaps in the other layers. However, the layers can also function in ways that complicate and inhibit capabilities at the other layers. A recurring example of a layer getting out of its lane in ways that inhibit capabilities at the other layers is the problem of micro-management, or of authoritarian, centralized organizations. As Friedrich Hayek observed with respect to managing economies - no matter how well-intended or intelligent a centralized management organization is, it simply takes too long to collect and make sense of all the potentially relevant information. In essence, the decisions will always be a day late and a dollar short. A centralized control agency is far too sluggish to productively manage large economies. On the other hand, he also recognized that for free markets to function well, there need to be effective communication systems and market constraints that entail a certain degree of top-down constraint. When the balance is right, the free market system can self-organize in highly intelligent ways. In the military, the construct of Mission Command reflects an alternative to micro-management that emphasizes the importance of clearly communicating intent (minimal top-down constraint) and then empowering lower levels in the organization to work out the operational and tactical details as required by situational demands that could not have been anticipated in advance.

With respect to Ashby's Law the challenge for organizations (and organisms) is to distribute authority and responsibilities across the layers in a way that is commensurate with the access to information at each of the layers. 

The goal for this essay was to begin differentiating the functions of the separate layers. However, the major systems principle to consider with regards to polycentric control is that the individual layers can only be fully understood or appreciated in the context of the whole.  With respect to functioning successfully in a complex world - no single layer can effectively deal with the demands of requisite variety without the support of the other layers. And ultimately, the power (or weakness) of the whole will emerge from interactions across the layers. 


As I have discussed before, I was introduced to the quantitative methods for analyzing control systems early in my graduate career, and it set an important framework for how I have approached Joint Cognitive Systems. Learning mathematical control theory was a struggle for me, and I was so eager to share what I learned with other social scientists that I co-authored a book on control theory with Richard Jagacinski, who was my major graduate advisor. However, as I began to look at joint cognitive systems that were more complex than laboratory target acquisition and tracking tasks, I soon realized that everyday life is a lot more complex than those tasks, and that the quantitative models that worked for simple servomechanisms - and for experimental conditions that required people to act like simple servomechanisms - were of limited value. 

Everyday life and especially organizational dynamics typically involve many interconnected loops with non-trivial interactions. For example, Boyd's OODA Loop, which is often used to describe skilled behavior, is not a single loop but a multi-loop, adaptive control system. 

Joint Cognitive Systems typically have the capacity to learn and to adapt the dynamics of the perception-action coupling to take advantage of that learning. Thus, Joint Cognitive Systems are able to modify or tune the dynamics of the perception-action coupling to fit the unique demands of different situations. The diagram below is one that I developed to show multiple adaptive loops that reflect some of the different strategies that control engineers have used in designing adaptive automatic control systems (e.g., gain scheduling, model-reference adaptive control, self-tuning regulators). Note that the two different styles of arrows reflect different functions - the thin arrows reflect the flow of information that is input to and 'processed' through the dynamics of the different components of the system (i.e., the boxes). The block arrows, however, actually operate on the boxes and change or tune the processes within the boxes. For example, in an engineered adaptive control system the result might be to turn down or amp up the sensitivity or 'gain' within a control element. Thus, in adaptive control systems the outer loops typically alter the dynamics of the inner loops. 
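As a toy illustration of that idea - not any particular engineering method - the Python sketch below wraps an outer adaptive loop around an inner proportional loop controlling a simple first-order plant. All of the numbers (the plant gain, the adaptation rate, the cap on the gain) are invented for illustration; the point is only that the outer loop operates on a parameter of the inner loop rather than on the signal itself.

```python
def run(adapt=True, steps=300):
    """Inner loop: proportional control of a first-order plant.
    Outer loop (if adapt): slowly tunes the inner loop's gain K."""
    y, K = 0.0, 0.1          # plant output and inner-loop gain
    b, gamma, r = 0.5, 0.02, 1.0   # plant gain, adaptation rate, target
    errors = []
    for _ in range(steps):
        e = r - y            # inner loop processes the error signal...
        u = K * e
        y = y + b * u        # ...and drives the toy plant
        if adapt:            # outer loop operates on the inner loop itself,
            K = min(K + gamma * abs(e), 1.5)   # capped to keep the loop stable
        errors.append(abs(e))
    return errors
```

With adaptation turned on, the accumulated tracking error is noticeably smaller than with the gain held fixed - the outer loop has changed the dynamics of the inner loop, which is exactly the relationship the block arrows in the diagram are meant to convey.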

While the previous figure was designed to imagine skilled motor control as an adaptive system, the next diagram was designed based on observations of decision making in the Emergency Department of a major hospital. The point was to illustrate some of the ways in which organizations learn and adapt as a result of experience. 

I hope that the preceding diagrams can help readers to get a taste of the complexity of control in the natural world. However, I am not fully satisfied with them - I still have a feeling that these diagrams trivialize the real complexity. More recently I have been inspired by the work of Elinor Ostrom on how communities adapt to manage shared resources and to avoid the "tragedy of the commons," and by the work of the SNAFU Catchers (Allspaw, Cook, & Woods) on DevOps and managing large internet platforms. This work suggests that we have to think about layers of control - or polycentric control. 

The groups who have actually organized themselves are invisible to those who cannot imagine organization without rules and regulations imposed by a central authority. (Ostrom, 1999, p. 496)

Each technology shift—manual to automated control to multi-layered networks—extends the range of potential control, and in doing so, the joint cognitive system that performs work in context changes as well. For the new joint cognitive system, one then asks the questions of Hollnagel’s test:

What does it mean to be ‘in control’?

How to amplify control within the new range of possibilities?   (Woods & Branlat, 2010, p. 101)

The following figure is my attempt to illustrate a polycentric control system. This diagram consists of three layers that seem to have a rough correspondence with Rasmussen's (1986) three levels of cognitive processing (Knowledge-, Rule- and Skill-based) and Thompson's (1967) three means of coordination within organizations (Standardization, Planning, Mutual Adjustment). These levels interact in two distinct ways. One is passing information through direct communication, as is typically represented by lines and arrows in standard processing diagrams. The second important way is through the propagation of constraints - I am unaware of any convention for diagramming this. In general, higher levels set constraints on the framing of problems at lower levels - in more technical terms, the higher levels impact the degrees of freedom for action at the lower levels. For example, the standards and principles formulated at the highest level set expectations (e.g., through the way people are selected, trained, and rewarded) for the 'proper' way to do planning and the proper way to act. Similarly, plans set expectations about responsibilities and actions at the mutual adjustment level. The constraints typically don't specify the actions in detail - but they do shape the framing of situations and often bound the space of possible actions that are considered. 

Although it is not possible to model Polycentric Control Systems using the same mathematics that was used to model simple servomechanisms, there are important principles associated with stability that will generalize from the simple systems to the more complex systems. Perhaps the most significant of these is the impact of time delays on the stability of these systems, and the implications for the ability to pick up patterns and to control or respond to events. The effective time delays associated with communication and feedback will set constraints on the bandwidth of the system. This is seen in models of human tracking as the Crossover Model - in which the effective reaction time sets limits on what frequencies the human can follow without becoming unstable. However, this constraint is also seen almost universally in natural systems in terms of the 1/f characteristic that appears so predominantly when examining the performance of many natural systems in the frequency domain. In essence, there are always delays associated with the circulation of information (e.g., feedback) in natural systems, and this will always set bandwidth limits on the ability to adapt to situations. 
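The destabilizing effect of delay can be seen in a few lines of simulation. In this sketch (the gain, the delay, and the toy plant are all invented for illustration, not drawn from the Crossover Model itself), a proportional controller pursues a constant target but only sees the output from several time steps ago. The same gain that converges smoothly with no delay produces growing oscillations once the lag is long enough.

```python
def track(gain, delay, steps=200):
    """Pursue a constant target using delayed feedback:
    y[t+1] = y[t] + gain * (target - y[t - delay])"""
    target = 1.0
    y = [0.0] * (delay + 1)               # pad history so y[t - delay] exists
    for _ in range(steps):
        error = target - y[-1 - delay]    # controller sees a stale output
        y.append(y[-1] + gain * error)
    return y

stable   = track(gain=0.5, delay=0)   # converges: error halves each step
unstable = track(gain=0.5, delay=4)   # same gain, but the lag destabilizes it
```

Nothing about the controller changed between the two runs except the age of the information it acts on - which is why effective time delays, not intentions or intelligence, set the bandwidth limits of each layer.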

A key attribute of the different layers shown in the diagram of the polycentric control system is that the higher layers will have effective time delays that are progressively longer than the lower layers. On the one hand, this means that high frequency events require the capacity for elements at the mutual adjustment level to be in control (a requirement for subsidiarity). For example, it means that in highly dynamic situations the people on the ground (at the mutual adjustment level) may need to be free to adapt to unexpected situations without waiting for instructions from the higher levels (even if this may require them to deviate from the plan or violate standard operating procedures). On the other hand, this means that higher levels may be better tuned to pick up patterns that require integration of information over space and time that is outside the bandwidth of people who are immersed in responding to local events (slowly evolving patterns or general principles). Thus, for example, an incident command center may be able to provide top-down guidance that allows people at the mutual adjustment level of a distributed organization to coordinate and share resources with people who are outside of their field of regard. Similarly, standard operating procedures can be developed and trained to prepare people at the mutual adjustment level to deal efficiently with recurring situations. 

So, I hope this short piece has heightened your appreciation for the complexity of natural control systems and whetted your appetite to learn more about the dynamics of complex joint cognitive systems. There is a lot more to be said about the nature of polycentric control systems and the implications for the design and management of effective organizations. 

Yes - following on the previous post - I do believe that Cognitive Systems Engineering (CSE) generates juice that is well worth the squeeze. However, I think that it is important to distinguish between CSE as an academic enterprise exploring basic issues about the nature of work and the nature of human cognition versus a component of a design process.

When implemented in a design process, a CSE work analysis is sometimes mistakenly implemented as a prerequisite to other aspects of design (e.g., prototyping). The problem with this is that work analysis is never done. Work domains are not static - they are constantly changing due to new opportunities and new challenges associated with evolving technologies and operational contexts. Thus, there are always new depths to explore and often one question leads to even more questions. If you delay other aspects of design until the work analysis is complete - nothing will ever get built. 

Thus, work analysis should be implemented as a co-requisite to other aspects of design. For example, customers or operators often have a difficult time articulating why they make certain choices, how new technologies might be helpful, or what they need to work more effectively without some concrete context. One way to provide that context is to create concrete scenarios (e.g., critical incidents). Another way is to provide them with a concrete model or prototype that they can manipulate. Even crude models (e.g., back-of-the-napkin sketches or paper prototypes) can be very effective. In the process of reviewing a scenario or interacting with a prototype, customers will sometimes be able to recognize and articulate new insights about the utility of the prototype or potential problems with it. This is reflected in Michael Schrage's concept of 'Serious Play.' In essence, prototypes can help to engage operators and allow them to participate in the idea generation process. This can be a valuable source of knowledge about a work domain. Prototypes can greatly enhance knowledge elicitation and work analysis. 

So, it is not a question of doing work analysis OR building design prototypes - success typically requires BOTH work analysis AND prototyping. And further, there is no fixed precedence. Ideally, the work analysis should be tightly coupled with more concrete aspects of design (e.g., wire framing, prototyping). In this coupling, work analysis can be both feedforward (generating hypotheses) and feedback (evaluating operator responses to concrete implementations). 

With the modern explosion of technologies for managing complex information, work domains are rapidly changing. This requires a CSE perspective to assess the changing opportunities and risks and to generate alternative hypotheses for how to leverage these technologies more effectively to reduce risks and to stay competitive. This ongoing work analysis can be a resource for designing new interfaces and decision tools, for designing alternative concepts of operation, and for developing more effective training processes. However, design decisions cannot wait for this work analysis to be complete, because it will never be complete. 

In sum, CSE is both an academic enterprise and a field of practice. As an academic enterprise it focuses on understanding cognition situated in the context of the complexities of work environments. As such, it often challenges the conventional wisdom of a cognitive science based on reductive methods that utilize laboratory puzzles to decouple information processing stages from the dynamics of natural situations. As a field of practice, CSE has to function as a co-requisite of other components of design to probe the complexities of work domains. To be effective in practice, cognitive systems engineers have to learn to be team players and they must be able to coordinate and integrate the work analysis processes with the other design processes.

To be effective in practice, cognitive systems engineers have to function on interdisciplinary design teams as humble experts, rather than know-it-all academics who want to lead the parade.
