
The figure below is an attempt to illustrate three important dimensions of any cognitive system as layers in a polycentric control system. The three layers are designed to reflect John Boyd’s dimensions of the strategic game: Physical, Mental, and Moral. Note that all three layers are closed loops (effectively OODA loops) that get feedback as a result of the consequences of acting on the ecology. Further, the outer layers set up constraints or determine the degrees of freedom available to inner layers. Finally, it is important to recognize that each layer is working continuously and simultaneously with the other layers, but at a different time scale (i.e., these are not sequential operations).

The innermost layer (green) in this diagram represents the direct coupling with the ecology that reflects perceptual-motor skills. This “represents the world of matter-energy-information all of us are a part of, live in and feed upon” (Osinga, 2007, p. 210). This type of coupling was the focus of J.J. Gibson’s research, and constructs such as affordance and optical invariant play important roles. Much of the action at this level is ‘automatic,’ requiring little conscious awareness.

The next outer layer (orange) in this diagram reflects conscious thinking related to problem solving and decision making. At this level an intention is formed that will frame subsequent actions in the layer below. This “represents the emotional/intellectual activity we generate to adjust to, or cope with, that physical world” (Osinga, 2007, p. 210). The functioning of this layer is the focus of researchers such as Gary Klein (Recognition-Primed Decision Making, Naturalistic Decision Making) and Gerd Gigerenzer (Ecological Rationality). Important constructs to consider at this level include abduction and heuristics. The intention provides the framing for attention, recognition, and the possible actions at the skill-based level.

The outermost layer (blue) in this diagram reflects the influence of long-term experience and learning. It “represents the cultural codes of conduct or standards of behavior that constrain, as well as sustain and focus, our emotional/intellectual responses” (Osinga, 2007, p. 210). This layer reflects the values, principles, assumptions, and beliefs that shape performance at the other layers. It reflects the largely ‘implicit knowledge’ that shapes analysis and synthesis at the conscious thought level below.

This system is a multilayered, adaptive control system that is continually tuning to achieve a stable relation with the ecology. This tuning is happening simultaneously at all three levels, but at different time scales. Ultimately, stability depends on coordinated interaction across the three levels. The general notion is consistent with C.S. Peirce's logic of abduction, Piaget's constructs of assimilation and accommodation, EJ Gibson's construct of attunement, and Friston's Free Energy Principle. Thus, the dynamic is essentially one of minimizing "surprise" (i.e., reducing uncertainty) relative to the continuously changing Ecology.

What distinguishes this model from most others is the treatment of feedback. Most models envision the interactions between levels as sequential: there is a precedence relation between the three layers, so that the outer layers must infer the 'ecology' from cues provided by the inner layer. In contrast, the model presented here suggests that all three levels have direct feedback about the consequences of action - though this feedback is integrated over different time constants. In essence, each layer is tuned to a different bandwidth (in terms of the patterns that matter most).
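To make the idea of feedback integrated over different time constants a little more concrete, here is a minimal sketch (my own illustration, not part of the model itself) in which three 'layers' receive exactly the same stream of consequences but smooth it with different time constants. The fast layer tracks every momentary surprise, while the slow layer mostly registers the long-term drift. The signal, the time constants, and the exponential-smoothing scheme are purely illustrative assumptions.

```python
import random

# Illustrative sketch: three "layers" receive the same feedback signal,
# but each integrates it with a different time constant (in steps).
TIME_CONSTANTS = {"skill": 2.0, "planning": 20.0, "standardization": 200.0}

def smooth(signal, tau):
    """Exponentially weighted average: longer tau -> lower effective bandwidth."""
    alpha = 1.0 / tau
    estimate, history = 0.0, []
    for x in signal:
        estimate += alpha * (x - estimate)
        history.append(estimate)
    return history

random.seed(1)
# Feedback = slow drift + fast noise (a stand-in for consequences of action).
signal = [0.01 * t + random.gauss(0, 1.0) for t in range(1000)]

for layer, tau in TIME_CONSTANTS.items():
    est = smooth(signal, tau)
    print(f"{layer:15s} tau={tau:5.0f}  final estimate={est[-1]:6.2f}")
# The fast layer's estimate jumps with every surprise; the slow layer's
# estimate mostly reflects the underlying trend (the drift term).
```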

Three paramedics are driving along a rural road in central Ohio when a truck in front of them suddenly seems to go out of control, causing several other cars to crash, hitting some pedestrians who were skating along the road, and eventually hitting a tree. The paramedics get out to survey the scene. Two of them each go to different victims and begin treating them; the third paramedic surveys the situation and heads back to the car, where he remains throughout the event.

What is he doing in the car?

In fact, after the event was over, the other two paramedics were puzzled and asked him why he had not immediately begun treating the other victim. His response was, "Which other victims?" It turns out that neither of the other two paramedics realized that there were more than three victims.

What was the third paramedic doing in the car? He was taking incident command. He realized that for some of the victims to survive it would be essential to get them to trauma hospitals, where they could get more extensive treatment, within the "golden hour." He realized that, to ensure that all the victims were saved, they would need to get ambulances and a care flight helicopter to the scene as quickly as possible. He was on the radio calling for support and providing directions so that responders could get to the rural scene as quickly as possible. This included identifying a potential landing site for the care flight helicopter.

While he was in the car, a number of other people stopped to offer help, including a nurse. He was able to direct these volunteers, asking the nurse to attend to one of the injured and directing another person to attend to the truck driver, who was wandering from his truck in a daze.

After the incident, this paramedic explained to the other two, "Yes, I could have started to treat one of the victims, but I wanted to make sure that all the victims survived. And I realized that to do that, we needed more resources and somebody would have to take command and coordinate these resources." 

This story illustrates the functions related to three different layers in a polycentric control system.

  • At the mutual adjustment level, two of the paramedics, the nurse, and other volunteers were directly acting to address the immediate demands of the situation.
  • At the planning level, the third paramedic was functioning as an incident commander. He wasn't directly treating patients, but he was attending to larger patterns and trends in order to anticipate needs and to coordinate the resources that would ultimately be critical for achieving a satisfying outcome.
  • And the reason this paramedic took this role was that he had been trained in the National Incident Management System (NIMS), which was developed based on generations of firefighting experience with large forest fires. This has become a national standard, and many fire and police departments are required to have NIMS training. The establishment and training of such a standard reflects contributions at the Standardization or Knowledge-based layer of the emergency organizations. Additionally, the idea of a 'Golden Hour' is a principle based on extensive experience in emergency medicine. The Standardization Layer reflects a capacity to integrate across an extensive history of past events to pull out principles that provide guideposts (structure, constraints) for organizing operations at the other levels.

The figure below is an alternative to the layered diagram used in prior essays for representing a polycentric control system. This representation was developed based on observations and interviews with emergency response personnel. Here the layers are illustrated as nested control loops. These loops are linked through communications (the thin lines) but also through the propagation of constraints (the block arrows), in which outer loops shape or structure the possibilities within the inner loops (e.g., standard practices, command intent, a plan, distributing resources). The coordination between layers is critical to achieving satisfactory results, and this coordination depends both on communications within and between layers and on the propagation of constraints (sometimes articulated as common ground, shared expectations, shared mental models, or culture).

Note that neither the previous representation showing layers nor the present representation showing nested control loops is complete. Each representation makes some aspects of the dynamic salient - while hiding other aspects and perhaps carrying entailments that are misleading with respect to the actual nature of the dynamic. An assumption of general systems thinking is that there is no perfect representation or model. Thus, it is essential to take multiple perspectives on nature in order to discover the invariants that matter - to distinguish the signal from the noise.

Three important points:

  • First - the power of a polycentric control system for addressing complex situations is that each layer has access to potentially essential information that would be difficult or impossible to access at other layers. However, without effective coordination between the layers, some of that information will be impotent.
  • Second - it is easy for people operating within a layer to have tunnel vision, to take the functions of the other layers for granted, and to underestimate the value that the other layers contribute. For example, it is easy for Adam and me to take Fred's artwork for granted. However, when Fred is replaced by someone less skilled or with a different style - suddenly the gap becomes clearly evident.
  • Third - be careful not to get trapped in any single perspective, model, or metaphor. Be careful that your models don't become an iron box that you force the natural world to fit into. Be careful not to fall prey to the illusion that any single model will provide the requisite variety that you need to regulate nature and reach a satisfying outcome. 

Now, the rest of the story: All the victims from the accident above survived. 


Only variety can destroy variety

This quote is a concise statement of Ashby's Law of Requisite Variety. Variety in this context refers to the complexity of a problem, typically indexed using information statistics. The important implication of Ashby's law for organizations is that to deal with complexity effectively, an organization must have flexibility of thought and/or action that is comparable to the complexity of the problem. If an organization is less flexible than the variety of the problem, then there are opportunities that it will not be able to realize or threats that it will not be able to avoid. This concept is fairly intuitive in the context of competitive sports - for example, the player who has the broader arsenal of shots (the greater variety) will consistently defeat a player with fewer capabilities, because the player with the broader arsenal will eventually find shots that the opponent can't counter.
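As a rough numerical illustration of the law (my own sketch, using entropy in bits as the index of variety alluded to above), a regulator with fewer distinct responses than there are distinct disturbances cannot drive the uncertainty in the outcome to zero; in the entropy form of Ashby's law, the residual uncertainty is at least the variety of the disturbance minus the variety of the regulator. The specific counts below are made up purely for illustration.

```python
import math

def bits(n_states):
    """Variety measured in bits: log2 of the number of distinct states."""
    return math.log2(n_states)

n_disturbances = 8   # distinct situations the environment can throw at you
n_responses = 2      # distinct responses the regulator can produce

# Law of Requisite Variety (entropy form): the uncertainty remaining in the
# outcome is at least the variety of the disturbance minus the variety of
# the regulator's responses.
residual = max(0.0, bits(n_disturbances) - bits(n_responses))
print(f"disturbance variety: {bits(n_disturbances):.1f} bits")
print(f"regulator variety:   {bits(n_responses):.1f} bits")
print(f"irreducible outcome uncertainty: {residual:.1f} bits")
# Only when the regulator's variety matches the disturbance's variety
# (here, 8 distinct responses) can the residual be driven to zero.
```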

Following up on the idea of a control system with multiple layers operating at different time constants - one of the key implications of the polycentric control model is that none of the layers, on its own, has the capacity to handle the requisite variety of any complex situation or operational domain. Thus, skill in any complex domain depends on the coupling and coordination between the layers. Each layer has its lane of expertise that allows it to deal with aspects of situations that cannot be adequately dealt with by the other layers. But no lane is completely independent - activities in one lane shape the possibilities within each of the other lanes. As a consequence of these between-layer interactions, activities in each lane can facilitate or inhibit possibilities within the other layers.

It is probably naive to think that there are only three layers, but as noted in the previous essay, the three layers shown in this figure parse the system in a way that aligns with numerous observations about the dynamics of cognitive systems and organizations from a wide range of disciplines (e.g., psychology, economics, sociology, control theory, and military science). Also - while I find the idea of filters tuned to different ranges of frequencies to be a useful analogy for thinking about the layers - this is a loose analogy and not intended to be taken too literally (or quantitatively). I hope that this won't be too big a distraction to those who are not familiar with thinking in terms of the frequency domain. The goal for this essay is to introduce these three layers; following essays will spend more time considering the coupling or interactions across layers.

The Mutual Adjustment or Skill-based Layer

The bottom layer in the diagram above will typically be sensitive to high-frequency bandwidths (i.e., rapidly or suddenly changing events). This represents the capacity to respond and to adapt quickly to sudden changes and surprises. In an individual this might be the motor control system; in a large organization this might be the front-line workers who are most directly engaged in carrying out operations (in military terms this is the tactical level); and on an American football team this layer would reflect the actions of the players during play. Activity at this level typically involves implementing automatic processes or standard procedures to carry out the plans formulated at higher levels. However, this layer will also typically be required to improvise to deal with unexpected variability that was not anticipated in the planning or in the development of the automatic processes, heuristics, and standard procedures. For highly dynamic, rapidly changing domains of operation, the capacity to improvise at this level will be critical to resilience. Consider, for example, the ability of players on a football team to react intelligently during a 'broken play,' when the quarterback is forced from the pocket by a defensive surprise. The biggest threat to stability at this level is the potential to chase the noise - that is, to waste energy following fads or spurious changes that are distractions and that do not lead to improvements or long-term success.

The Planning or Rule-based Layer

The middle layer in the diagram above will typically be tuned to intermediate frequencies - that is, it integrates information over broader ranges of space and time than is possible at the lower level. This layer will be able to pick up event patterns and trends that require a broader perspective than is possible at a lower layer. In an individual, this would represent intentions and conscious heuristics, rules, and plans. In organizations, this is typically the function of managers; in emergency response and military organizations it would reflect the activities in an incident command center. For example, in an air operations center they might be generating plans for the next 6 - 10 hours of operations. On an American football team this layer would reflect the formulation of a game plan and then the communications between coaches (on the field and in the press box) and the quarterback to call a play or series of plays. It also includes replanning (e.g., at half-time), when deficiencies in the original game plan are discovered. This is the layer where it is possible to gain "top-sight" on a complex situation. However, it takes time to integrate all the information required for top-sight - too long for this layer to respond effectively to sudden surprises or momentary changes. For example, the receivers on a football team can't wait for the coaches to plan a new pass route when the original play breaks down and the quarterback is forced from the pocket. The other challenge to planning is that assumptions and expectations that are valid at the time a plan is formulated may not remain valid over the time it actually takes to implement the plan.

The Standardization or Knowledge-based Layer 

The top layer in the diagram above will typically be tuned to very low frequencies, which provides the capacity to integrate over broad spans of time to identify recurring patterns, very slow trends, and principles (or invariants) that apply across many different events or situations. This layer reflects experience passed down from generation to generation through oral traditions, literature, and culture. For example, in the military domain this layer is informed by a long history that stretches back to Sun Tzu and beyond. This layer is typically responsible for developing standard practices and expectations - including specifying operational principles that can be applied successfully across a broad range of situations and then inculcating those practices into the organization through training (e.g., deliberate practice). In an individual, this layer would typically be associated with a person's value system and the enduring patterns that are typically identified as their personality. This system does not directly control action - but it shapes the planning and actions of an individual in general ways that will be apparent across broadly different situations. In an organization, this layer reflects what is often referred to as its culture. In operation, this layer typically involves the top-level leaders in an organization who set the general goals and expectations for the organization and who might also specify the command intent as a guide to a specific operation.

Interactions 

To push the filtering analogy a bit - a linking assumption is that the requisite variety of a complex problem domain will involve signals that are spread across the full range of frequencies. So, a filter that is tuned to a certain bandwidth will miss signals outside that bandwidth. A key factor that will limit the bandwidth of any filter is the effective time constant, or the lags associated with feedback. So, the mere fact that the time to collect, process, and act on information will necessarily differ across the layers of a polycentric control system suggests that the bandwidth limits for each layer will be different. But of course, there are many other factors that will impact what signals can be seen at each layer (e.g., trust). The main point is that people at the top layer of the polycentric control system (e.g., the C-suite) will have access to information not available to the other layers. But similarly, each of the other layers will have access to information that is not available to the C-suite or to the other layers. Thus, coordination between layers is necessary to meet the demands of Ashby's law. The variety of the whole is always greater than the variety of any single layer. Or more simply - each layer offers a unique and potentially valuable perspective towards addressing the operational demands.
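For readers comfortable with a bit of arithmetic, here is a back-of-the-envelope sketch (mine, and only meant to push the loose analogy one step further) of why a longer effective time constant implies a narrower bandwidth: for a simple first-order lag with time constant tau, the gain at angular frequency omega is 1/sqrt(1 + (omega*tau)^2), so signals much faster than 1/tau are strongly attenuated. The time constants assigned to the three layers below are purely notional.

```python
import math

def first_order_gain(omega, tau):
    """Gain of a first-order lag 1/(tau*s + 1) at angular frequency omega."""
    return 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)

# Illustrative time constants (seconds) for three layers of a polycentric
# control system -- purely notional values.
layers = {"mutual adjustment": 1.0, "planning": 60.0, "standardization": 3600.0}
frequencies = [0.001, 0.01, 0.1, 1.0]  # rad/s: slow trends to fast surprises

header = "omega (rad/s)".ljust(15) + "".join(name.ljust(20) for name in layers)
print(header)
for omega in frequencies:
    row = f"{omega:<15g}"
    for tau in layers.values():
        row += f"{first_order_gain(omega, tau):<20.3f}"
    print(row)
# Each layer passes the band of "events" its time constant permits and is
# nearly blind to the rest -- hence the need for coordination across layers.
```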

Success ultimately depends on coupling and coordination across the three layers. Each layer has the capacity to complement the other layers and to fill in their information gaps. However, the layers can also function in ways that complicate and inhibit capabilities at the other layers. A recurring example of a layer getting out of its lane in ways that inhibit capabilities at the other layers is the problem of micro-management, or of authoritarian, centralized organizations. As Friedrich von Hayek observed with respect to managing economies - no matter how well-intended or intelligent a centralized management organization is - it simply takes too long to collect and make sense of all the potentially relevant information. In essence, the decisions will always be a day late and a dollar short. A centralized control agency is far too sluggish to productively manage large economies. On the other hand, he also recognized that for free markets to function well, there need to be effective communication systems and market constraints that entail a certain degree of top-down constraint. When the balance is right - the free market system can self-organize in highly intelligent ways. In the military, the construct of Mission Command reflects an alternative to micro-management that emphasizes the importance of clearly communicating intent (minimal top-down constraint) and then empowering lower levels in the organization to work out the operational and tactical details demanded by situations that could not have been anticipated in advance.

With respect to Ashby's Law the challenge for organizations (and organisms) is to distribute authority and responsibilities across the layers in a way that is commensurate with the access to information at each of the layers. 

The goal for this essay was to begin differentiating the functions of the separate layers. However, the major systems principle to consider with regard to polycentric control is that the individual layers can only be fully understood or appreciated in the context of the whole. With respect to functioning successfully in a complex world - no single layer can effectively deal with the demands of requisite variety without the support of the other layers. And ultimately, the power (or weakness) of the whole will emerge from interactions across the layers.


As I have discussed before, I was introduced to the quantitative methods for analyzing control systems early in my graduate career, and it set an important framework for how I have approached Joint Cognitive Systems. Learning mathematical control theory was a struggle for me, and I was so eager to share what I learned with other social scientists that I co-authored a book on control theory with Richard Jagacinski, who was my major graduate advisor. However, as I began to look at joint cognitive systems that were more complex than laboratory target acquisition and tracking tasks, I soon realized that everyday life is a lot more complex than the laboratory tasks, and that the quantitative models that worked for simple servomechanisms - and for experimental conditions that required people to act like simple servomechanisms - were of limited value.

Everyday life and especially organizational dynamics typically involves many interconnected loops with non-trivial interactions. For example, Boyd's OODA Loop, that is often used to describe skilled behavior, is not a single loop but a multi-loop, adaptive control system. 

Joint Cognitive Systems typically have the capacity to learn and to adapt the dynamics of the perception-action coupling to take advantage of that learning. Thus, Joint Cognitive Systems are able to modify or tune the dynamics of the perception-action coupling to fit the unique demands of different situations. The diagram below is one that I developed to show multiple adaptive loops that reflect some of the different strategies that control engineers have used in designing adaptive automatic control systems (e.g., gain scheduling, model-reference adaptive control, self-tuning regulators). Note that the two different styles of arrows reflect different functions - the thin arrows reflect the flow of information that is input to and 'processed' through the dynamics of the different components of the system (i.e., the boxes). The block arrows, however, actually operate on the boxes and change or tune the processes within them. For example, in an engineered adaptive control system the result might be to turn down or amp up the sensitivity or 'gain' within a control element. Thus, in adaptive control systems the outer loops typically alter the dynamics of the inner loops.
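The sketch below is a deliberately toy version of this idea (my own, and far simpler than any of the engineering schemes named above). An inner proportional loop processes the error signal (the thin arrows), while an outer loop operates on the controller itself by slowly adjusting its gain when tracking performance degrades (the block arrows). The plant dynamics and the adaptation rule are illustrative assumptions.

```python
# Minimal sketch of an adaptive loop: an inner proportional controller tracks
# a setpoint, while an outer loop "tunes the box" by adjusting the gain.
# All dynamics and adaptation rules here are illustrative assumptions.

def run(steps=300, setpoint=1.0):
    state, gain = 0.0, 0.2
    recent_errors = []
    for t in range(steps):
        error = setpoint - state
        # Inner loop (thin arrows): information flows through the controller.
        action = gain * error
        state += 0.5 * action                 # simple first-order plant
        # Outer loop (block arrows): operates ON the controller, changing
        # its gain based on recent tracking error.
        recent_errors.append(abs(error))
        if len(recent_errors) == 20:
            mean_err = sum(recent_errors) / len(recent_errors)
            if mean_err > 0.05:
                gain = min(gain * 1.2, 1.5)   # amp up the sensitivity
            else:
                gain = max(gain * 0.95, 0.1)  # turn it down
            recent_errors.clear()
        if t % 60 == 0:
            print(f"t={t:3d}  state={state:5.2f}  gain={gain:4.2f}")

run()
```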

While the previous figure was designed to frame skilled motor control as an adaptive system, the next diagram was developed based on observations of decision making in the Emergency Department of a major hospital. The point was to illustrate some of the ways in which organizations learn and adapt as a result of experience.

I hope that the preceding diagrams can help readers to get a taste of the complexity of control in the natural world; however, I am not fully satisfied with them. I still have a feeling that these diagrams trivialize the real complexity. More recently I have been inspired by the work of Elinor Ostrom on how communities adapt to manage shared resources and to avoid the "tragedy of the commons," and by the work of the SNAFU Catchers (Allspaw, Cook, & Woods) on DevOps and managing large internet platforms. This work suggests that we have to think about layers of control - or polycentric control.

The groups who have actually organized themselves are invisible to those who cannot imagine organization without rules and regulations imposed by a central authority. (Ostrom, 1999, p. 496)

Each technology shift—manual to automated control to multi-layered networks—extends the range of potential control, and in doing so, the joint cognitive system that performs work in context changes as well. For the new joint cognitive system, one then asks the questions of Hollnagel’s test:

What does it mean to be ‘in control’?

How to amplify control within the new range of possibilities?   (Woods & Branlat, 2010, p. 101)

The following figure is my attempt to illustrate a polycentric control system. This diagram consists of three layers that seem to have a rough correspondence with Rasmussen's (1986) three levels of cognitive processing (Knowledge-, Rule- and Skill-based) and Thompson's (1967) three means of coordination within organizations (Standardization, Planning, Mutual Adjustment). These levels interact in two distinct ways. One is passing information through direct communication, as is typically represented by lines and arrows in standard processing diagrams. The second important way is through the propagation of constraints; I am unaware of any convention for diagramming this. In general, higher levels set constraints on the framing of problems at lower levels - in more technical terms, the higher levels impact the degrees of freedom for action at the lower levels. For example, the standards and principles formulated at the highest level set expectations (e.g., through the way people are selected, trained, and rewarded) for the 'proper' way to do planning and the proper way to act. Or the plans set expectations about responsibilities and actions at the mutual adjustment level. The constraints typically don't specify the actions in detail - but they do shape the framing of situations and often bound the space of possible actions that are considered.

Although it is not possible to model polycentric control systems using the same mathematics that was used to model simple servomechanisms, there are important principles associated with stability that will generalize from the simple systems to the more complex ones. Perhaps the most significant of these is the impact of time delays on the stability of these systems, and the implications for the ability to pick up patterns and to control or respond to events. The effective time delays associated with communication and feedback will set constraints on the bandwidth of the system. This is seen in models of human tracking as the Crossover Model - in which the effective reaction time sets limits on what frequencies the human can follow without becoming unstable. This constraint is also seen almost universally in natural systems in terms of the 1/f characteristic that appears so predominantly when the performance of many natural systems is examined in the frequency domain. In essence, there are always delays associated with the circulation of information (e.g., feedback) in natural systems, and these delays will always set bandwidth limits on the ability to adapt to situations.
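For readers who want the flavor of the quantitative argument, here is a compact sketch of the standard Crossover Model reasoning. Near the crossover frequency, the combined human-plus-plant open loop behaves approximately like an integrator with an effective time delay, and the delay alone bounds the achievable crossover frequency; the 0.2 s used below is just a representative effective delay, not a measured value.

```latex
% Crossover Model sketch (illustrative): near crossover the open loop acts
% like an integrator with an effective time delay \tau_e.
Y_p(j\omega)\,Y_c(j\omega) \;\approx\; \frac{\omega_c}{j\omega}\, e^{-j\omega\tau_e},
\qquad \text{phase}(\omega) \;=\; -\tfrac{\pi}{2} \;-\; \omega\tau_e .

% The loop loses stability when the phase lag reaches -\pi at the crossover
% frequency \omega_c, so the delay alone bounds the usable bandwidth:
\omega_c \;<\; \frac{\pi}{2\,\tau_e},
\qquad \tau_e \approx 0.2\ \text{s} \;\Rightarrow\;
\omega_c \;\lesssim\; 7.9\ \text{rad/s} \;(\approx 1.25\ \text{Hz}).
```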

A key attribute of the different layers shown in the diagram of the polycentric control system is that the higher layers will have effective time delays that are progressively longer than those of the lower layers. On the one hand, this means that high-frequency events require the capacity for elements at the mutual adjustment level to be in control (a requirement for subsidiarity). For example, it means that in highly dynamic situations the people on the ground (at the mutual adjustment level) may need to be free to adapt to unexpected situations without waiting for instructions from the higher levels (even if this requires them to deviate from the plan or violate standard operating procedures). On the other hand, this means that higher levels may be better tuned to pick up patterns that require integration of information over space and time that is outside the bandwidth of people who are immersed in responding to local events (slowly evolving patterns or general principles). Thus, for example, an incident command center may be able to provide top-down guidance that allows people at the mutual adjustment level of a distributed organization to coordinate and share resources with people who are outside of their field of regard. Or standard operating procedures are developed and trained to prepare people at the mutual adjustment level to deal efficiently with recurring situations.

So, I hope this short piece has heightened your appreciation for the complexity of natural control systems and whetted your appetite to learn more about the dynamics of complex joint cognitive systems. There is a lot more to be said about the nature of polycentric control systems and the implications for the design and management of effective organizations.

Yes - following on the previous post - I do believe that Cognitive Systems Engineering (CSE) generates juice that is well worth the squeeze. However, I think that it is important to distinguish between CSE as an academic enterprise exploring basic issues about the nature of work and the nature of human cognition and CSE as a component of a design process.

When implemented in a design process, a CSE work analysis is sometimes mistakenly implemented as a prerequisite to other aspects of design (e.g., prototyping). The problem with this is that work analysis is never done. Work domains are not static - they are constantly changing due to new opportunities and new challenges associated with evolving technologies and operational contexts. Thus, there are always new depths to explore and often one question leads to even more questions. If you delay other aspects of design until the work analysis is complete - nothing will ever get built. 

Thus, work analysis should be implemented as a co-requisite to other aspects of design. For example, customers or operators often have a difficult time articulating why they make certain choices, how new technologies might be helpful, or what they need to work more effectively without some concrete context. One way to provide that context is to create concrete scenarios (e.g., critical incidents). Another way is to provide them with a concrete model or prototype that they can manipulate. Even crude models (e.g., back-of-the-napkin sketches or paper prototypes) can be very effective. In the process of reviewing a scenario or interacting with a prototype, customers will sometimes be able to recognize and articulate new insights about the utility of the prototype or potential problems with it. This is reflected in Michael Schrage's concept of 'Serious Play.' In essence, prototypes can help to engage operators and allow them to participate in the idea generation process. This can be a valuable source of knowledge about a work domain. Prototypes can greatly enhance knowledge elicitation and work analysis.

So, it is not a question of doing work analysis OR building design prototypes - success typically requires BOTH work analysis AND prototyping. And further, there is no fixed precedence. Ideally, the work analysis should be tightly coupled with more concrete aspects of design (e.g., wire framing, prototyping). In this coupling, work analysis can be both feedforward (generating hypotheses) and feedback (evaluating operator responses to concrete implementations). 

With the modern explosion of technologies for managing complex information, work domains are rapidly changing. This requires a CSE perspective to assess the changing opportunities and risks and to generate alternative hypotheses for how to leverage these technologies more effectively to reduce risks and to stay competitive. This ongoing work analysis can be a resource for designing new interfaces and decision tools, for designing alternative concepts of operation, and for developing more effective training processes. However, design decisions cannot wait for this work analysis to be complete, because it will never be complete.

In sum, CSE is both an academic enterprise and a field of practice. As an academic enterprise it focuses on understanding cognition situated in the context of the complexities of work environments. As such, it often challenges the conventional wisdom of a cognitive science based on reductive methods that utilize laboratory puzzles to decouple information processing stages from the dynamics of natural situations. As a field of practice, CSE has to function as a co-requisite of other components of design to probe the complexities of work domains. To be effective in practice, cognitive systems engineers have to learn to be team players and they must be able to coordinate and integrate the work analysis processes with the other design processes.

To be effective in practice, cognitive systems engineers have to function on interdisciplinary design teams as humble experts, rather than know-it-all academics who want to lead the parade.


Unfortunately, we run into a variation of this question with every customer and every project. And sometimes, we also hear it from other disciplines who participate with us on design teams. There seems to be a general assumption that any time a design team is not laying out interfaces (wire framing) or writing code is wasted time - that time invested to gain a deeper understanding of the nature of the work, and to generate multiple hypotheses about alternative representations or innovative ways to employ new technologies, is wasted. There is an implicit assumption that the customer can specify exactly what they need – they know the answers – they just need to have someone else write the code for them. Or, if they are looking for innovation, customers believe it is possible to get instant solutions without any upfront investment in what Michael Schrage calls “Serious Play.”

Smart people think otherwise:

Give me six hours to chop down a tree and I will spend the first four sharpening the axe. Abraham Lincoln

If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions. Albert Einstein

I am not sure that the world is more complex today than ever before, but it is clear that advanced technologies have opened up possibilities for dealing with the complexities that have never before been available. And further, it seems clear that organizations that stick to old strategies (e.g., large inventory buffers, or hierarchical central command structures) and fail to take advantage of new possibilities and new strategies will not succeed in competition with organizations that do take advantage of them.

Today, an auto manufacturer that offers customers “any color they want as long as it’s black” will not be competitive. Today, a military organization that fails to leverage advanced communication networks and advanced approaches to command and control (e.g., mission command) will not be successful.

Today, the value of taking some time to sharpen the axe, or to understand the problem, before hacking away at the tree with a dull axe or before building a solution to the wrong problem should be more apparent than ever. Today, the value of design teams that include a diverse collection of perspectives should be more apparent than ever. We need UI Designers, UX Designers, Programmers, Computer Scientists, Social Scientists, Engineers, and Systems Thinkers. We need multiple perspectives and we need to explore alternative ways of parsing problems and representing constraints. 

People trained in CSE typically have experience with multiple disciplinary perspectives (e.g., human factors, industrial engineering, systems thinking). They typically have specialized skills related to work analysis: knowledge elicitation, field research methods, ethnographic methods, problem decomposition and representation methods, decision analysis methods and systems modeling methods.

CSE doesn’t have all the skills or all the answers when it comes to the challenge of designing competitive solutions that leverage the opportunities of advanced information technologies. However, cognitive systems engineers do bring important skills to the table. A design team that utilizes some of the skills that CSE offers is less likely to waste time hacking away with a dull axe or designing the perfect solution to the wrong problem.

Today – design takes a diverse team. And a team that includes a CSE perspective is more likely to cut sharply and to solve the right problem. Yes, the juice produced from a CSE perspective is worth the upfront time to do a thorough work analysis and to explore a variety of ways to represent information to decision makers.

It's always better to measure twice and cut once!

We often identify natural systems with specific functions. For example, a function shared by most life forms is to survive and propagate. And this global function typically depends on many sub-functions such as obtaining nourishment, regulating temperature, and avoiding potential predators. Similarly, the people who come together to create sociotechnical organizations are generally collaborating to achieve some purpose or intention that is beyond the capacity of any of the individuals working in isolation. In fact, an organization may have multiple purposes – to provide a service, produce a product, and/or to make a profit.

The concept of purpose and its role in shaping performance became a primary consideration for both physical and cognitive systems with the development of Norbert Wiener’s Cybernetic Hypothesis and the associated servo-mechanism metaphor. One of the critical attributes of servomechanisms is closely related to the second question mentioned in my previous essay:

Why things stay the same?

In the context of servo-mechanisms this question is typically framed in terms of stability. A system that is stable tends to resist change. For example, healthy animals typically maintain relatively consistent body temperatures, despite changes in the surrounding environments.

Similarly, engineers have designed mechanisms to regulate the temperatures in buildings. These mechanisms allow people to specify a desired temperature; the control mechanism then combines a sensor that measures the temperature in a room with an actuator that activates a heating or cooling mechanism whenever the temperature in the room deviates from the desired temperature. This is an elemental form of muddling – in which behavior is adjusted as a function of feedback provided by the sensors. In a similar way, pilots or drivers will adjust their behavior to keep their vehicles on an intended trajectory. For example, skilled pilots will consistently follow a similar safe path to landing at different airports and under different weather conditions. Similar results will be achieved over and over again – yet the behaviors will be different on every occasion due to the different disturbances (e.g., winds) experienced. A servomechanism has been described as a device that achieves the same result, but in a different way each time.
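As a concrete (and deliberately toy) version of the regulator described above, the sketch below combines a 'sensor' reading of room temperature with a simple on/off actuator and adjusts behavior purely on the basis of feedback. The heating rate, heat-loss rate, and half-degree deadband are made-up numbers chosen only to show the loop settling around its setpoint.

```python
# Toy thermostat: a sensor-actuator loop that regulates room temperature
# around a setpoint despite heat loss to a colder outside environment.
# Rates and the 0.5-degree deadband are illustrative assumptions.

def simulate(setpoint=20.0, hours=12, outside=5.0):
    room = 15.0
    heater_on = False
    for step in range(hours * 60):          # one-minute time steps
        # Sensor + decision: compare measured temperature with the goal.
        if room < setpoint - 0.5:
            heater_on = True
        elif room > setpoint + 0.5:
            heater_on = False
        # Actuator + disturbance: heating input and heat loss to outside.
        heating = 0.2 if heater_on else 0.0
        room += heating - 0.01 * (room - outside)
        if step % 120 == 0:
            status = "on" if heater_on else "off"
            print(f"t={step / 60:4.1f} h  room={room:5.2f} C  heater={status}")

simulate()
```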

Note that it is impossible to pre-specify the behaviors that will lead to a safe landing for every situation, because each situation will be different. Instead, pilots need to learn or tune their perceptual and motor skills to consistently land safely. They need to learn to recognize what it looks like to be on a safe path, and they need to know what actions will counter specific disturbances from that path. Thus, landing an aircraft is much more complex than simply activating a heating or cooling mechanism. And of course, keeping large organizations on the appropriate paths to achieve their missions or goals is more complex still.

The critical thing to understand about muddling is that it depends on the coupling between perception and action.

Muddling requires both situation awareness (i.e., the ability to sense states relative to goals or intentions, and relative to the capacity for action) and the capacity to act (i.e., the ability to move toward more desirable states or to counter disturbances to prevent undesirable changes in state). Skill is an emergent property of the coupling that reflects relations between situation awareness and the potential for action. It can’t be discovered by studying either the perceptual or the motor components of an organism in isolation. Further, to the extent that the perceptual and motor systems evolved to serve certain functions (e.g., to control locomotion), it is unlikely that these components can be fully understood without considering the functions that they support. This also applies to large organizations – to fully understand the components and processes of an organization it is essential to consider the ultimate functions, purposes, and value systems that the organization serves.

Another important consideration with respect to organizations that have a closed-loop coupling of perception and action is associated with explaining ‘why’ the organization behaves as it does. To understand an open-loop organization, we typically look for causes, which are typically prior events (e.g., the preceding dominoes in the chain of actions and reactions). However, to explain why a closed-loop organization behaves the way it does, we have to ask what the function or purpose of the organization is. In other words, we have to ask what the organization’s goals are, or what it is attempting to achieve. Whereas the behavior of open-loop systems tends to reflect the effects of prior causes or reactions to past events, closed-loop systems are typically attracted by goals, in pursuit of potentially satisfying future events.

Thus, the behavior of closed-loop systems does not conform to the cause-effect (or stimulus-response) narratives that have been used to describe many isolated physical systems.

Although change is pervasive, with skilled muddling many organisms and organizations actively counter many undesirable changes and often they are able to achieve some degree of stability around states that they find to be satisfying. These satisfying states are typically referred to as the function or purpose of the organizations. For example – achieving and maintaining a specified temperature is the purpose of a temperature regulator; keeping their vehicles in the field of safe travel is the function of pilots or drivers; achieving and maintaining a successful company is the function of corporate executives.

In simple terms, pursuing and maintaining satisfactory states are what organizations do!

However, as suggested in the previous essay, it would be naive to associate the purpose or goals of a complex organization with the ideas in an executive's head or even with the organization's mission statement. These may or may not align with the actual purpose or function of an organization. Unlike the heating control system that is designed by an engineer, complex organizations are self-designing or self-organizing. Thus, stability is an emergent property of organizations reflecting the muddling and mutual adjustments of multiple components.

Ultimately, the proof of the function or purpose is in the doing.

And the doing involves dynamic interactions across all the components, as well as the impacts from the environment. The CEO can't just flap her wings and determine the future trajectory of the organization. Ultimately, the purpose emerges from interactions involving all the components within the organization. 


Two questions that interest general systems thinkers are:

1) Why things change? 2) Why things stay the same?

Let’s consider the first question in this essay; the second question will be explored in later essays. Change is pervasive – things are flowing, moving from place to place, growing, eroding, aging. During my career, my students and I have explored problems associated with the control of locomotion. How do pilots judge speed or changes in altitude? How are they able to guide their aircraft to a safe touchdown on a runway? How are pilots and drivers able to avoid collisions? Or how are baseball batters able to create collisions with a small ball? Inspired by James Gibson, this research explored the hypothesis that people utilize patterns or structures in optical flow fields to make judgments about and to actively control their own motion.

When Gibson introduced the concept of optical flow fields and the hypothesis that animals could utilize optical invariants to guide locomotion, it was a very difficult construct for many perceptual researchers to grasp. Where is optical flow? If you study eyes or if you study light or optics you won’t discover optical flow. Optical flow is not in the eye and it is not in the light. Optical flow is an example of an emergent property: it arises from relations between the components. Optical flow results when an eye moves relative to surfaces that reflect light. The flow that results from movement relative to reflective surfaces has structure that specifies important aspects of the relations between surfaces and moving points of observation. For example, as you are driving down the highway, surfaces that are near to you will flow by relatively quickly, while surfaces that are far from you will flow by relatively slowly. The farther away a surface is, the more slowly it will flow by – and something that is very far away, like the moon, may seem not to be moving relative to you at all. And when the surface in front of you begins to expand rapidly, you will immediately recognize that a collision is impending. Thus, the structure in flow fields specifies aspects of the 3-dimensional environment that support skilled movement through that environment.
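A small numerical sketch (mine, not Gibson's) of the claim that the flow carries useful structure: the optical angle subtended by an approaching surface and the rate at which that angle grows together specify time-to-contact (distance divided by closing speed), even though neither distance nor speed is available as a separate measurement. The surface size and closing speed below are arbitrary illustrative values.

```python
import math

# Closing on a surface 2 m wide at 25 m/s. The optical angle it subtends and
# that angle's rate of change together specify time-to-contact, even though
# neither distance nor speed is sensed directly.
size = 2.0      # meters (width of the approached surface)
speed = 25.0    # meters per second (closing speed)
dt = 0.001      # seconds (small step for the numerical rate of change)

for distance in (100.0, 50.0, 25.0, 10.0):
    theta = 2 * math.atan(size / (2 * distance))
    theta_next = 2 * math.atan(size / (2 * (distance - speed * dt)))
    theta_dot = (theta_next - theta) / dt
    tau_optical = theta / theta_dot      # ratio available in the flow field
    tau_actual = distance / speed        # ground-truth time-to-contact
    print(f"distance={distance:5.1f} m   optical tau={tau_optical:5.2f} s   "
          f"actual tau={tau_actual:5.2f} s")
```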

If you look out the window of a car, a train, or an aircraft you will see the world flowing by - optical flow fields are there right before our eyes. Yet, few perceptual researchers could see it! Or if they did, they didn't recognize it as an important phenomenon to explore. 

Few are aware that a key source that helped Gibson to discover the role of optical structure for controlling locomotion was Wolfgang Langewiesche who described how the relation between the point of optical expansion and the horizon helped to specify the path to a safe landing for a pilot.

Interest in change is partly motivated by our interest in making changes – in steering an aircraft to a soft landing or in managing an organization to achieve some function. In the case of controlling vehicles or other mechanical or physical technologies there is typically a linear, proportional relation between action and reaction. Small inputs to the steering wheel typically result in small changes to the direction of travel. However, this is not the case for more complex systems such as weather systems or sociotechnical systems. In complex systems, small actions can be amplified in ways that have large impacts on performance of the whole. The proverbial “Butterfly” effect illustrates this. That is, the idea that the flapping of a butterfly’s wing can impact the nature of a storm some distant time and place in the future. Or that the loss of a nail can result in the loss of a kingdom. Or that a smile can launch a thousand ships.
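To make the 'small causes, large consequences' point tangible, here is a standard toy demonstration of sensitive dependence on initial conditions (the logistic map in its chaotic regime - a textbook example, not a model of weather or of organizations): two trajectories that start one part in a million apart diverge completely within a few dozen steps.

```python
# Sensitive dependence on initial conditions: the logistic map with r = 4
# is a classic chaotic system. A perturbation of one part in a million
# grows until the two trajectories have nothing to do with each other.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.200001
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}:  a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.6f}")
```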

This is another reminder to be humble.

A sociotechnical system or complex natural system like weather cannot be controlled as simply as steering a car. And in fact, it is questionable whether such systems can be controlled at all. Yes, like the butterfly we can flap our wings, but we can’t anticipate or completely determine the consequences that will result. The performance of a complex organization depends on interactions among many people and no single person determines the outcome. While each component can have an influence, the ultimate outcome will depend on contributions from many people and it will also depend on outside forces and influences.

It is an illusion to think that we can control complex systems - to believe that we are in control or even that we could be in control. In reality, the best anyone can do is to muddle.

If we are observant and careful, we can dip our oars into the water and/or adjust our sails in ways that will influence the direction of our vessel. But the waves and currents have an important vote! And also, our colleagues and friends have an impact. It is a mistake to think that we are the lone captains of our fate - but with a little luck and a lot of help from our friends - we can keep the boat upright and muddle through. 
