
In 1978, Rich Jagacinski hired me as a graduate research assistant to help with a project comparing people's ability to track a target based on kinesthetic feedback with their ability to track the same target based on visual feedback (1). I was totally unprepared at the time, and even after 6 years of graduate school, I was only beginning to understand the technical aspects of 'control theory.' However, I was eventually able to co-author a text with Rich to help introduce the technical aspects of control theory to other social scientists (2). But this was only the beginning of a long journey to explore the dynamics of complex couplings of humans, technologies, and ecologies.

Over the years, I have come to the conclusion that control theory has absolutely nothing to do with 'control.' Further, the metaphor of the 'steersman' associated with cybernetics is completely misleading as a framework for understanding human performance. This metaphor suggests that humans determine the behavior of an organization, when in fact, many other factors contribute to shaping the ultimate performance of any organization. The myth that humans are 'in control' leads to humans getting too much credit when things work well (the mythical hero leader) and too much blame when things don't work well (the myth that human error is the 'cause' of most accidents).

On the other hand, behaviorism is the myth that human behavior is 'controlled' by the situation (i.e., the stimulus). Despite the fact that even the transfer function of the human tracker must change as a function of the dynamics of the plant being controlled (as reflected in McRuer's Crossover Model), and despite the fact that 'context matters,' it is erroneous to think that the stimulus or the context determines or controls behavior.
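The adaptive character of the human tracker can be sketched numerically. Under the Crossover Model, the human adjusts so that the combined human-plus-plant open loop approximates wc/s near the crossover frequency, so the equalization the human must supply depends on the plant. A minimal sketch (the crossover frequency and plant gains are assumed, illustrative values):

```python
# Illustrative sketch of the Crossover Model (wc and the plant gains are
# assumed values, not from the text): the human adapts so that
# human x plant ~ wc/s near crossover, so the implied human equalization
# changes with the plant being controlled.

wc = 4.0  # assumed crossover frequency (rad/s)

def implied_human(plant, w):
    """Human describing function implied by Yp(jw) * Yc(jw) = wc / (jw)."""
    s = 1j * w
    return (wc / s) / plant(s)

rate_plant = lambda s: 5.0 / s      # Yc = K/s: human acts like a pure gain
accel_plant = lambda s: 5.0 / s**2  # Yc = K/s^2: human must generate lead

# For the rate plant, the implied human gain is flat across frequency;
# for the acceleration plant, it grows with frequency (differentiation/lead).
print(abs(implied_human(rate_plant, 1.0)), abs(implied_human(rate_plant, 8.0)))
print(abs(implied_human(accel_plant, 1.0)), abs(implied_human(accel_plant, 8.0)))
```

The same tracking task thus demands qualitatively different human dynamics for different plants, which is exactly why neither the stimulus nor the human alone determines the behavior.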

So if nothing is 'in control,' how can we possibly understand or explain the performance of even simple organizations? If nothing is in control, does anything 'determine' performance? What matters? (3) The answer lies in the dynamics of coupling, or of networks. Ultimately, performance reflects the demands of stability. The behaviors that persist are the behaviors that lead to network stability. Ultimately, control theory and dynamical systems theory are NOT about 'control,' but about 'stability.' Further, stability is an emergent or relational property of a network. It cannot be localized in any of the elements of the network. This is why any theory that tries to attribute 'control' or 'cause' to any single component will simply be wrong!
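The claim that stability lives in the coupling, not in any component, can be made concrete with a toy two-element network (the gains are illustrative assumptions, not data from the text): each element is perfectly stable in isolation, yet the coupled network diverges.

```python
# Sketch: stability as a property of the coupling, not of the components.
# Two first-order elements, each stable in isolation (self-gain 0.5 < 1),
# become unstable once cross-coupled (largest eigenvalue 0.5 + 0.9 = 1.4 > 1).

def simulate(a_self, a_cross, steps=50):
    """Iterate x <- A x for a symmetric 2-element network; return final magnitude."""
    x1, x2 = 1.0, 1.0
    for _ in range(steps):
        x1, x2 = (a_self * x1 + a_cross * x2,
                  a_cross * x1 + a_self * x2)
    return abs(x1) + abs(x2)

# Each element alone: x <- 0.5 x decays toward zero.
isolated = simulate(0.5, 0.0)

# Coupled: the network diverges, even though no single element is
# "unstable" or "in control" of the outcome.
coupled = simulate(0.5, 0.9)

print(isolated, coupled)
```

No inspection of either element by itself predicts the divergence; the instability is a relational property of the network, which is the point of the paragraph above.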

A necessary step toward understanding performance is to abandon the illusion of 'control' (or causality) and to begin framing questions to explore the properties that contribute to the emergence of stability. It is important to shift our focus from the elements in an organism or organization to include the relations across the elements.

(1). Jagacinski, R.J., Flach, J.M., & Gilson, R.D. (1983). A comparison of visual and kinesthetic tactual displays for compensatory tracking. IEEE Transactions on Systems, Man, and Cybernetics, 13(6), 1103-1112.

(2). Jagacinski, R.J. & Flach, J.M. (2003). Control Theory for Humans: Quantitative Approaches to Modeling Performance. Mahwah, NJ: Erlbaum. ISBN-13: 978-0805822939.

(3). Flach, J.M. & Voorhorst, F.A. (2020). A Meaning Processing Approach to Cognition: What Matters? New York: Routledge.

 


Imagine a young human factors engineer, recently graduated from a psychology program at a small Midwestern university (in Dayton, Ohio). He goes to work for a large engineering corporation and is told that he will be participating in a project to design the next generation of decision support for Combined Air and Space Operations Centers (CAOCs). One of the primary goals for this decision support is to facilitate the ability to prosecute time-sensitive targets (TSTs). Where does he start? Perhaps he locates a resident domain expert and arranges a meeting. He introduces himself and begins, “Can you tell me everything that I need to know about TST?” Where does the domain expert begin? How does he capture and communicate a lifetime of experience in a 1-hour meeting? Perhaps, he provides a detailed work analysis that had previously been done, or perhaps he turns to his bookshelf and says, “Start by reading this shelf!”
Suppose that there had been a previous work analysis: how valuable do you think it would be? Would it reflect the new technologies and software options that will be implemented in the next-generation CAOC? Would it accurately reflect the demands of future air operations? Even if the analysis did anticipate future opportunities and demands, would the data be in a form that would be useful to the young human factors engineer? That is, could he learn what he needed to learn about the domain from reading the work analysis report? Would the data in the report be represented in a way that made the important dimensions of the work problems salient?
Flach et al. (2008) have suggested that there is a need for a continuous work domain analysis. The many different products generated during the work analysis should be archived in a database that can be easily accessed, used, and updated throughout the evolution of a work domain. This reflects the belief that work domain analysis is never complete, due to the complexity of work and to the fact that work domains are always evolving to take advantage of new technologies and to keep pace with changing demands. One framework for thinking about how the different products of work domain analysis might be organized is the Abstraction-Decomposition framework suggested by Rasmussen (1986; see also Vicente, 1999).
If a new work domain analysis is required for every change in a domain, then cognitive systems and human factors engineers will always be following the design parade, sweeping up the litter from poorly designed systems. The hope is that this living database would allow new researchers and designers to build on the prior work of their peers and that this would allow cognitive systems engineers to be more efficient in responding to evolving challenges and to make early contributions to the design process.

To read more: Flach, J.M., Schwartz, D., Bennett, A., Russell, S. & Hughes, T. (2008). Integrated constraint evaluation: A framework for continuous work analysis. In A.M. Bisantz & C.M. Burns (Eds.) Applications of Cognitive Work Analysis. (p. 273 - 297). London: Taylor & Francis.

To request a pdf copy e-mail john.flach@wright.edu


Over a 40+ year career exploring human performance in socio-technical systems I have observed or participated in numerous work domain analyses using various forms of representations such as concept maps, flow diagrams, journey maps, and abstraction hierarchies. It has been obvious that the processes involved in collecting the data and constructing these representations have been enlightening for the people participating in the analyses. However, there is little evidence that anyone who did not participate in the research and generation processes gained much value from the products that were produced.  Thus, future research teams rarely benefit from the work of prior teams. The new teams often have to start with a blank slate and collect their own data and create their own representations.

In 2008 we wrote a chapter in the book "Applications of Cognitive Work Analysis" (Bisantz & Burns) describing a process for creating a living database for archiving the products generated during Work Domain Analysis.  We called this database - Integrated Constraint Evaluation (ICE). The general idea was to use the Abstraction Hierarchy to create an indexing system for storing all the products generated during Work Domain Analysis.  These products could include raw data (e.g., transcriptions from interviews with domain experts, incident reports) and more abstracted data (e.g., concept maps, process diagrams, journey maps, prototypes).
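The indexing idea can be illustrated with a small sketch (a hypothetical schema, not the chapter's actual implementation): artifacts are tagged by their cell in the abstraction-decomposition space, using Rasmussen's standard abstraction levels, so later teams can retrieve prior work by where it sits in that space.

```python
# Hypothetical sketch of an ICE-style archive: WDA products indexed by cell
# of the abstraction-decomposition space. Level names follow Rasmussen's
# Abstraction Hierarchy; the class and artifacts are illustrative.

from collections import defaultdict

ABSTRACTION = ["functional purpose", "abstract function",
               "generalized function", "physical function", "physical form"]
DECOMPOSITION = ["system", "subsystem", "component"]

class WorkDomainArchive:
    """Store WDA products indexed by (abstraction level, decomposition level)."""
    def __init__(self):
        self._cells = defaultdict(list)

    def add(self, abstraction, decomposition, artifact):
        assert abstraction in ABSTRACTION and decomposition in DECOMPOSITION
        self._cells[(abstraction, decomposition)].append(artifact)

    def query(self, abstraction=None, decomposition=None):
        """Retrieve artifacts matching either or both index dimensions."""
        return [a for (ab, de), arts in self._cells.items()
                for a in arts
                if (abstraction is None or ab == abstraction)
                and (decomposition is None or de == decomposition)]

archive = WorkDomainArchive()
archive.add("functional purpose", "system", "interview transcript: mission goals")
archive.add("physical function", "component", "process diagram: sensor suite")
print(archive.query(abstraction="functional purpose"))
```

The point of the design is that raw and abstracted products live in the same indexed space, so a new team can ask "what do we already know at the functional-purpose level?" rather than starting from a blank slate.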

Since writing the chapter we have been trying to find an organization willing to invest the resources to create something akin to ICE. However, the search has been in vain. It is hard to convince organizations that they own a work domain or that an upfront investment to build such a living database would pay off in the long run.

Perhaps they're right. Or perhaps this is simply an idea that was ahead of its time. We wonder whether, in a world where people are investing in digital twins and Retrieval-Augmented Generation (RAG) AI architectures, the idea of creating a living Work Domain database might be seen as an essential, or at least complementary, component. This archive could help document the underlying assumptions and rationale behind a particular technology represented by a digital twin. It could also provide the structure for a database that becomes a component in a RAG architecture.

Despite the lack of support, we remain convinced that the products of Work Domain Analysis could be valuable to organizations. It is a shame that organizations lose the potential value of those products when the people who generated them walk out the door.

I recently listened to a talk by Jamer Hunt on "The Anxious Space between Design and Ethnography." In his talk, he creates a two-dimensional space (four quadrants) to reflect the famous quote from Donald Rumsfeld on "Unknown-Unknowns." He uses this space to illustrate the overlapping territory explored by anthropology and design.

The talk stimulated me to think more generally about relations between basic laboratory research, field research, and design. I was trained in human experimental psychology in a way that emphasized experimental design and controlled laboratory research. This was motivated by a clear bias that doing controlled experiments was the way to do "real" science. However, I was also involved in the development of the aviation psychology laboratory at Ohio State University and was exposed to applied Engineering Psychology in the tradition of Paul Fitts. And much of my career has focused on applying cognitive psychology in domains such as aviation, driving, and healthcare.

In light of these experiences, I have come to see most laboratory research as situated in the domain of Known-Knowns. Much of the published experimental literature consists of demonstrations of what we already know. For example, consider all the replications of Fitts' Law, all the variations on visual and memory search, or more recently the replications of 'inattentional blindness' experiments. Laboratory research does sometimes open up insights into, and fill gaps in, the Known-Unknown territory, but surprise is extremely rare!

In my experience, field research pushes us further into the region of Unknown-Unknowns - increasing the potential for surprise, increasing the potential to learn something new.

And design pushes us still further into the region of Unknown-Unknowns, increasing the potential for surprise - increasing the potential for learning and discovery. Also note that as we move deeper into the region of Unknown-Unknowns, we also move deeper into the region of Unknown-Knowns. This is the region for reflection, meta-analysis, and metaphysics. As we move into the Unknown it becomes more important to reconsider and reflect on foundational assumptions and to build theory to connect the dots and integrate across empirical experiments.

I have come to the conclusion that a mature science depends on a healthy coupling between laboratory research, field research, and design.  I believe that the ultimate test of any hypothesis or theory is its ability to motivate solutions to practical problems. I believe that paradigm shifts emerge from the coupling of research and design.

Certainly, experimental research serves a valuable function. However, if you are serious about learning and discovery, then it is important to explore beyond the laboratory, to get out of the territory of the known - to test your hypotheses and theories in practice, and to increase the potential for surprise.

Satisfying, Specifying, Affording

There is a prevailing assumption within Western cultures of a dichotomy, where mind and matter refer to fundamentally different (or ontologically distinct) kinds of phenomena that work according to different principles or laws. On the one hand there is the world of Matter, which constitutes the realm of physical phenomena and behaves according to the Laws of Physics (e.g., Laws of Motion, Laws of Thermodynamics, etc.). On the other hand is the world of Mind, which constitutes the realm of mental phenomena and behaves according to a completely different set of laws or principles (e.g., psychological or information processing principles).

This is sometimes referred to as the Mind/Body problem - suggesting that our bodies are subject to the Laws of Physics, but that our Minds are subject to different laws related to psychology or information. In essence, we have a world of hardware and a world of software: two worlds that are fundamentally different, operating according to different laws/principles, and requiring separate sciences.

However, this belief is not universal, and it has been rejected by some notable philosophers/scientists (e.g., William James). James conceived of experience as a unified whole that included both physical (objective) and mental (subjective) constraints.

Just so, I maintain, does a given undivided portion of experience, taken in one context of associates, play the part of a knower, of a state of mind, of 'consciousness'; while in a different context the same undivided bit of experience plays the part of a thing known, of an objective 'content.' In a word, in one group it figures as a thought, in another group as a thing. And, since it can figure in both groups simultaneously we have every right to speak of it as subjective and objective, both at once. (James 1912, Essay I)

Further, James argued for a single, unified science focused on the relations that constitute the totality of experience, resulting from the combined impact of the physical and the mental constraints. In our book, "A Meaning Processing Approach to Cognition," Fred Voorhorst and I make the case that there are three duals that are fundamental to a unified science of experience. Each dual reflects specific relations between mental (subjective) and physical (objective) constraints.

The first dual is a concept introduced by James Gibson that has gained increased acceptance from the design community. This is the construct of affordance or affording. An affordance is a potential for action that reflects properties of an agent in relation to properties of objects. The prototypical example is the affordance of graspability, which reflects the relation between an agent's effectors (e.g., hands) and the size, orientation, and shape of objects. Another example of an affordance that reflects the agency of a human-technology system is land-ability. This reflects the properties of a surface relative to the capabilities of an aircraft. Different aircraft (fixed wing versus rotary wing) are capable of landing successfully on different types of surfaces. In control theoretic terms, affordance is closely related to the concept of controllability - reflecting the joint constraints of the agent and the situation on acting.

The second dual is a concept that is related to James Gibson's concept of information (e.g., the optical array) and that was the focus of Eleanor Gibson's research on perceptual learning and attunement.  We refer to this dual as specificity or specifying. This refers to the relation between the sensory/perceptual capabilities or perspicacity of an agent and the structure available in the physical medium (e.g., the light, acoustics, tactile properties, or the properties of a graphical interface). For example, due to the angular projection of light, the motions of an observer relative to the surfaces in the environment are well specified in the optical array (e.g., the imminence of collision, or the angle of approach to a runway). In control theoretic terms, specificity is closely related to the construct of observability - reflecting the quality of the feedback with respect to specifying the states of a process being controlled. E.J. Gibson's work on perceptual learning reflects the insight that it typically takes experience for organisms to discover and attend to those structures in the medium that are discriminating or diagnostic with respect to different functions or intentions. For example, pilots must learn to pick up the optical features that specify a safe approach to landing.
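The collision example can be sketched numerically: for an object approaching at constant speed, time-to-contact is specified in the optics alone as tau = theta / theta-dot (Lee's tau), without distance or speed being sensed separately. A minimal sketch with hypothetical values:

```python
# Sketch (hypothetical object size, speed, and distance): imminence of
# collision is specified directly in the optical array. The ratio of the
# visual angle to its rate of change estimates time-to-contact without
# access to distance or speed.

import math

def optical_angle(size, distance):
    """Visual angle subtended by an object of a given size at a given distance."""
    return 2.0 * math.atan(size / (2.0 * distance))

size, speed = 1.0, 10.0   # meters, meters/second (assumed values)
distance = 50.0
dt = 0.001                # small time step for the finite difference

theta_now = optical_angle(size, distance)
theta_next = optical_angle(size, distance - speed * dt)
theta_dot = (theta_next - theta_now) / dt

tau = theta_now / theta_dot    # estimated from optics alone
true_ttc = distance / speed    # ground truth, from distance and speed

print(tau, true_ttc)
```

The optical estimate agrees with the true time-to-contact, which is the sense in which the observer's motion relative to the surface is "well specified" in the optical array.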

The third dual is a concept that James Gibson included in his definition of affordance, that we feel is better isolated as a third dual. We refer to this dual as satisfaction or satisfying. This refers to the relation between the consequences of an action and the intentions/desires or health of an actor. If the consequences are consistent with the desires of an actor or if they are healthy, then the relation is satisfying. If the consequences are counter to the desires or unhealthy then they are unsatisfying. For example, a food that is healthy or a safe aircraft landing would be satisfying. But a food that was poisonous or a crash landing would be unsatisfying. In control theoretic terms, satisfaction is closely related to the cost function that might be used to determine the quality or optimality of a control solution. In essence, the satisfying dimension reflects the quality of outcomes.

We see these three duals as essential properties of any control system or sensemaking organization. That is, the quality of control, skill, situation awareness, or of sensemaking will depend on whether the affordances (action capabilities) and potential consequences are well specified by the available feedback, which in turn should enable the development of quality (if not optimal) control/decision making that reduces surprises and risks, and increases the potential for satisfying outcomes.

Lindblom (1979) wrote:

Perhaps at this stage in the study and practice of policy making the most common view... is that indeed no more than small or incremental steps - no more than muddling - is ordinarily possible. But most people, including many policy makers, want to separate the 'ought' from the 'is.' They think we should try to do better. So do I. What remains at issue, then? It can be clearly put. Many critics of incrementalism believe that doing better usually means turning away from incrementalism. Incrementalists believe that for complex problem solving it usually means practicing incrementalism more skillfully and turning away from it only rarely.

I am not sure that today is all that different. I think many people still see muddling (incrementalism) as a flawed approach to problem solving and decision making. It still seems to have a negative connotation and there is an implication that there is a better way to manage organizations or a better way to make decisions.

I think people equate muddling with a confused, aimless flailing. They have no concept of skilled muddling. However, for me skilled muddling is an apt description of how experts deal with complexity and uncertainty. It is not a confused, aimless process, but a carefully controlled combination of probing the world and utilizing the resulting feedback to make tentative steps in potentially satisfying directions.

The image I have is of skilled rock climbers - who are well tuned to the opportunities that different holds offer and who have a good understanding of the risks. They have developed strength and agility, so that they have a wide range of affordances in terms of what is reachable and graspable. They are careful about ensuring that they have appropriate protection to guard against catastrophic falls. They are constantly establishing a firm base of support before reaching for the next handhold or foothold. And they are well-tuned to the information that allows them to identify the best supports and the viable paths toward their goal.

I see many parallels between muddling and Peirce's concept of abduction - in which experts use their experience to make tentative hypotheses about the consequences of choices and actions, test their hypotheses by acting on them, and then utilize the feedback to adjust their assumptions and framing to reduce surprise (error). Further, I see this in terms of a closed-loop triadic semiotic process with three sources of constraint. Each of these sources reflects relations between agents and ecologies.

Satisfying, Specifying, Affording

The first of these relations is the now familiar construct of affordance, which represents the potential for action. Affordances reflect the relation between the effectivities of an agent (or organization) and the properties of objects in the ecology. In control systems terms, this reflects the constraints of the plant dynamics in the forward loops.

The other two constructs are less familiar. The second construct, specifying, reflects the potential for perception or information pick-up. It reflects the relation between the sensor systems (e.g., eyes) of agents and structure in the related ecological mediums (e.g., optical array, or graphical interface). In control systems terms this reflects the constraints on the sensors in the feedback loops.

The third construct, satisfying, reflects the functional constraints on performance. This reflects the relation between the intentions, values, or preferences of agents and the actual consequences of particular choices or actions. In control terms this reflects the comparator where intentions are compared to feedback - resulting in a surprise or error signal.

These three components are ontologically independent. That is, an affordance exists independently of the information to specify it and of the intentions of agents. Thus, a cup is graspable even though it is out of eyesight and you don't have a current use for it.

However, these three constructs are tightly coupled with respect to epistemology. Thus, for example, our experiences of affordances depend on how well they are specified, and on whether they help us to satisfy our intentions. You cannot actually grasp the cup unless you can sense it (e.g., it is in eyesight), and you are unlikely to grasp it if you don't have a use for it (e.g., to hold a liquid).
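The control-theoretic analogue of this independence can be sketched with a tiny two-state system (a double integrator; an illustrative example, not from the book): whether the state can be steered (controllability, the affording dual) is independent of whether a given measurement specifies it (observability, the specifying dual).

```python
# Sketch: controllability and observability as independent properties of a
# 2-state system x' = A x + B u, y = C x. Rank tests are done with 2x2
# determinants; the system and measurement choices are illustrative.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# A double integrator: state = (position, velocity), input = force.
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [0.0, 1.0]

AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]

# Controllability matrix [B AB] has full rank -> every state is reachable.
controllable = det2([[B[0], AB[0]], [B[1], AB[1]]]) != 0

def observable(C):
    """Rank test on the observability matrix [C; CA] for measurement y = C x."""
    CA = [C[0] * A[0][0] + C[1] * A[1][0],
          C[0] * A[0][1] + C[1] * A[1][1]]
    return det2([[C[0], C[1]], [CA[0], CA[1]]]) != 0

print(controllable)            # the potential for action (reachability) is there...
print(observable([1.0, 0.0]))  # ...measuring position specifies the full state,
print(observable([0.0, 1.0]))  # but measuring velocity alone does not.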

Thus, skilled muddling involves tuning the coupling of agents with their ecologies or situations - increasing the range of affordances, tuning the perceptual system and aligning your intentions with the activities that lead to satisfying consequences. The opening figure illustrates how these dimensions map into Rasmussen's intuitions about the design of cognitive systems that are capable of skilled muddling.

These ideas are developed more extensively in our book "A Meaning Processing Approach to Cognition" (Flach & Voorhorst, 2020).


Most organisms and essentially all organizations have multiple layers of interconnected perception-action loops. It is generally true that these loops operate at different time constants, have differential access to information, and have differential levels of authority. One way to think about the coupling across authority levels is that higher levels set the bounds (or the degrees of freedom) for activity at lower levels. In this context, a key attribute of an organization is the tightness of the couplings between different levels.
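A minimal sketch of such layering (the gains, periods, and goal are hypothetical numbers): a slow outer loop occasionally revises a setpoint, bounding but not determining the activity of a fast inner loop that does the moment-to-moment work.

```python
# Sketch (illustrative gains and periods): two nested perception-action loops
# with different time constants. The slow outer loop sets bounds (a setpoint)
# for the fast inner loop; the inner loop determines its own activity within
# those bounds.

def run(outer_period=10, steps=100):
    goal = 10.0       # what the organization is ultimately trying to reach
    setpoint = 0.0    # bound handed down by the slow outer (management) loop
    state = 0.0       # what the fast inner (operational) loop actually does
    for t in range(steps):
        if t % outer_period == 0:             # slow loop: update occasionally
            setpoint += 0.5 * (goal - setpoint)
        state += 0.8 * (setpoint - state)     # fast loop: track every step
    return state

print(run())
```

If the environment changed faster than the outer loop's update period, the setpoint would always lag, which is the brittleness of tightly coupled, top-down structures discussed below.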

For example, the classic Scientific Management approach involves a very tight coupling across levels. With this approach, higher levels (e.g., management) are responsible for determining the 'right way' to work and for implementing training and reward systems to ensure that the lower levels (operators - workers) strictly adhere to the prescribed methods. This is an extremely tight coupling leaving workers very few degrees of freedom, as they are typically punished for deviating from the "one best way" prescribed by management. In essence, the organization functions as a clockwork mechanism. This approach can be successful in static, predictable environments, where the assumptions and models of managers are valid. However, in a changing, complex world (e.g., VUCA environments) this approach will be extremely brittle - because the cycle times for the upper levels of the system will always be too slow to keep pace with the changing environment. This approach will surely fail in any highly competitive environment, with intelligent, adaptive competitors.

One way to loosen the coupling between levels is to replace the "one best way" with a playbook, or a collection of smart mechanisms that are designed to address different situations that the organization might face. Typically, higher levels develop the playbook and are responsible for training the plays and for rewarding/punishing workers as a function of how well they are at selecting the right play for the situation and implementing that play.

The coupling can be relatively tight if the playbook is treated as a prescription such that any deviations or variations from the plays are punished, especially when negative results ensue. Or the coupling can be looser if the plays are treated as suggestions, deviations (e.g., situated adaptations) are expected or even encouraged, and the lower levels are not punished for unsuccessful adaptations.

A still looser coupling is an approach of Mission Command. With Mission Command higher levels in the organization set objectives and values, but they leave it to lower levels of the organization to determine how best to achieve those objectives given changing situations. This approach requires high levels of trust across levels in an organization. Higher levels have to trust in the competence and motivations of the lower levels, and lower levels have to trust that higher levels will have their backs so they will not be blamed or punished when well-motivated adaptations are not successful.

Also, Elinor Ostrom's construct of polycentric governance describes how loosely coupled social systems self-organize to manage shared resources in a way that avoids the tragedy of the commons. Note that Ostrom's work contradicts the common notion that top-down control is essential to prevent competition between local interests from resulting in complete exhaustion of a resource. Her research discovered many situations where coordination and cooperation emerged bottom-up without being imposed by a centralized, external authority.

These are simply some examples of different levels of tightness in the couplings between levels, intended as landmarks within a continuum of possibilities. On one end of the continuum are tightly coupled organizations where the couplings across levels are like the meshing of gears in a clock. As couplings become looser the organization becomes less machine-like and more organismic. Metaphors that are often used for looser couplings include coaching and gardening. In organizations with looser couplings, higher levels introduce constraints (e.g., guidance, suggestions, resources) but don't determine activities. The higher levels tend the garden, leaving the plants (workers) free to grow on their own.

In an increasingly complex world - organizations with tight couplings across levels will tend to be brittle and vulnerable to catastrophic failures due to an inability to keep pace with changing situations. When couplings across levels are looser, the potential for bottom-up innovation and self-organization increases. However, if the coupling is too loose - there is an increasing danger for inefficiencies, conflict, and chaos. Thus, organizations must continuously balance the tightness of the couplings trading off top-down authority and control to increase the capacity for exploration and innovation (adaptive self-organization) at the operational level. In a complex world, loosening the couplings can allow the organization to muddle more skillfully.

At the heart of any consideration of questions of controllability and resilience is the question of degrees of freedom.  This was a central question of concern for Nikolai Bernstein in his explorations of perceptual-motor skill. In the context of motor skills, degrees of freedom refer to the constraints/possibilities for moving our bodies. But more generally - one might think of the degrees of freedom in a system as setting bounds on the potential for action.  When degrees of freedom are high - then there will be many different ways to accomplish the same task or goal.

Generally, the more degrees of freedom a system has - the more resilient it can be. This is because when one route to a goal is blocked, there will be other paths for reaching that goal.  However, degrees of freedom pose a challenge for control - because the more degrees of freedom a system has - the more variables have to be taken into account in the control logic.  In essence, increasing the degrees of freedom increases the potential for disturbances that have to be managed by the controller.

So, the degrees of freedom problem refers to a trade-off between flexibility (resilience potential) and control complexity. More degrees of freedom means there are more ways to be successful in accomplishing a goal, but also more ways to fail. In the context of perceptual motor control, the constructs of coordinative structure and smart mechanism were introduced as hypotheses of how to effectively manage this trade-off.

A coordinative structure or smart mechanism is created by constraining or locking out a subset of the degrees of freedom to create a simpler 'mechanism' (less complex control problem) that is specialized for a particular task or situation. This is somewhat analogous to the smart heuristics described by Gerd Gigerenzer in relation to ecological rationality.
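The trade-off can be sketched with a toy planar two-link "arm" with unit-length links (the target, grid, and tolerance are illustrative assumptions): the full arm can reach the same point many different ways, at the cost of searching a two-dimensional configuration space; locking the elbow creates a simpler one-dimensional 'mechanism' specialized for targets at that reach.

```python
# Sketch: the degrees-of-freedom trade-off. A redundant two-link arm reaches
# the same target many ways (resilience), but the controller must search a
# larger configuration space. Locking the elbow at 90 degrees creates a
# simpler special-purpose mechanism with one controlled variable.

import math

def endpoint(shoulder, elbow):
    """Tip of a planar two-link arm with unit links; angles in degrees."""
    a = math.radians(shoulder)
    b = math.radians(shoulder + elbow)
    return math.cos(a) + math.cos(b), math.sin(a) + math.sin(b)

def hits(target, elbow_locked=None, tol=0.05):
    """Count grid configurations whose tip lands within tol of the target."""
    count = 0
    for shoulder in range(360):
        elbows = [elbow_locked] if elbow_locked is not None else range(360)
        for elbow in elbows:
            x, y = endpoint(shoulder, elbow)
            if abs(x - target[0]) < tol and abs(y - target[1]) < tol:
                count += 1
    return count

target = (1.0, 1.0)
full = hits(target)                     # many routes to the same goal
locked = hits(target, elbow_locked=90)  # elbow locked: a 1-DOF "mechanism"
print(full, locked)
```

The full arm finds more solutions (more ways to succeed when one is blocked), but only by searching 360 times as many configurations; the locked mechanism still reaches this target while controlling a single variable.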

The skill of golf provides a good example of how skilled athletes manage the degrees of freedom problem. Driving a golf ball is a difficult control problem because of the many degrees of freedom in the body. For example, there are many potential disturbances to the path of the club head during a swing - a bend at the elbow of the leading arm, a twist of the head or shoulder, the positions of the legs, etc. It is simply impossible for a person to control all of the different degrees of freedom in real time. So, learning to drive a golf ball consistently involves learning to lock out many of the degrees of freedom (e.g., locking the elbow of the leading arm, holding the head in a fixed position with eyes on the ball, etc.). Thus, pro golfers learn to become a simple 'driving machine' that involves real-time control of only a few degrees of freedom that are specifically chosen to generate power to hit the ball great distances in a specific direction.

So, what good is having a lot of degrees of freedom if we are going to lock them out? The high degrees of freedom allow golfers to become many different kinds of simple mechanisms. They can lock out a different set of degrees of freedom to become a chipping mechanism, designed to hit a ball with greater control over distance and direction. Or they can lock out another set of degrees of freedom to become a putting mechanism. Thus, a pro golfer will learn to become many different simple control mechanisms that are specialized for different situations they will face on a golf course. Successful golfers not only have different types of golf clubs - they also have specialized control mechanisms that have been 'designed' through extensive practice for specific situations.

Thus, to be resilient it is desirable for organizations to maximize degrees of freedom to increase the space of potential possibilities, or to increase the possible routes to satisfying ends. However, organizations also need to develop (train) smart mechanisms tuned to the demands of different situations - so that they will not be overwhelmed by the need to control too many variables in real time. And they have to learn to choose the right mechanism for the right situation. Just as smart heuristics reduce the computational burden while maintaining high levels of effectiveness with regard to decision making, smart mechanisms reduce the computational demands on control while maintaining high levels of effectiveness with respect to coordinated action.

Have you ever been in a situation where you are looking for one person, and another person, whom you didn't expect, approaches and remarks, "Aren't you going to say hi?"

To which you reply, "I didn't even see you."

And they respond, "But you were looking right at me!"

Many years ago, Ulrich Neisser illustrated this phenomenon experimentally, and it has since been replicated in numerous experiments in which salient stimuli (e.g., people carrying umbrellas, gorillas, etc.) in the field of view are not 'seen.'

For me, this illustrates the intimate coupling between sensation, perception, cognition, and action - or, using the OODA Loop framework, between Observing, Orienting, Deciding, and Acting.

I wonder if the use of representations that illustrate the different functions as if they were separate, isolated stages of processing might lead to misconceptions about the nature of human experience. Frankly, I believe that these are not independent stages of processing. Each stage blends (harmonizes) with the other stages, such that the dynamics within each function are in part shaped by what is going on in the other stages. So - what we observe is shaped by our orientation (or framing of the problem), by the options we are considering, and by our capacity for action, while simultaneously what we are observing is shaping those other functions.

A simple example is research by Dennis Proffitt showing that our perception of the steepness of a slope differs depending on the weight we are carrying in a backpack.

The bottom line is that when we break human information processing into a sequence of discrete stages, we are likely to lose sight of some of the relations from which human experience emerges. The functions are not like a sequence of dominoes, but more like a barbershop quartet, in which each voice (i.e., function) blends with the others to create a sound that has unique emergent properties.
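The dominoes-versus-quartet contrast can be sketched as two toy update rules. The gains and coupling weights below are illustrative assumptions, not a cognitive model; the point is only that in the coupled version every function's next state depends on all of the others.

```python
import numpy as np

def pipeline_step(stimulus):
    """Dominoes: each stage consumes the previous stage's output, once."""
    observe = 0.9 * stimulus
    orient = 0.9 * observe
    decide = 0.9 * orient
    act = 0.9 * decide
    return act

def coupled_step(state, stimulus, coupling=0.05):
    """Quartet: every function's next value is shaped by all the others."""
    o, r, d, a = state  # observe, orient, decide, act
    new_o = 0.9 * stimulus + coupling * (r + d + a)  # observation shaped by
    new_r = 0.9 * new_o + coupling * (o + d + a)     # orientation, options,
    new_d = 0.9 * new_r + coupling * (o + r + a)     # and capacity for action,
    new_a = 0.9 * new_d + coupling * (o + r + d)     # all at the same time
    return np.array([new_o, new_r, new_d, new_a])

# The pipeline produces its answer in one pass; the coupled system
# settles into a joint pattern that no single stage determines alone.
state = np.zeros(4)
for _ in range(50):
    state = coupled_step(state, stimulus=1.0)
```

In the coupled sketch, the steady-state value of 'observe' differs from the pure pipeline value precisely because the downstream functions feed back into it - a crude stand-in for the blending described above.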

I wonder whether redrawing the OODA loop as overlapping circles that blend together to produce our experience would lead to different intuitions about the nature of cognition and expertise. What do you think? Does it strike a chord with you?

It seems to me that much of the hype about Artificial Intelligence (AI) reflects the potential power of AI to control things that have traditionally been controlled by humans. If not replacing humans, the expectation is that humans will be displaced from lower levels of control to higher supervisory levels. Thus, AI systems are framed as autonomous control systems (e.g., autopilots).

I think this vision might be reasonable for simple, and maybe even complicated, domains of operation. However, I think it is the wrong vision when applying AI to complex or chaotic domains of operation. An alternative that I suggest for domains that are more than complicated is to think of AI as analogous to a telescope or microscope. That is, the function of AI is to enhance observability - to use the power of advanced computation and statistics to pull patterns out of data that humans could not otherwise perceive.
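The telescope framing can be made concrete with a small sketch: a pattern extractor that surfaces structure in data for a human to interpret, rather than an autonomous controller that acts on it. Principal component analysis stands in here for 'advanced computation and statistics,' and the data is synthetic - both are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden one-dimensional pattern buried in 10-dimensional noisy data.
latent = rng.normal(size=(200, 1))
direction = rng.normal(size=(1, 10))
data = latent @ direction + 0.1 * rng.normal(size=(200, 10))

def observe(data, n_components=1):
    """Enhance observability: return the dominant pattern(s), not an action."""
    centered = data - data.mean(axis=0)
    # Principal components via SVD of the centered data matrix.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    explained = singular_values**2 / np.sum(singular_values**2)
    return components[:n_components], explained[:n_components]

components, explained = observe(data)
# The tool reports what it sees; the human decides what it means.
print(f"dominant pattern explains {explained[0]:.0%} of the variance")
```

Nothing in `observe` issues a command or closes a loop - its only output is a richer observation, which is the division of labor the joint cognitive system framing calls for.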

In this context, the control problem is framed as a joint cognitive system (rather than as an autonomous system). The role of AI in this joint cognitive system is to enhance observability to shape how humans frame decisions. In terms of Boyd's OODA loop framework, the value of AI is to enrich Observations in ways that constructively shape how humans Orient to a problem or frame the Decision processes.  Thus, humans are engaged and empowered (not displaced), and the ultimate quality of control is enhanced.