Two questions that interest general systems thinkers are:

1) Why do things change? 2) Why do things stay the same?

Let’s consider the first question in this essay; the second will be explored in later essays. Change is pervasive – things are flowing, moving from place to place, growing, eroding, aging. During my career, my students and I have explored problems associated with the control of locomotion. How do pilots judge speed or changes in altitude? How are they able to guide their aircraft to a safe touchdown on a runway? How are pilots and drivers able to avoid collisions? And how are baseball batters able to create collisions with a small ball? Inspired by James Gibson, this research explored the hypothesis that people utilize patterns or structures in optical flow fields to make judgments about, and to actively control, their own motion.

When Gibson introduced the concept of optical flow fields and the hypothesis that animals could utilize optical invariants to guide locomotion, it was a very difficult construct for many perceptual researchers to grasp. Where is optical flow? If you study eyes or if you study light or optics, you won’t discover optical flow. Optical flow is not in the eye and it is not in the light. Optical flow is an example of an emergent property: it arises from relations between components. Optical flow results when an eye moves relative to surfaces that reflect light. The flow that results from movement relative to reflective surfaces has structure that specifies important aspects of the relations between surfaces and moving points of observation. For example, as you are driving down the highway, surfaces that are near to you will flow by relatively quickly, while surfaces that are far from you will flow by relatively slowly. The farther away a surface is, the slower it will flow by – and something that is very far away, like the moon, may seem not to be moving relative to you at all. And when the surface in front of you begins to expand rapidly, you will immediately recognize that a collision is impending. Thus, the structure in flow fields specifies aspects of the 3-dimensional environment that support skilled movement through that environment.
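To make the expansion cue concrete, here is a minimal sketch of the optical variable often called tau: the ratio of an object’s optical angle to its rate of expansion. The scenario and numbers are hypothetical, but the sketch shows how the rate of optical expansion alone can specify time-to-contact, with no knowledge of distance or speed required.

```python
# Minimal sketch (hypothetical numbers): tau = theta / (d_theta/dt),
# the ratio of an object's optical angle to its rate of expansion,
# approximates time-to-contact without knowledge of distance or speed.
import math

size = 2.0      # object width (m)
speed = 10.0    # approach speed (m/s)
dt = 0.01       # sampling interval (s)

for distance in [100.0, 50.0, 20.0]:
    theta_now = 2 * math.atan(size / (2 * distance))
    theta_next = 2 * math.atan(size / (2 * (distance - speed * dt)))
    theta_dot = (theta_next - theta_now) / dt    # rate of optical expansion
    tau = theta_now / theta_dot
    print(f"distance={distance:5.1f} m: tau={tau:5.2f} s, "
          f"true time-to-contact={distance / speed:5.2f} s")
# tau tracks the true time-to-contact, which is why a rapidly expanding
# surface is immediately recognized as an impending collision.
```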

If you look out the window of a car, a train, or an aircraft you will see the world flowing by – optical flow fields are there right before our eyes. Yet few perceptual researchers could see them! Or if they did, they didn’t recognize optical flow as an important phenomenon to explore.

Few are aware that a key source that helped Gibson discover the role of optical structure in controlling locomotion was Wolfgang Langewiesche, who described how the relation between the point of optical expansion and the horizon specifies the path to a safe landing for a pilot.

Interest in change is partly motivated by our interest in making changes – in steering an aircraft to a soft landing or in managing an organization to achieve some function. In the case of controlling vehicles or other mechanical or physical technologies, there is typically a linear, proportional relation between action and reaction. Small inputs to the steering wheel typically result in small changes to the direction of travel. However, this is not the case for more complex systems, such as weather systems or sociotechnical systems. In complex systems, small actions can be amplified in ways that have large impacts on the performance of the whole. The proverbial ‘butterfly effect’ illustrates this: the idea that the flapping of a butterfly’s wing can shape the nature of a storm at some distant time and place in the future. Or that the loss of a nail can result in the loss of a kingdom. Or that a smile can launch a thousand ships.

This is another reminder to be humble.

A sociotechnical system or a complex natural system like the weather cannot be controlled as simply as steering a car. In fact, it is questionable whether such systems can be controlled at all. Yes, like the butterfly we can flap our wings, but we can’t anticipate or completely determine the consequences that will result. The performance of a complex organization depends on interactions among many people, and no single person determines the outcome. While each person can have an influence, the ultimate outcome will depend on contributions from many people, as well as on outside forces and influences.

It is an illusion to think that we can control complex systems - to believe that we are in control or even that we could be in control. In reality, the best anyone can do is to muddle.

If we are observant and careful, we can dip our oars into the water and/or adjust our sails in ways that will influence the direction of our vessel. But the waves and currents have an important vote! And also, our colleagues and friends have an impact. It is a mistake to think that we are the lone captains of our fate - but with a little luck and a lot of help from our friends - we can keep the boat upright and muddle through. 

A System is a way of looking at the world.

What is a system? I like Gerald Weinberg’s (1975) answer to this question, that “a system is a way of looking at the world.” This emphasizes that a system is analogous to a piece of art – it is a representation or model that an observer creates. It emphasizes that the system is not an ‘objective’ thing that exists independent of the observer. Rather, a system is a representation. As a representation it will make some things about the phenomenon being represented salient, and it will hide other things. I’ve long understood this position intellectually – but it has taken me much longer to appreciate the deeper meaning and the practical implications of this definition.

Early in my career, as a graduate student working with Rich Jagacinski to model human tracking performance, I was exposed to control theory. Ever since, I see closed-loop systems everywhere I look. It seemed obvious to me that the language of control theory and its various representations (e.g., time histories, Bode plots of frequency response characteristics, state space diagrams) provided unique and valuable insights into nature. And I have bored countless students and colleagues to tears as I tried to explain the implications of gains and time delays for stability in closed-loop systems. The power of control theory led to an arrogant sense that I had a privileged view of nature! I sought out those who shared this perspective, and I expended a lot of energy trying to convince other social scientists that the language of control theory was essential for understanding human performance.
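To make the stability point concrete, here is a minimal sketch of a discrete-time compensatory tracking loop – hypothetical parameters, not a model from this research. It shows the behavior I bored those colleagues with: a modest gain converges smoothly, adding a time delay produces oscillation, and a high gain combined with delay drives the loop unstable.

```python
# Minimal sketch (hypothetical parameters): a discrete-time closed loop.
# The controller corrects a fraction (the gain) of the tracking error,
# but it sees that error only after a time delay.

def simulate(gain, delay_steps, n_steps=60, target=1.0):
    """Track a constant target; return the history of tracking errors."""
    errors = [target]   # output starts at 0, so the initial error is `target`
    output = 0.0
    for t in range(1, n_steps):
        # The controller acts on stale information: the error it observes
        # is the one from `delay_steps` samples ago.
        observed = errors[max(0, t - 1 - delay_steps)]
        output += gain * observed
        errors.append(target - output)
    return errors

for gain, delay in [(0.5, 0), (0.5, 2), (1.5, 2)]:
    final = simulate(gain, delay)[-1]
    print(f"gain={gain}, delay={delay}: final error = {final:.3f}")
# Modest gain, no delay: the error decays smoothly to zero. Add a delay and
# the error oscillates. Raise the gain with the delay present and the loop
# diverges: the classic stability trade-off between gain and time delay.
```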

The mistake was not to believe that control theory is a valuable lens for exploring nature. The mistake was to think that it is the best or only path. My infatuation with control theory colored everything I looked at. Everything I observed, every paper I read, every debate or discussion with a colleague was filtered through the logic of control theory. I classified people with respect to whether they ‘got it’ or not! I tended to discount everything that I could not frame in the context of control theory. The problem was that I was so intent on preaching the ‘truth’ of control theory that I stopped listening to other perspectives.

The deeper implication of Weinberg’s definition is captured in his principle that

“the things we see more frequently are more frequent: 1) because there is some physical reason to favor certain states; or 2) because there is some mental reason.”

While I still believe that control theory captures some important aspects of nature, I now realize that the reason I see it everywhere, and the reason that I dismiss other perspectives is in part due to my own mental fixation. I now realize that it is impossible to separate these two possibilities from within control theory. You simply can’t tell whether your observations reflect natural constraints of the phenomenon or whether they reflect constraints of your perspective – if you only stand in one place. This is nicely illustrated by the Ames room illusion.

This is important because I am not the only victim. Over my career, I have watched others get locked into specific perspectives and have observed vicious debates as people defend one perspective against another. In an Either/Or world, there is a sense that only one perspective can be ‘true.’ So, if my perspective is right, yours must be wrong. I’ve watched constructivists war with ecological psychologists. I saw the development of nonlinear perspectives, and suddenly everything in nature was nonlinear – and all the insights from linear control theory were dismissed.

Gradually, I have come to understand that an important implication of the first principle of General Systems Thinking is:

To be humble.

Nature is incredibly complex relative to our sensemaking capabilities. Any representation or model that makes sense to us will only capture part of that complexity. Thus, every representation will be biased in some way. But also, many different perspectives can be valid in different ways. The challenge of General Systems Thinking is to be a better, more generous listener. Don’t let your skill with a particular perspective or a particular set of analytical tools blind you to the potential value of other perspectives. This is not simply about listening to other scientific perspectives. This is not simply about a debate between constructivists and ecologists, or between linear and nonlinear analytical tools. This is about listening to other forms of experience. Listening to the poets and artists. Listening to domain practitioners. Listening to people from all levels of an organization.

In some sense, General Systems Thinking is an attempt to find a balance between openness and skepticism. On the one hand, we need to be skeptical about all perspectives or models – including our own. On the other hand, we should be open to the potential value of different models, and we need to be capable of using multiple models as we seek to distinguish the constraints that are intrinsic to the phenomenon of interest from the constraints of specific perspectives on that phenomenon. Our enthusiasm for the perspectives that we find most useful should be tempered by an appreciation of the potential of other perspectives and an openness to the insights that they offer. We need to move beyond Either/Or debates to embrace a Both/And attitude of collaboration.

I can’t stop seeing closed-loop systems everywhere I look, and I will continue to share my passion for control theory with those who are interested, but I am working to temper my enthusiasm, to be a better listener, and to be a more generous colleague.

In sum, a system is a representation created by an observer; and General Systems Thinking is a warning against getting trapped by a narrow perspective on nature. It is a reminder that nature is far too complex to be fully captured within any single representation that we could create or that we could grasp. If you aren't boggled by the complexity of nature, you aren't looking carefully enough. 

Samuel Pierpont Langley was a preeminent scientist/engineer of his time. In 1903, the nation’s eyes were on his latest efforts to solve the problem of manned, powered flight. He had done the calculations, had tested his Aerodrome models, and was finally ready to scale up the models and put his solution to the ultimate test. On October 7th his assistant climbed onto the Aerodrome and was launched over the Potomac River – and immediately crashed into the river. A second attempt on December 8th had the same result. Langley’s failure led many to conclude that the solution to manned, powered flight would not be realized in their lifetimes. Yet, nine days later, two unknown brothers from Dayton, OH made the first manned, powered flight at Kitty Hawk.

How was the Wright brothers’ approach different from Langley’s? What was the key to their success? One of the first questions the Wrights asked was how an airplane would be controlled. And when they inquired of the Smithsonian about prior work on control, they were surprised to learn that nothing had been done. The key to their success was the discovery that the problem of steering an aircraft is different from that of steering a boat. A simple rudder wouldn’t do. Turning an aircraft requires banking it. Until the Wrights (who built bicycles and carefully observed birds), no one imagined that a pilot would need to be able to bank the aircraft to achieve a coordinated turn. So, the Wrights started with the question of how to put control into the hands of smart humans. They tested their control system using kites and gliders, and they spent hours becoming skilled with the controls before adding an engine.

For Langley - the problem was to design a flying machine. But for the Wrights - the problem was to design a joint cognitive system - that included the pilot as a critical component of the system. 

Today, the pressing challenge is not how to pilot a single aircraft, but how to manage a complex, distributed, multi-layered network (e.g., an international business conglomerate, a complex civil airspace, a distributed all-domain military organization). This has been described as a polycentric control problem (e.g., Ostrom, 1999; Woods & Branlat, 2010).

Many people are attempting to engineer solutions to this polycentric control problem utilizing the latest advancements in artificial intelligence, machine learning, and natural language processing. The power of these technologies is quite amazing, and the engineering is advancing at a remarkable rate. However, I worry that some do not fully appreciate the lesson of Langley and the Wrights.

They are framing the problem around the technologies and forgetting that ultimate success depends on the quality of the joint cognitive system.

For example, there seems to be an implicit belief among some that the technology will eliminate the fog and friction associated with managing a complex, distributed organization. Others seem to believe that the technology will allow control to be centralized into the hands of a single person (i.e., pilot or commander).

However, the work of Ostrom and others illustrates that polycentric control problems demand that we let go of the illusion of an omnipotent, centralized controller. Polycentric control problems require harnessing the power of a network of people and technologies with diverse experiences and a variety of skills. Further, they require creating the conditions for these diverse components to self-organize around common functional objectives, to make dynamic trade-offs among conflicting values, and to resiliently adapt to unanticipated disturbances.

Note that, within the joint cognitive systems framework, the Wrights did amazing research on the design of wings and propellers, discovering errors in previous models of lift. So, advancing the technology is an important component of the solution. But advancing the technology is NOT enough! Ultimately, success depends on the ability of people to engage with the technology and steer it in the right direction. The technology is only part of the system being engineered. Those who treat the people as an afterthought will find themselves designing systems that are unstable and that don’t get far off the ground before crashing into the river of complexity.

Clock time has been a critical dimension for representing human behavior. This has included chronometric analysis, using reaction time as a cue for inferring the nature of internal information processing activities, and plots of time histories to visualize patterns of activity in time. However, there are other ways to visualize system dynamics that are less dependent on clock time as an explicit dimension. The construct of field is one important way to visualize dynamical constraints that exist over time (and space), rather than in time. Feynman (1963) describes the field construct:

It would be trivial, just another way of writing the same thing, if the laws of force were simple, but the laws of force are so complicated that it turns out that fields have a reality that is almost independent of the objects which create them. One can do something like shake a charge and produce an effect, a field, at a distance; if one then stops moving the charge, the field keeps track of all the past, because the interaction between the particles is not instantaneous. It is desirable to have some way to remember what happened previously. If the force upon some charge depends upon where another charge was yesterday, which it does, then we need machinery to keep track of what went on yesterday, and that is the character of a field. So when the forces get more complicated, the field becomes more and more real, and this technique becomes less and less of an artificial separation.

Inspired by Gibson's construct of the Field of Safe Travel, we used what engineers refer to as a state space diagram to represent the constraints associated with a desktop driving simulation. In this representation, the constraints associated with maximal acceleration and maximal braking are plotted as critical landmarks for understanding driver performance. The points indicate where braking was initiated; the open symbols represent performance early in training and the solid symbols represent performance after practice. Many have described these types of results as if the driver had learned to perceive Time to Contact. However, we prefer to describe the results as evidence that the driver has learned the constraints of his vehicle and has discovered an optimal, bang-bang solution to the task of approaching and stopping before an obstacle as fast as possible: full acceleration until reaching the point where full braking will stop the vehicle just before the obstacle. In essence, the driver has learned the dynamics of the simulation. Note that this was a desktop simulation, so there was no impact of g-forces. The point is that the state space represents the constraints over time (i.e., the field; the unique action constraints on Superman and Spiderman) in ways that time histories do not.
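For concreteness, here is a minimal sketch of the bang-bang policy just described, with hypothetical vehicle parameters: hold maximal acceleration until the remaining distance equals the stopping distance under maximal braking, then hold maximal braking.

```python
# Minimal sketch (hypothetical parameters) of the bang-bang stopping policy.

a_max = 3.0     # maximal acceleration (m/s^2)
b_max = 6.0     # maximal braking deceleration (m/s^2)
goal = 100.0    # distance to the obstacle (m)
dt = 0.01       # simulation time step (s)

x, v, t = 0.0, 0.0, 0.0
while v > 0.0 or t == 0.0:
    # Stopping distance under maximal braking from the current speed.
    stopping_distance = v * v / (2 * b_max)
    # The switching point is the critical landmark in the state space:
    # brake fully as soon as the remaining distance shrinks to it.
    a = -b_max if (goal - x) <= stopping_distance else a_max
    v = max(0.0, v + a * dt)
    x += v * dt
    t += dt

print(f"stopped at x = {x:.1f} m after t = {t:.1f} s (obstacle at {goal} m)")
```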

Another way to represent patterns over time, rather than instances in time, is to use Fourier analysis. This is a means of representing events as collections of sinusoidal patterns rather than as collections of points in time. Frequency domain representations allow dynamical systems to be described as observers or control systems that are tuned to certain patterns (e.g., frequencies). This is consistent with E.J. Gibson’s theory of perceptual attunement. The basic idea is that, with experience, people learn to detect patterns (or structure) in events, and that they can use those patterns to anticipate the future and to synchronize their activities with the patterns in constructive ways.
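As a small illustration (the signal and its frequencies are invented), the following sketch decomposes an event into its sinusoidal components. An observer ‘tuned’ to the dominant frequencies could use them to anticipate and synchronize with the event.

```python
# Minimal sketch (invented signal): represent an event as a collection of
# sinusoids and recover the dominant frequencies with the FFT.
import numpy as np

fs = 50.0                         # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1 / fs)  # 10 s of samples
# An "event" built from two patterns: 0.5 Hz and 2 Hz sinusoids.
signal = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# Report the two frequencies carrying the most energy.
for i in sorted(np.argsort(spectrum)[-2:]):
    print(f"{freqs[i]:.1f} Hz, relative amplitude {spectrum[i]:.0f}")
```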

So, while time histories and chronometric analysis of human behavior can lead to important insights into human performance, it is important for social scientists to consider other ways to visualize the dynamics of behavior. The exclusive use of the clock tends to suggest a causal narrative, whereas alternative representations suggest other narratives (e.g., tuning to constraints or patterns).

Each perspective provides unique insights and suggests different metaphors, and no perspective captures the complete story. 

The construct of wicked problems reflects situations where intuitions based on conventional logic will often not be adequate. Wicked problems are chaotic. Thus, conventional ways to decompose problems based on the intuitions of linear analytic techniques or traditional causal reasoning will fail. It is here where ‘experience’ and ‘wisdom’ are the best guides. It is here where Captain Kirk (a bias toward action), Spock (logic), and Dr. McCoy (emotions) must work together to keep the boat stable amid the waves of epistemic uncertainty.

In dealing with wicked problems the head and heart must trust each other and work together to muddle through. For these situations pragmatics take priority – the right choice is the one that works! And often the only reason it works will be that you did what was required to make it work!

This involves more than classical logic. It often requires going forward and following your intuitions, even when conventional logic says to turn back. It involves passion, persistence, and discipline. For these situations, it is not about making the ‘right choice.’ It is about making your choices work! And typically, this will require the efforts of both head (mind) and heart (body).

Again, it is tempting to argue about whether the head or heart should lead. But this reflects a dualistic trap based on either/or reasoning. It is ultimately a matter of coordination between heart and head. It is not about abandoning analytic aspects of cognition, but rather recognizing that the heart is a necessary partner. Success depends on effective coordination between heart and head in order to make the choices work out right! And still there are no guarantees. Serendipity and luck also play a role. 

Without Captain Kirk, Spock and Dr. McCoy might get caught in infinite analytical loops and never pull the trigger, but without Spock and Dr. McCoy, Captain Kirk may not be able to make his choices work! And despite their combined efforts the waves of uncertainty may still get the final vote! 

 

Muddling involves a trial and error process of ‘feeling’ our way through a complex ecology to satisfying outcomes. This involves applying our experiences and utilizing domain constraints (e.g., landmarks) in a smart way. However, muddling involves more than just ‘knowledge’ or ‘intelligence.’ It also involves feelings. It takes passion to persist in the face of inevitable errors. It takes a cool head to face the inevitable risks. It takes discipline to persist when progress is slow. And it takes a well-tuned value system to set appropriate goals and to gauge progress toward those goals.

Information processing models of cognition, based on a computer metaphor, have tended to consider emotions to be a primitive source of ‘noise’ that interferes with successful adaptation and performance. In this context, the difficulties that Phineas Gage had in adjusting to life after his accident have been attributed to damage to an internal executive that normally suppresses more primitive instincts associated with emotions. However, research by Damasio suggests that the difficulties Gage experienced may instead be due to a lack of emotional constraints – a kind of disconnection in which the emotions are cut off from executive processing, leading to a cold, logical process that is poorly tuned to the realities of everyday life and lacks the common sense necessary for consistently achieving satisfaction.

Damasio's work suggests that rather than thinking about emotions as ‘primitive’ instincts that are a threat to satisfactory functioning, it might be better to think about emotions as evolutionarily tuned instincts that help to ground our newer cognitive capabilities in the realities of everyday situations. Perhaps the tuning of emotional and aesthetic sensitivities is fundamental to human expertise. This might be part of why Don Norman's claim that

Beautiful things work better!

is true. Perhaps expertise ultimately depends on our ability to coordinate our hearts and minds. Perhaps designing technologies to support human expertise requires developing representations that are well tuned both to the domain constraints and to the aesthetic sensibilities of people. Perhaps it is the intimate intermingling of the pragmatic with the aesthetic that makes Pirsig's construct of Quality so difficult to explain in conventional terms.

Any philosophic explanation of Quality is going to be both false and true precisely because it is a philosophic explanation. The process of philosophic explanation is an analytic process, a process of breaking something down into subjects and predicates. What I mean (and everybody else means) by the word ‘quality’ cannot be broken down into subjects and predicates. This is not because Quality is so mysterious but because Quality is so simple, immediate and direct.

References

Damasio, A. (1994). Descartes’ Error: Emotion, reason, and the human brain. New York: Penguin Books.

Norman, D.A. (2005). Emotional design. New York: Basic Books.

Pirsig, R.M. (1974). Zen and the art of motorcycle maintenance. New York: Perennial Classics. (p. 254).

In the previous post, the main point was that in complex situations analytic solutions (e.g., maps, classical logic, mathematical modeling) will generally fall short of addressing all the important factors and relations that must be considered to achieve a satisfying outcome. Thus, there will be a need for some degree of muddling to get to a satisfying outcome. By 'muddling' I mean a kind of trial and error process analogous to what C.S. Peirce described as abductive inference: we generate hypotheses and then test these hypotheses by acting on them. It is important to note that some hypotheses and some actions are better than others. Productive thinking, then, involves generating smart hypotheses and smart tests (i.e., hypotheses that are more plausible, and tests that generate useful information or feedback and that are relatively safe). This is consistent with Lindblom's idea of incrementalism – making small, safe adjustments to slowly 'feel' the way to a satisfying outcome. It is also consistent with Gigerenzer's idea of ecological rationality and the smart use of heuristics.

A key aspect of smart or expert muddling is to utilize the natural constraints of situations to reduce the space of possibilities and to minimize the consequences of errors. The aiming-off strategy used by sailors and orienteers to solve navigation problems provides a good example of how structure inherent in a problem can provide the basis for heuristic solutions that greatly simplify computational demands. In the sailing context, consider the problem of navigating across the vast Atlantic Ocean from London to Boston in the days before global positioning systems. The ship’s pilot would need to frequently compute the position using whatever landmarks were available (e.g., the stars, the sun, etc.). These computations can be very imprecise, and on a long trip errors can accumulate, so that when the crew initially sights the North American continent the ship may not be exactly where intended. In fact, Boston may not be in sight.

A similar problem arises in orienteering, which involves a race across forested country from waypoint to waypoint using a compass and topographic map for navigation. When the next waypoint is a distant bridge across a river, the uncertainties associated with compass navigation make it likely that, due to accumulated errors, the orienteer will not hit the river at exactly the location of the bridge. What does she do when she gets to the river and the bridge is not visible?

Skilled sailors and skilled orienteers use a strategy of aiming off to solve the problem of error in the computational approaches to navigation. Rather than setting their course for Boston or for the bridge, they set their course for a point on the coast south of Boston or for the nearest point on the river below the bridge. That is, they purposely ‘bias’ their path to miss the ultimate target. Why? Is this an ‘error’?

With a purely computational solution, when you reach the coast or the river and the target is not in sight, which way do you go? If you use the aiming-off strategy, you know exactly which way to go. When you see the coast, you can sail with the current, up the coast to Boston. When you reach the river, you know which direction to follow the river in order to find the bridge. With the aiming-off strategy, rough computations are used to get into a neighborhood of the goal (to reach the boundary constraint), and then the local boundary constraint is used to zero in on the target using directly perceivable feedback. The structural association between the boundary (coastline or river) and the target (Boston or the bridge) is information (i.e., a sign or landmark) that specifies the appropriate actions.
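A minimal simulation (the distances and error magnitudes are hypothetical) makes the logic visible: aiming straight at the bridge leaves the direction of the miss ambiguous, while deliberately aiming off makes the required turn unambiguous.

```python
# Minimal sketch (hypothetical numbers) of the aiming-off heuristic.
# The orienteer walks toward a river; the bridge sits at lateral position 0.
import math
import random

random.seed(1)
leg = 5000.0                # distance to the river (m)
sigma = math.radians(3)     # standard deviation of heading error

def landing_point(aim_offset_m):
    """Lateral position (m) where we hit the river; the bridge is at 0."""
    heading_error = random.gauss(0.0, sigma)
    # Aim `aim_offset_m` to the left of the bridge; error spreads us out.
    return -aim_offset_m + leg * math.tan(heading_error)

direct = [landing_point(0.0) for _ in range(1000)]
aimed_off = [landing_point(1000.0) for _ in range(1000)]

def pct_left(hits):
    return 100.0 * sum(x < 0 for x in hits) / len(hits)

print(f"aim at the bridge: {pct_left(direct):5.1f}% land left (which way to turn?)")
print(f"aim off by 1 km  : {pct_left(aimed_off):5.1f}% land left (turn right)")
```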

As autonomous analytical technologies are integrated into organizations, it is important to also consider the role that smart muddling will play in achieving the goals of the organization. This smart muddling can be supported through the design of direct manipulation/perception interfaces (e.g., Bennett & Flach, 2011; Shneiderman, 2022) that allow people to utilize the power of AI/ML systems to discover patterns (natural constraints), to test hypotheses, and to anticipate the potential risks associated with alternative actions. An important question for designers is

How can we leverage the power of AI/ML systems to help people to muddle more skillfully?

References

Bennett, K.B. & Flach, J.M. (2011). Display and Interface Design: Subtle Science, Exact Art. Boca Raton, FL: CRC Press.

Shneiderman, B. (2022). Human-centered AI. Oxford: Oxford University Press.

The literature on leadership suggests that one of the characteristics of effective leaders is a bias toward action. In contrast, poor leadership is often associated with a ‘paralysis of analysis’ in which leaders are unable to act while they weigh all the variables and possibilities that will potentially impact the outcome of a choice. This paralysis often results in loss of opportunities in a dynamic environment where the windows of opportunity are opening and closing.

The penchant for action is illustrated in the legend of Alexander the Great and the Gordian Knot. The Gordian Knot is a metaphor for a complex problem that is intractable when approached by conventional analytical means (i.e., by disentangling it). However, the problem may be easily solved through decisive action (i.e., cutting the knot with a sword).

Many decisions that we make may be effectively Gordian Knots (e.g., buying a house, choosing a career, choosing a mate, deciding to have a family, managing a humanitarian crisis). There are so many factors to consider, the options are so disparate that making comparisons is extremely difficult, and there are often an uncountable number of possibilities to consider (e.g., the perfect house may go on the market tomorrow, interest rates might change).

In addition, there are finite windows of opportunity for effective action (e.g., someone else might buy the house you wanted). Recognizing the challenge of these Gordian Knots, Walker Percy wrote “Lucky is the man who does not secretly believe that every possibility is open to him.” Percy knew that this secret belief could result in a ‘paralysis of analysis’ that would ensure a sad and difficult life.

In fact, this paralysis of analysis is exactly what Damasio observed in people with damage similar to Phineas Gage’s. For example, here is Damasio’s description of one of his patients with ventromedial prefrontal lobe damage:

I was discussing with the … patient when his next visit to the laboratory should take place. I suggested two alternative dates, both in the coming month and just a few days apart from each other. The patient pulled out his appointment book and began consulting his calendar. The behavior that ensued, which was witnessed by several investigators, was remarkable. For the better part of a half-hour, the patient enumerated reasons for and against each of the two dates: previous engagements, proximity to other engagements, possible meteorological conditions, virtually anything that one could reasonably think about concerning a simple date…. He was now walking us through a tiresome cost-benefit analysis, an endless and fruitless comparison of options and possible consequences. It took enormous discipline to listen to all of this without pounding on the table and telling him to stop, but we finally did tell him, quietly, that he should come on the second of the alternative dates. His response was equally calm and prompt. He simply said: “That’s fine.” Back the appointment book went into his pocket, and then he was off.

This clearly illustrates the problem with conventional analytical models of rationality, based on logic or normative economic models, when they are faced with the complexities of everyday life: they lack a ‘stopping rule.’

The analytic models provide a means for doing the computations, for processing the data, for making comparisons, but the computations will continue blindly as long as data is being fed in. Typically, there are no intrinsic criteria for terminating the computation in order to act. In contrast, heuristics such as those described by Gigerenzer typically have explicit stopping rules. Thus, one of the advantages of heuristics relative to more normative approaches is that heuristics are typically recipes for action, rather than processes for doing computations.
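As a simple illustration of a heuristic with a built-in stopping rule, here is a sketch in the spirit of Gigerenzer’s take-the-best heuristic (the cues and options are invented): consult cues one at a time, in order of validity, and stop at the first cue that discriminates between the options.

```python
# Minimal sketch (invented cues and options) of a take-the-best heuristic.
# The stopping rule is explicit: stop at the first discriminating cue.

def take_the_best(option_a, option_b, cues):
    """Choose between two options, consulting cues in order of validity."""
    for name, cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a != b:                      # first discriminating cue: stop here
            print(f"decided on cue: {name}")
            return option_a if a > b else option_b
    return option_a                     # no cue discriminates: guess

cities = {"Springfield": {"capital": 1, "airport": 0},
          "Shelbyville": {"capital": 0, "airport": 1}}

choice = take_the_best(
    "Springfield", "Shelbyville",
    [("is a capital", lambda c: cities[c]["capital"]),
     ("has an airport", lambda c: cities[c]["airport"])])
print("larger city (by heuristic):", choice)
```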

We hypothesize that these intuitive heuristics are the foundations for common sense! They are the swords that allow us to solve the Gordian Knots of everyday life.

In many situations, the quality of the outcome may depend on acting to make the choice right, rather than on waiting to act until the right choice is certain. 

References

Percy, W. (1966). The last gentleman. New York: Picador.

Damasio, A. (1994). Descartes’ Error: Emotion, reason, and the human brain. New York: Penguin Books. (pp. 193-194).

Gigerenzer, G. (2007). Gut feelings. The intelligence of the unconscious. New York: Penguin Books.

Many problems of life are ill-structured (e.g., humanitarian or military operations); that is, they are complex and rife with uncertainty. In these situations, is it possible to make ‘good’ or ‘right’ decisions? Perhaps, in complex situations, success depends on making the decision right (or making the decision work), rather than on making the normatively right decision.

In complex situations, the problem may be more analogous to adaptive control (i.e., making continual adjustments to incrementally satisfy the functional goals – muddling through) than to the problem of discretely choosing a right option from a fixed set of alternatives. The loose coupling between a ‘right’ choice and a successful adaptation was illustrated by Karl Weick (1995) using the following story:

The young lieutenant of a small Hungarian detachment in the Alps sent a reconnaissance unit into the icy wilderness. It began to snow immediately, snowed for 2 days, and the unit did not return. The lieutenant suffered, fearing that he had dispatched his own people to death. But on the third day the unit came back. Where had they been? How had they made their way? Yes, they said, we considered ourselves lost and waited for the end. And then one of us found a map in his pocket. That calmed us down. We pitched camp, lasted out the snowstorm and then with the map we discovered our bearings. And here we are. The lieutenant borrowed this remarkable map and had a good look at it. He discovered to his astonishment that it was not a map of the Alps, but a map of the Pyrenees.

Weick concludes that this story suggests the possibility that “when you are lost, any map will do.” He continues:

The soldiers were able to produce a good outcome from a bad map because they were active, they had a purpose (get back to camp), and they had an image of where they were and where they were going. They kept moving, they kept noticing cues, and they kept updating their sense of where they were. As a result, an imperfect map proved to be good enough. The cues they extracted and kept acting on were acts of faith amid indeterminacy that set sensemaking in motion. Once set in motion, sensemaking tends to confirm the faith through its effects on actions that make material what previously had been merely envisioned [sic].  (p. 55)

Charles Lindblom (1959), in his classic paper “The Science of ‘Muddling Through,’” comes to a conclusion similar to Weick's. Lindblom noted that for complex policy decisions, the comprehensive evaluations that are suggested by normative models of decision-making are typically impossible:

Although such an approach can be described, it cannot be practiced except for relatively simple problems and even then only in a somewhat modified form. It assumes intellectual capacities and sources of information that men simply do not possess, and it is even more absurd as an approach to policy when the time and money that can be allocated to a policy problem is limited, as is always the case. (p. 79)

Lindblom offered a more heuristic program of incremental adjustment as a more realistic alternative to the classical, normative approaches to policy making. He called this heuristic, trial and error process: ‘muddling through.’

Twenty years after his classic paper, Lindblom (1979) commented:

Perhaps at this stage in the study and practice of policy making the most common view … is that indeed no more than small or incremental steps – no more than muddling – is ordinarily possible. But most people, including many policy makers, want to separate the ‘ought’ from the ‘is.’ They think we should try to do better. So do I. What remains as an issue, then? It can be clearly put. Many critics of incrementalism believe that doing better usually means turning away from incrementalism. Incrementalists believe that for complex problem solving it usually means practicing incrementalism more skillfully and turning away from it only rarely.

The question in the title of this post is actually a trick question. It is not a matter of better maps OR better muddling. We will always need both. Yes, the quality of decision making can be improved by utilizing the computational power of AI/ML algorithms to produce better decision aids (e.g., maps). But despite the increasing capacity of these information technologies, significant uncertainty will remain due to the nature of complexity and the difficulty in quantifying all the variables that might be important. Thus, success will ultimately depend to some extent on the skilled muddling of humans. As we begin to integrate AI/ML decision aids into our organizations, it is important to keep in mind that ultimately success will depend on the quality of the Joint Cognitive System - that includes smart technologies and smart people. 

References

Weick, K.E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage Publications.

Lindblom, C.E. (1959). The Science of ‘Muddling Through.’ Public Administration Review, 19(2), 79-88.

Lindblom, C.E. (1979). Still Muddling, Not Yet Through. Public Administration Review, 39(6), 517-526.

Heinz von Foerster observed that

Objectivity is the delusion that observations could be made without an observer.

I think this has important implications for those of us interested in studying human experience. The idea that we can be objective observers of human experience may be a delusion. In fact, in many contexts the constraints we impose as observers may account for most of what we see. For example, early in my career I did research on human tracking and was at first amazed that classical or optimal control models could account for up to 90% of the variance. Now I look back and think: why was I surprised that, when I constrained the situation so that the person had to behave like a simple servomechanism to succeed, the resulting behaviors could be accounted for by models of simple control systems?

The physicist John Wheeler used the Surprise Version of the game of Twenty Questions to illustrate the impact of observers and the epistemological implications at the quantum level.

In contrast to the normal Twenty Questions game, where the people in the room decide on a single word to be the target, in the Surprise Version each person in the room independently chooses his or her own word, with the constraint that, once the game begins, the word each has in mind must not contradict any previous answers to questions. This may require people in the room to change their word over the course of the game to be consistent with the responses of others. Thus, the "correct" word is a moving target that is shaped, in part, by the questions asked. In fact, the final target may not have been in anyone's mind when the questioner re-enters the room. Wheeler concludes:

In the real world of quantum physics, no elementary phenomenon is a phenomenon until it is an observed phenomenon. In the surprise version of the game no word is a word until that word is promoted to reality by the choice of questions asked and answers given. ‘Cloud’ sitting there waiting to be found as we entered the room? Pure delusion!

The lesson of the Surprise Version of Twenty Questions is particularly relevant to exploring human experience. Just as at the quantum level, the actions of the observer (i.e., the scientist) can be a significant source of variance. This impact is typically referred to in terms of demand characteristics. These demand characteristics (e.g., participants’ awareness that they are being observed) can be important factors in shaping the experience (i.e., performance) that results.

Also, it is important to realize that human actions (e.g., innovative technologies) are shaping the opportunities for experience more globally. For example, new forms of travel and communication greatly extend our abilities to explore the world and to collaborate with people in distant locations. Cell phones have dramatically changed how people coordinate social activities – “Just call me when you get out of class and I will let you know where I am, so that we can meet.” Thus, we should not think about human experience as an “object” that exists “out there” independent of the experience of the scientist or designer. Experience is not a stationary object that can be isolated and studied in a vacuum, as if it were independent of an observer. Experience is constantly evolving in response to changing physical and social contexts.