
Skilled Muddling

In the previous post, the main point was that in complex situations analytic solutions (e.g., maps, classical logic, mathematical modeling) will generally fall short of addressing all the important factors and relations that must be considered to achieve a satisfying outcome. Thus, some degree of muddling will be needed to reach a satisfying outcome. By 'muddling' I mean a kind of trial-and-error process analogous to what C.S. Peirce described as abductive inference: we generate hypotheses and then test those hypotheses by acting on them. It is important to note that some hypotheses and some actions are better than others. Productive thinking, then, involves generating smart hypotheses and smart tests (i.e., hypotheses that are more plausible, and tests that generate useful information or feedback while remaining relatively safe). This is consistent with Lindblom's idea of incrementalism - making small, safe adjustments to slowly 'feel' the way to a satisfying outcome. It is also consistent with Gigerenzer's idea of ecological rationality and the smart use of heuristics.
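To make the generate-and-test loop concrete, here is a rough sketch in Python. It is only an illustration of the idea, not anyone's actual method; the generate, test, and stopping functions are hypothetical placeholders that a real problem would have to supply.

```python
# A minimal sketch of abductive, trial-and-error "muddling":
# generate a plausible hypothesis, test it cheaply and safely,
# and keep the feedback from each test to inform the next guess.
# The generate/test/stop functions are hypothetical placeholders.

def muddle(generate_hypothesis, run_safe_test, is_satisfying, max_trials=20):
    feedback_history = []
    for _ in range(max_trials):
        # Smart muddling: propose the most plausible hypothesis,
        # given everything learned from earlier tests.
        hypothesis = generate_hypothesis(feedback_history)

        # Smart testing: act on the hypothesis in a way that is
        # informative but relatively safe (small, reversible steps).
        feedback = run_safe_test(hypothesis)
        feedback_history.append((hypothesis, feedback))

        if is_satisfying(feedback):
            return hypothesis, feedback_history

    # No satisfying outcome within the trial budget; return what was learned.
    return None, feedback_history
```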

A key aspect of smart or expert muddling is to utilize the natural constraints of situations to reduce the space of possibilities and to minimize the consequences of errors. The aiming-off strategy used by sailors and orienteers to solve navigation problems provides a good example of how structure inherent in a problem can provide the basis for heuristic solutions that greatly simplify computational demands. In the sailing context, consider the problem of navigating across the vast Atlantic Ocean from London to Boston in the days before global positioning systems. The ship’s pilot would need to frequently compute the position using whatever landmarks were available (e.g., the stars, the sun, etc.). These computations can be very imprecise, and on a long trip errors can accumulate, so that when the ship initially sights the North American continent, it may not be exactly where intended. In fact, Boston may not be in sight.

A similar problem arises in orienteering, which involves a race across forested country from waypoint to waypoint using a compass and topographic map for navigation. When the next waypoint is a distant bridge across a river, the uncertainties of compass navigation mean there is a high probability that, due to accumulated errors, the orienteer will not hit the river exactly at the location of the bridge. What does she do when she gets to the river and the bridge is not visible?

Skilled sailors and skilled orienteers use the strategy of aiming off to solve the problem of error in computational approaches to navigation. That is, rather than setting their course to Boston or to the bridge, they set their course for a point on the coast south of Boston or for the nearest point on the river below the bridge. In other words, they purposely ‘bias’ their path to miss the ultimate target. Why? Is this an ‘error’?

Using a computational solution, when you reach the coast or the river and the target is not in sight, which way do you go? If you use the aiming-off strategy, you know exactly which way to go. When you see the coast, you can sail with the current, up the coast to Boston. When you reach the river, you know which direction to follow the river in order to find the bridge. With the aiming-off strategy, rough computations are used to get into the neighborhood of the goal (to reach the boundary constraint), and then the local boundary constraint is used to zero in on the target using directly perceivable feedback. The structural association between the boundary (coastline or river) and the target (Boston or the bridge) is information (i.e., a sign or landmark) that specifies the appropriate actions.
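The logic can be made explicit with a small sketch. Assume a simple one-dimensional picture in which the navigator reaches the boundary somewhere within a known error band around the aim point; the distances and names below are purely illustrative. Aiming straight at the target leaves the direction of the final search unknown, while aiming off by more than the worst-case error guarantees that the target lies on one known side of wherever you arrive.

```python
import random

# Illustrative sketch of the aiming-off heuristic in one dimension.
# Positions are kilometres along the coast (or river); the numbers are made up.
TARGET = 0.0          # where Boston (or the bridge) actually lies
MAX_NAV_ERROR = 15.0  # worst-case accumulated navigation error, in km

def landfall(aim_point):
    """Where we actually hit the boundary: the aim point plus accumulated error."""
    return aim_point + random.uniform(-MAX_NAV_ERROR, MAX_NAV_ERROR)

# Computational strategy: aim straight at the target.
# On arrival the target may lie to either side -- which way to turn is unknown.
direct = landfall(TARGET)
print(f"Aimed at target, arrived at {direct:+.1f} km: turn direction unknown")

# Aiming-off strategy: bias the aim point by more than the worst-case error,
# so the target is guaranteed to lie in one known direction along the boundary.
biased = landfall(TARGET - 1.5 * MAX_NAV_ERROR)
print(f"Aimed off, arrived at {biased:+.1f} km: target always lies in the same, known direction")
print(f"Distance to follow the boundary: {TARGET - biased:.1f} km")
```

The point of the sketch is simply that the deliberate bias trades a slightly longer final leg for certainty about which way to follow the boundary, replacing further computation with directly perceivable feedback.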

As autonomous analytical technologies are integrated into organizations, it is also important to consider the role that smart muddling will play in achieving the goals of the organization. This smart muddling can be supported through the design of direct manipulation/perception interfaces (e.g., Bennett & Flach, 2011; Shneiderman, 2022) that allow people to utilize the power of AI/ML systems to discover patterns (natural constraints), to test hypotheses, and to anticipate the potential risks associated with alternative actions. An important question for designers is:

How can we leverage the power of AI/ML systems to help people to muddle more skillfully?

References

Bennett, K.B. & Flach, J.M. (2011). Display and Interface Design: Subtle Science, Exact Art. Boca Raton, FL: CRC Press.

Shneiderman, B. (2022). Human-Centered AI. Oxford: Oxford University Press.
