It seems to me that much of the hype about Artificial Intelligence (AI) reflects the potential power of AI to control things that have traditionally been controlled by humans. If AI is not replacing humans outright, the expectation is that humans will be displaced from lower levels of control to higher supervisory levels. Thus, AI systems are framed as autonomous control systems (e.g., autopilots).
I think this vision might be reasonable for simple, and maybe even complicated, domains of operation. However, I think it is the wrong vision when applying AI to complex or chaotic domains. For domains that are more than complicated, I suggest an alternative: think of AI as analogous to a telescope or microscope. That is, the function of AI is to enhance observability — to use the power of advanced computation and statistics to pull patterns out of data that humans could not otherwise perceive.
In this context, the control problem is framed as a joint cognitive system (rather than as an autonomous system). The role of AI in this joint cognitive system is to enhance observability and thereby shape how humans frame decisions. In terms of Boyd's OODA loop framework, the value of AI is to enrich Observations in ways that constructively shape how humans Orient to a problem and frame the Decision process. Thus, humans are engaged and empowered (not displaced), and the ultimate quality of control is enhanced.