The Ultimate Promise of Automation: To Replace or To Engage Humans?

Moshe Vardi (a computer science professor at Rice University in Texas) is reported to have made the following claim at a meeting of the American Association for the Advancement of Science: "We are approaching the time when machines will be able to outperform humans at almost any task." He continued, suggesting that this raises an important question for society: "If machines are capable of doing almost any work humans can do, what will humans do?"

These claims illustrate the common assumption that the ultimate impact of increasingly capable automation will be to replace humans in the workplace. It is generally assumed that machines will not simply replace humans, but that they will eventually supersede them - offering greater speed, greater reliability (e.g., eliminating human errors), and lower cost - thus making humans superfluous.

However, anyone familiar with the history of automation in safety-critical systems such as aviation or process control will be very skeptical of the assumption that automation will replace humans. The history of automation in those domains shows that automation displaces humans, changing their roles and responsibilities. However, the value of having smart humans in these systems can't be overestimated. While automated systems increasingly know HOW to do things, they are less able to understand WHY things are done. Thus, automated systems are less able to know whether something ought to be done - especially when situations change in ways that were not anticipated in the design of the automation.

If you look at the history of personal computing (e.g., Dealers of Lightning by Hiltzik) you will see that the major breakthroughs were associated with innovations at the interface (e.g., the GUI), not in the power of the processors. The real power of computers is their ability to more fully engage humans with complex problems - to allow us to see things and think about things in ways that we never could before. For example, to see patterns in healthcare data that allow us to anticipate problems and to refine and target treatments to enhance their benefits.

Yes, automation will definitely change the nature of human work in the future. However, fears that humans will be replaced are ill-grounded. The ultimate impact of increasing automation will be to change which human capabilities are most valued - muscular strength will be less valued; classical mathematical or logical ability will be less valued (e.g., the machines can do the Bayesian calculations much faster than we can); BUT creative thinking, empathy, and wisdom will be increasingly valued. The automation can compute the risks of a choice (e.g., the likelihood of a new medical procedure succeeding), but when it comes to complex questions associated with quality of life, the automation cannot tell us when the risk is worth taking.
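
To make that point concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of Bayesian calculation a machine can do instantly - updating the probability that a procedure will succeed given a test result:

    def posterior(prior, sensitivity, false_positive_rate):
        # P(success | positive test) via Bayes' rule
        p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
        return sensitivity * prior / p_positive

    # Hypothetical inputs: a 30% base rate of success, and a screening test
    # that flags 90% of patients who would succeed and 20% of those who wouldn't.
    print(posterior(prior=0.30, sensitivity=0.90, false_positive_rate=0.20))
    # ~0.66 - the machine reports the odds; it cannot say whether they are worth taking.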

Automation will get work done more efficiently and reliably than humans could without it, BUT we still need smart humans to decide what work is worth doing! There is little benefit in getting to the wrong place faster. The more powerful the automation, the more critical human discernment becomes for pointing it in the right direction. A system in which technologies are used to engage humans more deeply in complex problems will be much wiser than any fully automated system!

2 thoughts on “The Ultimate Promise of Automation: To Replace or To Engage Humans?”

  1. Rob Hutton

    and let's not forget the issue of values and value systems that impact whether a decision is the "right" decision. One of the visions is that humans and automation work together in a coordinated, potentially collaborative way ("automation as a team player", Woods, Klein et al.). However, in human-to-human teams we trust our teammates in part because of our shared value systems. "I know if I give him a task, he'll do it according to the goals of the task, but just as importantly, implicit human values guide actions such as 'don't harm anyone unnecessarily'." I don't see much work in this area on the importance of values, except in the explicit area of ethical decision making, which addresses the big decisions, to an extent, but not the myriad of little decisions that support muddling through with multiple other actors/agents (including intelligent or autonomous systems). Work is conducted within the context of human endeavour; we build tools/automation to further the human purpose. Humans share most values implicitly, or discuss them as part of teamwork. How does the human-machine system build trust and achieve coordinated work success without a means to understand whether the values that implicitly guide decisions and actions are shared or not?

    1. John Flach

      Yes! Agree wholeheartedly. This is part of what I mean by WHY! Ultimately, the performance of observer (e.g., signal detection) and control systems can only be judged relative to a value system (e.g., payoff matrix, cost function). But there has been little work to integrate value systems into automatic observers and controllers. Generally, the cost functions are chosen to simplify the analysis (e.g., a quadratic cost function) rather than to reflect the pragmatics of everyday situations. Also relevant is Damasio's work on emotions -- perhaps the emotional response is critical for comparing values that are not easily quantified.
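
      To illustrate with a minimal sketch (in Python, with invented payoffs): in classic signal detection theory the criterion that maximizes expected utility falls directly out of the payoff matrix, so the "optimal" observer is undefined until someone supplies the values:

          def optimal_beta(p_signal, u_hit, u_miss, u_correct_reject, u_false_alarm):
              # Likelihood-ratio criterion that maximizes expected utility:
              # respond "signal" when p(x|signal)/p(x|noise) >= beta.
              p_noise = 1 - p_signal
              return (p_noise * (u_correct_reject - u_false_alarm)) / (
                  p_signal * (u_hit - u_miss))

          # Neutral payoffs (every outcome worth +/-1): a conservative criterion.
          print(optimal_beta(0.1, 1, -1, 1, -1))   # beta = 9.0

          # Make misses very costly (e.g., a missed tumor): the criterion drops.
          print(optimal_beta(0.1, 1, -20, 1, -1))  # beta ~ 0.86 - far more liberal

      Change the invented utilities and the "best" decision rule changes with them; the mathematics cannot supply the payoff matrix itself.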

