Coming to Terms: “Levels of Autonomy” is Unsound
21 Dec 2020


By Daniel I. Newman

Vertiflite, January/February 2021

This article is the second in a new series on terms used in our expanding vertical flight community, addressing the uses of terminology that threaten to become routine expressions or idioms — or already are — but are misleading or erroneous. Let us know what you think at editor@vtol.org.

Autonomy, among the hottest topics in aviation today, lacks a clear and agreed-upon definition, though a consensus is emerging.

Multiple aviation “level of autonomy” scales have been put forward, such as Autonomy Levels for Unmanned Systems (ALFUS) from the US government’s National Institute of Standards and Technology (NIST). But the adjective “autonomous” is binary: either an actor (person or system) performs a task autonomously, or it performs that task under supervision. There is no gray area or partial state for that task. According to Merriam-Webster, autonomy is “the quality or state of being self-governing,” and it exists only when authority is delegated by the user. It is the highest form of automation, with the human user out of the loop because the user trusts the system and accepts liability. ALFUS refers to this as “human independence.”

An eminent colleague coined the term “supervised autonomy” as a diplomatic and emotional crutch, a way to mollify concerned stakeholders’ fears during development. It can be useful, but it is an oxymoron.

So, autonomy is not an attribute of the device; it is an attribute of the device in operation. A system can be designed and trained with extraordinary capability to perform tasks independently, but it cannot operate autonomously unless it does so free of oversight, riding without the training wheels or a watchful parent. Automation with the ability to function becomes autonomy only by the choice of the person who holds liability and trusts that the system is capable. Autonomy is the system, the circumstances and the trust.

In a Nov. 19 BreakingDefense.com article, “Let Your Robots Off The Leash — Or Lose: AI Experts,” author Sydney Freedberg recognizes this distinction of user prerogative with the statement, “the Army is putting real soldiers in charge of simulated robots that can operate much more autonomously — if the humans let them.”

Autonomous operation is a choice by the user — by that user at that time. The same user on a different day or a different user with different concerns who does not trust that device in that use will oversee it (verify) until they trust it. A designer can plan and hope for autonomous use of their device, but autonomous use requires the real-time agreement of the user.

Some argue that task complexity sets the distinction between automation and autonomy, and that only the most complex systems are autonomous. However, even the least complex systems can be autonomous. The claim that a sufficiently simple task (e.g. a modern elevator) is merely “automation” is a slippery slope: the complexity threshold is arbitrary, with no final authority to set it. The user-oversight discriminator between autonomy and automation eliminates task complexity as a criterion, and so is at once objective and elegant.

There is a critical distinction between observation and supervision: the latter includes the capability and willingness to intervene. In some cases there is no possibility of supervising a task, such as unmanned systems on Mars, where latency precludes high-bandwidth control. And in some cases no capability to control is provided. In these cases, autonomous operation is required. But where supervision is an option (e.g. a real-time stop button), it is a real-time, context-based, tactical situational choice by the user, regardless of the designers’ plans or prowess. Observing with the capability and willingness to act, but not acting, is not autonomy. If you watch and cannot act, it is observation, and so an autonomous operation. If you watch and can act (and will act), then it is supervision, and so not autonomous operation. If you can act and do not watch, it is autonomous operation.
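The observation/supervision/autonomy distinction above reduces to a small decision rule. The sketch below is illustrative only; the function and parameter names are my own, not the author's, and simply encode the three cases described in the text.

```python
def classify_operation(watching: bool, can_act: bool, will_act: bool) -> str:
    """Classify an operation by the user-oversight discriminator.

    Illustrative sketch of the article's rule; names are hypothetical.
    """
    if not watching:
        # The user can walk away: the system operates autonomously.
        return "autonomous"
    if not can_act:
        # Watching without any means to intervene is mere observation,
        # so the operation itself is still autonomous.
        return "observation (autonomous)"
    if will_act:
        # Watching, able to act, and willing to act: true supervision.
        return "supervision (not autonomous)"
    # Watching and able to act, but unwilling: still not supervision.
    return "observation (autonomous)"


# Example: a stop button exists and the operator is poised to use it.
print(classify_operation(watching=True, can_act=True, will_act=True))
```

Note that the classification depends only on the user's oversight, not on anything about the system's internal sophistication, which is exactly the article's point.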

Ronald Reagan’s “trust but verify” is invalid, as trust means there is no need to verify. The two cannot exist in parallel; they occur in series: trust until there is a reason not to, then supervise and verify to re-establish trust. As any manager knows, trust is the necessary condition for delegating any responsibility, whether authorizing humans or machines. There are many issues associated with trust, including familiarity and predictability (not the same as explainability), but this discussion addresses only the need for trust, not the means to achieve it. As with “supervised autonomy,” the phrase “trust but verify” may be a rallying cry, or a convenient label for incomplete achievement of the desired state, but it is unsound.

The assessment of trust must be done at the discrete task level. Fully autonomous unmanned aircraft system (UAS) operation is possible for tasks with which the system is trusted, such as performing a simple transit with autonomous takeoff, navigation, sense-and-avoid and landing, all under benign conditions. But the exact same UAS will not be trusted for complex tasks (or in inclement conditions), so supervision will be required — but only during those specific tasks, so the rest of the mission incurs no operator workload. This has been referred to as “partial autonomy,” but to be clear, by this definition no task is “partially” autonomous; the mission is a mix of tasks, some supervised and some not (i.e. autonomous). This will be a key to 1vN control, with one operator responsible for many UAS.
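The per-task view of trust described above can be sketched in a few lines: a mission is a list of tasks, each flagged by whether this user trusts this system for that task right now, and the operator's workload comes only from the untrusted (supervised) subset. The task names and data shapes below are hypothetical, chosen to mirror the article's transit example.

```python
from dataclasses import dataclass


@dataclass
class Task:
    """One discrete mission task; `trusted` is this user's call, for this
    system, in these conditions (hypothetical illustration)."""
    name: str
    trusted: bool


def partition_mission(mission: list[Task]) -> tuple[list[str], list[str]]:
    """Split a mission into autonomous (trusted) and supervised tasks."""
    autonomous = [t.name for t in mission if t.trusted]
    supervised = [t.name for t in mission if not t.trusted]
    return autonomous, supervised


# A benign transit with one task the operator does not yet trust.
mission = [
    Task("takeoff", True),
    Task("navigate", True),
    Task("sense_and_avoid", True),
    Task("confined_area_landing", False),  # supervised: workload here only
]
autonomous, supervised = partition_mission(mission)
```

Under 1vN control, an operator's capacity is bounded by the supervised lists across all vehicles, which is why shrinking that list per aircraft is what scales one operator to many UAS.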

Alternatively, instead of assembling a fully autonomous UAS mission bottom-up from trusted tasks, one can start with a complex, high-workload crewed mission and reduce workload with automation (for example, the Rotorcraft Pilot’s Associate, or RPA, program); once those tasks are sufficiently trusted, they would be performed “autonomously.”

The only difference between automation and autonomy is the level of supervision exercised by an external entity. And that entity may be another machine. It is irrelevant what the system does or does not do. The system does what it is told to do. It is only the lack of supervision that defines something as autonomous. So, the developer creates automation, and the user grants autonomy.

The term of use must be “Levels of Automation,” where the highest level of automation is autonomy. Autonomous is a binary. No gray area. Simple. The hard part is the trust.


Terry D. Welander

The long-standing rules for piloted aircraft do not want to be changed, adjusted or messed with, if you talk with anyone familiar with piloted aircraft or piloting rules. That leaves drones, and the government has written extensive rules for drones. So autonomy does not exist within the existing rule structure. All rules must be followed or consequences ensue. And you are correct: levels of autonomy do not exist unless you consider the many facets of existing flight rules, which is not the way most people view flight rules. Flight rules are viewed for any specific flight and generally not generically. So without talking about flight rules specifically alongside autonomy, you are spinning your wheels or creating a distraction from flight. The main subject almost always comes first: flight. Add on what you want, but it must be included with flight.
