Regulators and accident investigators are not yet ready for autonomous UTM operations

By David Gleave

Global aviation regulators and accident investigators are not yet ready for the deployment of autonomous UAS traffic management (UTM) systems, especially in terms of understanding how to develop and update regulations to take account of the machine-learning processes active in different parts of the drone ecosystem.

The introduction of UTM, and its merger with the current CNS/ATM system, presents a series of challenges associated with the new technology and operations.

The series of international standards, recommended practices and guidelines published by the International Civil Aviation Organization (ICAO) and standards bodies has evolved over the years. The format is based on “you shall do this” (standard) and “you should do this” (recommended practice). The reasoning behind why a particular regulation was written is not usually given, which is almost the definition of a nightmare for the introduction of a disruptive technology. A prospective operator is left asking:

  • Does compliance with the complete set of regulations address all the hazards that my intended operation faces?
  • If all of the hazards are addressed, then are the assumptions behind the regulations valid for my intended operation?
  • What was the target level of safety at the time the regulations were drawn up and is that still valid for my intended operation?
  • Which entity owns which part of the accident sequence and how do the different organizations interact?

Whichever standard is chosen as the basis for regulation, the UTM infrastructure provider will have to verify that the standard is appropriate for what they intend to do. Only then can a “safety-by-compliance” argument have any validity.

The regulator will need to be convinced, prior to the introduction of a UTM system, that the implementation of machine learning will lead to an acceptable level of safety. Traditional control and feedback loops are well understood: their behaviour can be demonstrated, as can what happens when any part of the loop fails. But how can every operational, environmental and meteorological problem that could be encountered be learnt by the machine, and a successful decision-algorithm outcome guaranteed, prior to implementation?
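The contrast can be made concrete. In a conventional feedback loop, every failure mode can be enumerated and the response to each demonstrated to a regulator. A minimal sketch, with invented names, gains and limits:

```python
from typing import Optional

# A conventional, fully inspectable feedback loop: the sensor-failure
# branch and the actuator saturation bound its behaviour by design.
ALT_TARGET_M = 60.0     # illustrative altitude-hold target
CLIMB_LIMIT_MS = 2.0    # illustrative actuator limit
GAIN = 0.1              # illustrative proportional gain

def climb_command(measured_alt_m: Optional[float]) -> float:
    """Proportional altitude hold with an explicit sensor-failure case."""
    if measured_alt_m is None:   # sensor failure: a defined, testable response
        return 0.0               # hold level and alert the supervisor
    error = ALT_TARGET_M - measured_alt_m
    # Saturate to the actuator limit so the output is always bounded.
    return max(-CLIMB_LIMIT_MS, min(CLIMB_LIMIT_MS, GAIN * error))
```

No equivalent enumeration exists for a learned policy, which is precisely the regulator's difficulty.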

Can a qualitative argument be made that even an imperfect machine-learning system will actually be safer than a conventional human-hardware-software system? Every safety engineer knows that the conventional motor car would never be approved if it were designed from scratch today: leaving a human with a steering task far better suited to a machine would never be accepted. The same argument can be made for ATM: a machine is better placed than an air traffic controller to monitor for flightpath deviations from a known route. The weakness of this aviation example is that the monitoring task is really handled by a standard set of deterministic algorithms, rather than by machine learning.
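To illustrate why such monitoring needs no machine learning at all, a route-deviation monitor reduces to geometry plus a threshold. The sketch below is illustrative only; the local frame, tolerance and names are invented:

```python
from dataclasses import dataclass
import math

@dataclass
class Fix:
    x: float  # east, metres, in a local flat-earth frame (illustrative)
    y: float  # north, metres

def cross_track_m(p: Fix, a: Fix, b: Fix) -> float:
    """Perpendicular distance from position p to the route leg a -> b."""
    dx, dy = b.x - a.x, b.y - a.y
    leg = math.hypot(dx, dy)
    if leg == 0.0:
        return math.hypot(p.x - a.x, p.y - a.y)
    # Magnitude of the 2-D cross product divided by the leg length.
    return abs((p.x - a.x) * dy - (p.y - a.y) * dx) / leg

DEVIATION_LIMIT_M = 50.0  # invented tolerance, not a real standard

def deviating(p: Fix, a: Fix, b: Fix) -> bool:
    """True when the reported position breaches the route tolerance."""
    return cross_track_m(p, a, b) > DEVIATION_LIMIT_M

# e.g. deviating(Fix(0, 80), Fix(0, 0), Fix(100, 0)) -> True
```

Because every line is inspectable, its false-alarm and missed-detection behaviour can be analysed in advance, which is exactly what a learned policy makes difficult.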

How will the system decide between two alternative fatal options? Will it select the option that kills one pedestrian in a city centre five times more often than the option that causes a multiple-vehicle accident killing five occupants? ATM regulators generally think about accident prevention rather than consequence management. How will they be trained to think as risk managers rather than accident preventers?

The safety management system of the UTM provider should have a whole section dedicated to proactive learning from the monitoring of flights during testing, commissioning and routine operation. This will require the downloading and analysis of significant amounts of data at the end of each flight. How and when will this data be downloaded? Will an intermediate storage solution, such as a memory card, hold the data from several flights, or will the data be broadcast via datalink during flight or at the end of it?

Which organization will be responsible for the data analysis: the supplier of the machine-learning algorithms or the operators of the UTM system? What will the data analysis criteria be? How will staff be trained for this role? How will any necessary changes that are identified be developed and implemented within the system? When can changes be made to a system that operates 24 hours a day? What reversionary procedures will be in use at the time to assure continued operational safety?
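By analogy with conventional flight data monitoring (FDM/FOQA) programmes, one plausible form for such analysis criteria is a set of exceedance rules scanned over each downloaded flight record. All parameter names and limits below are invented for illustration:

```python
# Hypothetical post-flight exceedance scan in the style of conventional
# flight data monitoring; every parameter and limit here is invented.
FLIGHT_RECORD = [
    # (time_s, altitude_m, battery_pct, separation_m)
    (0.0, 30.0, 98.0, 400.0),
    (1.0, 32.0, 97.5, 120.0),
    (2.0, 31.0, 97.0, 45.0),
]

RULES = {
    "low_battery": lambda s: s[2] < 20.0,             # illustrative limit
    "separation_infringement": lambda s: s[3] < 50.0,  # illustrative limit
}

def scan(record):
    """Return (time, rule_name) for every sample that breaches a rule."""
    return [(s[0], name)
            for s in record
            for name, breached in RULES.items()
            if breached(s)]

print(scan(FLIGHT_RECORD))  # [(2.0, 'separation_infringement')]
```

The open questions in the text remain: who owns the rules, who reviews the output, and how a breach feeds back into the learning system.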

The International Society of Air Safety Investigators’ guidelines for UAS accident/incident investigation have started to address the issues surrounding a UAS crash. However, they contain nothing on investigating machine-learning algorithms, whether on board the flying machine or in the ground UTM system. How is the industry going to educate accident investigators about this element of the system? What data must be recorded for investigation, both on the ground and on board the flying machine? Will the required data survive an impact and fire? How will the decision-making process, on board and on the ground, be recorded and replayed in a meaningful manner? Will this be the organizational inertia that delays the whole UTM system, given that the current CNS/ATM investigation guidelines do not mention UTM at all?
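One way to make algorithmic decisions replayable for investigators is to log each decision with its complete input set and the exact software version, so the same build can be re-run on the recorded inputs. A minimal sketch; the schema and field names are assumptions, not any published investigation standard:

```python
import hashlib
import json
import time

def log_decision(inputs: dict, action: str, model_version: str, sink) -> None:
    """Append one replayable decision record to an investigation log.

    Hypothetical schema only: a real system would also need
    crash-survivable storage and a synchronised time source.
    """
    record = {
        "t_utc": time.time(),            # when the decision was taken
        "model_version": model_version,  # the exact software build used
        "inputs": inputs,                # everything the algorithm saw
        "action": action,                # what it decided to do
    }
    payload = json.dumps(record, sort_keys=True)
    record["integrity"] = hashlib.sha256(payload.encode()).hexdigest()
    sink.write(json.dumps(record, sort_keys=True) + "\n")

# Replay means feeding record["inputs"] back into the same model_version
# and checking that the reproduced action matches the logged one.
```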

There have been mistakes made within the aviation system, including ATM, whose lessons are applicable to UTM. How will a new UTM provider demonstrate, before operations commence, that it has identified those accidents, incidents and occurrences and taken account of their lessons within its system? Is there an industry body set up to learn from events within the UTM community and spread that knowledge around the globe so that they are not repeated? Will feedback reach the software house that wrote the program, or will it stop at the system integrator, or never reach a competing UTM provider? Will corporate lawyers suddenly stop all safety-event communication for fear of civil and/or criminal prosecution? How will this reaction vary between countries?

The retention of corporate knowledge (ROCK) will become an important part of the safety argument. The machine-learning algorithms will have to be very well documented and a pool of human knowledge maintained. This can be difficult in a work environment that relies on contractor labour used by a software house that is itself not a key element of the system integrator’s business.

One of the elements that has plagued hazard identification is the attempt to predict events that are not logical. For example, how will the UTM system cope with jamming of GPS signals around city centres that is deliberate but not aimed at aviation? Short-range GPS blocking devices are sold to let drivers defeat the tracking of vans by the vehicles’ owners: the signal blocking is deliberate, but its interference with UTM is unintentional. This is a known issue, yet the behaviour is irrational as far as predictive hazard identification techniques are concerned and may not be picked up.
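Detection, at least, can be deterministic. One commonly discussed heuristic is to treat a simultaneous, area-wide drop in GNSS carrier-to-noise density as possible jamming rather than a run of individual receiver faults. The thresholds and structure below are illustrative assumptions:

```python
from statistics import mean

NOMINAL_CN0_DBHZ = 45.0   # typical open-sky carrier-to-noise; illustrative
JAMMING_DROP_DB = 10.0    # illustrative alert threshold

def possible_jamming(cn0_by_vehicle: dict) -> bool:
    """Flag possible area jamming when most vehicles in one airspace
    cell report depressed carrier-to-noise at the same time."""
    depressed = [v for v, samples in cn0_by_vehicle.items()
                 if mean(samples) < NOMINAL_CN0_DBHZ - JAMMING_DROP_DB]
    # One weak receiver is probably a vehicle fault; a majority of
    # vehicles degrading together suggests an external source.
    return len(depressed) > len(cn0_by_vehicle) / 2
```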

Far more difficult is the question of how a machine-learning system will adapt in a predictable way to the corner of the Johari window made famous by Donald Rumsfeld: “there are also unknown unknowns—the ones we don’t know we don’t know.” Whilst considerable work has been carried out to make this corner of the window as small as possible, it is still new ground for safety regulators to examine and approve.

The flying machines have limited capability to divert to alternative landing sites when planning to land in dense city areas, so any unintended effect that stops a high-capacity vertiport from functioning will be high on the priority list for consideration. Will the system be able to demonstrate adequate diversion capability, and how will contingency procedures that rely on a learning algorithm be demonstrated?
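One auditable form of such a demonstration is a deterministic pre-departure gate: verify that an open alternate remains reachable on remaining endurance at every point of the planned route. A sketch under invented performance figures:

```python
import math

CRUISE_SPEED_MS = 20.0   # invented performance figure
RESERVE_S = 120.0        # invented final-reserve endurance

def open_alternates(pos, alternates, endurance_s):
    """Alternates still reachable from pos within endurance minus reserve."""
    budget_m = max(0.0, (endurance_s - RESERVE_S) * CRUISE_SPEED_MS)
    return [a for a in alternates
            if a["open"] and math.dist(pos, a["xy"]) <= budget_m]

def plan_acceptable(route_points, endurance_profile_s, alternates):
    """Refuse a flight plan if any route point has no open alternate in range."""
    return all(open_alternates(p, alternates, e)
               for p, e in zip(route_points, endurance_profile_s))
```

A check like this can be argued conventionally; the harder question in the text is how to make the same argument when the diversion decision itself is learned.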

There will always be calls for human supervision of the system, with a human able to step in and recover it in the event of a machine’s cognitive mistake. This poses three separate challenges: how to monitor a machine-learning system; how to take over a system that rarely requires intervention; and how to manage the transition during periods of human intervention.

A machine-learning system may appear to have all elements under control, but how transparent will its decision making be? Will there be a display showing that all hazards have been considered and resolved for each individual flight and for each combination of flight activity? Once the system has decided on a course of action, how will feedback be presented to show that the flying machine has acknowledged the transmitted message, can comply, and will comply with the instructions? What happens when the machine gets into a novel situation it cannot resolve? How will this be tested, and how will control then be handed over?
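The acknowledge/can-comply/will-comply question has a precedent in CPDLC message handling in manned aviation, where each uplinked instruction carries an explicit response state. A sketch of such a state machine; the states and transitions here are assumptions for illustration, not a published UTM protocol:

```python
from enum import Enum, auto

class UplinkState(Enum):
    SENT = auto()        # instruction transmitted to the vehicle
    RECEIVED = auto()    # technical acknowledgement received
    WILCO = auto()       # vehicle reports it can and will comply
    UNABLE = auto()      # vehicle reports it cannot comply
    TIMED_OUT = auto()   # no response within the allowed window

# Every transition the protocol allows; anything else is rejected,
# so the supervisor display can always show a defined state.
LEGAL = {
    UplinkState.SENT: {UplinkState.RECEIVED, UplinkState.TIMED_OUT},
    UplinkState.RECEIVED: {UplinkState.WILCO, UplinkState.UNABLE,
                           UplinkState.TIMED_OUT},
}

def advance(state: UplinkState, event: UplinkState) -> UplinkState:
    """Reject any transition the protocol does not define."""
    if event not in LEGAL.get(state, set()):
        raise ValueError(f"illegal transition {state.name} -> {event.name}")
    return event
```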

Current flight operations have already shown the problems that arise when highly reliable automatic systems suddenly fail and out-of-practice pilots have to take over. The problems involve both the startle reaction and the difficulty of acquiring adequate knowledge of the aircraft’s state quickly enough to take over effectively. How will supervisors be trained to maintain the competency to take over control?

One of the reasons for taking humans out of the control loop is the anticipated scale of UTM deployment and demand for flight operations. By definition, therefore, introducing humans back into the loop in the event of a failure will be an instant overload condition. Adequate back-up contingency procedures and recovery plans will be needed to address an issue created by the very benefits of machine learning in the first place.

David Gleave is the safety editor for Unmanned Airspace and an independent aviation accident investigator. He is an experienced ATM and UTM safety consultant, having worked in the prediction and analysis of ATM and UTM risk exposure for thirty years. He has been part of the investigation teams for aircraft accidents including mid-air collisions, collisions on the ground, controlled flight into terrain, windshear-induced loss of control and runway excursions.

