Detect and avoid: how close are we to replacing the pilot as the last line of defence?

An interview with Jon Damush, CEO of Iris Automation, which is developing a range of machine-learning-based, optical detect and avoid systems for commercial drone operators

What are you doing to help accelerate beyond visual line of sight (BVLOS) operations?

I visualise the current regulatory environment as a three-circle Venn diagram where each circle, and the intersection of those circles, represents your ability and approval to go fly. One circle is the aircraft’s certification and all the rules, and parts of rules, that apply to it. The second circle is the operational approvals that govern what it is you want to go do. Do you want to carry passengers? Do you want to carry cargo? The third circle is the certification of the pilot. The advent of unpiloted systems has meant that this third circle has disappeared. And all the regs that intertwine very intimately with the other two circles – those linkages are now broken.

This is the struggle the unmanned community has been facing: how do you re-engineer a system of checks and balances that was built around this tripod when one of the legs has gone? If you look at how piloted aviation works today, the avoidance of collisions is not down to any one component; it’s layers of components that add up to mitigating the risk of collision. And the pilot is the last line of defence. The unmanned community has to ask – how do you replace the pilot?

No one technology is going to be able to satisfy all requirements. So it’s going to continue to be layers of technology.

But there are limitations with current detect-and-avoid systems. For example, many don’t work well when you’re flying over the sea because of the moving background.

It’s our responsibility to be clear about the capabilities of the system. Unless you do that, the community will not know how to put mitigations around the parts that work and the parts that don’t.

We are part of a layered mitigation approach in conjunction with other technologies, perhaps other sensing modalities, flight planning, UTM and so on.

But we have real value to offer because a vision system can be real-time, very lightweight, very low power and can be connected directly to the autopilot – which means you don’t need to have the operator in the loop as the ‘last line of defence’. We are not saying we will satisfy all detect and avoid requirements. No. We need to use ADS-B, other transponders and cooperative mechanisms, as many as possible. But put us on board in case those things fail or somebody is not playing by the rules.

What are the next steps to move from trials to fully commercial operations?  

There has to be collaboration with end users, and, even more so, with regulators. They have to be comfortable with understanding what the system does really well and what it can’t do. They have to see how that puzzle-piece fits into the overall safety case to be able to provide operational approvals for commercial missions.

Programmes like BEYOND (https://www.unmannedairspace.info/uncategorized/faa-launches-beyond-research-programme-to-further-nas-uas-integration/) are really useful. The Integration Pilot Program (IPP) (https://www.unmannedairspace.info/uncategorized/faa-concludes-utm-ipp-second-phase-with-virtual-demonstrations-of-bvlos-operations/) has maybe had its fits and starts, but it has provided an environment where industry, local government, federal government and the regulator can collaborate, see what’s going on and understand that flying a quadcopter over a building on a campus isn’t that risky.

For a small industry player like us, these programmes have been great because they have given us access to a regulator who would otherwise be very difficult to reach. You need to share everything: the ugliness, the greatness, the warts, the whole nine yards.

It’s exciting to be one of the main members in the BEYOND Reno operation. We’re also a secondary member for other programmes – as a technology provider with our Casia system. But now we are looking forward to exploring additional use cases and to showing the regulator that this is the kind of capability they can trust.

Where are we on the curve of going from theoretical research to implementing commercial operations? Halfway?

I would say halfway is probably right. We do use computer vision and much like the human eye, it’s not great in all environments. That’s the plain and simple truth. We see things better in the day than we do at night and there are certain operational environments that are more challenging than others.

But this team at Iris has done something really novel in that we’re acting as our own inertial measurement unit and we’re using feature tracking from the entire environment to get a very accurate idea of our attitude in space. And based on that, we’re able to see the outliers in that mask, and those outliers are things that are moving in the scene.

The second step, which is also novel, is to apply machine learning which allows us to classify the movers, throw out the ones that don’t matter and focus on the ones that do. With that classification, we’re able to actually get a gross size estimation of the intruder and that, when coupled with the geometry of the optics that we have, allows us to get a range estimation from a single camera. That’s pretty cool.
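
To make that single-camera range estimate concrete, here is a minimal sketch of the pinhole-camera geometry involved. It is not Iris Automation’s implementation: the function name, the assumed wingspan and the camera parameters are illustrative assumptions only.

```python
import math

def estimate_range_m(assumed_span_m: float, target_width_px: float,
                     image_width_px: int, hfov_deg: float) -> float:
    """Rough monocular range estimate from similar triangles.

    assumed_span_m  -- physical span assumed for the classified target
                       (roughly 11 m for a Cessna-class aircraft)
    target_width_px -- apparent width of the detection in the image
    image_width_px  -- sensor width in pixels
    hfov_deg        -- horizontal field of view of the lens in degrees
    """
    # Focal length expressed in pixels, derived from the field of view.
    focal_px = image_width_px / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    # Similar triangles: range = real size x focal length / apparent size.
    return assumed_span_m * focal_px / target_width_px

# Illustrative numbers: an ~11 m span covering 20 px on a 4096-px-wide
# sensor behind a 60-degree lens works out to roughly 1.9 km.
print(round(estimate_range_m(11.0, 20.0, 4096, 60.0)))
```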

That’s why the system is lightweight and low power. It works really well today in daylight VFR conditions and when most of the threat is above our aircraft – which, fortunately for drone operations, is true pretty much 90% of the time. So if you are flying low down, doing inspections or carrying something, the real threat from a piloted aircraft comes from above. That was the basis of our waiver approvals in Alaska, Kansas and Canada.

When we’re looking down on targets, that is a harder problem because you have ground clutter. But again, that’s 100% analogous to the human pilot. It’s one of the reasons why airliners and other aircraft are painted differently on the top and the bottom, to help with visual acuity and to be able to see things in a cluttered environment.

With a sky or a cloud background, we get our furthest range detection, out to almost a mile with a Cessna-sized target. Below the horizon we get about half of that. We spend a lot of time in R&D trying to improve that metric, a lot of time researching the world’s availability of sensors and cameras to be able to optimise the optical part of this path. But that’s where we are and that’s why I’d say we are about 50% there.

I guess birds, in flocks or as singles, are always going to be something which you pick up because of the motion, rather than the size. It’s the motion you’re looking for?

It’s a little of both.  If we were sitting at the end of a football pitch on a foggy morning and saw a movement at the other end of the pitch we’d be able to immediately determine whether it was a person, an animal or something else. If it were a person, we could probably tell by the gait whether it was a male or a female and even subclassify them into tall, short, young, old, heavy, thin. This is done in a fraction of a second. The eye-brain combination is miraculous.

We’re a long way from that in the computer vision and the smart camera space.

How do you build a system which identifies a moving object, classifies it, and then raises an alarm or commands the autonomous drone to take the correct avoiding action – all in just a few seconds?

You break it into three parts: detect, alarm, avoid. In a larger unpiloted system, such as a General Atomics Predator or Reaper, there is a human in the loop, albeit remote. You have a radar system on board that detects an intruder; that signal is then sent to the operator; it gets translated into something the operator can understand and then the operator makes a control input and that signal goes back to the aircraft, which then manipulates the controls to move the platform out of the way.

Smaller autonomous aircraft operate through waypoint programming, so you don’t have a pilot in the loop. And even if you did, the time lag would be too great for the size of aircraft and the kinds of collision scenarios you’re trying to avoid.

So the detect-and-avoid system has to communicate directly with the autopilot and that’s what we do. We integrate our Casia system directly into the autopilot and consult with our customers and partners around how they want to effect the alerting and the avoiding part of the equation.

We tell our customers: “We’re going to give you a target type, a bearing and range but then you do the maths to figure out where that is in real-world space and decide what you want to do about it.” How they want to manage that is largely driven by the configuration and manoeuvrability of the aircraft because that determines the amount of time they will need to be able to effect an avoidance manoeuvre. So we talk directly to the autopilot and then the autopilot takes over.
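
As a rough illustration of that “do the maths” step, the sketch below projects a bearing-and-range report into an approximate position and applies a trivial alerting rule. The field names, the flat-earth approximation and the 1,200 m alert threshold are assumptions for illustration, not Casia’s interface or logic.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def intruder_position(own_lat_deg: float, own_lon_deg: float,
                      own_heading_deg: float,
                      rel_bearing_deg: float, range_m: float):
    """Project a relative bearing and range from the ownship into an
    approximate lat/lon using a small-distance flat-earth model."""
    bearing = math.radians((own_heading_deg + rel_bearing_deg) % 360.0)
    d_north = range_m * math.cos(bearing)
    d_east = range_m * math.sin(bearing)
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(own_lat_deg))))
    return own_lat_deg + d_lat, own_lon_deg + d_lon

def needs_avoidance(range_m: float, closing: bool,
                    alert_range_m: float = 1200.0) -> bool:
    """Trivial rule: manoeuvre if a closing target is inside the alert range."""
    return closing and range_m < alert_range_m
```

How aggressive such a threshold should be would, as described above, be driven by the configuration and manoeuvrability of the aircraft.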

How far are we in determining the industry-wide standards for this so everybody knows the kit is going to meet the operational performance required by the regulator?

There’s a lot of work going on to find the right balance between maintaining separation standards and ensuring there are no collisions. Everybody is trying to figure out where the right point is between these two needs. In the USA, ASTM is doing a lot of really good work around defining what “Well-Clear” means, what “near mid-air collision” means, what “mid-air collision” means and the requirements for an operator to satisfy those conditions.

If the FAA adopted that standard today as rule, however, nobody would be able to fly unmanned aircraft unless they were solely within a Class Bravo environment where everybody is cooperating. And there’s an asterisk there, too, because the Department of Homeland Security has said you can’t use ADS-B alone as a mitigation factor because it can be spoofed.

Enter another reason for vision. Vision could provide the check and balance for ADS-B. But that’s the only way you could satisfy what’s currently being drafted as the standard today. The regulator wants to drive to rulemaking and is going to provide additional approvals, but those are for specific use cases. The regulator’s role is to evaluate and approve so it needs third parties to arrive with ideas which can be evaluated.
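
To picture how vision could act as the check and balance for ADS-B, here is a minimal sketch under assumed inputs: predict the bearing an ADS-B position report implies and see whether a visual track actually appears there. The function names and the five-degree tolerance are hypothetical, not part of any real system.

```python
import math

def expected_bearing_deg(own_lat: float, own_lon: float,
                         tgt_lat: float, tgt_lon: float) -> float:
    """Initial great-circle bearing from the ownship to an ADS-B-reported position."""
    lat1, lat2 = math.radians(own_lat), math.radians(tgt_lat)
    d_lon = math.radians(tgt_lon - own_lon)
    y = math.sin(d_lon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2) -
         math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
    return math.degrees(math.atan2(y, x)) % 360.0

def adsb_corroborated(adsb_bearing_deg: float, vision_bearing_deg: float,
                      tolerance_deg: float = 5.0) -> bool:
    """Treat an ADS-B target as corroborated only if a visual detection lies
    within tolerance of the bearing the report predicts."""
    diff = abs(adsb_bearing_deg - vision_bearing_deg) % 360.0
    return min(diff, 360.0 - diff) <= tolerance_deg
```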

I think the ASTM community has done most of the heavy lifting around this very problem, and I think it’s doing an admirable job. But it has taken a very conservative approach and it doesn’t have the benefit of the specific use case in front of it. The standards community will say “this is what perfect looks like” and the regulator will say “but the real world is not perfect – let’s see what this applicant wants to go and do, and if it is actually low risk we can allow them to do it.”

 

How mature is your system? If someone has permission to fly commercial BVLOS autonomous operations can you provide them with equipment to support their missions?

Yes. I would feel comfortable saying: “If you’re doing a low-level pipeline-inspection operation and all the threats are coming from above, yes, we’re doing that today.” We can integrate with a variety of different aircraft and we have manufacturing partners doing just this. That was the basis of the waivers that we got last year.

That said, this hasn’t been codified into a rule yet, so each application will have to be put together with a separate safety case and put in front of the regulator. But I feel confident that our system is going to provide a layer of safety to operators in those use cases that, frankly, I would not fly without. Why wouldn’t you have another set of eyes on board to help you avoid a collision?

The acronym BVLOS implies you want to fly further than you can see. But the truth is, that’s not really the key to unlocking economic value. The economic value comes from inverting the human-to-airplane ratio. In the current manned aircraft world it’s one-to-one. In the current unmanned aircraft world it’s multiple-people-to-one-airplane. So it’s upside down. What BVLOS really buys you is the ability to flip that ratio and get to the one-operator-to-multiple-aircraft because now you don’t have to watch your airplane. You want one person sitting in a central command centre monitoring 12 drones – not watching the drones, just getting alerts if something happens and they have to jump into the loop and manage.

Some drone operators say they will be launching commercial, permanent BVLOS operations in eighteen months, with detect and avoid technologies on board. Is this realistic?

Yes, though there are a lot of things packed in there. When I hear words like ‘commercial’ and ‘permanent’, that implies the regulator is comfortable with the generic safety case on a recurrent basis. That might be optimistic. From a technology perspective, depending on the size of aircraft, I think it’s reasonable. The biggest constraint for a computer vision system today is optics – optics are glass and glass is heavy.

For a drone that you can hold in your hand, we’re not going to be able to produce something that gives you a mile of detection because the physics doesn’t allow it. But for us to be a key part of the safety layers that go into detect and avoid for larger UAS within 18 months – yes, absolutely.
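
The physics constraint can be illustrated by inverting the earlier range sketch: count how many pixels a Cessna-sized target would cover at a mile for the kind of small, wide-angle camera a hand-held drone might carry. The numbers are assumptions chosen only to show the order of magnitude.

```python
import math

def pixels_on_target(span_m: float, range_m: float,
                     image_width_px: int, hfov_deg: float) -> float:
    """Pixels subtended by a target of the given span at the given range,
    under a simple pinhole-camera model."""
    focal_px = image_width_px / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    return span_m * focal_px / range_m

# An ~11 m target at one mile (~1,609 m), seen by a 1280-px-wide sensor
# behind a 90-degree lens, covers only about 4 pixels.
print(pixels_on_target(11.0, 1609.0, 1280, 90.0))
```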

Our investment is going to buttress our weaknesses: doing more work to improve our range below the horizon and to expand the operational environments in which our system works. You mentioned over water – that’s a great example. A computer vision algorithm has a hard time there detecting features that can be tracked, because water moves differently and at some points you see through it, so you might be tracking the sea-bed or a wave top. That’s the challenge. Fortunately, most of the real, tangible revenue opportunities for commercial operators today are over land.

How do you relate to a UTM system?

Our strategic approach is to not be dependent on any other system but be that last line of defence for the aircraft manufacturer or operator – like a TCAS or ADS-B system. We want to be compatible with all UTM system operators and we have data that will be valuable to the UTM community. If we can identify a non-cooperative target which is not squawking and give a bearing and range from the host aircraft we can share that with everybody. So, I’d see it more like a collaboration, not a dependency.
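
As an idea of what sharing a non-cooperative track with the UTM community might look like, here is a hypothetical message structure. The field names and format are assumptions for illustration, not an actual Casia or UTM interface.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class NonCooperativeTrack:
    """Hypothetical report a vision-based detect-and-avoid sensor could
    publish to a UTM service for a target that is not squawking."""
    detected_at: float    # Unix timestamp of the detection
    host_lat_deg: float   # ownship position at detection time
    host_lon_deg: float
    bearing_deg: float    # bearing from the host aircraft to the target
    range_m: float        # estimated range to the target
    target_class: str     # e.g. "fixed-wing", "rotorcraft", "bird"

track = NonCooperativeTrack(time.time(), 39.50, -119.80, 215.0, 900.0, "fixed-wing")
print(json.dumps(asdict(track)))
```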

How will things change in the next two years?

As with any technology, our systems will be smaller, lighter and less power hungry. On the software side, what you can expect to see is an expanded set of classifiers from our machine learning database. Every time we fly, we learn more, and every time we get more data, we can train the models further. That’s the beauty of machine learning: it constantly improves.

You will also see more customisation around the configuration of our system. Today we’re still working to prove it with the operators and regulators, but as we learn more from our manufacturing partners and operating customers, we will be able to develop specific configurations to suit their operations. Collaborating with manufacturers early is critical because they’re going to optimise their airplane for the mission, and that will lead to a different set of aircraft configurations. We want to build flexibility into our approach because that will give us the ability to customise and really optimise our system’s footprint on any airframe.

But the people who are flying the vehicles get the real benefit of collision avoidance. Manufacturers get to put the safety promise into their system but it’s the operators and service providers who get the benefit of collision avoidance. So, they’re the ones that are more focused on the software capabilities and the integration of our capabilities with their autopilots to effect an avoidance manoeuvre.  We work with them more on the back end, tuning the capabilities for the operational use case and helping them integrate our system with the autopilot.

And how close are we to understanding when and whether it will merely be nice to have your system on board – keeping the insurers happy, for example – or mandated?

That’s more of a regulatory question. Will the global regulators mandate on-board detect-and-avoid equipage? Probably ‘yes’ in the long run, especially if you want to try to unlock more ad-hoc file-and-fly type missions, where you don’t know exactly where you’re going and you need to do it now.  I expect that there will be a trend in this direction.

 
