An article published by Geospatial World reports a low-cost method for autonomous collision detection and avoidance on drones of all sizes, developed by a research team using video game simulation software and stereo cameras.
According to the report, a team of researchers from Madrid Polytechnic University, MIT, and Texas A&M University developed a collision monitoring and avoidance system for small drones under 2 kg using simple cameras and onboard data processing. Given those size and weight constraints, typical sensors such as LIDAR, radar, and acoustic sensors were not practical.
The researchers built on a previous 2018 project, aiming to reduce power needs by using low-cost conventional cameras and basic computing without any sort of middleware. The end result needed to integrate fully into a small quadcopter and enable that vehicle to monitor and avoid multiple nearby flying vehicles.
The key realization driving the project was that these ordinary cameras, used in pairs, can mimic human depth perception and produce data that is easier to process onboard and, in practice, more reliable than some alternative sensing approaches.
By measuring the geometric differences between two images captured side by side, stereo depth mapping enables a system to identify bodies that may pose threats to an autonomous vehicle. In the first iteration of this technology, the team accomplished exactly that identification of threats. Achieving it with simple cameras required reliable frameworks for detecting, sizing, and tracking the motion of those bodies. To construct these frameworks, the team leveraged Microsoft AirSim.
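To make the principle concrete, the following minimal sketch (not the team's code) uses OpenCV's stereo matcher to turn a rectified left/right image pair into a disparity map and then into metric depth via Z = f x B / d, where f is the focal length in pixels, B is the distance between the cameras, and d is the per-pixel disparity. The file names, focal length, and baseline below are assumed example values.

```python
# Illustrative sketch only: stereo disparity -> depth with OpenCV.
# File names, focal length, and baseline are assumed example values,
# not parameters from the research team's system.
import cv2
import numpy as np

FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.10   # assumed spacing between the two cameras, in meters

# Load a rectified stereo pair as grayscale images.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching estimates per-pixel disparity (in 1/16-pixel units).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth follows from similar triangles: Z = f * B / d, valid where d > 0.
valid = disparity > 0
depth = np.full(disparity.shape, np.inf, dtype=np.float32)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

# Pixels with small depth values flag nearby bodies that may need avoiding.
if valid.any():
    print("closest point: %.2f m" % depth[valid].min())
```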
To build the collision avoidance system, the research team used AirSim to model complex environments, simulate drones in flight, and precisely control image capture, collecting training and modeling data to be consumed by their machine learning application.
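AirSim exposes a scripting interface suited to exactly this kind of controlled image collection. The sketch below uses AirSim's publicly documented Python client rather than the team's own tooling; the camera names and output file names are assumptions. It requests a left and right view of the simulated scene in a single call so that both images correspond to the same instant.

```python
# Illustrative sketch: scripted stereo image capture from an AirSim multirotor.
# Camera names and output file names are assumptions, not the team's setup.
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# Requesting both cameras in one call keeps the left and right views in sync.
responses = client.simGetImages([
    airsim.ImageRequest("front_left", airsim.ImageType.Scene),
    airsim.ImageRequest("front_right", airsim.ImageType.Scene),
])

# Save the compressed PNG bytes returned for each camera.
for name, response in zip(["left", "right"], responses):
    airsim.write_file(f"{name}.png", response.image_data_uint8)
```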
With independent left and right perspectives captured, the team was able to create photo-realistic renders for size detection, position detection, and depth mapping of drones, both stationary and in motion. Building on these working models, they further reduced the footprint of the software packaged onto their drone by writing the onboard logic in C++ and C/CUDA. These low-level languages require no complex middleware to process images and run inference during flight. They were even able to reduce resource needs by using a single storage module for both image storage and interpretation.
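For readability, the simplified sketch below illustrates how a tracked object's depth could feed a flight-path correction; it is written in Python rather than the team's onboard C++/CUDA, and the safety threshold and maneuvers are placeholder assumptions rather than values from the published system.

```python
# Simplified illustration of a detect-track-avoid decision step. The distance
# threshold and maneuvers are placeholder assumptions; the team's real system
# runs its own logic onboard in C++ and C/CUDA.
from dataclasses import dataclass

SAFE_DISTANCE_M = 5.0  # assumed minimum separation before reacting

@dataclass
class TrackedObject:
    x_m: float      # lateral offset of the object in meters (positive = right)
    depth_m: float  # distance to the object taken from the stereo depth map

def avoidance_command(tracks: list) -> str:
    """Return a coarse maneuver based on the closest tracked object."""
    if not tracks:
        return "hold course"
    closest = min(tracks, key=lambda t: t.depth_m)
    if closest.depth_m > SAFE_DISTANCE_M:
        return "hold course"
    # Steer away from the side the intruding object is on.
    return "yaw left" if closest.x_m >= 0 else "yaw right"

print(avoidance_command([TrackedObject(x_m=1.2, depth_m=3.4)]))  # -> yaw left
```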
After creating models, training and testing the AI, and preparing the test vehicle, the team published results showing strong progress toward a working concept. The outfitted drone was able to identify, track, and make flight-path corrections in reaction to other drones of several sizes in complicated environments that required tracking both static and moving obstructions.
(Image: Geospatial World, source IEEE)
For more information visit: