Autonomous Aerial Refueling Ground Test Demonstration--A Sensor-in-the-Loop, Non-Tracking Method.

Chen CI, Koseluk R, Buchanan C, Duerner A, Jeppesen B, Laux H - Sensors (Basel) (2015)

Bottom Line: An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer-vision-based image-processing techniques. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between itself and the target autonomously.


Affiliation: Advanced Scientific Concepts Inc., 135 East Ortega Street, Santa Barbara, CA 93101, USA. cchen@asc3d.com.

ABSTRACT
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer-vision-based image-processing techniques. The method overcomes the inherent ambiguity of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve-fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between itself and the target autonomously.
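The abstract does not reproduce the fitting code, but the pipeline it describes (RANSAC outlier rejection followed by curve fitting on the point cloud to locate the drogue center) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: it assumes the drogue rim appears as a roughly circular ring of 3D points, and the function names, tolerances, and the Kasa circle fit are choices made for the sketch. A production system would additionally need segmentation of the drogue returns and temporal filtering.

```python
# Hypothetical sketch (not the authors' code): estimate the 3D center of a
# drogue whose rim shows up as a roughly circular ring in the LIDAR point cloud.
# Strategy: RANSAC a plane through the ring points, project inliers onto that
# plane, least-squares fit a circle in 2D, then map the center back to 3D.
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=np.random.default_rng(0)):
    """Return (normal, d, inlier_mask) for the plane n.x + d = 0 with most inliers."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best = (None, None)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate sample, skip
            continue
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

def fit_circle_2d(xy):
    """Algebraic (Kasa) least-squares circle fit: returns center (a, b) and radius r."""
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (a, bc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + a ** 2 + bc ** 2)
    return np.array([a, bc]), r

def drogue_center(points):
    """Estimate the drogue rim center in 3D from ring-like LIDAR returns."""
    n, d, inliers = ransac_plane(points)
    ring = points[inliers]
    # Build an orthonormal basis (u, v) spanning the fitted plane.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:             # plane nearly horizontal
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    origin = ring.mean(axis=0)
    xy = (ring - origin) @ np.column_stack([u, v])
    center_2d, radius = fit_circle_2d(xy)
    center_3d = origin + center_2d[0] * u + center_2d[1] * v
    return center_3d, radius
```

The returned center, expressed in the camera frame, is what a guidance loop would compare against the probe position to drive the relative-position error toward zero.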



sensors-15-10948-f005: Strong retro-reflective signals from the drogue. (a–c) simulate the images perceived by the receiver aircraft; (d–f) simulate the images perceived by the tanker.
Mentions: Instead of passively using the default settings of the 3D Flash LIDAR camera, experiments were carried out to explore proper settings for the drogue detection task, and invaluable information was acquired using a real Navy drogue from PMA 268. The experimental results show that the drogue contains retro-reflective materials, which is fortunate for detecting the drogue at all needed distances. Figure 5 summarizes the experimental results. The drogue was placed on the ground facing up, and a 3D Flash LIDAR camera was set up about 20 feet (6.1 m) above it on a second-floor balcony, facing down perpendicularly. Figure 5a–c mimic images that would be observed from the receiver aircraft. Figure 5a is a regular 2D color image for visual reference, and Figure 5b is the intensity image captured by the 3D Flash camera. Figure 5b,c show the same scene, except that the laser energy in Figure 5c is only 0.01% of that in Figure 5b after a neutral density filter is applied. The same strong retro-reflective signals are also observed when switching the viewpoint from the receiver aircraft to the tanker side, as shown in Figure 5d–f. Although the majority of research on probe-and-drogue style autonomous refueling focuses on simulating scenarios in which sensors are mounted in the receiver aircraft, the possibility of equipping sensors on the tanker side has also been considered. This experiment was designed to help us understand what can be expected from the sensor output under different parameter settings and to flag any limitations. Fortunately, there are no obvious show-stoppers for either option in terms of received signals.
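As a quick sanity check on the attenuation quoted above: transmitting 0.01% of the laser energy corresponds to a neutral density filter of optical density 4, and because retro-reflectors return orders of magnitude more energy than diffuse surfaces, even heavily attenuated drogue returns stand well above the background in the intensity image. The sketch below illustrates both points; the robust-threshold segmentation and its tuning constant are assumptions for illustration, not the paper's processing or a camera setting.

```python
# Back-of-the-envelope check of the 0.01% attenuation figure, plus a toy
# segmentation of retro-reflective pixels in a flash-LIDAR intensity image.
import numpy as np

transmission = 0.01 / 100                    # 0.01% of the original laser energy
optical_density = -np.log10(transmission)
print(f"Neutral density filter OD = {optical_density:.1f}")   # OD = 4.0

def retro_reflective_mask(intensity, k=5.0):
    """Flag pixels whose return is far above the background level.

    Retro-reflectors return orders of magnitude more energy than diffuse
    surfaces, so a robust threshold (median + k * MAD) separates them even
    after heavy attenuation. The value of k is a tuning assumption.
    """
    med = np.median(intensity)
    mad = np.median(np.abs(intensity - med)) + 1e-12
    return intensity > med + k * mad
```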


Autonomous Aerial Refueling Ground Test Demonstration--A Sensor-in-the-Loop, Non-Tracking Method.

Chen CI, Koseluk R, Buchanan C, Duerner A, Jeppesen B, Laux H - Sensors (Basel) (2015)

Strong retro-reflective signals from the drogue. (a–c) simulate the images perceived by the receiver aircraft; (d–f) simulate the images perceived by the tanker.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4481912&req=5

sensors-15-10948-f005: Strong retro-reflective signals from the drogue. (a–c) simulate the images perceived by the receiver aircraft; (d–f) simulate the images perceived by the tanker.
Mentions: Instead of passively using the default settings in the 3D Flash LIDAR camera, experiments were carried out to explore proper settings for the drogue detections task and invaluable information was acquired using a real Navy drogue from PMA 268. The experimental results show the drogue contains retro-reflective materials and it was fortunate in terms of detecting the drogue at all needed distances. Figure 5 summarizes the experimental results. The drogue was facing up and located on the ground. A 3D Flash LIDAR camera was set up about 20 feet (6.1 m) above the drogue on our 2nd floor balcony, facing down perpendicularly. Figure 5a–c mimic images that would be observed from the receiver aircraft. Figure 5a is a regular 2D color image for visual reference purpose and Figure 5b is the intensity image captured by the 3D Flash camera. Figure 5b,c are the same images except the laser energy in Figure 5c is only 0.01% of that in Figure 5b after a neutral density filter is applied. The same strong retro reflective signals are also observed when switching the view point from the receiver aircraft to the tanker side as shown in Figure 5d–f. Although the majority of research related to the probe-and-drogue style autonomous refueling focuses on simulating scenarios of mounting sensors in the receiver aircraft. The possibility of equipping sensors on the tanker side has also been considered. This experiment is designed to help us understand what can be expected from the sensor output under different parameter settings and raise a flag if some limitations are found. Fortunately, there are no obvious show stoppers for either option in terms of received signals.

Bottom Line: An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is called the autonomous aerial refueling (AAR).This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques.Results presented in the paper show that using images acquired from a 3D Flash LIDAR camera as real time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously.

View Article: PubMed Central - PubMed

Affiliation: Advanced Scientific Concepts Inc., 135 East Ortega Street, Santa Barbara, CA 93101, USA. cchen@asc3d.com.

ABSTRACT
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is called the autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherit ambiguity issues when reconstructing 3D information from traditional 2D images by taking advantage of ready to use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that using images acquired from a 3D Flash LIDAR camera as real time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously.

No MeSH data available.