Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.


Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that rely on a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
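
To make the vertical-wall assumption concrete, the sketch below (not the authors' implementation) shows one way a single 2D laser scan can supply depth for an image feature: because a flat wall perpendicular to the ground has constant depth along an image column, a feature at pixel (u, v) can be assigned the depth of the laser beam whose bearing matches that column. The pinhole intrinsics and the assumption that the scan is already expressed in the camera frame are hypothetical, not values from the paper.

```python
# Minimal sketch of depth assignment under the vertical-wall assumption.
# Intrinsics below are placeholders, not the paper's calibration values.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 320.0, 240.0  # assumed pinhole intrinsics (pixels)

def feature_point_3d(u, v, laser_angles, laser_ranges):
    """Back-project image feature (u, v) using the laser depth at the matching bearing.

    laser_angles and laser_ranges are assumed to be expressed in the camera
    frame, i.e. already transformed with the laser-camera extrinsic calibration.
    """
    bearing = np.arctan2(u - CX, FX)                    # horizontal viewing angle of the pixel
    i = int(np.argmin(np.abs(laser_angles - bearing)))  # nearest laser beam
    depth = laser_ranges[i] * np.cos(laser_angles[i])   # forward (z) distance to the wall
    # Standard pinhole back-projection, scaled to the measured depth.
    return np.array([(u - CX) * depth / FX,
                     (v - CY) * depth / FY,
                     depth])
```

Features whose assigned 3D points do not match across frames (e.g., points on an inclined or non-flat wall) would then be discarded by the feature matching step described in the abstract.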



Figure 9. Projection of the laser scanner's depth data onto the checkerboard image.

Mentions: In order to obtain the 3D coordinates of the feature points from the hybrid laser scanner and camera approach, it is necessary to know the relative pose between the two sensors. This entails extrinsic parameter calibration, and the relative pose is determined using Zhang and Pless's method [39]. A total of 38 datasets were used for the extrinsic parameter calibration, and the results are as follows:

R_{LC} = \begin{bmatrix} -0.9997 & -0.0245 & -0.0052 \\ 0.0247 & -0.9985 & -0.0483 \\ -0.0041 & -0.0484 & 0.9988 \end{bmatrix}   (16)

t_{LC} = \begin{bmatrix} -0.0210 & -0.0108 & -0.0189 \end{bmatrix}   (17)

The laser scanner's depth data, projected onto the image using these parameters, are shown in Figure 9: the depth returns for the checkerboard are overlaid on the image plane. Figure 9 shows that the calibration is correct, as the checkerboard image and the projected depth data from the laser scanner match well.
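
The overlay in Figure 9 amounts to transforming each scan point into the camera frame with the calibrated extrinsics and applying a pinhole projection. The sketch below illustrates this; it assumes R_LC and t_LC from Equations (16) and (17) map laser-frame points into the camera frame, that the scan lies in the laser frame's x-y plane, and a hypothetical intrinsic matrix K, since none of these conventions or intrinsics are given in this excerpt.

```python
# Sketch of overlaying 2D laser returns on the image, as in Figure 9.
# Frame conventions and the intrinsic matrix K are assumptions.
import numpy as np

# Extrinsic calibration result, Equations (16) and (17).
R_LC = np.array([[-0.9997, -0.0245, -0.0052],
                 [ 0.0247, -0.9985, -0.0483],
                 [-0.0041, -0.0484,  0.9988]])
t_LC = np.array([-0.0210, -0.0108, -0.0189])   # meters (assumed unit)

K = np.array([[525.0,   0.0, 320.0],           # hypothetical pinhole intrinsics
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_scan_to_image(ranges, angles):
    """Project laser returns (range, bearing) onto the image plane as (u, v) pixels."""
    pts_laser = np.stack([ranges * np.cos(angles),   # scan points in the laser frame
                          ranges * np.sin(angles),
                          np.zeros_like(ranges)], axis=1)
    pts_cam = pts_laser @ R_LC.T + t_LC              # laser frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]             # keep points in front of the camera
    uv_h = pts_cam @ K.T                             # pinhole projection (homogeneous)
    return uv_h[:, :2] / uv_h[:, 2:3]
```

If the calibration is instead reported as the camera-to-laser transform, the inverse (R_LC^T and -R_LC^T t_LC) would be applied; the overlay step itself is unchanged.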

