Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.

View Article: PubMed Central - PubMed

Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that rely on a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. The 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
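The back-projection step described in the abstract can be sketched as follows: under the flat, vertical wall assumption, the laser range measured along an image feature's horizontal bearing supplies the depth that a monocular camera alone lacks, so the feature's 3D coordinates follow from the pinhole model. This is a minimal illustrative sketch, not the authors' implementation; the function and parameter names (`feature_to_3d`, `fx`, `fy`, `cx`, `cy`, etc.) are hypothetical.

```python
import numpy as np

def feature_to_3d(u, v, fx, fy, cx, cy, laser_ranges, angle_min, angle_inc):
    """Back-project an image feature (u, v) to camera-frame 3D coordinates
    using the 2D laser scan, assuming the wall is vertical and flat."""
    # Horizontal bearing of the pixel in the camera frame (pinhole model).
    bearing = np.arctan2(u - cx, fx)
    # Laser beam measured closest to that bearing.
    idx = int(round((bearing - angle_min) / angle_inc))
    idx = int(np.clip(idx, 0, len(laser_ranges) - 1))
    r = laser_ranges[idx]
    # Depth along the optical axis, then X (right) and Y (down).
    z = r * np.cos(bearing)
    x = z * (u - cx) / fx
    y = z * (v - cy) / fy
    return np.array([x, y, z])
```

For a wall 3 m straight ahead, a feature at the principal point back-projects to (0, 0, 3); off-center features pick up lateral and vertical offsets scaled by the laser-supplied depth.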

No MeSH data available.


f13-sensors-15-15830: Experimental results comparing the odometry, GMapping and the proposed method. The trajectory of the proposed method is almost the same as the ground truth. Gray regions depict the walls. (a) Dataset 1; (b) Dataset 2; (c) Dataset 3; (d) Dataset 4.

Mentions: Figures 13 and 14 show the comparison results. In Figure 13, the blue solid line indicates the ground truth, the green solid line with rectangular markers the trajectory from the robot's odometry, the magenta solid line with circular markers the results of GMapping [4], and the cyan dotted line the results of the proposed algorithm. In Figure 14, the magenta solid line with circular markers indicates the error of GMapping [4] and the cyan dashed line the error of the proposed algorithm. The largest position error occurs in the robot odometry, which uses only the information from the robot's wheel encoders; the error caused by wheel slip gradually accumulates. The results obtained by GMapping are close to the ground truth, but some error remains: because the depth values measured by the laser scanner are uniform along a corridor, GMapping cannot easily compensate for the odometry errors. In environments such as Dataset 1 in particular, the error is large.
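The laser scan ambiguity behind GMapping's corridor error can be illustrated with a toy model: in a long featureless corridor, every position along the corridor axis produces an identical scan, so scan matching has no information with which to correct drift along that axis. The following sketch uses an idealized infinite corridor; the names and geometry are illustrative only.

```python
import numpy as np

def corridor_scan(x, half_width=1.0):
    """Simulated 2D scan inside an infinitely long corridor of the given
    half-width; x is the robot position along the corridor axis.
    Rays are cast every degree from 5 to 175 degrees relative to the axis,
    and each hits a side wall at y = +/- half_width, so its range is
    half_width / |sin(angle)|. Note that x never enters the result:
    every position along the corridor yields exactly the same scan."""
    angles = np.deg2rad(np.arange(5, 176))
    return half_width / np.abs(np.sin(angles))

s0 = corridor_scan(0.0)
s1 = corridor_scan(5.0)  # robot moved 5 m along the corridor
print(np.max(np.abs(s1 - s0)))  # 0.0 -- the two scans are indistinguishable
```

Since the scans are identical, any scan-matching cost is flat along the corridor direction, and odometry drift in that direction goes uncorrected, which is exactly the gap the proposed hybrid laser-camera constraints are meant to fill.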

