Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.

View Article: PubMed Central - PubMed

Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that rely on a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
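The depth-assignment step of the hybrid method described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole camera whose optical axis is aligned with the laser scanner's zero bearing, co-located sensors, and hypothetical intrinsics and scan parameters. Because the wall is assumed vertical and flat, the range measured in the 2D laser plane is taken to hold for image features above and below that plane.

```python
import math

def backproject_feature(u, v, laser_ranges, fx, fy, cx, cy,
                        angle_min, angle_increment):
    """Estimate the 3D camera-frame coordinates of an image feature (u, v)
    using a 2D laser scan, under the vertical-flat-wall assumption."""
    # Horizontal bearing of the pixel ray in the camera frame.
    bearing = math.atan2(u - cx, fx)
    # Pick the laser beam closest to that bearing (camera and laser are
    # assumed aligned and co-located here for simplicity).
    idx = round((bearing - angle_min) / angle_increment)
    idx = max(0, min(idx, len(laser_ranges) - 1))
    r = laser_ranges[idx]
    # Depth along the optical axis, then the full 3D point.
    z = r * math.cos(bearing)
    x = z * (u - cx) / fx
    y = z * (v - cy) / fy
    return (x, y, z)
```

Features whose assigned depth is inconsistent across views (for example, points on an inclined or non-flat wall) would then be discarded by the feature matching step, which is what allows the flat-wall assumption to be relaxed in practice.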

No MeSH data available.


f5-sensors-15-15830: Example of a graph structure. The black node is the robot pose, and the edge of the graph structure represents the constraint between nodes. The green solid line is the constraint from the odometry; the red dashed line denotes the constraint from the 2D laser scanner data; and the blue dotted line represents the constraint from the hybrid method of the monocular camera and the laser scanner.

Mentions: An example of the pose graph structure generated from multiple sensors is shown in Figure 5. Each node represents a robot pose (xi), and each edge connecting two nodes is a constraint consisting of a measurement (zi,j) and its covariance (Λi,j). The green solid line denotes the constraint created by dead reckoning from the odometry. The red dashed line represents the constraint obtained from the laser scanner using GMapping [4]. In GMapping, a 2D grid map is obtained by scan matching with a 2D laser scanner, and the constraint is then acquired using Rao–Blackwellized particle filters. Since GMapping is a particle filter-based SLAM algorithm, it provides only the current pose estimate relative to the origin. Therefore, constraints between successive nodes in the graph cannot be generated with GMapping; only constraints between the origin and the current robot pose can be created. The blue dotted line represents the constraint generated by the hybrid method from the feature points of the camera and the depth data of the laser scanner. The hybrid method generates constraints between the corresponding nodes only when a loop closure is detected, so it may not produce a constraint for every pair of successive nodes. The pose predictions from each sensor and their covariance values are used as constraints in the graph, and the final robot pose estimate is corrected using a graph optimization technique.
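The least-squares structure of such a pose graph can be shown in miniature. The toy example below uses scalar (1D) poses along a corridor: odometry-style edges between successive nodes, plus an origin-anchored edge of the kind GMapping contributes. It illustrates only the general weighted least-squares mechanics with made-up numbers; the paper optimizes full robot poses with covariance matrices Λi,j, not scalars.

```python
import numpy as np

def optimize(n_poses, edges):
    """Solve a 1D pose graph by weighted least squares.

    edges: list of (i, j, z_ij, info), each encoding the constraint
    x_j - x_i ≈ z_ij with information (inverse variance) `info`.
    The first pose is anchored at 0 by a strong prior.
    """
    H = np.zeros((n_poses, n_poses))  # information matrix
    b = np.zeros(n_poses)             # information vector
    for i, j, z, info in edges:
        # Residual is (x_j - x_i) - z; accumulate the normal equations.
        H[i, i] += info
        H[j, j] += info
        H[i, j] -= info
        H[j, i] -= info
        b[i] -= info * z
        b[j] += info * z
    H[0, 0] += 1e6  # strong prior fixing the first pose at the origin
    return np.linalg.solve(H, b)
```

For instance, `optimize(3, [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (0, 2, 2.2, 1.0)])` blends two odometry steps of 1.0 with an origin-to-node-2 measurement of 2.2, yielding poses of roughly 0, 1.07, and 2.13: each constraint pulls the estimate in proportion to its information weight, which is exactly the role the covariances Λi,j play in the full graph optimization.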


Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Example of a graph structure. The black node is the robot pose, and the edge of the graph structure represents the constraint between nodes. The green solid line is the constraint from the odometry; the red dashed line denotes the constraint from the 2D laser scanner data; and the blue dotted line represents the constraint from the hybrid method of the monocular camera and the laser scanner.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4541856&req=5

f5-sensors-15-15830: Example of a graph structure. The black node is the robot pose, and the edge of the graph structure represents the constraint between nodes. The green solid line is the constraint from the odometry; the red dashed line denotes the constraint from the 2D laser scanner data; and the blue dotted line represents the constraint from the hybrid method of the monocular camera and the laser scanner.
Mentions: The example of generating the pose graph structure from multi-sensors is shown in Figure 5. Each node represents the pose of the robot (xi), and the edge that is connecting nodes is the constraint, which consists of a measurement (zi,j) and its covariance (Λi,j). The green solid line denotes the constraint created using dead-reckoning from the odometry. The red dashed line represents the constraint resulting from the laser scanner using GMapping [4]. In using GMapping, a 2D grid map is obtained from the scan matching method using a 2D laser scanner. It is then possible to acquire the constraint using Rao–Blackwellized particle filters. Since GMapping is the particle filter-based SLAM algorithm, it only provides the current pose estimate relative to the origin. Therefore, the constraints between successive nodes in the graph structure cannot be generated using GMapping, and only the constraints between the origin and the robot can be generated. The blue dotted line represents the constraint generated using the hybrid method from the feature points of the camera and the depth data of the laser scanner. Only if the loop closure is detected, the hybrid method generates constraints between the corresponding nodes. Therefore, the constraint may not be generated for successive nodes by the hybrid method. The pose prediction from each sensor and the covariance values are used as constraints in the graph to correct the final robot pose estimation using a graph optimization technique.

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor.The experimental results were compared with those of the conventional GMappingapproach.The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

View Article: PubMed Central - PubMed

Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relieved, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMappingapproach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

No MeSH data available.