Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.

View Article: PubMed Central - PubMed

Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
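The flat-wall assumption admits a simple geometric reading: each image feature's 3D coordinate is the intersection of its camera viewing ray with a vertical wall plane recovered from the 2D laser scan. A minimal sketch of that back-projection follows; the function name, intrinsics, and plane parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def feature_point_3d(pixel, K, wall_point, wall_normal):
    """Back-project an image feature onto a vertical, flat wall plane.

    wall_point / wall_normal stand in for a plane fitted to the 2D laser
    scan and extruded vertically, per the paper's flat-wall assumption.
    This is an illustrative sketch, not the authors' code.
    """
    u, v = pixel
    # Viewing ray through the pixel, expressed in the camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Ray-plane intersection: solve n . (t * ray - p0) = 0 for t.
    t = (wall_normal @ wall_point) / (wall_normal @ ray)
    return t * ray

# Toy intrinsics; a wall 2 m ahead of the camera, facing it.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
p = feature_point_3d((320.0, 240.0), K,
                     wall_point=np.array([0.0, 0.0, 2.0]),
                     wall_normal=np.array([0.0, 0.0, -1.0]))
# The principal-point ray meets the wall at depth 2 m: p ~ [0, 0, 2].
```

Features whose rays actually meet an inclined or non-flat surface receive inconsistent depths under this model, which is why the subsequent feature matching step can reject them as outliers.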

No MeSH data available.



f6-sensors-15-15830: Flow diagram of the overall algorithm. For the final robot pose, predictions using the sensor fusion method are exploited. The robot pose is predicted using dead-reckoning of the encoder and is updated using feature matching through loop closure detection.

Mentions: This subsection explains the method of graph structure construction by generating nodes and constraints from the depth data of the laser scanner and the feature points from the camera. The basic SLAM algorithm used in this method is based on the 3D SLAM algorithm [13], and only a monocular camera and a laser scanner are used to estimate the distance information of the feature points. A flow diagram of the overall algorithm is shown in Figure 6. When data from the monocular camera are received, feature points are extracted from the 2D image. The extracted feature points on the wall components are used for visual odometry and for loop closure detection. The 3D coordinates of the feature points in the image are obtained using the hybrid method described in the previous subsection. The 3D pose (six DoF (degrees of freedom)) between matched images is determined with 3D-RANSAC [26] using the 3D coordinates of the matched feature points in the paired images. The initial connection between the nodes uses the odometry information. The 2D image feature points obtained at each node are stored in the feature manager and used for constructing the constraints between nodes. The feature manager finds a previous node that shares a common image feature, using the feature data of the newly-generated node. This process is called loop closure detection [13]. The constraints between nodes imposed by the feature points serve as the information for forming a graph structure representing the entire trajectory of the robot. Pose graph SLAM is then applied to the graph structure formed via the above-described method to obtain the final corrected pose. The graph-based SLAM problem is solved by optimizing the graph model with sparse linear algebra techniques in real time [2].
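The node-and-constraint structure described above can be reduced to a small least-squares problem. The sketch below optimizes a translation-only 2D pose graph with three odometry edges and one loop-closure edge; it is a hypothetical simplification of the sparse graph optimization the paper delegates to [2] (real pose graph SLAM also estimates rotation and uses sparse solvers):

```python
import numpy as np

def optimize_pose_graph(n_poses, edges):
    """Translation-only pose-graph optimization by linear least squares.

    Each edge (i, j, z) constrains pose_j - pose_i ~ z, a 2D relative
    translation measured by odometry or a loop closure. Pose 0 is
    anchored at the origin to fix the gauge freedom. Toy sketch only,
    not the authors' implementation.
    """
    dim = 2
    rows, rhs = [], []
    for i, j, z in edges:
        for d in range(dim):
            row = np.zeros(n_poses * dim)
            row[j * dim + d] = 1.0
            row[i * dim + d] = -1.0
            rows.append(row)
            rhs.append(z[d])
    # Anchor pose 0 at the origin so the system is well-posed.
    for d in range(dim):
        row = np.zeros(n_poses * dim)
        row[d] = 1.0
        rows.append(row)
        rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(n_poses, dim)

# Square trajectory with drifting odometry; the loop-closure edge
# (3 -> 0) reports the robot back at the start and spreads the
# accumulated error over the whole trajectory.
edges = [
    (0, 1, (1.0, 0.0)),
    (1, 2, (0.1, 1.0)),    # odometry drift in x
    (2, 3, (-1.0, 0.1)),   # odometry drift in y
    (3, 0, (0.0, -1.0)),   # loop-closure constraint
]
poses = optimize_pose_graph(4, edges)
```

With equal weights on all edges, the inconsistency around the loop is distributed evenly across the four constraints, which is the essential behavior of the loop-closure correction in the flow diagram above.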
