Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

Oh T, Lee D, Kim H, Myung H - Sensors (Basel) (2015)

Bottom Line: To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.


Affiliation: Urban Robotics Laboratory (URL), Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea. buljaga@kaist.ac.kr.

ABSTRACT
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that exploit a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor, and the results were compared with those of the conventional GMapping approach. The results demonstrate that the robot can be localized in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
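One plausible reading of the 3D-coordinate step above is a ray-plane intersection: the 2D laser scan supplies the distance to the (assumed vertical and flat) wall, and each image feature's back-projected ray is cut at that plane. The sketch below illustrates this under that assumption; it is not the authors' implementation, and the intrinsic matrix and measurements are placeholders.

```python
# Hypothetical sketch: recover a 3D point for an image feature by
# intersecting its back-projected ray with the wall plane estimated from
# the 2D laser scan (wall assumed vertical and flat, per the paper).
import numpy as np

def feature_point_on_wall(u, v, K, wall_normal, wall_dist):
    """Intersect the pixel ray with the plane n . x = d in the camera
    frame; for a vertical wall, wall_normal has no vertical component."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray
    denom = wall_normal @ ray
    if abs(denom) < 1e-9:                           # ray parallel to wall
        return None
    return (wall_dist / denom) * ray                # 3D point, camera frame

# Placeholder intrinsics and a wall plane 2.0 m in front of the camera:
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
p = feature_point_on_wall(400, 180, K, np.array([0.0, 0.0, 1.0]), 2.0)
```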




f12-sensors-15-15830: Scene examples of the laser scan ambiguity. From the third to seventh scenes, the ambiguity occurs because the laser scanner measures only straight wall components. (a) Snapshots of robot for every 0.45 m of movement; (b) Laser scan data corresponding to each robot position.

Mentions: In addition, to acquire the robot pose from the results of the hybrid method, the relative transformation between the camera and the robot must be determined. A method similar to that presented in our previous study [40] is used to obtain the extrinsic calibration parameters. Following [40], a marker whose size is known in advance is observed by the camera mounted on the robot to obtain the six-DoF displacement from the camera to the marker. Additionally, the distance from the robot to the marker is measured with the robot placed directly in front of the marker, yielding the six-DoF displacement from the robot to the marker. Then, by composing the corresponding homogeneous transformations, the extrinsic calibration parameters can be calculated. The numerical results of the extrinsic calibration between the camera and the robot are as follows:

$$R_C^R = \begin{bmatrix} 0.0675 & -0.0695 & 0.9953 \\ -0.9976 & 0.0077 & 0.0682 \\ -0.0124 & -0.9976 & -0.0688 \end{bmatrix} \tag{18}$$

$$t_C^R = \begin{bmatrix} 0.0040 \\ -0.4029 \\ -0.1553 \end{bmatrix} \tag{19}$$

Figure 10 shows the experimental environments used to verify the proposed method, for Datasets 1, 2, 3 and 4. The long corridor in Dataset 1 measures 36.4 m × 2.0 m. Because of the shape of this environment, the laser scanner's depth data have almost the same values throughout the corridor. Localization using the laser scanner therefore performs well in the lateral direction, but the localization results along the moving direction are not reliable. The number of ground truth positions in Dataset 1 is 85. The L-shaped space in Dataset 2 measures 16.0 m × 0.9 m; its detailed dimensions are shown in Figure 10. The environment of Dataset 2 is a long corridor similar to that of Dataset 1, except that it contains a corner. The number of ground truth positions in Dataset 2 is 43. The L-shaped space in Dataset 3 measures 18.0 m × 0.9 m, and some parts of the wall are made of transparent glass. Where a wall contains transparent glass, there is a high probability that the laser light passes through the glass or is diffused by it. In Dataset 3, however, the parts of the wall within the field of view consist of a steel frame or a concrete wall; snapshots of Dataset 3 are shown in Figure 11. Although laser scan ambiguity still occurs partially, the robot pose can be estimated from the laser depth data measured on these components. The number of ground truth positions in Dataset 3 is 43. The experimental conditions and the location of Dataset 4 are the same as those of Dataset 2, except that obstacles that are not normal to the ground but inclined are placed in Dataset 4 to verify whether the proposed method still estimates the robot pose robustly. Ground truth positions were measured manually using the regular grid on the floor, and the robot was driven through these positions to obtain the experimental results. Despite the structure of the doors and the edges, laser scan ambiguity occurs in this environment: since the maximum detectable range of the laser scanner is limited (about 4 m in our experiments), these components cannot be measured, and the laser scanner acquires uniform data from the wall.
Scene examples of the laser scan ambiguity are shown in Figure 12, where successive scenes show a snapshot of the robot and the corresponding laser scan data for every 0.45 m of robot travel. The third to seventh scenes in Figure 12 exhibit the laser scan ambiguity.
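The composition of homogeneous transformations used in the calibration above is standard; a minimal sketch is given below, with placeholder marker measurements rather than the paper's actual values (the authors' measured values yield Equations (18) and (19)).

```python
# Minimal sketch (not the authors' code) of composing the two marker
# measurements into the camera-to-robot extrinsic calibration.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_T(T):
    """Closed-form inverse of a rigid-body transform."""
    R, t = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ t)

# Placeholder measurements (assumed values, for illustration only):
T_cam_marker = make_T(np.eye(3), np.array([0.0, 0.0, 1.5]))    # camera -> marker
T_robot_marker = make_T(np.eye(3), np.array([1.5, 0.0, 0.3]))  # robot -> marker

# Camera pose in the robot frame: (robot -> marker) composed with
# (marker -> camera), i.e., the inverse of the camera-to-marker measurement.
T_robot_cam = T_robot_marker @ invert_T(T_cam_marker)
R_C_R, t_C_R = T_robot_cam[:3, :3], T_robot_cam[:3, 3]  # cf. Eqs. (18)-(19)
```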

