A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

Rau JY, Yeh PC - Sensors (Basel) (2012)

Bottom Line: This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique and does not require any markers on or around the object. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333.


Affiliation: Department of Geomatics, National Cheng-Kung University, No.1, University Road, Tainan 701, Taiwan. jyrau@mail.ncku.edu.tw

ABSTRACT
The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique and does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of performing photo-triangulation after image acquisition, a calibration is carried out to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are then applied to images taken with the same camera configuration. This means that when multi-image matching is performed for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target object has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.
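To illustrate the idea of reusing the calibrated orientation parameters, the following minimal sketch (in Python with NumPy; the rig layout and all camera parameters are illustrative assumptions, not values from the paper) projects a candidate 3D surface point into each camera of a fixed rig using a simple pinhole model. Because the exterior orientations are invariant after calibration, the same projection geometry can be reused for multi-image matching on every new object placed in front of the rig.

import numpy as np

def project_point(X, R, t, f, cx, cy):
    """Project world point X into an image using a fixed exterior orientation (R, t)."""
    p = R @ X + t                           # world -> camera coordinates
    return np.array([f * p[0] / p[2] + cx,  # collinearity: perspective division
                     f * p[1] / p[2] + cy])  # plus principal point offset

# Hypothetical rig of five cameras with calibrated, invariant orientations.
cameras = [{"R": np.eye(3), "t": np.array([dx, 0.0, 2.0]),
            "f": 3500.0, "cx": 2000.0, "cy": 1500.0}
           for dx in (-0.4, -0.2, 0.0, 0.2, 0.4)]

X = np.array([0.05, 0.10, 0.0])             # candidate point on the object surface
pixels = [project_point(X, c["R"], c["t"], c["f"], c["cx"], c["cy"]) for c in cameras]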



Figure 6 (f6-sensors-12-11271): Taking pictures for multi-camera calibration.

Mentions: For the purpose of multi-camera calibration, the self-calibration bundle adjustment technique with coded targets was adopted again. Depending on the size of the object, the coded targets are spread uniformly over an area similar to or larger than the object. Outdoors, the coded targets can be spread on the ground. When taking one calibration image dataset, the camera's viewing direction is changed 5∼7 times to construct a 90-degree convergent angle and ensure strong imaging geometry. Acquiring three more calibration image datasets is suggested, rotating the cameras' metal frame by 90, 180 and 270 degrees to increase the redundancy of the measurements. For indoor experiments, a portable wooden plate is proposed to hold the coded targets so that the camera's metal bar can remain stationary during image acquisition. Instead of rotating the metal bar, the wooden plate is inclined at 5∼7 different tilt angles to construct a convergent imaging geometry and rotated by 90, 180 and 270 degrees in roll angle. The laboratory setup for the above-mentioned procedure is shown in Figure 6. After automatic recognition of the coded targets, a self-calibration bundle adjustment scheme is used to perform photo-triangulation and to calculate the exterior orientation parameters (EOPs) for all images. Under well-controlled conditions, the five cameras' interior orientation parameters (IOPs) can be self-calibrated as well, which is called on-the-job (OTJ) calibration. However, it is important to make sure that the coded targets are well distributed throughout the whole image frame in order to fully characterize the radial lens distortion. Otherwise, it is suggested that the single-camera calibration results be applied and held fixed during the bundle adjustment of the multi-camera configuration.
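As a rough illustration of the self-calibration bundle adjustment described above, the sketch below (Python with NumPy/SciPy; the parameterization and names are assumptions, not the software actually used in the study) minimizes the reprojection error of coded-target points over the EOPs of all images and a shared set of IOPs with two radial distortion coefficients. In a full self-calibration adjustment the target coordinates would also be refined; they are held fixed here for brevity.

# Minimal self-calibration bundle adjustment sketch: pinhole model with two
# radial distortion coefficients (Brown model); illustrative only.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, f, cx, cy, k1, k2):
    """Project 3D coded-target points into one image (collinearity + radial distortion)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    p_cam = points_3d @ R.T + tvec           # world -> camera coordinates
    x = p_cam[:, 0] / p_cam[:, 2]
    y = p_cam[:, 1] / p_cam[:, 2]
    r2 = x**2 + y**2
    d = 1 + k1 * r2 + k2 * r2**2             # radial lens distortion factor
    return np.column_stack((f * d * x + cx, f * d * y + cy))

def residuals(params, points_3d, observations):
    """observations: list of (image_points, visible_indices), one entry per image.
    params packs 6 EOPs per image followed by the shared IOPs (f, cx, cy, k1, k2)."""
    n_img = len(observations)
    eops = params[:6 * n_img].reshape(n_img, 6)
    f, cx, cy, k1, k2 = params[6 * n_img:]
    res = []
    for (img_pts, idx), eop in zip(observations, eops):
        proj = project(points_3d[idx], eop[:3], eop[3:], f, cx, cy, k1, k2)
        res.append((proj - img_pts).ravel())
    return np.concatenate(res)

# usage (with measured target coordinates, image observations and an initial guess x0):
# result = least_squares(residuals, x0, args=(target_xyz, obs), method="lm")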

