The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period, even for Texas A&M users with NetID.
Sensor Fusion for Robotic Surface and Subsurface Infrastructure Inspection
Infrastructure is indispensable to modern society, but it deteriorates over time and requires periodic inspection for maintenance. Manual inspection is labor-intensive and costly; a more viable approach is to mount sensors on a robot that performs the inspection tasks. Because inspection usually involves both surface and subsurface mapping, the ability to fuse surface images with subsurface scans is important for further analysis. We therefore design and build a multi-modal sensing suite for surface and subsurface infrastructure inspection, comprising a camera, a LIDAR, a ground penetrating radar (GPR), and a wheel encoder. The major challenges in properly fusing these sensor modalities lie in the calibration and synchronization between the camera and the GPR; a further limitation is the computationally intensive optimization steps in the system. In this dissertation, we first propose a method for the extrinsic calibration of a GPR. We build an artificial planar bridge as the calibration device and choose metal balls as calibration objects. We model the GPR imaging process and extract readings from the hyperbolas generated by the metal balls. We apply a maximum likelihood estimator (MLE) to estimate the rigid body transformation and provide a closed-form error analysis for our calibration models. Building on the extrinsic calibration of the GPR, we further propose a method to solve for the relative pose between a camera and a GPR. We extend the artificial planar bridge with a planar mirror to form the calibration rig, and use a metal ball paired with a checkerboard as a combined calibration object. We estimate the GPR poses by extracting readings from the hyperbolas generated by the metal balls, and apply a mirror-based pinhole camera model to estimate the camera and mirror poses.
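To illustrate the hyperbola readings mentioned above: a buried point target such as a metal ball produces a two-way travel-time hyperbola in the GPR scan, whose apex encodes the target position and depth. The sketch below is a minimal illustration only, not the dissertation's actual implementation; the function names, the sampling positions, and the wave speed `v` are assumptions for the example.

```python
import numpy as np

def travel_time(x, x0, d, v):
    """Two-way travel time of a GPR wave reflected by a point target.

    Antenna at horizontal position x; target at horizontal position x0
    and depth d; v is the wave propagation speed in the medium.
    """
    return 2.0 * np.sqrt(d**2 + (x - x0)**2) / v

def fit_hyperbola(xs, ts, v):
    """Recover (x0, d) from sampled hyperbola readings.

    Squaring the travel-time equation gives
        (v*t/2)^2 - x^2 = -2*x0*x + (d^2 + x0^2),
    which is linear in the unknowns [x0, d^2 + x0^2], so a linear
    least-squares solve suffices for noiseless or lightly noisy data.
    """
    r2 = (v * np.asarray(ts) / 2.0) ** 2
    xs = np.asarray(xs, dtype=float)
    A = np.column_stack([-2.0 * xs, np.ones_like(xs)])
    (x0, c), *_ = np.linalg.lstsq(A, r2 - xs**2, rcond=None)
    return x0, np.sqrt(c - x0**2)
```

In practice the dissertation's pipeline estimates the transformation with an MLE over noisy readings; the linearized fit above only shows why a single hyperbola already constrains the target's position and depth.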
We formulate an MLE problem to estimate the relative pose between the two sensors and provide a closed-form analysis of the error distribution of the calibration results. With the calibration problems solved, we propose a data collection scheme using customized artificial landmarks (ALs) to synchronize and fuse the camera images and GPR scans for transportation infrastructure inspection. We utilize pose graph optimization to refine the synchronization and reconstruct the 3D structure by fusing all the data. We test our method in physical experiments, and the results show that it fuses the three sensor modalities (camera, GPR, and wheel encoder), produces a metric 3D reconstruction, and reduces the end-to-end distance error from 7.45 cm to 3.10 cm. In addition, we design a tunable sparse optimization solver that can trade a slight decrease in accuracy for a significant speed improvement in pose graph optimization for visual simultaneous localization and mapping. The solver targets devices with significant computation and power constraints, such as mobile phones and tablets. We propose a graph pruning strategy that exploits the objective function structure to reduce the size of the optimization problem. We also apply a modified Cholesky factorization to accelerate the computation: we reuse the decomposition result from the previous iteration via Cholesky update/downdate to avoid repeated computation. We have implemented our solver and tested it on open-source data. The experimental results show that our solver can be twice as fast as its counterpart while losing less than 5% in accuracy.
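The Cholesky update/downdate idea can be sketched as follows. This is a minimal numpy implementation of the standard rank-one update, assuming a single rank-one modification w wᵀ of the system matrix; the dissertation's actual modified factorization and solver internals are not reproduced here.

```python
import numpy as np

def chol_update(L, w):
    """Rank-one Cholesky update.

    Given lower-triangular L with A = L @ L.T, return L' such that
    L' @ L'.T = A + w @ w.T, in O(n^2) operations instead of
    refactoring from scratch in O(n^3). A downdate (A - w w^T) is
    analogous, with subtraction in place of addition.
    """
    L, w = L.copy(), np.asarray(w, dtype=float).copy()
    n = w.size
    for k in range(n):
        r = np.hypot(L[k, k], w[k])            # updated diagonal entry
        c, s = r / L[k, k], w[k] / L[k, k]     # Givens-like rotation
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * w[k + 1:]) / c
            w[k + 1:] = c * w[k + 1:] - s * L[k + 1:, k]
    return L
```

Reusing the previous factorization in this way is what lets an iterative solver skip a full refactorization when only a low-rank portion of the normal-equation matrix changes between iterations.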
Chou, Chieh (2019). Sensor Fusion for Robotic Surface and Subsurface Infrastructure Inspection. Doctoral dissertation, Texas A&M University. Available electronically from