Sensors and Data: Representation to Semantics to Deep Features
Abstract
The field of robotics has expanded rapidly in recent years and has found its way into various sectors, including manufacturing, healthcare, and transportation. This growth is largely attributed to advances in deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which have made it possible to create intelligent robots capable of performing complex tasks independently. With the ability to interact with the environment and learn from experience, robots can now tackle previously daunting real-world problems such as object recognition, natural language processing, and motion planning, while also adapting to changing conditions and optimizing their performance.
This dissertation places significant emphasis on the role of perception sensors in enabling robots to understand and engage with their environment, particularly in off-road terrain. The research investigates several types of sensors, including cameras, LIDAR, and radar, and how they provide insights into a robot's surroundings. Additionally, the study explores different levels of sensor data representation, ranging from raw data to semantic information and deep features. The dissertation introduces an off-road dataset for semantic segmentation that includes semantic labels for raw data and proposes a benchmark for point cloud and image semantic segmentation. It also delves into semantic segmentation problems for different types of sensors, including camera images, LIDAR point clouds, and raw radar. The research proposes a semantic segmentation framework for off-road image segmentation, a technique to transfer labels from one LIDAR point cloud dataset to another, and a pipeline to transfer LIDAR semantic segmentation labels to radar data. The study also proposes a technique to learn cross-modal deep features using contrastive learning. Finally, the research employs higher-level information, such as semantic information and deep features, to address multi-modal extrinsic calibration for camera-LIDAR and LIDAR-radar pairs. The expected outcome of this research is improved autonomous navigation in off-road environments, and the dissertation provides new resources and avenues for further research.
Citation
Jiang, Peng (2023). Sensors and Data: Representation to Semantics to Deep Features. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/198958.