
dc.contributor.advisor    Saripalli, Srikanth
dc.creator    Jiang, Peng
dc.date.accessioned    2023-09-19T18:36:28Z
dc.date.available    2023-09-19T18:36:28Z
dc.date.created    2023-05
dc.date.issued    2023-03-23
dc.date.submitted    May 2023
dc.identifier.uri    https://hdl.handle.net/1969.1/198958
dc.description.abstract    The field of robotics has expanded rapidly in recent years and has found its way into various sectors, including manufacturing, healthcare, and transportation. This growth is largely attributable to advances in deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which have made it possible to create intelligent robots capable of performing complex tasks independently. With the ability to interact with the environment and learn from experience, robots can now tackle previously daunting real-world problems such as object recognition, natural language processing, and motion planning, while also adapting to changing conditions and optimizing their performance. This dissertation places significant emphasis on the role of perception sensors in enabling robots to understand and engage with their environment, particularly in off-road terrain. The research investigates several types of sensors, including cameras, LIDAR, and radar, and how they provide insights into a robot’s surroundings. Additionally, the study explores different levels of sensor data representation, ranging from raw data to semantic information and deep features. The dissertation introduces an off-road dataset for semantic segmentation that includes semantic labels for raw data and proposes a benchmark for point cloud and image semantic segmentation. It also delves into semantic segmentation problems for different types of sensors, including camera images, LIDAR point clouds, and raw radar. The research proposes a semantic segmentation framework for off-road image segmentation, a technique to transfer labels from one LIDAR point cloud dataset to another, and a pipeline to transfer LIDAR semantic segmentation labels to radar data. The study also proposes a technique for learning cross-modal deep features using contrastive learning. Finally, the research employs higher-level information, such as semantic information and deep features, to address multi-modal extrinsic calibration for camera-LIDAR and LIDAR-radar pairs. The expected outcome of this research is improved autonomous navigation in off-road environments, and the dissertation provides new resources and avenues for further research.
dc.format.mimetype    application/pdf
dc.language.iso    en
dc.subject    Sensors
dc.subject    Semantic Segmentation
dc.subject    Deep Learning
dc.subject    Representation Learning
dc.title    Sensors and Data: Representation to Semantics to Deep Features
dc.type    Thesis
thesis.degree.department    Mechanical Engineering
thesis.degree.discipline    Mechanical Engineering
thesis.degree.grantor    Texas A&M University
thesis.degree.name    Doctor of Philosophy
thesis.degree.level    Doctoral
dc.contributor.committeeMember    Shell, Dylan
dc.contributor.committeeMember    Kalathil, Dileep
dc.contributor.committeeMember    Gopalswamy, Swaminathan
dc.type.material    text
dc.date.updated    2023-09-19T18:36:29Z
local.etdauthor.orcid    0000-0002-8349-1743

