
dc.contributor.advisor: Pagilla, Prabhakar Reddy
dc.contributor.advisor: Darbha, Swaroop
dc.creator: Hu, Jie
dc.date.accessioned: 2023-05-26T18:15:37Z
dc.date.available: 2024-08-01T05:58:57Z
dc.date.created: 2022-08
dc.date.issued: 2022-07-26
dc.date.submitted: August 2022
dc.identifier.uri: https://hdl.handle.net/1969.1/198121
dc.description.abstract: When planning paths for robotic tasks that involve interaction with an object, a key piece of information is the location of the object within the robot workspace. The process of obtaining the object location (both position and orientation) is referred to as workpiece localization in manufacturing or, more generally, object pose estimation. The object pose estimation process typically consists of two steps: data collection and pose estimation. Each step can be formulated and solved differently, or even separately, depending on the underlying process, the associated assumptions, and the work environment of the robot. In this work, an active robot perception framework that includes both data collection and pose estimation is proposed. The framework includes novel ways to (1) improve the accuracy of the estimated pose by collecting informative data and (2) plan subsequent sensor views automatically based on previously collected data. The data used in this work are point clouds, and it is assumed that a 3D Computer-Aided Design (CAD) model of the target object is available. The object pose is estimated by registering the measured point clouds against point clouds sampled from the CAD model.

The proposed active robot perception framework includes two main elements: view planning and pose estimation. View planning in this work includes generating and selecting sensor views and determining robot poses. Two sets of methods have been developed under the proposed framework. First, objects are assumed to have planar features, which are utilized for pose estimation, and a plane-based point cloud registration method has been developed. Informative sensor view directions are defined based on the current pose estimate, and regions around these directions are discretized into voxels. Sensor view candidates are defined for the voxels and further down-selected based on the kinematic feasibility of the robot reaching those views. A view gain is proposed to select the next-best view from the candidates. Simulations and experiments are conducted to evaluate the effectiveness of the pose estimation and the view planning separately.

The second set of methods is agnostic to object geometry. Point cloud analysis in terms of quality and quantity is proposed to generate sensor view candidates; the goal is to improve the accuracy of the estimated pose by improving the quality and quantity of the collected point clouds. Techniques from combinatorial optimization are used to determine the sensor views, and constrained nonlinear optimization is employed to compute the robot poses corresponding to those views. Experiments are conducted to evaluate the effectiveness of each component. The proposed methods are further compared with methods for reconstruction, and the results reveal the differences between data collection for reconstruction and data collection for pose estimation. Generating sensor views based on measured data is shown to have the following benefits: (1) view planning is less dependent on human experience; (2) sensor views can be generated efficiently and informatively for tasks with high variance; and (3) selecting sensor views offline to collect point clouds is avoided.
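The registration step described in the abstract (aligning measured point clouds with point clouds sampled from the CAD model) can be illustrated with a minimal point-to-point ICP sketch. This is a generic textbook stand-in, not the plane-based registration method the thesis proposes, and all names in it are illustrative:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(measured, model, iters=50, tol=1e-10):
    """Basic point-to-point ICP aligning a measured cloud to a model cloud.

    Returns (R, t) such that measured @ R.T + t approximates model.
    """
    src = measured.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, for clarity only).
        d = np.linalg.norm(src[:, None, :] - model[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_fit_transform(src, model[nn])
        src = src @ R.T + t
        # Compose the incremental transform with the accumulated one.
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d[np.arange(len(src)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In the thesis's setting, `model` would be sampled from the CAD model and `measured` assembled from the sensor views chosen by the view planner; a practical implementation would use a KD-tree for the correspondence search rather than the brute-force distance matrix shown here.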
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: active perception
dc.subject: robot perception
dc.subject: object pose estimation
dc.subject: point cloud registration
dc.subject: view planning
dc.title: Active Robot Perception for Object Pose Estimation
dc.type: Thesis
thesis.degree.department: Mechanical Engineering
thesis.degree.discipline: Mechanical Engineering
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Kim, Won-jong
dc.contributor.committeeMember: Song, Xingyong
dc.type.material: text
dc.date.updated: 2023-05-26T18:15:37Z
local.embargo.terms: 2024-08-01
local.etdauthor.orcid: 0000-0002-3944-4798

