Show simple item record

dc.contributor.advisor  Valasek, John
dc.creator  Guimaraes Goecks, Vinicius
dc.date.accessioned  2020-12-17T22:04:32Z
dc.date.available  2022-05-01T07:12:46Z
dc.date.created  2020-05
dc.date.issued  2020-03-17
dc.date.submitted  May 2020
dc.identifier.uri  https://hdl.handle.net/1969.1/191655
dc.description.abstract  Recent successes have combined reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This dissertation investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. The theoretical foundation of this dissertation builds upon the actor-critic reinforcement learning architecture, the use of function approximation to represent action- and value-based functions, and the integration of different human interaction modalities, namely task demonstration, intervention, and evaluation, into these functions and into reward signals. This novel theoretical foundation is called Cycle-of-Learning, a reference to how the different human interaction modalities are cycled and combined with reinforcement learning algorithms. The approach is validated on an Unmanned Air System (UAS) collision-avoidance and landing scenario using a high-fidelity simulated environment, as well as on several continuous control tasks standardized to benchmark reinforcement learning algorithms. Results presented in this dissertation show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms when compared to traditional handcrafted or binary reward signals returned by the environment.
Results also show that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, the Cycle-of-Learning provides an effective transition from policies learned using human demonstrations and interventions to reinforcement learning: it learns faster and uses fewer interactions with the environment when compared to state-of-the-art algorithms. The theoretical foundation developed in this dissertation opens new research paths toward human-agent teaming scenarios where autonomous agents are able to learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
dc.format.mimetype  application/pdf
dc.language.iso  en
dc.subject  Human-in-the-loop Learning
dc.subject  Reinforcement Learning
dc.subject  Deep Learning
dc.subject  Machine Learning
dc.subject  Learning
dc.subject  Human-Robot Interaction
dc.subject  Human-Machine Interaction
dc.subject  Autonomous Systems
dc.subject  Robotics
dc.subject  Unmanned Air Systems
dc.subject  Unmanned Air Vehicles
dc.title  Human-in-the-loop Methods for Data-driven and Reinforcement Learning Systems
dc.type  Thesis
thesis.degree.department  Aerospace Engineering
thesis.degree.discipline  Aerospace Engineering
thesis.degree.grantor  Texas A&M University
thesis.degree.name  Doctor of Philosophy
thesis.degree.level  Doctoral
dc.contributor.committeeMember  Chamitoff, Gregory
dc.contributor.committeeMember  Selva, Daniel
dc.contributor.committeeMember  Shell, Dylan
dc.type.material  text
dc.date.updated  2020-12-17T22:04:33Z
local.embargo.terms  2022-05-01
local.etdauthor.orcid  0000-0002-4481-671X
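The abstract above describes cycling three human interaction modalities (demonstration, intervention, evaluation) into a reinforcement learning loop. As a rough illustration only, the toy sketch below shows that cycle on a hypothetical 1-D reach-the-goal task: a tabular policy is first cloned from demonstrations, a simulated human then overrides unsafe actions, and a stand-in learned-reward function reinforces good actions. Every name, the task, and the update rule here are illustrative assumptions, not the dissertation's actual Cycle-of-Learning implementation.

```python
# Illustrative sketch only -- NOT the dissertation's implementation.
# Toy 1-D task: move from position 0 to position GOAL by choosing
# action 1 (step forward) or action 0 (stay).
import random

random.seed(0)
GOAL = 10

def human_demo(pos):
    """Human demonstrator: always step toward the goal."""
    return 1 if pos < GOAL else 0

def learned_reward(pos, action):
    """Stand-in for a reward model fit to human interaction data."""
    return 1.0 if (pos < GOAL and action == 1) else -1.0

# Modality 1 (demonstration): behavior-clone the demos into a tabular policy.
policy = {pos: human_demo(pos) for pos in range(GOAL + 1)}

# Modalities 2 and 3 (intervention, evaluation): roll out the policy,
# let a simulated human override stalls, and reinforce rewarded actions.
pos, steps = 0, 0
while pos < GOAL and steps < 100:
    action = policy.get(pos, random.choice([0, 1]))
    if pos < GOAL and action == 0:       # human intervention on a stall
        action = 1
        policy[pos] = 1                  # fold the correction back into the policy
    if learned_reward(pos, action) > 0:  # greedy update from the learned reward
        policy[pos] = action
    pos += action
    steps += 1

print(steps)  # reaches the goal in 10 steps
```

The point of the sketch is only the data flow: corrections from demonstrations and interventions feed the same policy that the learned reward later refines, which is the cycling the abstract refers to.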

