Human-in-the-loop Methods for Data-driven and Reinforcement Learning Systems
Abstract
Recent successes combine reinforcement learning algorithms with deep neural networks, yet reinforcement learning is still not widely applied to robotics and real-world scenarios. This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training. Conversely, in real-world scenarios and after just a few data samples, humans are able to either provide demonstrations of the task, intervene to prevent catastrophic actions, or simply evaluate whether the policy is performing correctly. This dissertation investigates how to integrate these human interaction modalities into the reinforcement learning loop, increasing sample efficiency and enabling real-time reinforcement learning in robotics and real-world scenarios. The theoretical foundation of this dissertation builds upon the actor-critic reinforcement learning architecture, the use of function approximation to represent action- and value-based functions, and the integration of different human interaction modalities, namely task demonstration, intervention, and evaluation, into these functions and into reward signals. This novel theoretical foundation is called the Cycle-of-Learning, a reference to how different human interaction modalities are cycled through and combined with reinforcement learning algorithms. This approach is validated on an Unmanned Air System (UAS) collision-avoidance and landing scenario using a high-fidelity simulated environment, as well as on several continuous control tasks standardized for benchmarking reinforcement learning algorithms. Results presented in this dissertation show that a reward signal learned from human interaction accelerates the rate of learning of reinforcement learning algorithms, compared to traditional handcrafted or binary reward signals returned by the environment.
Results also show that learning from a combination of human demonstrations and interventions is faster and more sample-efficient than traditional supervised learning algorithms. Finally, the Cycle-of-Learning provides an effective transition from policies learned using human demonstrations and interventions to reinforcement learning: it learns faster and uses fewer interactions with the environment than state-of-the-art algorithms. The theoretical foundation developed in this dissertation opens new research paths for human-agent teaming scenarios in which autonomous agents learn from human teammates and adapt to mission performance metrics in real time and in real-world scenarios.
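The abstract's central idea, an actor-critic learner driven by a reward signal fit to human feedback rather than a handcrafted one, can be illustrated with a minimal sketch. Everything below (the toy 1-D environment, the linear function approximators, the stand-in human_evaluation function, and all hyperparameters) is an illustrative assumption, not the dissertation's actual Cycle-of-Learning implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a human's scalar evaluation of a state: here, the
# "human" prefers states near the origin. Purely illustrative.
def human_evaluation(state):
    return -abs(state)

# Simple feature vector for linear function approximation.
def features(state):
    return np.array([1.0, state, state * state])

# Step 1: fit a reward model to human evaluations by stochastic
# gradient descent (the "learned reward" the abstract refers to).
w_reward = np.zeros(3)
for _ in range(500):
    s = rng.uniform(-2.0, 2.0)
    err = human_evaluation(s) - w_reward @ features(s)
    w_reward += 0.05 * err * features(s)

def learned_reward(state):
    return w_reward @ features(state)

# Step 2: actor-critic loop using the learned reward instead of an
# environment-supplied one. Critic: linear value function. Actor:
# Gaussian policy with mean theta * state.
w_value = np.zeros(3)
theta = 0.0
gamma, alpha_v, alpha_pi, sigma = 0.95, 0.05, 0.005, 0.3

s = rng.uniform(-2.0, 2.0)
for _ in range(3000):
    a = theta * s + sigma * rng.normal()      # sample an action
    s_next = float(np.clip(s + a, -2.0, 2.0)) # trivial toy dynamics
    r = learned_reward(s_next)                # reward from learned model
    td = r + gamma * (w_value @ features(s_next)) - w_value @ features(s)
    w_value += alpha_v * td * features(s)     # critic: TD(0) update
    # actor: policy-gradient step, grad log N(a; theta*s, sigma^2) wrt theta
    theta += alpha_pi * td * (a - theta * s) / sigma**2 * s
    s = s_next
```

The key design point the abstract argues for is visible in the loop: the scalar `r` comes from a model trained on human feedback, so the same actor-critic machinery runs unchanged whether the reward is handcrafted, binary, or learned from interaction.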
Subject
Human-in-the-loop Learning
Reinforcement Learning
Deep Learning
Machine Learning
Learning
Human-Robot Interaction
Human-Machine Interaction
Autonomous Systems
Robotics
Unmanned Air Systems
Unmanned Air Vehicles
Citation
Guimaraes Goecks, Vinicius (2020). Human-in-the-loop Methods for Data-driven and Reinforcement Learning Systems. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/191655.
Related items
Showing items related by title, author, creator and subject.
- Peterson, Cheryl (2012-10-19): PlantingScience (PS) is a unique web-based learning system designed to develop secondary students' scientific practices and proficiencies as they engage in hands-on classroom investigations while being mentored online by ...
- Rengarajan, Desik (2023-07-07): Reinforcement learning is a powerful approach for training intelligent agents to make decisions in complex environments. However, these algorithms often struggle when faced with challenging scenarios, such as sparse reward ...
- Hasanzadehmoghimi, Arman (2021-12-03): In this dissertation, we propose novel Bayesian machine learning models to solve various graph analytics problems, including graph representation learning, graph generative modeling, structured semi-supervised learning, ...