
dc.contributor.advisor: Valasek, John
dc.creator: Kirkpatrick, Kenton
dc.date.accessioned: 2013-10-03T15:01:39Z
dc.date.available: 2013-10-03T15:01:39Z
dc.date.created: 2013-05
dc.date.issued: 2013-04-30
dc.date.submitted: May 2013
dc.identifier.uri: https://hdl.handle.net/1969.1/149493
dc.description.abstract: Reinforcement Learning has received considerable attention over the years for systems ranging from static game playing to dynamic system control. Using Reinforcement Learning for control of dynamical systems provides the benefit of learning a control policy without needing a model of the dynamics. This opens the possibility of controlling systems for which the dynamics are unknown, but Reinforcement Learning methods like Q-learning do not explicitly account for time. In dynamical systems, time-dependent characteristics can have a significant effect on the control of the system, so it is necessary to account for a system’s time dynamics without relying on a predetermined model of the system. In this dissertation, algorithms are investigated for expanding the Q-learning algorithm to account for the learning of sampling rates and dynamics approximations. For determining a proper sampling rate, it is desired to find the largest sample time that still allows the learning agent to control the system to goal achievement. An algorithm called Sampled-Data Q-learning is introduced for determining both this sample time and the control policy associated with that sampling rate. Results show that the algorithm is capable of achieving a desired sampling rate that allows for system control while not sampling “as fast as possible”. Determining an approximation of an agent’s dynamics can be beneficial for the control of hierarchical multiagent systems by allowing a high-level supervisor to use the dynamics approximations for task allocation decisions. To this end, algorithms are investigated for learning first- and second-order dynamics approximations, respectively called First-Order Dynamics Learning and Second-Order Dynamics Learning. The dynamics learning algorithms are evaluated on several examples that demonstrate their capability to learn accurate approximations of state dynamics. All of these algorithms are then evaluated on hierarchical multiagent systems for determining task allocation. The results show that the algorithms successfully determine appropriate sample times and accurate dynamics approximations for the agents investigated.
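The abstract describes Sampled-Data Q-learning as finding the largest sample time that still lets the learning agent drive the system to the goal. The Python sketch below is a rough illustration of that idea only, not the dissertation's algorithm: it wraps plain tabular Q-learning in a search from the slowest candidate sampling rate downward. The toy chain environment, reward values, and all parameter choices are invented assumptions for the example.

import random

def make_chain_env(n_states, goal):
    # Hypothetical toy environment: a 1-D chain in which a larger sample
    # time dt moves the agent further per decision, a crude stand-in for
    # a dynamical system sampled more slowly.
    def step(s, a, dt):
        move = dt if a == 1 else -dt               # action 1 = right, 0 = left
        s2 = max(0, min(n_states - 1, s + move))
        done = (s2 == goal)
        return s2, (1.0 if done else -0.01), done  # small step cost, goal reward
    return step

def q_learning(step, n_states, n_actions, dt,
               episodes=300, alpha=0.1, gamma=0.95, eps=0.1, horizon=100):
    # Plain tabular Q-learning at a fixed sample time dt.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if random.random() < eps:
                a = random.randrange(n_actions)                    # explore
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])   # exploit
            s2, r, done = step(s, a, dt)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

def reaches_goal(Q, step, dt, horizon=100):
    # Greedy rollout: does the learned policy actually reach the goal?
    s = 0
    for _ in range(horizon):
        a = max(range(len(Q[s])), key=lambda i: Q[s][i])
        s, _, done = step(s, a, dt)
        if done:
            return True
    return False

def largest_workable_dt(candidate_dts, n_states=20, goal=10):
    # Try the slowest sampling rate first; keep the largest dt whose
    # learned policy still achieves the goal.
    step = make_chain_env(n_states, goal)
    for dt in sorted(candidate_dts, reverse=True):
        Q = q_learning(step, n_states, n_actions=2, dt=dt)
        if reaches_goal(Q, step, dt):
            return dt
    return None

print(largest_workable_dt([1, 2, 4, 8]))   # expected: 2 for this toy setup

A greedy rollout of the learned table serves as the goal-achievement check, so the search returns the largest sample time whose policy still reaches the goal, rather than defaulting to sampling as fast as possible.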
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Reinforcement Learning
dc.subject: Q-learning
dc.subject: Control
dc.subject: Dynamics
dc.subject: Multiagent
dc.subject: Machine Learning
dc.subject: Artificial Intelligence
dc.subject: Sampling
dc.subject: Sampled-Data Systems
dc.subject: System Identification
dc.title: Reinforcement Learning Control with Approximation of Time-Dependent Agent Dynamics
dc.type: Thesis
thesis.degree.department: Aerospace Engineering
thesis.degree.discipline: Aerospace Engineering
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Bhattacharya, Raktim
dc.contributor.committeeMember: Chakravorty, Suman
dc.contributor.committeeMember: Ioerger, Thomas
dc.type.material: text
dc.date.updated: 2013-10-03T15:01:39Z

