Show simple item record

dc.contributor.advisor: Li, Peng
dc.contributor.advisor: Gratz, Paul
dc.creator: Mahadevuni, Amarnath
dc.date.accessioned: 2019-01-17T16:14:58Z
dc.date.available: 2020-05-01T06:23:19Z
dc.date.created: 2018-05
dc.date.issued: 2018-03-13
dc.date.submitted: May 2018
dc.identifier.uri: https://hdl.handle.net/1969.1/173294
dc.description.abstract: Autonomous navigation of mobile robots is of great interest in mobile robotics. Algorithms such as simultaneous localization and mapping (SLAM) and artificial potential field methods can be applied to known and mapped environments. However, navigating in an unknown and unmapped environment is still a challenge. In this research, we propose an algorithm for navigating a mobile robot to a predefined target location in near-shortest time in an unknown environment containing obstacles. The algorithm is based on a reinforcement learning paradigm with biologically realistic spiking neural networks. We make use of eligibility traces, which are inherent to spiking neural networks, to solve the delayed reward problem implicit in reinforcement learning. With this algorithm, the mobile robot learns a sequence of movement decisions that brings it to the target in near-shortest time. [en]
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Autonomous navigation [en]
dc.subject: Spiking Neural Networks [en]
dc.title: Autonomous Navigation Using Reinforcement Learning with Spiking Neural Networks [en]
dc.type: Thesis [en]
thesis.degree.department: Electrical and Computer Engineering [en]
thesis.degree.discipline: Computer Engineering [en]
thesis.degree.grantor: Texas A & M University [en]
thesis.degree.name: Master of Science [en]
thesis.degree.level: Masters [en]
dc.contributor.committeeMember: Talebpour, Alireza
dc.type.material: text [en]
dc.date.updated: 2019-01-17T16:14:58Z
local.embargo.terms: 2020-05-01
local.etdauthor.orcid: 0000-0002-9338-9419
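
The abstract above describes using eligibility traces to bridge the gap between an action and a reward that arrives later. The following is a minimal, hypothetical Python sketch of that general idea (reward-modulated plasticity gated by a decaying per-synapse trace); the neuron counts, time constant, and learning rate are illustrative assumptions and the code is not taken from, nor a reconstruction of, the thesis itself.

    import numpy as np

    # Minimal sketch of reward-modulated plasticity with an eligibility trace.
    # All names and values here (N_PRE, N_POST, tau_e, eta) are illustrative
    # assumptions, not the thesis's actual parameters or implementation.

    rng = np.random.default_rng(0)
    N_PRE, N_POST = 8, 4                         # assumed pre-/post-synaptic neuron counts
    w = rng.normal(0.0, 0.1, (N_PRE, N_POST))    # synaptic weights
    e = np.zeros_like(w)                         # eligibility trace per synapse
    tau_e, eta = 20.0, 0.01                      # trace time constant (steps), learning rate

    def step(pre_spikes, post_spikes, reward, dt=1.0):
        """One step: decay the trace, tag co-active synapses, and let the
        (possibly delayed) reward update whatever is still traced."""
        global w, e
        e *= np.exp(-dt / tau_e)                 # exponential decay of the trace
        e += np.outer(pre_spikes, post_spikes)   # mark recently co-active pre/post pairs
        w += eta * reward * e                    # reward gates the weight change

    # Toy run: correlated spiking now, reward only several steps later.
    pre = (rng.random(N_PRE) < 0.3).astype(float)
    post = (rng.random(N_POST) < 0.3).astype(float)
    step(pre, post, reward=0.0)                  # no reward yet; trace is set
    for _ in range(5):
        step(np.zeros(N_PRE), np.zeros(N_POST), 0.0)   # delay: trace decays
    step(np.zeros(N_PRE), np.zeros(N_POST), 1.0)       # late reward still credits traced synapses

In this toy run the reward arrives well after the correlated spiking, yet the decayed trace still assigns credit to the synapses that were active, which is the sense in which eligibility traces address the delayed reward problem mentioned in the abstract.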

