Show simple item record

dc.contributor.advisor: Li, Peng
dc.creator: Zhang, Renqian
dc.date.accessioned: 2019-11-25T23:08:58Z
dc.date.available: 2021-08-01T07:36:07Z
dc.date.created: 2019-08
dc.date.issued: 2019-07-09
dc.date.submitted: August 2019
dc.identifier.uri: https://hdl.handle.net/1969.1/186586
dc.description.abstract: Spiking Neural Networks (SNNs) have been widely researched in recent years and present a promising computing model. Several key properties, including biologically plausible information processing and event-driven sample learning, make SNNs well suited for ultra-low-power neuromorphic hardware implementation. However, matching the training performance of conventional deep artificial neural networks (ANNs), especially networks trained with the error backpropagation (BP) algorithm, remains a significant challenge for SNNs, owing to the inherently complex dynamics and non-differentiable spike activities of spiking neurons. To address this problem, this thesis presents the first study on realizing competitive spike-train level BP-like algorithms to enable on-chip BP training of SNNs. The proposed algorithm, called spike-train level direct feedback alignment (ST-DFA), offers lower computational complexity and training latency than traditional BP methods. Furthermore, algorithm/hardware co-optimization and efficient online neural signal computation are explored for on-chip implementation of ST-DFA. To evaluate the proposed algorithm, the final online version of ST-DFA was tested on the Xilinx ZC706 FPGA board. On real-world speech and image classification applications, it demonstrates excellent performance-versus-overhead tradeoffs. SNN neural processors with on-chip ST-DFA training achieve competitive classification accuracies of 97.23% on the MNIST dataset with 4X input resolution reduction and 87.40% on the challenging 16-speaker TI46 speech corpus, respectively. These results are then compared against a hardware implementation of the state-of-the-art BP algorithm HM2-BP. While trading off classification performance very gracefully, the proposed online ST-DFA training design reduces functional resources by 76.7% and backward training latency by 31.6%, dramatically cutting the resource and power demand of hardware implementation.
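The core idea behind direct feedback alignment (DFA), on which ST-DFA builds, can be illustrated with a minimal non-spiking sketch: instead of backpropagating the output error through the transposed forward weights, each hidden layer receives the error through a fixed random feedback matrix, which removes the backward weight-transport path that is costly in hardware. The network sizes, learning rate, and variable names below are illustrative assumptions, not taken from the thesis, and this rate-based example omits the spike-train level machinery of ST-DFA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network trained with plain direct feedback alignment (DFA).
# All dimensions and the learning rate are illustrative choices.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output weights
B1 = rng.normal(0, 0.5, (n_hid, n_out))  # FIXED random feedback matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dfa_step(x, y, lr=0.1):
    """One DFA update on a single (input, one-hot target) pair."""
    global W1, W2
    h = sigmoid(W1 @ x)    # hidden activation
    out = sigmoid(W2 @ h)  # network output
    e = out - y            # output error (MSE loss gradient w.r.t. out)
    # Output layer: identical to ordinary backprop.
    W2 -= lr * np.outer(e * out * (1 - out), h)
    # Hidden layer: the error arrives through fixed random B1, not W2.T,
    # so no transposed forward weights are needed in the backward pass.
    dh = (B1 @ e) * h * (1 - h)
    W1 -= lr * np.outer(dh, x)
    return float(0.5 * np.sum(e ** 2))

x = rng.random(n_in)
y = np.eye(n_out)[1]
losses = [dfa_step(x, y) for _ in range(200)]
print(losses[0], losses[-1])  # loss shrinks over the 200 updates
```

Because the backward path uses a fixed random matrix rather than the transposed forward weights, the feedback hardware can be static and layer-local, which is the property the thesis exploits for its on-chip training design.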
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: SNN
dc.subject: back propagation
dc.subject: FPGA
dc.title: ONCHIP TRAINING OF SPIKING NEURAL ACCELERATORS USING SPIKE-TRAIN LEVEL DIRECT FEEDBACK ALIGNMENT
dc.type: Thesis
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Master of Science
thesis.degree.level: Masters
dc.contributor.committeeMember: Harris, Harlan Rusty
dc.contributor.committeeMember: Walker, Duncan M
dc.contributor.committeeMember: Kalafatis, Stavros
dc.type.material: text
dc.date.updated: 2019-11-25T23:08:58Z
local.embargo.terms: 2021-08-01
local.etdauthor.orcid: 0000-0002-5450-4453

