The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period, even for Texas A&M users with NetID.
ONCHIP TRAINING OF SPIKING NEURAL ACCELERATORS USING SPIKE-TRAIN LEVEL DIRECT FEEDBACK ALIGNMENT
Spiking Neural Networks (SNNs) have attracted wide research interest in recent years and present a promising computing model. Key properties, including biologically plausible information processing and event-driven, sample-based learning, make SNNs well suited to ultra-low-power neuromorphic hardware implementation. However, training SNNs to match the performance of conventional deep artificial neural networks (ANNs), particularly networks trained with the error backpropagation (BP) algorithm, remains a significant challenge due to the inherently complex dynamics and non-differentiable spike activities of spiking neurons. To address this problem, this thesis presents the first study on realizing competitive spike-train level BP-like algorithms to enable on-chip BP training of SNNs. The proposed algorithm, called spike-train level direct feedback alignment (ST-DFA), offers lower computational complexity and training latency than traditional BP methods. Furthermore, algorithm and hardware co-optimization as well as efficient online neural signal computation are explored for the on-chip implementation of ST-DFA. To evaluate the proposed algorithm, the final online version of ST-DFA is tested on the Xilinx ZC706 FPGA board, where it shows excellent performance-versus-overhead tradeoffs on real-world speech and image classification applications. SNN neural processors with on-chip ST-DFA training achieve competitive classification accuracies of 97.23% on the MNIST dataset with 4X input resolution reduction and 87.40% on the challenging 16-speaker TI46 speech corpus, respectively. These experimental results are then compared against a hardware implementation of the state-of-the-art BP algorithm HM2-BP.
While trading off classification performance very gracefully, the proposed online ST-DFA training design reduces functional resources by 76.7% and backward training latency by 31.6%, dramatically cutting the resource and power demands of hardware implementation.
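For readers unfamiliar with feedback alignment, the following is a minimal sketch of the general direct feedback alignment (DFA) idea that underlies ST-DFA: each hidden layer receives the output error through a fixed random feedback matrix instead of through the transposed forward weights, so no symmetric backward pass through the network is needed. This is a hypothetical, non-spiking toy example on a conventional network; it is not the thesis's spike-train level formulation, and all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR with a single hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # trained forward weights
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
B1 = rng.normal(0.0, 1.0, (1, 8))                     # fixed random feedback matrix (never trained)

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network output

h, out = forward(X)
init_loss = float(np.mean((out - y) ** 2))

lr = 0.1
for _ in range(5000):
    h, out = forward(X)
    e = out - y                        # output error
    # DFA step: project the output error directly to the hidden layer
    # through the fixed random matrix B1, rather than through W2.T
    # as standard backpropagation would.
    dh = (e @ B1) * h * (1.0 - h)
    W2 -= lr * h.T @ e;  b2 -= lr * e.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
```

Because the feedback path is a fixed random projection rather than the transpose of the forward weights, the backward pass needs no weight transport, which is what makes DFA-style training attractive for compact on-chip hardware.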
Zhang, Renqian (2019). ONCHIP TRAINING OF SPIKING NEURAL ACCELERATORS USING SPIKE-TRAIN LEVEL DIRECT FEEDBACK ALIGNMENT. Master's thesis, Texas A&M University. Available electronically from