Architectures and Training Algorithms of Deep Spiking Neural Networks
Abstract
The spiking neural network (SNN), an emerging brain-inspired computing paradigm, is positioned to enable spatio-temporal information processing and ultra-low-power, event-driven neuromorphic hardware. Existing SNN architectures and their corresponding training algorithms suffer from major limitations in learning performance and efficiency. By leveraging the architectural characteristics of SNNs and tackling their training difficulties, this dissertation explores various SNN architectures (feedforward and recurrent) and the corresponding training rules (numerical and bio-inspired algorithms), proposing a comprehensive set of solutions for high-performance, energy-efficient (deep) spiking neural networks.
The feedforward topology is a straightforward structure for exploiting the computational power of spiking neural networks. However, training feedforward SNNs to a performance level on par with deep models is very challenging: the existing SNN error backpropagation methods are limited by poor scalability, improper handling of spiking discontinuities, and/or a mismatch between the rate-coded loss function and the computed gradient. We present a hybrid macro/micro-level backpropagation (HM2-BP) algorithm for training multi-layer SNNs, achieving accuracies of 99.49% on MNIST and 98.88% on neuromorphic MNIST (N-MNIST), outperforming the existing SNN BP algorithms.
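As background for the spiking-discontinuity problem, the following is a minimal NumPy sketch of a leaky integrate-and-fire (LIF) layer. It is an illustrative toy, not the dissertation's HM2-BP implementation, and all names and parameter values (v_th, tau, the input statistics) are assumptions. The hard threshold inside the loop is the non-differentiable spike function that makes naive error backpropagation ill-defined in SNNs.

import numpy as np

def lif_forward(inputs, w, v_th=1.0, tau=0.9):
    # inputs: (T, n_in) binary spike trains; w: (n_in, n_out) synaptic weights
    T = inputs.shape[0]
    n_out = w.shape[1]
    v = np.zeros(n_out)                        # membrane potentials
    spikes = np.zeros((T, n_out))
    for t in range(T):
        v = tau * v + inputs[t] @ w            # leaky integration of weighted input spikes
        spikes[t] = (v >= v_th).astype(float)  # hard threshold: the spiking discontinuity
        v = np.where(spikes[t] > 0, 0.0, v)    # reset the neurons that fired
    return spikes

# Toy usage: 100 time steps, 784 input channels, 10 output neurons.
rng = np.random.default_rng(0)
x = (rng.random((100, 784)) < 0.02).astype(float)   # Bernoulli-sampled input spike trains
w = 0.05 * rng.standard_normal((784, 10))
print("output firing rates:", lif_forward(x, w).mean(axis=0))

Rate-coded approaches define the loss on firing rates like those returned above, while the gradient must still account for the microscopic spike timing inside the loop; this is the macro/micro decomposition that HM2-BP addresses.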
The recurrent spiking reservoir computing model, or liquid state machine (LSM), more closely matches the recurrent wiring structure of the biological brain and constitutes a powerful bio-inspired computing paradigm. The LSM exploits the computational power of recurrent spiking neural networks by combining a randomly generated reservoir with a trainable readout layer. To realize adaptive LSMs and thus boost learning performance, we propose a novel, biologically plausible Activity-based Probabilistic Spike-Timing-Dependent Plasticity (APSTDP) mechanism for recurrent reservoir tuning; we then propose a hardware-optimized STDP mechanism to enable efficient on-chip learning. We demonstrate that the proposed approaches boost learning performance by up to 2.7% while reducing energy dissipation by up to 25%. Furthermore, we present a unifying, biologically inspired calcium-modulated supervised STDP approach for training and sparsifying readout synapses; it outperforms a competitive spike-dependent training algorithm by up to 2.7% and prunes up to 30% of readout synapses without significant performance degradation.
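For concreteness, below is a minimal NumPy sketch of the classical pair-based (trace-driven) STDP rule that plasticity mechanisms of this kind extend. The activity-based probabilistic gating of APSTDP and the calcium modulation of the supervised readout rule are deliberately omitted, and all parameter values (a_plus, a_minus, tau, w_max) are illustrative assumptions.

import numpy as np

def stdp_step(w, x_pre, x_post, s_pre, s_post,
              a_plus=0.01, a_minus=0.012, tau=0.95, w_max=1.0):
    # s_pre (n_pre,) and s_post (n_post,) are binary spike vectors at this time step;
    # x_pre and x_post are exponentially decaying spike traces of the same shapes.
    x_pre = tau * x_pre + s_pre
    x_post = tau * x_post + s_post
    w += a_plus * np.outer(x_pre, s_post)    # potentiation: pre fired shortly before post
    w -= a_minus * np.outer(s_pre, x_post)   # depression: post fired shortly before pre
    np.clip(w, 0.0, w_max, out=w)            # keep weights in a bounded, hardware-friendly range
    return w, x_pre, x_post

# Toy usage: 50 pre- and 20 postsynaptic neurons driven by random spikes.
rng = np.random.default_rng(1)
w = rng.random((50, 20)) * 0.5
x_pre, x_post = np.zeros(50), np.zeros(20)
for _ in range(100):
    s_pre = (rng.random(50) < 0.05).astype(float)
    s_post = (rng.random(20) < 0.05).astype(float)
    w, x_pre, x_post = stdp_step(w, x_pre, x_post, s_pre, s_post)

Because the update depends only on local spike events and traces, rules of this family map naturally onto event-driven neuromorphic hardware, which is the motivation for the hardware-optimized variant above.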
Subject
Architecture
Deep Spiking Neural Network
Training Algorithms
Recurrent Spiking Neural Network
Citation
Jin, Yingyezhe (2018). Architectures and Training Algorithms of Deep Spiking Neural Networks. Doctoral dissertation, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/173988.