The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period, even for Texas A&M users with NetID.
DESIGN AND OPTIMIZATION OF ENERGY EFFICIENT RECURRENT SPIKING NEURAL ACCELERATORS
The liquid state machine (LSM) is a model of recurrent spiking neural networks (SNNs) that provides an appealing brain-inspired computing paradigm for machine-learning applications such as pattern recognition. Moreover, processing information directly on spiking events makes the LSM well suited for cost- and energy-efficient hardware implementation. The LSM is widely regarded as a good trade-off between tapping the computational power of recurrent SNNs and maintaining engineering tractability. This research focuses on building bio-inspired, energy-efficient LSM neural processors that enable intelligent and ubiquitous on-line learning. Hardware and algorithm co-design and co-optimization are explored to achieve high hardware efficiency together with strong performance in the proposed neural processors. The learning models and architectures demonstrated on the presented FPGA LSM neural accelerators also open opportunities for developing energy-efficient spiking neural processors on emerging microsystems such as three-dimensional integrated circuits (3D ICs).

The conventional LSM uses a fixed reservoir to avoid the difficulty of training the recurrent network. In this work, we propose a hardware LSM with a trainable recurrent reservoir to improve its self-adaptability and hence deliver better learning results. The first explored reservoir training scheme is a hardware-friendly spike-timing-dependent plasticity (STDP) algorithm, which is implemented with high hardware efficiency and further optimized by runtime power gating and activity-dependent clock gating to minimize dynamic power consumption. With the sparsity naturally introduced by STDP and the runtime power optimizations, the proposed LSM neural processor boosts learning performance by up to 4.2% while reducing energy dissipation by up to 30.4% compared to a baseline LSM.
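To illustrate the kind of rule being referred to, the following is a minimal pair-based STDP sketch: a synapse is potentiated when the presynaptic neuron fires shortly before the postsynaptic neuron, and depressed in the reverse order. The function name, parameter values, and exponential window here are illustrative assumptions, not the dissertation's exact hardware-friendly rule (which uses quantized, hardware-oriented updates).

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update (illustrative sketch).

    w  : current synaptic weight
    dt : t_post - t_pre in ms; dt > 0 means pre fired before post
    """
    if dt > 0:
        # causal pairing: long-term potentiation
        dw = a_plus * math.exp(-dt / tau)
    else:
        # anti-causal pairing: long-term depression
        dw = -a_minus * math.exp(dt / tau)
    # clamp to the allowed weight range, as a hardware datapath would
    return min(w_max, max(w_min, w + dw))
```

Because updates depress as often as they potentiate and weights saturate at the clamp bounds, many synapses drift toward zero, which is the source of the connection sparsity exploited by the power-gating optimizations described above.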
In the second reservoir training scheme, an efficient on-chip intrinsic plasticity (IP) based algorithm, offering additional bio-inspired learning opportunities, is explored. We enable feasible on-chip integration of IP and further optimize its hardware efficiency through both algorithmic and hardware optimizations. A new hardware-friendly IP rule (SpiKL-IFIP) is proposed, which significantly improves the performance-gain-versus-overhead trade-off of on-chip IP on hardware recurrent spiking neural processors. On the Xilinx ZC706 FPGA board, LSMs with self-adapting reservoir neurons using IP boost classification accuracy by up to 10.33%. Moreover, the highly optimized IP implementation reduces training energy by 48.1% and resource utilization by 64.4% while gracefully trading off classification accuracy for design efficiency.

Furthermore, this work employs supervised STDP readout training with an efficient resource-sharing implementation of the LSM, so that it delivers good classification performance while sparsifying network connections to reduce hardware power consumption. FPGA LSM neural accelerators built on a Xilinx Zynq ZC706 platform and trained for speech recognition with the TI46 speech corpus benchmark achieve up to a 3.47% on-line classification performance boost with high efficiency. Energy-efficient LSM neural processors have also been developed on monolithic three-dimensional (M3D) integrated circuits (ICs) and demonstrate dramatic power-performance-area-accuracy (PPAA) benefits with design and architectural co-optimization.
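Intrinsic plasticity adapts a neuron's own excitability, rather than its synapses, so that its firing activity settles toward a target regime. The sketch below shows one simple homeostatic form of this idea: nudging an integrate-and-fire neuron's firing threshold toward a target firing rate. This is an illustrative assumption for exposition only; the function name and update rule are hypothetical and are not the SpiKL-IFIP rule proposed in the dissertation.

```python
def ip_threshold_update(threshold, fired, target_rate, eta=0.01):
    """Homeostatic threshold adaptation (illustrative sketch).

    threshold   : current firing threshold of the neuron
    fired       : True if the neuron spiked in this time step
    target_rate : desired average firing probability per step (0..1)
    eta         : adaptation step size
    """
    # If the neuron fires more often than the target rate, the threshold
    # rises (making it harder to fire); otherwise it slowly decays.
    spike = 1.0 if fired else 0.0
    return threshold + eta * (spike - target_rate)
```

Averaged over time, the update is zero exactly when the empirical firing rate matches `target_rate`, which is the self-adapting behavior that lets reservoir neurons regulate their own activity on chip.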
Subject: recurrent spiking neural network
hardware neuromorphic processor
Liu, Yu (2019). DESIGN AND OPTIMIZATION OF ENERGY EFFICIENT RECURRENT SPIKING NEURAL ACCELERATORS. Doctoral dissertation, Texas A&M University. Available electronically from