Show simple item record

dc.contributor.advisor	Li, Peng
dc.creator	Liu, Yu
dc.date.accessioned	2019-11-25T22:30:00Z
dc.date.available	2021-08-01T07:32:09Z
dc.date.created	2019-08
dc.date.issued	2019-06-27
dc.date.submitted	August 2019
dc.identifier.uri	https://hdl.handle.net/1969.1/186531
dc.description.abstract	The liquid state machine (LSM) is a model of recurrent spiking neural networks (SNNs) that provides an appealing brain-inspired computing paradigm for machine-learning applications such as pattern recognition. Moreover, processing information directly on spiking events makes the LSM well suited for cost- and energy-efficient hardware implementation. The LSM is considered a good trade-off between tapping the computational power of recurrent SNNs and maintaining engineering tractability. This research focuses on building bio-inspired, energy-efficient LSM neural processors that enable intelligent and ubiquitous on-line learning. Hardware/algorithm co-design and co-optimization are explored to achieve high hardware efficiency together with strong learning performance in the proposed neural processors. The learning models and architectures demonstrated on the presented FPGA LSM neural accelerators also open opportunities for developing energy-efficient spiking neural processors on emerging microsystems such as three-dimensional integrated circuits (3D ICs). The conventional LSM adopts a fixed reservoir to avoid the difficulty of training the recurrent network. In this work, we propose a hardware LSM with a trainable recurrent reservoir to improve its self-adaptability and hence deliver better learning results. The first explored reservoir training scheme is a hardware-friendly spike-timing-dependent plasticity (STDP) algorithm, which is implemented with high hardware efficiency and further optimized by runtime power gating and activity-dependent clock gating to minimize dynamic power consumption. With the sparsity naturally introduced by STDP and the runtime power optimization approaches, the proposed LSM neural processor boosts learning performance by up to 4.2% while reducing energy dissipation by up to 30.4% compared to a baseline LSM. In the second reservoir training scheme, an efficient on-chip intrinsic plasticity (IP) based algorithm, offering additional bio-inspired learning opportunities, is explored. We enable feasible on-chip integration of IP and further optimize its hardware efficiency through both algorithmic and hardware optimization. A new hardware-friendly IP rule (SpiKL-IFIP) is proposed, which significantly improves the performance-gain-versus-overhead trade-off of on-chip IP on hardware recurrent spiking neural processors. On the Xilinx ZC706 FPGA board, LSMs with self-adapting reservoir neurons using IP boost classification accuracy by up to 10.33%. Moreover, the highly optimized IP implementation reduces training energy by 48.1% and resource utilization by 64.4% while gracefully trading off classification accuracy for design efficiency. Furthermore, this work employs supervised STDP readout training with an efficient resource-sharing implementation of the LSM, so that it delivers good classification performance while sparsifying network connections to reduce hardware power consumption. FPGA LSM neural accelerators built on a Xilinx Zynq ZC706 platform and trained for speech recognition on the TI46 speech corpus benchmark achieve an on-line classification performance boost of up to 3.47% with high efficiency. Energy-efficient LSM neural processors have also been developed on monolithic three-dimensional (M3D) integrated circuits (ICs) and demonstrate dramatic power-performance-area-accuracy (PPAA) benefits through design and architectural co-optimization.	en
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.subject	recurrent spiking neural network	en
dc.subject	on-chip training	en
dc.subject	energy efficiency	en
dc.subject	hardware neuromorphic processor	en
dc.title	DESIGN AND OPTIMIZATION OF ENERGY EFFICIENT RECURRENT SPIKING NEURAL ACCELERATORS	en
dc.type	Thesis	en
thesis.degree.department	Electrical and Computer Engineering	en
thesis.degree.discipline	Computer Engineering	en
thesis.degree.grantor	Texas A&M University	en
thesis.degree.name	Doctor of Philosophy	en
thesis.degree.level	Doctoral	en
dc.contributor.committeeMember	Choe, Yoonsuck
dc.contributor.committeeMember	Hoyos, Sebastian
dc.contributor.committeeMember	Gratz, Paul
dc.type.material	text	en
dc.date.updated	2019-11-25T22:30:00Z
local.embargo.terms	2021-08-01
local.etdauthor.orcid	0000-0002-4332-8124

