Show simple item record

dc.contributor.advisor	Li, Peng
dc.contributor.advisor	Choi, Gwan
dc.creator	Wang, Qian
dc.date.accessioned	2016-09-22T19:51:33Z
dc.date.available	2018-08-01T05:58:06Z
dc.date.created	2016-08
dc.date.issued	2016-08-02
dc.date.submitted	August 2016
dc.identifier.uri	https://hdl.handle.net/1969.1/158093
dc.description.abstract	Quintillions of bytes of data are generated every day in this era of big data. Machine learning techniques are applied to such data to reveal hidden relationships and dependencies and to predict outcomes and behaviors; the resulting predictive models are used to interpret existing data and to make predictions on new data. Today, most machine learning algorithms are implemented as software running on general-purpose processors, which typically requires a large amount of CPU time and incurs very high energy consumption. In comparison, a dedicated hardware design is usually far more efficient than software on a general-purpose processor in terms of both runtime and energy consumption. The objective of this dissertation is therefore to develop efficient hardware architectures for mainstream machine learning algorithms, providing a promising solution to the runtime and energy bottlenecks of machine learning applications. Mapping complex machine learning algorithms onto efficient hardware architectures is challenging, however, and many important design decisions must be made during hardware development to achieve good tradeoffs. This dissertation first proposes a parallel digital VLSI architecture for combined SVM training and classification. For the first time, cascade SVM, a powerful training algorithm, is leveraged to significantly improve the scalability of hardware-based SVM training and to develop an efficient parallel VLSI architecture. The parallel SVM processors achieve a significant training-time speedup and energy reduction compared with a software SVM implementation running on a general-purpose CPU. Next, a liquid state machine based neuromorphic learning processor with integrated training and recognition is proposed. A novel theoretical measure of computational power is introduced to facilitate fast design space exploration of the recurrent reservoir, and three low-power techniques are proposed to improve energy efficiency; a two-layer spiking neural network with global inhibition is realized in silicon. Finally, we present an architectural design exploration of a brain-inspired digital neuromorphic processor with a memristive synaptic crossbar array and highlight several synaptic memory access styles. Various analog-to-digital converter schemes are investigated to provide new insights into the tradeoff between hardware cost and energy consumption.	en
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.subject	Machine learning	en
dc.subject	VLSI architecture	en
dc.title	Architectures and Design of VLSI Machine Learning Systems	en
dc.type	Thesis	en
thesis.degree.department	Electrical and Computer Engineering	en
thesis.degree.discipline	Computer Engineering	en
thesis.degree.grantor	Texas A&M University	en
thesis.degree.name	Doctor of Philosophy	en
thesis.degree.level	Doctoral	en
dc.contributor.committeeMember	Palermo, Sam
dc.contributor.committeeMember	Choe, Yoonsuck
dc.type.material	text	en
dc.date.updated	2016-09-22T19:51:33Z
local.embargo.terms	2018-08-01
local.etdauthor.orcid	0000-0001-6611-3786

