An FPGA-Based Accelerator For Distributed SVM Training
Abstract
Support Vector Machines (SVMs) are a class of machine learning algorithms with applications spanning classification, regression, and categorization. With the exponential growth in the number of edge computing devices, there is a growing demand to adapt SVM-based techniques for edge analytics. However, training an SVM is computationally challenging because its complexity grows quadratically with the number of training samples. Consequently, SVM training is performed offline on back-end servers, which possess the computing power to train SVM models. Creating efficient frameworks for SVM-based edge analytics therefore requires a scalable, distributed training algorithm. In addition, the computational capabilities of edge nodes must be augmented with energy-efficient hardware accelerators. In this research, we present a scalable FPGA-based accelerator for a distributed SVM training algorithm. The accelerator exploits both data and task parallelism to create efficient, pipelined hardware implementations of the computing modules. We evaluate the training performance of the proposed accelerator on five SVM benchmarks and compare it against a high-performance CPU cluster and an embedded SoC server deploying an equal number of computing units. The proposed FPGA-based accelerator performs SVM training up to 25x faster than the CPU cluster and up to 1.75x faster than the SoC counterpart. It also reduces energy consumption by 9x and 6x relative to the SoC and CPU clusters, respectively.
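The quadratic complexity cited above stems from the kernel (Gram) matrix at the core of SVM training: every pair of the n training samples requires one kernel evaluation, so the matrix has n*n entries. The minimal C++ sketch below illustrates this cost for a linear kernel; it is illustrative only (the names gram_matrix and samples are assumptions, not from the thesis), and the accelerator described here would realize this kind of computation as pipelined hardware rather than software.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Linear-kernel Gram matrix: K[i][j] = <x_i, x_j>.
// Illustrative sketch of why SVM training is O(n^2) in the sample count.
std::vector<std::vector<double>>
gram_matrix(const std::vector<std::vector<double>>& samples) {
    const std::size_t n = samples.size();
    std::vector<std::vector<double>> K(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i) {       // n rows ...
        for (std::size_t j = 0; j < n; ++j) {   // ... times n columns: n*n entries
            double dot = 0.0;
            for (std::size_t k = 0; k < samples[i].size(); ++k)
                dot += samples[i][k] * samples[j][k];  // d-term dot product
            // Entries are mutually independent, exposing the data
            // parallelism a pipelined FPGA datapath can exploit.
            K[i][j] = dot;
        }
    }
    return K;
}

int main() {
    // Four 2-D samples; a realistic training set has thousands,
    // making the n*n kernel evaluations the dominant training cost.
    std::vector<std::vector<double>> X = {{1, 0}, {0, 1}, {1, 1}, {2, 1}};
    auto K = gram_matrix(X);
    for (const auto& row : K) {
        for (double v : row) std::printf("%5.1f ", v);
        std::printf("\n");
    }
    return 0;
}
```

Because each entry (and each row) of K can be computed independently, the work distributes naturally across nodes and across parallel hardware pipelines, which is the parallelism a distributed FPGA-based trainer can exploit.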
Subject
Support Vector Machines
Edge Analytics
FPGA
SVM training
SVM training on FPGA
Hardware Acceleration
Citation
Narawane, Yashwardhan (2018). An FPGA-Based Accelerator For Distributed SVM Training. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/174027.