Techniques for Secure and Efficient Computing
Date
2020-06-02
Abstract
Security and computing efficiency are two important concerns in computer engineering. As modern computers grow more complex, they expose a variety of security problems; at the same time, applications continually demand greater computing efficiency. This dissertation studies techniques for both security and computing efficiency.

For hardware security, split fabrication is recognized as a promising approach to defending against attacks by untrusted foundries. Existing split-fabrication methods, however, mostly neglect manufacturability, an unavoidable challenge in nanometer technologies. Observing that security and manufacturability can be addressed synergistically, our research introduces routing techniques that improve both at the same time. The effectiveness of these techniques is confirmed by experiments on benchmark circuits.

For software security, Control-Flow Integrity (CFI) and Data-Flow Integrity (DFI) are effective defenses against a variety of memory-based cyber attacks. CFI and DFI are usually enforced in software, which entails considerable performance overhead. Hardware-based CFI techniques can largely avoid this overhead, but they typically rely on code instrumentation, which is a non-trivial hurdle to adopting CFI. DFI incurs even larger performance overhead than CFI, and its real-world use has been quite limited; the overhead is intrinsically difficult to reduce unless the DFI verification criterion is weakened. We propose hardware-based solutions for CFI and DFI verification that leverage FPGA and Processing-In-Memory (PIM), respectively. Experiments on popular benchmarks confirm that our designs detect fine-grained CFI violations on unmodified binaries and fully enforce DFI as defined in the original seminal work. Measurements on the SPEC 2006 benchmarks show an average performance overhead of 0.36% for CFI and an average 4x reduction in performance overhead for DFI.

For computing efficiency, serverless, or Functions-as-a-Service (FaaS), runtimes offer an efficient and cost-effective mechanism for event-driven cloud applications. Training deep neural networks can be both compute- and memory-intensive. We investigate the use of serverless runtimes for neural network training, leveraging data parallelism for large models; we show the challenges and limitations caused by the data-communication bottleneck and propose modifications to the underlying runtime implementations that mitigate them. For hyperparameter optimization of smaller deep learning models, we show that serverless runtimes can provide significant benefit.
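To make the DFI verification criterion mentioned above concrete, the following is a minimal Python sketch of the check defined in the original DFI work: every memory read verifies that the instruction which last wrote that location belongs to a statically computed set of allowed writers. The table names, instruction identifiers, and example program are illustrative assumptions, not the dissertation's PIM-based design.

```python
# Minimal simulation of the Data-Flow Integrity check (Castro et al., OSDI 2006).
# All names here (ALLOWED_WRITERS, last_writer, the example write/read sites) are
# illustrative; the dissertation performs this check in hardware via PIM.

class DFIViolation(Exception):
    pass

# Statically computed reaching-definitions sets: for each read site, the
# identifiers of write instructions allowed to have produced the value it reads.
ALLOWED_WRITERS = {
    "read_balance": {"init_balance", "deposit"},
}

# Runtime definition table: identifier of the last write site for each address.
last_writer = {}

def checked_write(addr, value, write_id, memory):
    """Record which instruction last wrote this address, then perform the write."""
    last_writer[addr] = write_id
    memory[addr] = value

def checked_read(addr, read_id, memory):
    """Before a read, verify the last writer is in the read site's allowed set."""
    writer = last_writer.get(addr)
    if writer not in ALLOWED_WRITERS[read_id]:
        raise DFIViolation(f"{read_id}: unexpected writer {writer!r} for {addr:#x}")
    return memory[addr]

# A legal data flow, followed by a corrupted one.
mem = {}
checked_write(0x1000, 100, "init_balance", mem)
print(checked_read(0x1000, "read_balance", mem))      # passes: writer is allowed

checked_write(0x1000, 10**9, "overflow_gadget", mem)  # e.g., an out-of-bounds write
try:
    checked_read(0x1000, "read_balance", mem)
except DFIViolation as err:
    print("DFI violation detected:", err)
```

Enforced in software, every read and write pays for these table lookups, which is the overhead the dissertation targets; the PIM-based design moves the check into hardware rather than relying on instrumentation like the sketch above.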
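As a rough illustration of the hyperparameter-optimization use case, the sketch below fans a small search grid out over a serverless runtime, one invocation per configuration. It assumes an AWS Lambda-style deployment with a hypothetical function named "train-model" that runs a training loop and reports a validation loss; it is not the dissertation's implementation, only an illustration of why embarrassingly parallel tuning jobs map well onto FaaS.

```python
# Hedged sketch: parallel hyperparameter search over serverless function invocations.
# The "train-model" function and its payload/response format are hypothetical.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def train_remote(hparams):
    """Invoke one serverless training run and return its reported validation loss."""
    response = lambda_client.invoke(
        FunctionName="train-model",          # hypothetical function wrapping a training loop
        Payload=json.dumps(hparams).encode(),
    )
    return hparams, json.loads(response["Payload"].read())["val_loss"]

# Candidate configurations, each explored by an independent invocation.
grid = [{"lr": lr, "batch_size": bs} for lr in (1e-3, 1e-2) for bs in (32, 64)]

with ThreadPoolExecutor(max_workers=len(grid)) as pool:
    results = list(pool.map(train_remote, grid))

best_hparams, best_loss = min(results, key=lambda r: r[1])
print("best hyperparameters:", best_hparams, "val_loss:", best_loss)
```

A data-parallel training job would look similar, but the workers must exchange gradients or parameters through external storage because serverless functions cannot communicate directly, which is the data-communication bottleneck the abstract points to.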
Keywords
Computing Efficiency, Security