CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS
Abstract
Under the serverless paradigm, containers may serve as the runtime execution environments for processing clients’ service requests. For service providers aiming at a broad customer base, the portfolio of containers to be made available can be quite large. In edge computing scenarios, where hardware elasticity is limited or nonexistent, an effective method for container provisioning and destruction is crucial to increase service availability and mitigate startup overheads.
However, current methods have not been designed for Internet-of-Things (IoT) applications – one major use case in edge computing. In this work, we introduce a new container management method, called Look-Ahead Request Serving (LARS), that exploits predictable patterns present in the workload to decrease request latency in such environments. LARS is designed for IoT applications that exhibit periodicity. We demonstrate that for workloads that invoke requests periodically (e.g., environmental sensors, surveillance cameras, smart home gadgets), our method outperforms the method in OpenWhisk, an open-source serverless platform, attaining a 37% and 78% improvement in the startup overhead in a smart gym and a smart home scenario, respectively.
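The core idea described above – exploiting periodic request patterns to pre-warm containers before the next invocation arrives – can be illustrated with a minimal sketch. This is not the thesis's actual LARS algorithm; the class name, history length, and `WARMUP_LEAD` constant are all hypothetical, and the period estimate here is simply the mean inter-arrival time over a short history window.

```python
from collections import defaultdict, deque

# Hypothetical constant: seconds needed to cold-start a container,
# so a warm-up scheduled this far ahead hides the startup overhead.
WARMUP_LEAD = 0.5

class LookAheadManager:
    """Illustrative look-ahead pre-warming for periodic clients
    (a sketch of the idea, not the thesis's LARS implementation)."""

    def __init__(self, history=5):
        # Per-client ring buffer of recent request arrival times.
        self.arrivals = defaultdict(lambda: deque(maxlen=history))

    def record_request(self, client_id, t):
        """Log the arrival time of a request from this client."""
        self.arrivals[client_id].append(t)

    def predict_next(self, client_id):
        """Estimate when the client's next request will arrive."""
        ts = list(self.arrivals[client_id])
        if len(ts) < 2:
            return None  # not enough history to estimate a period
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        period = sum(gaps) / len(gaps)  # mean inter-arrival time
        return ts[-1] + period

    def warmup_time(self, client_id):
        """Time at which to start provisioning the client's container."""
        nxt = self.predict_next(client_id)
        return None if nxt is None else nxt - WARMUP_LEAD
```

For a surveillance camera uploading a frame every 10 seconds (arrivals at t = 0, 10, 20), this sketch predicts the next request at t = 30 and schedules container warm-up at t = 29.5, so the request finds a ready container instead of paying the cold-start cost.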
Citation
Wu, Chih-Peng (2019). CONTAINER MANAGEMENT FOR SERVERLESS EDGE COMPUTING OFFERINGS. Master's thesis, Texas A&M University. Available electronically from https://hdl.handle.net/1969.1/188813.