HotEdge '19 Workshop Program

All sessions will be held in Grand Ballroom VII–IX unless otherwise noted.

Papers are available for download below to registered attendees now and to everyone beginning July 9, 2019. Paper abstracts are available to everyone now. Copyright to the individual works is retained by the author[s].

Downloads for Registered Attendees
(Sign in to your USENIX account to download these files.)

Attendee Files 
HotEdge '19 Paper Archive (ZIP)
HotEdge '19 Attendee List (PDF)

Tuesday, July 9, 2019

7:30 am–8:30 am

Continental Breakfast

Grand Ballroom Prefunction

8:30 am–9:40 am

Shared Keynote Address with HotStorage '19

Grand Ballroom

9:40 am–10:10 am

Break with Refreshments

Grand Ballroom Prefunction

10:10 am–12:20 pm

Learning and Applications

Opening Remarks
Program Co-Chairs: Irfan Ahmad, Magnition, and Swaminathan Sundararaman

Collaborative Learning on the Edges: A Case Study on Connected Vehicles

Sidi Lu, Yongtao Yao, and Weisong Shi, Wayne State University

Available Media

The wide deployment of 4G/5G has made connected vehicles a perfect edge computing platform for a plethora of new services that were previously impossible, such as remote real-time diagnostics and advanced driver assistance. In this work, we propose CLONE, a collaborative learning setting on the edges based on a real-world dataset collected from a large electric vehicle (EV) company. Our approach is built on top of the federated learning algorithm and long short-term memory (LSTM) networks, and it demonstrates the effectiveness of driver personalization, privacy preservation, latency reduction (through asynchronous execution), and security protection. We choose the failure of the EV battery and associated accessories as our case study to show how CLONE can accurately predict failures to ensure sustainable and reliable driving in a collaborative fashion.
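The aggregation step that CLONE builds on can be sketched roughly as follows (an illustrative FedAvg-style sketch, not the paper's code; all names and shapes are hypothetical): each vehicle trains locally, and a coordinator averages the resulting weights, weighted by local dataset size.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighting by each client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two vehicles, one parameter tensor each; vehicle 2 holds 3x the data.
v1 = [np.ones((2, 2))]
v2 = [np.zeros((2, 2))]
global_model = federated_average([v1, v2], client_sizes=[1, 3])
print(global_model[0][0, 0])  # 0.25: vehicle 2's larger dataset dominates
```

The raw driving data never leaves the vehicle; only the weights are shared, which is the privacy-preserving property the abstract refers to.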

Home, SafeHome: Ensuring a Safe and Reliable Home Using the Edge

Shegufta Bakht Ahsan and Rui Yang, University of Illinois at Urbana Champaign; Shadi Abdollahian Noghabi, Microsoft Research; Indranil Gupta, University of Illinois at Urbana Champaign

As smart home environments become more complex and dense, they are getting harder to manage. We present our ongoing work on the design and implementation of "SafeHome", a system for management and coordination inside a smart home. SafeHome offers users and programmers the flexibility to specify safety properties in a declarative way, and to specify routines of commands in an imperative way. SafeHome includes mechanisms which ensure that, under concurrent routines and device failures, the smart home's behavior is consistent (e.g., serializable) and safety properties are always guaranteed. SafeHome is intended to run on edge machines co-located with the smart home. Our design space opens the opportunity to borrow and adapt rich ideas and mechanisms from related areas such as databases and compilers.

Exploring the Use of Synthetic Gradients for Distributed Deep Learning across Cloud and Edge Resources

Yitao Chen, Kaiqi Zhao, Baoxin Li, and Ming Zhao, Arizona State University

With the explosive growth of data, largely contributed by the rapidly and widely deployed smart devices on the edge, we need to rethink the training paradigm for learning on such real-world data. The conventional cloud-only approach can hardly keep up with the computational demand of these deep learning tasks, and the traditional backpropagation-based training method also makes it difficult to scale out the training. Fortunately, the continuous advancement in System on Chip (SoC) hardware is transforming edge devices into capable computing platforms that can potentially be exploited to address these challenges. These observations have motivated this paper's study on the use of synthetic gradients for distributed training across cloud and edge devices. We apply synthetic gradients to various neural network models to comprehensively evaluate their feasibility in terms of accuracy and convergence speed. We distribute the training of the various layers of a model using synthetic gradients, and evaluate its effectiveness on the edge by using resource-limited containers to emulate edge devices. The evaluation shows that the synthetic gradient approach can achieve accuracy comparable to conventional backpropagation for an eight-layer model with both fully connected and convolutional layers. For a more complex model (VGG16), training suffers some accuracy degradation (up to 15%), but it achieves an 11% improvement in training speed when the layers of a model are decoupled and trained in separate resource-limited containers, compared to training the whole model with the conventional method on a physical machine.
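The decoupling idea can be illustrated with a toy sketch (entirely illustrative; the paper evaluates real CNNs, not this linear model): an "edge" layer updates immediately using a gradient predicted by a small synthetic-gradient model, and that predictor is itself trained toward the true gradient whenever the "cloud" backward pass arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8)) * 0.1   # "edge" layer
W2 = rng.normal(size=(8, 1)) * 0.1   # "cloud" layer
M = np.zeros((8, 8))                 # synthetic-gradient predictor (linear, hypothetical)
lr = 0.01

for _ in range(200):
    x = rng.normal(size=(16, 4))
    y = x.sum(axis=1, keepdims=True)          # toy regression target

    h = x @ W1                                # edge forward pass
    synth = h @ M                             # predicted dL/dh
    W1 = W1 - lr * (x.T @ synth)              # edge updates immediately, decoupled

    pred = h @ W2                             # cloud forward pass
    err = pred - y                            # dL/dpred for squared loss
    true_grad = err @ W2.T                    # true dL/dh, arrives "later"
    W2 = W2 - lr * (h.T @ err)
    M = M - lr * (h.T @ (synth - true_grad))  # fit predictor to the true gradient
```

The key point is that the edge update of `W1` does not wait for the cloud's backward pass, which is what allows the layers to train in separate containers.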

Distributing Deep Neural Networks with Containerized Partitions at the Edge

Li Zhou, The Ohio State University; Hao Wen, University of Minnesota, Twin Cities; Radu Teodorescu, The Ohio State University; David H.C. Du, University of Minnesota, Twin Cities

Deploying machine learning on edge devices is becoming increasingly important, driven by new applications such as smart homes, smart cities, and autonomous vehicles. Unfortunately, it is challenging to deploy deep neural networks (DNNs) on resource-constrained devices. These workloads are computationally intensive and often require cloud-like resources. Prior solutions attempted to address these challenges by either sacrificing accuracy or by relying on cloud resources for assistance.

In this paper, we propose a containerized partition-based runtime adaptive convolutional neural network (CNN) acceleration framework for Internet of Things (IoT) environments. The framework leverages spatial partitioning techniques through convolution layer fusion to dynamically select the optimal partition according to the availability of computational resources and network conditions. By containerizing each partition, we simplify the model update and deployment with Docker and Kubernetes to efficiently handle runtime resource management and scheduling of containers.
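The spatial-partitioning step can be sketched as follows (a simplification of what a fused-layer partitioner would compute; the halo size of 2 for two fused 3x3 convolutions is my assumption, not a figure from the paper): each horizontal strip of the input feature map is extended by enough "halo" rows that the fused convolutions can run on it independently, with no exchange between partitions.

```python
def strip_partitions(height, n_parts, halo):
    """Split `height` input rows into `n_parts` strips, extending each strip
    by `halo` rows on every interior edge so the fused convolution layers
    can be computed locally within one container."""
    base = height // n_parts
    parts = []
    for i in range(n_parts):
        lo = i * base
        hi = height if i == n_parts - 1 else (i + 1) * base
        parts.append((max(0, lo - halo), min(height, hi + halo)))
    return parts

# 12-row feature map, 3 containers, halo of 2 rows per interior boundary.
print(strip_partitions(height=12, n_parts=3, halo=2))  # [(0, 6), (2, 10), (6, 12)]
```

A runtime scheduler along these lines could pick `n_parts` per inference based on how many containers currently have spare capacity.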

LiveMicro: An Edge Computing System for Collaborative Telepathology

Alessio Sacco, Politecnico di Torino; Flavio Esposito and Princewill Okorie, Saint Louis University; Guido Marchetto, Politecnico di Torino

Telepathology is the practice of digitizing histological images for transmission along telecommunication pathways for diagnosis, consultation or continuing medical education. Existing telepathology solutions are limited to offline or delay-tolerant diagnosis.

In this paper we present LiveMicro, a telepathology system that, leveraging edge computing, enables multiple pathologists to collaborate on a diagnosis by allowing remote live control of a microscope. In such an environment, computation at the edge is used in three ways: (1) to allow remote users to control the microscope simultaneously, (2) to process histological images and live video by running algorithms that recognize, e.g., tumor grades, and (3) to preserve privacy by creating virtual shared data views. In particular, we built the first open-source, edge-computing-based telepathology system. In our prototype, the examples of edge processing that we currently support are extraction of diagnosis-oriented features and compression of payloads to minimize transmission delays. Our evaluation shows how LiveMicro can help a medical team reach a remote, faster, and more accurate diagnosis.

Secure Incentivization for Decentralized Content Delivery

Prateesh Goyal, MIT CSAIL; Ravi Netravali, UCLA; Mohammad Alizadeh and Hari Balakrishnan, MIT CSAIL

Prior research has proposed using peer-to-peer (P2P) content delivery to serve Internet video at lower costs. Yet, such methods have not witnessed widespread adoption. An important challenge is incentivization: what tangible benefits does P2P content delivery offer users who bring resources to the table? In this paper, we ask whether monetary incentives can help attract peers in P2P content delivery systems. We first propose Gringotts, a system to enable secure monetary incentives for P2P content delivery systems. Gringotts provides a novel Proof of Delivery mechanism that allows content providers to verify correct delivery of their files, and shows how to use cryptocurrency to pay peers while guarding against liars and Sybil attacks. We then present results from an 876-person professional survey we commissioned to understand users’ willingness to participate in Gringotts, and what challenges remain. Our survey revealed that 51% would participate for suitable financial incentives, and motivated the need for alternate payment forms, device security, and peer anonymity.

Navigating the Visual Fog: Analyzing and Managing Visual Data from Edge to Cloud

Ragaad Altarawneh, Christina Strong, Luis Remis, and Pablo Munoz, Intel Labs, Intel Corporation; Addicam Sanjay, Internet of Things Group, Intel Corporation; Srikanth Kambhatla, System Software Products, Intel Corporation

Visual data produced at the edge is rich with information, opening a world of analytics opportunities for applications to explore. However, the demanding computational and bandwidth requirements of visual data have hindered effective processing, preventing the data from being used in an economically efficient manner. In order to scale out visual analytics systems, it is necessary to have a framework that works collaboratively between edge and cloud. In this paper, we propose an end-to-end (E2E) visual fog architecture designed for processing and management of visual data. Using our architecture to extract shopper insights, we are able to meet application-specified real-time requirements for extracting and querying visual data, showing the feasibility of our design in a real-world setting. We also discuss the lessons we learned from deploying an edge-to-cloud architecture for video streaming applications.

Modeling The Edge: Peer-to-Peer Reincarnated

Gala Yadgar and Oleg Kolosov, Computer Science Department, Technion; Mehmet Fatih Aktas and Emina Soljanin, Department of Electrical and Computer Engineering, Rutgers University

The rise of edge computing as a new storage and compute model has already motivated numerous studies within the systems community, focusing on the choices and mechanisms of task offloading from end devices to the edge infrastructure, pricing, consistency, indexing and caching. However, it is not yet entirely clear how the edge infrastructure itself will be deployed, and, more importantly, managed. A common point of view considers the edge as an extension of traditional content distribution networks (CDN), due to its hierarchical layout, centralized ownership, and cloud back-end.

In this paper, we consider a different view of the edge, as a "reincarnation" of the well-known peer-to-peer (P2P) model. We show how the edge is similar to P2P systems in many aspects, including the number, heterogeneity and limited availability and resources of its nodes, their central role in performing the system's storage and computation, and the vulnerabilities related to tight interoperability with user end devices. We describe the similarities of the edge to both CDNs and P2P systems, the challenges that arise from these similarities, and the previous approaches to address them in both contexts. We show that the challenges that remain in applying these approaches may be addressed by viewing the edge as a larger and smarter reincarnation of P2P systems.

12:20 pm–1:40 pm

Luncheon for Workshop Attendees

Olympic Pavilion

1:40 pm–3:40 pm

Infrastructure

Towards a Serverless Platform for Edge AI

Thomas Rausch, TU Wien; Waldemar Hummer and Vinod Muthusamy, IBM Research AI; Alexander Rashed and Schahram Dustdar, TU Wien

This paper proposes a serverless platform for building and operating edge AI applications. We analyze edge AI use cases to illustrate the challenges in building and operating AI applications in edge cloud scenarios. By elevating concepts from AI lifecycle management into the established serverless model, we enable easy development of edge AI workflow functions. We take a deviceless approach, i.e., we treat edge resources transparently as cluster resources, but give developers fine-grained control over scheduling constraints. Furthermore, we demonstrate the limitations of current serverless function schedulers, and present the current state of our prototype.

An Edge-based Framework for Cooperation in Internet of Things Applications

Zach Leidall, Abhishek Chandra, and Jon Weissman, University of Minnesota, Twin Cities

Edge computing and the Internet of Things (IoT) are irrevocably intertwined, and much work has proposed enhancing the IoT through the use of edge computing. These solutions have typically focused on using the edge to increase the locality of cloud applications, achieving benefits mainly in terms of lower network latency. In this work, we argue that IoT systems can benefit much more from semantic properties that are best recognized and exploited in situ, at the edge of the network where the data streams and actuators exist. We outline the idea of a real-time semantic operating system, hosted on the edge, which can improve energy consumption, latency, and accuracy, not only for individual applications but across an entire IoT infrastructure. We have implemented a prototype system and show initial results demonstrating the efficacy of our proposed optimizations, and we provide insights into how to handle some of the most critical issues faced in such a system.

OneOS: IoT Platform based on POSIX and Actors

Kumseok Jung, University of British Columbia; Julien Gascon-Samson, ÉTS Montréal; Karthik Pattabiraman, University of British Columbia

Recent interest in Edge/Fog Computing has pushed IoT Platforms to support a broader range of general-purpose workloads. We propose a design of an IoT Platform called OneOS, inspired by Distributed OS and micro-kernel principles, providing a single system image of the IoT network. OneOS aims to preserve the portability of applications by reusing a subset of the POSIX interface at a higher layer over a flat group of Actors. As a distributed middleware, OneOS achieves its goal through evaluation context replacement, which enables a process to run in a virtual context rather than its local context.

Open Infrastructure for Edge: A Distributed Ledger Outlook

Aleksandr Zavodovski, Nitinder Mohan, Walter Wong, and Jussi Kangasharju, University of Helsinki

High demand for low-latency services and local data processing has given rise to edge computing. As opposed to cloud computing, in this new paradigm computational facilities are located close to the end-users and data producers, on the edge of the network, hence the name. The critical issue for the proliferation of edge computing is the availability of local computational resources. Major cloud providers are already addressing the problem by establishing facilities in the proximity of end-users. However, there is an alternative trend: developing open infrastructure as a set of standards, technologies, and practices that enable any motivated party to offer its computational capacity for the needs of edge computing. Open infrastructure can give an additional boost to this promising new paradigm and, moreover, help avoid problems for which cloud computing has long been criticized, such as vendor lock-in and privacy. In this paper, we discuss the challenges related to creating such an open infrastructure, focusing in particular on the applicability of distributed ledgers for contractual agreement and payment. Solving the challenge of contracting is central to realizing an open infrastructure for edge computing, and in this paper, we highlight the potential and shortcomings of distributed ledger technologies in the context of our use case.

HydraOne: An Indoor Experimental Research and Education Platform for CAVs

Yifan Wang, Institute of Computing Technology, Chinese Academy of Sciences; Liangkai Liu, Wayne State University; Xingzhou Zhang, Institute of Computing Technology, Chinese Academy of Sciences; Weisong Shi, Wayne State University

Connected and autonomous vehicles (CAVs) are currently a hot topic and a major focus in the field of edge computing, and they have created numerous pivotal and challenging research problems. In this paper, we present HydraOne, an indoor experimental research and education platform for edge computing in the CAV scenario. HydraOne is a hardware-software co-design platform built from scratch based on our experience with the requirements of edge computing research problems. We present the design and implementation details and discuss three key characteristics of HydraOne: design modularization, resource extensibility and openness, and function isolation. These will help researchers and students fully understand the platform and take advantage of it to conduct research experiments. We also provide three case studies deployed on HydraOne to demonstrate the capabilities of our research platform.

DeCaf: Iterative Collaborative Processing over the Edge

Dhruv Kumar, Aravind Alagiri Ramkumar, Rohit Sindhu, and Abhishek Chandra, University of Minnesota, Twin Cities

Growing privacy concerns among users have led to edge-based analytics applications such as federated learning, which trains machine learning models iteratively and collaboratively on edge devices without sending the raw private data to the central cloud. In this paper, we propose a system for enabling iterative collaborative processing (ICP) in resource-constrained edge environments. We first identify the unique systems challenges posed by ICP, which are not addressed by existing distributed machine learning frameworks such as the parameter server. We then propose the system components necessary for ICP to work well in highly distributed edge environments. Based on this, we propose a system design for enabling such applications over the edge. We show the benefits of our proposed system components with a preliminary evaluation.

Shimmy: Shared Memory Channels for High Performance Inter-Container Communication

Marcelo Abranches, Sepideh Goodarzy, Maziyar Nazari, Shivakant Mishra, and Eric Keller, University of Colorado, Boulder

With the increasing need for more reactive services and the need to process large amounts of IoT data, edge clouds are emerging to enable applications to run close to the users and/or devices. Following the trend in hyperscale clouds, applications are moving toward a microservices architecture in which the application is decomposed into smaller pieces, each of which can run in its own container and communicate with the others over a network through well-defined APIs. This eases development and deployment, but also introduces inefficiencies in communication. In this paper, we rethink the communication model and introduce the ability to create shared memory channels between containers, supporting both a pub/sub model and a streaming model. Our approach is applicable not only to edge clouds but also to core cloud environments. Local communication is made more efficient, and remote communication is efficiently supported by synchronizing shared memory regions via RDMA.
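As a minimal sketch of the data path (not the paper's actual design, which adds ring buffers, synchronization, and RDMA for remote channels), Python's `multiprocessing.shared_memory` can stand in for a segment mapped into two containers; the channel name and framing below are purely illustrative.

```python
from multiprocessing import shared_memory

# Single-message channel layout: first 4 bytes hold the payload length,
# the rest hold the payload. The point is the zero-copy data path: the
# subscriber reads the publisher's bytes without any socket traversal.
def create_channel(name, capacity=1024):
    return shared_memory.SharedMemory(name=name, create=True, size=4 + capacity)

def publish(shm, payload: bytes):
    shm.buf[0:4] = len(payload).to_bytes(4, "little")
    shm.buf[4:4 + len(payload)] = payload

def subscribe(name):
    shm = shared_memory.SharedMemory(name=name)  # attach from another process
    n = int.from_bytes(shm.buf[0:4], "little")
    msg = bytes(shm.buf[4:4 + n])
    shm.close()
    return msg

chan = create_channel("shimmy_demo")
publish(chan, b"sensor-frame-001")
print(subscribe("shimmy_demo"))  # b'sensor-frame-001'
chan.close()
chan.unlink()
```

In a real deployment the two ends would live in different containers sharing the segment via the host, with a notification mechanism replacing the synchronous read shown here.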

The Case for Determinism on the Edge

Matthew Furlong, Andrew Quinn, and Jason Flinn, University of Michigan

Emerging edge applications, such as augmented and virtual reality, real-time video analytics and thin-client gaming, are latency-sensitive, resource-intensive, and stateful. Transitioning these applications from cloud deployments to the edge is non-trivial since edge deployments will exhibit variable resource availability, significant user mobility, and high potential for faults and application preemption, requiring considerable developer effort per application to maintain stable quality of experience for the user.

In this paper, we propose deterministic containers, a new abstraction that simplifies the development of complex applications on the edge. Deterministic containers enforce the property that all activity within a container behave deterministically. Determinism provides replication, which in turn provides key benefits for edge computing including resilience to performance jitter, enhanced fault-tolerance, seamless migration, and data provenance.

We are currently building a prototype, Shadow, that aims to provide deterministic containers with minimal performance overhead while requiring few application modifications. For all sources of non-determinism, Shadow either converts the behavior to be deterministic or restricts the allowable application behavior. Preliminary results indicate that using Shadow to reduce performance jitter at the edge for a vehicle caravan application involving video analytics reduces median application response time by up to 25%.

3:40 pm–4:10 pm

Break with Refreshments

Grand Ballroom Prefunction

4:10 pm–5:40 pm

IoT, Video, and Networking

SMC: Smart Media Compression for Edge Storage Offloading

Ali E. Elgazar, Mohammad Aazam, and Khaled A. Harras, Carnegie Mellon University

With the pervasiveness and growth of media technology, user-generated content has become intertwined with our day-to-day life. Such advancements, however, have enabled exponential growth in media file sizes, which leads to a shortage of storage on small-scale edge devices. Online clouds are a potential solution; however, they raise privacy concerns, are not fully automated, and do not adapt to different networking environments (rural/urban/metropolitan). Distributed storage systems rely on their distributed nature to combat concerns over privacy and are adaptable to different networking environments. Nevertheless, such systems lack optimization via compression due to energy concerns on edge devices. In this work, we propose Smart Media Compression (SMC) for distributed edge storage systems. SMC dynamically adjusts compression parameters in order to reduce the amount of needless compression, thus reducing energy consumption while providing smaller user file access delays. Our results show an improvement in average file access delay of up to 90%, at a cost of only 14% additional energy consumption.
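The kind of dynamic decision SMC makes can be caricatured with a delay-based rule (the formula and all parameter values below are illustrative assumptions, not the paper's model): compress only when compression time plus compressed transfer time beats sending the file raw.

```python
def should_compress(size_bytes, bandwidth_bps, compress_rate_bps, ratio):
    """Return True when compressing shortens total access delay.
    `ratio` is compressed size / original size (hypothetical estimate)."""
    raw_delay = size_bytes * 8 / bandwidth_bps
    compressed_delay = (size_bytes * 8 / compress_rate_bps         # time to compress
                        + size_bytes * ratio * 8 / bandwidth_bps)  # time to send
    return compressed_delay < raw_delay

# A 10 MB video: worth compressing on a 1 Mbps rural link,
# needless on a 100 Mbps metropolitan link.
print(should_compress(10e6, 1e6, 50e6, 0.5))    # True
print(should_compress(10e6, 100e6, 50e6, 0.5))  # False
```

A fuller model in SMC's spirit would also weigh the energy cost of the compression pass, skipping it when the device battery budget makes the delay savings not worthwhile.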

Edge-based Transcoding for Adaptive Live Video Streaming

Pradeep Dogga, UCLA; Sandip Chakraborty, IIT Kharagpur; Subrata Mitra, Adobe Research; Ravi Netravali, UCLA

User-generated video content is imposing an increasing burden on live video service architectures such as Facebook Live. These services are responsible for ingesting large amounts of video, transcoding that video into different quality levels (i.e., bitrates), and adaptively streaming it to viewers. These tasks are expensive, both computationally and network-wise, often forcing service providers to sacrifice the “liveness” of delivered video. Given the steady increases in smartphone bandwidth and energy resources, coupled with the fact that commodity smartphones now include hardware-accelerated codecs, we propose that live video services augment their existing infrastructure with edge support for transcoding and transmission. We present measurements to motivate the feasibility of incorporating such edge-support into the live video ecosystem, present the design of a peer-to-peer adaptive live video streaming system, and discuss directions for future work to realize this vision in practice.

Toward Optimal Performance with Network Assisted TCP at Mobile Edge

Soheil Abbasloo, NYU; Yang Xu, Fudan University; H. Jonathon Chao, NYU; Hang Shi, Ulas C. Kozat, and Yinghua Ye, Futurewei Technologies

In contrast to the classic approach of designing distributed end-to-end (e2e) TCP schemes for cellular networks (CN), we explore another design space by having the CN assist the task of transport control. We show that in emerging cellular architectures such as mobile/multi-access edge computing (MEC), where servers are located close to the radio access network (RAN), significant improvements can be achieved by leveraging the logically centralized network measurements at the RAN and passing information such as the minimum e2e delay and access link capacity to each server. In particular, a Network Assistance module (located at the mobile edge) pairs up with the wireless scheduler to provide feedback information to each server and facilitate the task of congestion control. To that end, we present two network-assisted schemes called NATCP (a clean-slate design replacing TCP at end-hosts) and NACubic (a backward-compatible design requiring no change to TCP at end-hosts). Our preliminary evaluations using real cellular traces show that both schemes dramatically outperform existing schemes in both single-flow and multi-flow scenarios.
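One plausible way a backward-compatible scheme like NACubic could use the feedback is a window clamp (the rule below is my assumption for illustration, not the paper's exact algorithm): cap the sender's CUBIC congestion window at the bandwidth-delay product implied by the reported access-link capacity and minimum e2e delay.

```python
def clamp_cwnd(cubic_cwnd_pkts, capacity_bps, min_rtt_s, pkt_bytes=1500):
    """Limit CUBIC's congestion window to the BDP derived from the
    network-assistance feedback, so the sender never overfills the
    access link and bloats the RAN buffer."""
    bdp_pkts = int(capacity_bps * min_rtt_s / (8 * pkt_bytes))
    return min(cubic_cwnd_pkts, max(1, bdp_pkts))

# 48 Mbps access link, 50 ms minimum e2e delay -> BDP of 200 packets.
print(clamp_cwnd(500, 48e6, 0.05))  # 200 (clamped)
print(clamp_cwnd(120, 48e6, 0.05))  # 120 (CUBIC already below the BDP)
```

Because the clamp only ever reduces the window that CUBIC computes, deploying it requires no change to the TCP stack at the end host, matching the backward-compatibility claim in the abstract.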

Throwing MUD into the FOG: Defending IoT and Fog by expanding MUD to Fog network

Vafa Andalibi, DongInn Kim, and L. Jean Camp, Indiana University Bloomington

Manufacturer Usage Description (MUD) is a proposed IETF standard that enables local area networks (LANs) to automatically configure their access control when adding a new IoT device, based on the recommendations provided for that device by its manufacturer. MUD has been proposed as an isolation-based defensive mechanism with a focus on devices in the home, where there is no dedicated network administrator. In this paper, we describe the efficacy of MUD for a generic IoT device under different threat scenarios in the context of the Fog. We propose a method that uses rate limiting to prevent end devices from participating in distributed denial-of-service (DDoS) attacks, including attacks against the Fog itself. We illustrate our assumptions with a possible real-world example and describe the benefits of MUD in the Fog for various stakeholders.
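Rate limiting of the kind proposed here is commonly realized with a token bucket; a minimal sketch follows (the rates and the attachment point at a Fog gateway are illustrative, not taken from the paper).

```python
class TokenBucket:
    """Per-device egress limiter: tokens refill at `rate_bps` up to a burst
    cap, and a packet is forwarded only if enough tokens remain. Attached
    per IoT device at the gateway, this bounds how much flood traffic a
    compromised device can source toward the Fog or the wider Internet."""
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now_s, packet_bits):
        # Refill proportionally to elapsed time, then spend if possible.
        self.tokens = min(self.capacity,
                          self.tokens + (now_s - self.last) * self.rate)
        self.last = now_s
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

# A device limited to 1 kbit/s with a 2 kbit burst allowance.
bucket = TokenBucket(rate_bps=1000, burst_bits=2000)
print(bucket.allow(0.0, 1500))  # True  (burst credit)
print(bucket.allow(0.0, 1000))  # False (only 500 bits left)
print(bucket.allow(1.0, 1000))  # True  (1 s of refill adds 1000 bits)
```

A MUD-aware gateway could choose `rate_bps` from the manufacturer's description, since a sensor that legitimately sends a few telemetry packets per minute needs only a tiny budget.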

Devices-as-Services: Rethinking Scalable Service Architectures for the Internet of Things

Fatih Bakir, Rich Wolski, and Chandra Krintz, Univ. of California, Santa Barbara; Gowri Sankar Ramachandran, Univ. of Southern California

We investigate a new distributed services model and architecture for Internet of Things (IoT) applications. In particular, we observe that devices at the edge of the network, although resource constrained, are increasingly capable—performing actions (e.g. data analytics, decision support, actuation, control, etc.) in addition to event telemetry. Thus, such devices are better modeled as servers, which applications in the cloud compose for their functionality. We investigate the implications of this "flipped" IoT client-server model, for server discovery, authentication, and resource use. We find that by combining capability-based security with an edge-aware registry, this model can achieve fast response and energy efficiency.

Publish-Pay-Subscribe Protocol for Payment-driven Edge Computing

Gowri Sankar Ramachandran, Sharon L.G Contreras, and Bhaskar Krishnamachari, University of Southern California

IoT applications are starting to rely heavily on edge computing due to the advent of low-power, high data-rate wireless communication technologies such as 5G and the processing capability of GPU-driven edge platforms. However, the computation and data communication models of edge computing applications are quite diverse, which limits their interoperability. An interoperable edge computing architecture with a versatile communication model would lead to the development of innovative and incentive-driven edge computing applications by combining various data sources from a wide array of devices. In this paper, we present an edge computing architecture that extends the publish-subscribe protocol with support for incentives. Our novel publish-pay-subscribe protocol enables data producers (publishers) to sell their data to data consumers and service providers (subscribers). The proposed architecture not only allows device owners to gain incentives but also enables service providers to sell processed data to one or more data consumers. Our proof-of-concept implementation using the Aedes publish-subscribe broker and the Ethereum cryptocurrency shows the feasibility of the publish-pay-subscribe protocol and its support for data-driven and incentive-based edge computing applications.
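The flow can be sketched with a toy in-memory broker (the paper's prototype uses the Aedes broker with on-chain Ethereum payments; the dict-based ledger and method names below are purely illustrative): a publisher attaches a price to a topic, and the broker releases data only to subscribers whose deposited balance covers it.

```python
class PubPaySubBroker:
    """Toy publish-pay-subscribe broker: data is gated behind payment,
    and per-topic earnings accumulate for the publisher."""
    def __init__(self):
        self.price = {}     # topic -> price per delivery
        self.latest = {}    # topic -> latest payload
        self.balance = {}   # subscriber -> deposited funds
        self.earnings = {}  # topic -> accumulated revenue

    def publish(self, topic, payload, price):
        self.price[topic] = price
        self.latest[topic] = payload

    def deposit(self, subscriber, amount):
        self.balance[subscriber] = self.balance.get(subscriber, 0) + amount

    def subscribe(self, subscriber, topic):
        cost = self.price[topic]
        if self.balance.get(subscriber, 0) < cost:
            return None  # unpaid: data stays hidden
        self.balance[subscriber] -= cost
        self.earnings[topic] = self.earnings.get(topic, 0) + cost
        return self.latest[topic]

broker = PubPaySubBroker()
broker.publish("edge/camera1", "frame-0017", price=5)
print(broker.subscribe("alice", "edge/camera1"))  # None until alice pays
broker.deposit("alice", 7)
print(broker.subscribe("alice", "edge/camera1"))  # frame-0017
```

In the architecture described above, the balance and earnings dictionaries would be replaced by cryptocurrency transactions, so that neither the broker nor the subscriber has to be trusted with the ledger.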

5:40 pm–6:40 pm

HotEdge '19 Poster Session

Lake Washington Ballroom