Workshop Program

All sessions will take place in the Magnolia Room unless otherwise noted.

The workshop papers are available for download below. Copyright to the individual works is retained by the author(s).

Downloads for Registered Attendees

Attendee Files 
HotCloud Paper Archive (ZIP)
HotCloud Attendee List (PDF)

 

Monday, July 6, 2015

8:00 am–9:00 am Monday

Continental Breakfast

Mezzanine East/West

8:45 am–9:00 am Monday

Opening Remarks

Program Co-Chairs: Irfan Ahmad, CloudPhysics, and Tim Kraska, Brown University

9:00 am–10:40 am Monday

Network

Session Chairs: Irfan Ahmad, CloudPhysics, and Tim Kraska, Brown University

Oh Flow, Are Thou Happy? TCP Sendbuffer Advertising for Make Benefit of Clouds and Tenants

Alexandru Agache and Costin Raiciu, University Politehnica of Bucharest

Datacenter networks have evolved from simple trees to multi-rooted tree topologies such as FatTree or VL2 that provide many paths between any pair of servers to ensure high performance under all traffic patterns. The standard way to load balance traffic across these links is Equal Cost Multipathing (ECMP), which randomly places flows on paths. ECMP may wrongly place multiple flows on the same congested link, wasting as much as 60% of total capacity in worst-case scenarios for FatTree networks. These networks need information about the traffic they route to avoid collisions, by steering it towards idle paths, or by creating more capacity on the fly between groups of hot racks. Additionally, ECMP creates uncertainty about the path a given flow has taken, making network debugging difficult.
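The random placement described above can be sketched in a few lines: ECMP-style load balancing hashes each flow's 5-tuple onto one of the equal-cost paths, so two heavy flows can land on the same link purely by chance. This is a simplified illustration of the general technique, not the paper's mechanism:

```python
import hashlib

def ecmp_path(five_tuple, num_paths):
    """Pick a path by hashing the flow's 5-tuple (simplified ECMP)."""
    digest = hashlib.md5(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Distinct flows may collide on the same path, congesting one link
# while other equal-cost links stay idle -- the problem the paper
# targets. Addresses and ports here are made up for illustration.
flows = [("10.0.0.1", "10.0.1.1", 5001, 80, "tcp"),
         ("10.0.0.2", "10.0.1.2", 5002, 80, "tcp")]
paths = [ecmp_path(f, num_paths=4) for f in flows]
```

Because the hash ignores path load, nothing prevents both flows from mapping to the same index.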

Available Media

Optimizing Network Performance in Distributed Machine Learning

Luo Mai, Imperial College London; Chuntao Hong and Paolo Costa, Microsoft Research

To cope with the ever-growing availability of training data, there have been several proposals to scale machine learning computation beyond a single server and distribute it across a cluster. While this reduces the training time, the observed speedup is often limited by network bottlenecks.

To address this, we design MLNET, a host-based communication layer that aims to improve the network performance of distributed machine learning systems. This is achieved through a combination of traffic reduction techniques (to diminish network load in the core and at the edges) and traffic management (to reduce average training time). A key feature of MLNET is its compatibility with existing hardware and software infrastructure, so it can be immediately deployed.

We describe the main techniques underpinning MLNET, and show through simulation that the overall training time can be reduced by up to 78%. While preliminary, our results indicate the critical role played by the network and the benefits of introducing a new communication layer to increase the performance of distributed machine learning systems.
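One way a host-based layer can reduce core traffic is by summing the gradient vectors of co-located workers so a single combined update crosses the network instead of one message per worker. This is a hypothetical sketch of that general idea; MLNET's actual techniques are described in the paper:

```python
def aggregate_updates(worker_updates):
    """Sum per-worker gradient vectors element-wise so that one
    combined update is sent upstream instead of len(worker_updates)
    separate messages (illustrative traffic-reduction sketch)."""
    length = len(worker_updates[0])
    return [sum(u[j] for u in worker_updates) for j in range(length)]

# Three workers' updates collapse into one message of the same size.
combined = aggregate_updates([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
```

The network load at the core drops by a factor equal to the number of workers behind each aggregator, at the cost of one extra hop of latency.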

Available Media

An Optimization Case in Support of Next Generation NFV Deployment

Zahra Abbasi, Ming Xia, Meral Shirazipour, and Attila Takacs, Ericsson Research

Not long ago, operators were still struggling with middlebox deployment and traffic management across middleboxes. The service chaining problem was a well-studied subject that had to deal with the limitations of middleboxes and offer various techniques to overcome them to achieve the desired traffic steering. Only two years have passed since its official launch by ETSI, but network function virtualization (NFV) has already revolutionized the telecom industry by proposing a complete design paradigm shift in the way middleboxes are built and deployed. NFV requires the virtualization of middleboxes and other networking equipment, called virtual network functions (vNFs). This requirement will allow networking infrastructure operators to benefit from the same economies of scale and flexibility that the information technology community experienced with the advent of cloud computing. Beyond the capex/opex savings and faster time to market of new services, the cloudification of networking gives us the opportunity to rethink how networking equipment is designed and deployed.

Available Media

Enabling Topological Flexibility for Data Centers Using OmniSwitch

Yiting Xia, Rice University; Mike Schlansker, HP Labs; T. S. Eugene Ng, Rice University; Jean Tourrilhes, HP Labs

Most data centers deploy fixed network topologies. This brings difficulties to traffic optimization and network management, because bandwidth locked up in fixed links is not adjustable to traffic needs, and changes of network equipment require cumbersome rewiring of existing links. We believe the solution is to introduce topological flexibility that allows dynamic cable rewiring in the network. We design the OmniSwitch prototype architecture to realize this idea. It uses inexpensive small optical circuit switches to configure cable connections, and integrates them closely with Ethernet switches to provide large-scale connectivity. We design an example control algorithm for a traffic optimization use case, and demonstrate the power of topological flexibility using simulations. Our solution is effective in provisioning bandwidth for cloud tenants and reducing transmission hop count at low computation cost.

Available Media
10:40 am–11:15 am Monday

Break with Refreshments

Mezzanine East/West

11:15 am–12:30 pm Monday

Infrastructure

Session Chair: Xiaojun Liu, CloudPhysics

Supporting Dynamic GPU Computing Result Reuse in the Cloud

Husheng Zhou, Yangchun Fu, and Cong Liu, The University of Texas at Dallas

Graphics processing units (GPUs) have been adopted by major cloud vendors, as GPUs provide orders-of-magnitude speedup for computation-intensive data-parallel applications. In the cloud, efficiently sharing GPU resources among multiple virtual machines (VMs) is not so straightforward. Recent research has been conducted to develop GPU virtualization technologies, making it feasible for VMs to share GPU resources in a reliable manner. This paper seeks to improve the efficiency of sharing GPU resources in the cloud for accelerating general-purpose workloads. Our key observation is that redundant GPU computation requests are seen in many GPU-accelerated workloads in the cloud, such as cloud gaming, where multiple clients playing the same game call GPUs to perform physics simulation. We have measured this redundancy using a gaming case study, and found that more than 24% (47%) of the GPU computation requests called by the same VM (multiple VMs) are identical. To exploit this redundancy, we present GRU (GPU Result re-Use), a GPU sharing, result memoization and reuse ecosystem in a cloud environment. GRU transparently enables VMs in the cloud to share a single GPU efficiently, and memoizes GPU computation results for reuse. It leverages GPU full-virtualization technology, which enables GPU result memoization and reuse without modification of existing device drivers and operating systems. We have implemented GRU on top of the Xen hypervisor. Preliminary experiments show that GRU is able to achieve a significant speedup of up to 18 times compared to the state-of-the-art GPU virtualization framework, while adding only a small amount of runtime overhead.
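The core memoization idea can be sketched as a cache keyed by a hash of the kernel and its inputs, returning the stored result on a repeat request instead of re-executing on the GPU. This is a hypothetical illustration of result reuse in the spirit of GRU, not its actual implementation, which operates at the virtualization layer:

```python
import hashlib
import pickle

class GPUResultCache:
    """Memoize computation results keyed by (kernel name, arguments),
    so identical requests -- e.g. from VMs running the same game --
    skip re-execution (illustrative sketch, not GRU's design)."""
    def __init__(self):
        self._cache = {}
        self.hits = 0

    def _key(self, kernel_name, args):
        return hashlib.sha256(pickle.dumps((kernel_name, args))).hexdigest()

    def run(self, kernel_name, args, execute):
        key = self._key(kernel_name, args)
        if key in self._cache:
            self.hits += 1
            return self._cache[key]
        result = execute(*args)       # would dispatch to the real GPU
        self._cache[key] = result
        return result

cache = GPUResultCache()
physics = lambda dt, n: [dt * i for i in range(n)]  # stand-in "kernel"
r1 = cache.run("physics_step", (0.01, 4), physics)
r2 = cache.run("physics_step", (0.01, 4), physics)  # identical request: hit
```

The second call returns from the cache without executing the kernel, which is where the reported speedup comes from.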

Available Media

GreenMap: MapReduce with Ultra High Efficiency Power Delivery

Du Su and Yi Lu, University of Illinois at Urbana-Champaign

Energy consumption has become a significant fraction of the total cost of ownership of data centers. While much work has focused on improving power efficiency per unit of computation, little attention has been paid to power delivery, which currently wastes 10-20% of total energy consumption even before any computation takes place. A new power delivery architecture using series-stacked servers has recently been proposed in the power community. However, the reduction in power loss depends on the difference in power consumption of the series-stacked servers: the more balanced the computation loads, the more reduction in power conversion loss.

In this preliminary work, we implemented GreenMap, a modified MapReduce framework that assigns tasks in synchronization, and computed the conversion loss based on the measured current profile. At all loads, GreenMap achieves 81x-138x reduction in power conversion loss from the commercial-grade high voltage converter used by data centers, which is equivalent to 15% reduction in total energy consumption. The average response time of GreenMap suffers no degradation when load reaches 0.6 and above, but at loads below 0.6, the response time suffers a 26-42% increase due to task synchronization. For the low-load region, we describe the use of GreenMap with dynamic scaling to achieve a favorable tradeoff between response time and power efficiency.
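The intuition that balanced loads cut conversion loss can be shown with a toy model: in a series stack, only each server's current mismatch against the stack average must pass through a lossy differential converter, so perfectly balanced servers incur near-zero loss. This is an idealized sketch with a made-up converter efficiency, not the paper's measurement methodology:

```python
def differential_conversion_loss(currents, converter_eff=0.9):
    """Toy model of series-stacked power delivery: only the current
    mismatch against the stack average flows through the differential
    converter, so balanced servers incur ~zero conversion loss.
    converter_eff is a hypothetical figure for illustration."""
    avg = sum(currents) / len(currents)
    mismatch = sum(abs(c - avg) for c in currents)
    return mismatch * (1 - converter_eff)

balanced = differential_conversion_loss([5.0, 5.0, 5.0, 5.0])
skewed = differential_conversion_loss([8.0, 2.0, 6.0, 4.0])
```

This is why GreenMap synchronizes task assignment: equalizing the per-server load drives the mismatch term, and hence the loss, toward zero.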

Available Media

The Case for the Superfluid Cloud

Filipe Manco, Joao Martins, Kenichi Yasukata, Jose Mendes, Simon Kuenzer, and Felipe Huici, NEC Europe Ltd.

The confluence of a number of relatively recent trends including the development of virtualization technologies, the deployment of micro datacenters at PoPs, and the availability of microservers, opens up the possibility of evolving the cloud, and the network it is connected to, towards a superfluid cloud: a model where parties other than infrastructure owners can quickly deploy and migrate virtualized services throughout the network (in the core, at aggregation points and at the edge), enabling a number of novel use cases including virtualized CPEs and on-the-fly services, among others.

Towards this goal, we identify a number of required mechanisms and present early evaluation results of their implementation. On an inexpensive commodity server, we are able to concurrently run up to 10,000 specialized virtual machines, instantiate a VM in as little as 10 milliseconds, and migrate it in under 100 milliseconds.

Available Media
12:30 pm–2:00 pm Monday

Luncheon for Workshop Attendees

Terra Courtyard

2:00 pm–3:30 pm Monday
3:30 pm–4:00 pm Monday

Break with Refreshments

Mezzanine East/West

4:00 pm–5:40 pm Monday

Monitoring

Session Chair: Swaminathan Sundararaman, SanDisk

Towards Pre-Deployment Detection of Performance Failures in Cloud Distributed Systems

Riza O. Suminto, University of Chicago; Agung Laksono and Anang D. Satria, Surya University; Thanh Do, Microsoft Gray Systems Lab; Haryadi S. Gunawi, University of Chicago

Modern distributed systems ("cloud systems") have emerged as a dominant backbone for many of today's applications. They come in different forms such as scale-out file systems, key-value stores, computing frameworks, and synchronization and cluster management services. As these systems collectively become the "cloud operating system," users expect high dependability, including performance stability. Unfortunately, the complexity of the software and the environment in which they must run has outpaced existing testing and debugging tools. Cloud systems must run at scale with different topologies, execute complex distributed protocols, face load fluctuations and a wide range of hardware faults, and serve users with diverse job characteristics.

One important type of failure is the performance failure: a situation where a system (e.g., Hadoop) does not deliver the expected performance (e.g., a job takes 10x longer than usual). Conversations with cloud engineers reflect that performance stability is often more important than performance optimization; when performance failures happen, users are frustrated, systems waste and underutilize resources, and long debugging efforts are required to find and fix the problems. Sadly, performance failures are still common; our previous work shows that 22% of vital issues reported by cloud system developers relate to performance bugs.

In this paper, our focus is to answer the following three questions: What is the root-cause anatomy of performance bugs that appear in cloud systems? What is missing within the state of the art of detecting performance bugs? What novel directions can prevent performance failures from happening in the field?

Available Media

The Importance of Features for Statistical Anomaly Detection

David Goldberg and Yinan Shan, eBay

The theme of this paper is that anomaly detection splits into two parts: developing the right features, and then feeding these features into a statistical system that detects anomalies in the features. Most literature on anomaly detection focuses on the second part. Our goal is to illustrate the importance of the first part. We do this with two real-life examples of anomaly detectors in use at eBay.
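As a concrete illustration of the second part, a minimal statistical detector flags points that sit far from the mean of a feature series. The paper's argument is that the feature fed into such a detector (a rate, a ratio, a residual) matters more than the detector itself. This is a generic hypothetical example, not eBay's production detector:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` standard
    deviations from the mean -- the simplest statistical detector.
    Its usefulness hinges entirely on what feature `values` encodes."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# A feature series with one obvious outlier at the end.
flagged = zscore_anomalies([10] * 20 + [100])
```

Feeding in a raw count where a normalized rate is needed makes even this correct detector useless, which is the paper's central point.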

Available Media

Unified Monitoring and Analytics in the Cloud

Ricardo Koller and Canturk Isci, IBM T. J. Watson Research Center; Sahil Suneja and Eyal de Lara, University of Toronto

Modern cloud applications are distributed across a wide range of instances of multiple types, including virtual machines, containers, and baremetal servers. Traditional approaches to monitoring and analytics fail in these complex, distributed and diverse environments. They are too intrusive and heavy-handed for short-lived, lightweight cloud instances, and cannot keep up with the rapid pace of change in the cloud, with continuous dynamic scheduling, provisioning, and auto-scaling. We introduce a unified monitoring and analytics architecture designed for the cloud. Our approach leverages virtualization and containerization to decouple monitoring from instance execution and health. Moreover, it provides a uniform view of systems regardless of instance type, and operates without interfering with the end-user context. We describe an implementation of our approach in an actual deployment, and discuss our experiences and observed results.

Available Media

Soroban: Attributing Latency in Virtualized Environments

James Snee, Lucian Carata, Oliver R. A. Chick, Ripduman Sohan, Ramsey M. Faragher, Andrew Rice, and Andy Hopper, University of Cambridge

Applications executing on a hypervisor or in a container experience a lack of performance isolation from other services executing on shared resources. Latency-sensitive applications executing in the cloud therefore have highly variable response times, yet attributing the additional latency caused by virtualization overheads on individual requests is an unsolved problem.

We present Soroban, a framework for attributing latency to either the cloud provider or their customer. Soroban allows developers to instrument applications, such as web servers, to determine, for each request, how much of the latency is due to the cloud provider, and how much is due to the consumer's application or service. With this support Soroban enables cloud providers to provision based on acceptable latencies, adopt fine-grained charging levels that reflect the latency demands of users, and attribute performance anomalies to either the cloud provider or their consumer. We apply Soroban to an HTTP server and show that it identifies when the cause of latency is due to a provider-induced activity, such as underprovisioning a host, or due to the software run by the customer.

Available Media
6:00 pm–7:00 pm Monday

Joint Poster Session and Happy Hour with HotStorage

Mezzanine East/West

 

Tuesday, July 7, 2015

8:00 am–9:00 am Tuesday

Continental Breakfast

Mezzanine East/West

9:00 am–10:30 am Tuesday

Joint Keynote Address with HotStorage

Santa Clara Ballroom

Kubernetes and the Path to Cloud Native

Eric Brewer, Google

We are in the midst of an important shift to higher levels of abstraction than virtual machines. Kubernetes aims to simplify the deployment and management of services, including the construction of applications as sets of interacting but independent services. We explain some of the key concepts in Kubernetes and show how they work together to simplify evolution and scaling.

Eric Brewer is a vice president of infrastructure at Google. He pioneered the use of clusters of commodity servers for Internet services, based on his research at Berkeley. His “CAP Theorem” covers basic tradeoffs required in the design of distributed systems and followed from his work on a wide variety of systems, from live services, to caching and distribution services, to sensor networks. He is a member of the National Academy of Engineering, and winner of the ACM Infosys Foundation award for his work on large-scale services.

10:30 am–11:00 am Tuesday

Break with Refreshments

Mezzanine East/West

11:00 am–12:15 pm Tuesday

Big Data Processing

Towards Hybrid Programming in Big Data

Peng Wang, Chinese Academy of Sciences; Hong Jiang, University of Nebraska–Lincoln; Xu Liu, College of William and Mary; Jizhong Han, Chinese Academy of Sciences

Within the past decade, a number of parallel programming models have been developed for data-intensive (i.e., big data) applications. Typically, each model has its own strengths in performance or programmability for some kinds of applications but limitations for others. As a result, multiple programming models are often combined in a complementary manner to exploit their merits and hide their weaknesses. However, existing models can only be loosely coupled due to their isolated runtime systems.

In this paper, we present Transformer, the first system that supports hybrid programming models for data-intensive applications. Transformer has two unique contributions. First, Transformer offers a programming abstraction in a unified runtime system for different programming model implementations, such as Dryad, Spark, Pregel, and PowerGraph. Second, Transformer supports an efficient and transparent data sharing mechanism, which tightly integrates different programming models in a single program. Experimental results on Amazon’s EC2 cloud show that Transformer can flexibly and efficiently support hybrid programming models for data-intensive computing.

Available Media

CodePlugin: Plugging Deduplication into Erasure Coding for Cloud Storage

Mengbai Xiao, George Mason University; Mohammed A. Hassan, NetApp, Inc.; Weijun Xiao, Virginia Commonwealth University; Qi Wei and Songqing Chen, George Mason University

Cloud storage systems play a key role in many cloud services. To tolerate multiple simultaneous disk failures and reduce the storage overhead, today's cloud storage systems often employ erasure coding schemes. To simplify implementations, existing systems, such as Microsoft Azure and EMC Atmos, only support file appending operations. However, this feature leads to a nontrivial and increasing portion of redundant data on cloud storage systems.

To reduce the data redundancy due to file updates by users, and thereby reduce the corresponding encoding and storage cost, in this work we investigate how to efficiently integrate inline deduplication capability into the general context of the Reed-Solomon (RS) code. For this purpose, we present our initial design of CodePlugin. Basically, CodePlugin introduces some pre-processing steps before the normal encoding. In these pre-processing steps, data duplications are identified and properly shuffled so that the redundant blocks do not have to be encoded. CodePlugin is applicable to any existing coding scheme, and our preliminary experimental results show that CodePlugin can effectively improve the encoding throughput (by ~20%) and reduce the storage cost (by ~17.4%).
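The pre-processing step can be sketched as content-hash deduplication over data blocks: only unique blocks need to proceed to RS encoding, while a layout map records how to reassemble the original stream. This is a sketch of the general dedup idea only; the block shuffling and its integration with the coder are the paper's contribution, and the RS encoding itself is omitted here:

```python
import hashlib

def dedup_before_encoding(blocks):
    """Identify duplicate blocks by content hash so only unique blocks
    must be erasure-coded. Returns (unique_blocks, layout), where
    layout[i] is the index of block i's unique copy."""
    unique, layout, seen = [], [], {}
    for b in blocks:
        h = hashlib.sha256(b).hexdigest()
        if h not in seen:
            seen[h] = len(unique)
            unique.append(b)
        layout.append(seen[h])
    return unique, layout

blocks = [b"aaaa", b"bbbb", b"aaaa", b"cccc", b"bbbb"]
unique, layout = dedup_before_encoding(blocks)
# Only 3 of the 5 blocks reach the encoder; the layout map lets the
# original sequence be reconstructed on read.
```

The encoding throughput gain comes directly from the shrunken input: the coder touches `unique` rather than `blocks`.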

Available Media

Enabling Scalable Social Group Analytics via Hypergraph Analysis Systems

Benjamin Heintz and Abhishek Chandra, University of Minnesota

With the rapid growth of large online social networks, the ability to analyze large-scale social structure and behavior has become critically important, and this has led to the development of several scalable graph processing systems. In reality, social interaction takes place not just between pairs of individuals as in the common graph model, but rather in the context of multi-user groups. Research has shown that such group dynamics can be better modeled through hypergraphs: a generalization of graphs. There are not yet, however, scalable systems to support hypergraph computation, and several challenges and opportunities arise in their design and implementation. In this paper, we present an initial attempt at building a scalable hypergraph analysis framework based on the GraphX/Spark framework. We use this prototype to examine several programmability and implementation issues through experiments with two real-world datasets on a 6-node cluster.
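One standard way to host hypergraphs on a pairwise graph engine such as GraphX is a bipartite encoding: vertex nodes on one side, hyperedge nodes on the other, with an edge wherever a vertex belongs to a hyperedge. This is a common textbook encoding shown for illustration; the paper's framework may represent hypergraphs differently:

```python
def hypergraph_to_bipartite(hyperedges):
    """Encode a hypergraph as a bipartite edge list: one node per
    vertex, one node per hyperedge, and a (vertex, hyperedge) edge
    for each membership."""
    edges = []
    for he_id, members in hyperedges.items():
        for v in members:
            edges.append((("v", v), ("he", he_id)))
    return edges

# A 3-person group chat is one hyperedge, not three pairwise edges --
# the group structure that plain graphs lose.
groups = {"chat1": ["alice", "bob", "carol"], "chat2": ["bob", "dave"]}
bip = hypergraph_to_bipartite(groups)
```

A graph engine can then run computations over `bip` with ordinary vertex programs, alternating between the vertex side and the hyperedge side.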

Available Media
12:15 pm–2:00 pm Tuesday

Luncheon for Workshop Attendees

Terra Courtyard

2:00 pm–3:40 pm Tuesday

Carefully Distributed

Session Chair: Austin Clements, Google

Highly Auditable Distributed Systems

Murat Demirbas, University at Buffalo, SUNY; Sandeep Kulkarni, Michigan State University

Auditability is a key requirement for providing scalability and availability to distributed systems. Auditability allows us to identify latent concurrency bugs, dependencies among events, and performance bottlenecks. Our work focuses on providing auditability by combining two key concepts: time and causality. In particular, we prescribe hybrid logical clocks (HLC) which offer the functionality of logical clocks while keeping them close to physical clocks. We propose that HLC can enable effective detection of invariant predicate violations and latent concurrency bugs, and provide efficient means to correct the state of the distributed system back to good states.
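The hybrid logical clock rules from the authors' published algorithm can be sketched compactly: each timestamp is a pair (l, c), where l never falls behind the largest physical time seen and c is a logical counter that breaks ties. The injectable `now` function below is an assumption added to make the sketch testable:

```python
import time

class HLC:
    """Sketch of a hybrid logical clock: logical timestamps that stay
    close to physical time. l tracks the max physical time observed;
    c disambiguates events sharing the same l."""
    def __init__(self, now=time.time):
        self.l, self.c = 0, 0
        self.now = now

    def send(self):
        """Timestamp a local or send event."""
        pt = self.now()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def recv(self, m_l, m_c):
        """Merge an incoming message timestamp (m_l, m_c)."""
        pt = self.now()
        l_old = self.l
        self.l = max(l_old, m_l, pt)
        if self.l == l_old and self.l == m_l:
            self.c = max(self.c, m_c) + 1
        elif self.l == l_old:
            self.c += 1
        elif self.l == m_l:
            self.c = m_c + 1
        else:
            self.c = 0
        return (self.l, self.c)

# With a frozen physical clock, successive events advance only c,
# while a message from a "future" node lifts l past local time.
clk = HLC(now=lambda: 100)
t1 = clk.send()
t2 = clk.send()
t3 = clk.recv(200, 5)
```

Timestamps respect causality like Lamport clocks, yet l stays within clock skew of real time, which is what makes them useful for auditing against wall-clock expectations.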

Available Media

App–Bisect: Autonomous Healing for Microservice-Based Apps

Shriram Rajagopalan and Hani Jamjoom, IBM T. J. Watson Research Center

The microservice and DevOps approach to software design has resulted in new software features being delivered immediately to users, instead of waiting for long refresh cycles. On the downside, software bugs and performance regressions have now become an important cause of downtime. We propose app-bisect, an autonomous tool to troubleshoot and repair such software issues in production environments. Our insight is that the evolution of microservices in an application can be captured as mutations to the graph of microservice dependencies, such that a particular version of the graph from the past can be deployed automatically, as an interim measure until the problem is permanently fixed. Using canary testing and version-aware routing techniques, we describe how the search process can be sped up to identify such a candidate version. We present the overall design and key challenges towards implementing such a system.

Available Media

Provenance Issues in Platform-as-a-Service Model of Cloud Computing

Devdatta Kulkarni, Rackspace and The University of Texas at Austin

In this paper we present provenance issues that arise in building Platform-as-a-Service (PaaS) model of cloud computing. The issues are related to designing, building, and deploying of the platform itself, and those related to building and deploying applications on the platform. These include, tracking of commands for successful software installations, tracking of inter-service dependencies, tracking of configuration parameters for different services, and tracking of application related artifacts. We identify the provenance information to address these issues and propose mechanisms to track it.

Available Media

Privacy-Preserving Offloading of Mobile App to the Public Cloud

Yue Duan, Mu Zhang, Heng Yin, and Yuzhe Tang, Syracuse University

To support intensive computations on resource-restricted mobile devices, studies have been made to enable the offloading of part of a mobile program to the cloud. However, none of the existing approaches considers user privacy when transmitting code and data off the device, resulting in potential privacy breaches. In this paper, we present the design and implementation of a system that automatically performs fine-grained privacy-preserving Android app offloading. It utilizes static analysis and bytecode instrumentation techniques to ensure transparent and efficient Android app offloading while preserving user privacy. We evaluate the effectiveness and performance of our system using two Android apps. Preliminary experimental results show that our offloading technique can effectively preserve user privacy while reducing hardware resource consumption at the same time.

Available Media
3:40 pm–4:10 pm Tuesday

Break with Refreshments

Mezzanine East/West

4:10 pm–5:25 pm Tuesday

Not Enough Speed

Joint Session with HotStorage
Santa Clara Ballroom

Session Chair: Dan Ports, University of Washington

Dynacache: Dynamic Cloud Caching

Asaf Cidon and Assaf Eisenman, Stanford University; Mohammad Alizadeh, MIT CSAIL; Sachin Katti, Stanford University

Web-scale applications are heavily reliant on memory cache systems such as Memcached to improve throughput and reduce user latency. Small performance improvements in these systems can result in large end-to-end gains; for example, a marginal increase in hit rate of 1% can reduce the application layer latency by over 25%. Yet surprisingly, many of these systems use generic first-come-first-served designs with simple fixed-size allocations that are oblivious to the application's requirements. In this paper, we use detailed empirical measurements from a widely used caching service, Memcachier, to show that these simple default policies can lead to significant performance penalties, in some cases increasing the number of cache misses by as much as 3x.

Motivated by these empirical analyses, we propose Dynacache, a cache controller that significantly improves the hit rate of web applications, by profiling applications and dynamically tailoring memory resources and eviction policies. We show that for certain applications in our real-world traces from Memcachier, Dynacache reduces the number of misses by more than 65% with a minimal overhead on the average request performance. We also show that Memcachier would need to more than double the number of Memcached servers in order to achieve the same reduction of misses that is achieved by Dynacache. In addition, Dynacache allows Memcached operators to better plan their resource allocation and manage server costs, by estimating the cost of cache hits as a function of memory.

Available Media

Pricing Games for Hybrid Object Stores in the Cloud: Provider vs. Tenant

Yue Cheng and M. Safdar Iqbal, Virginia Tech; Aayush Gupta, IBM Almaden Research Center; Ali R. Butt, Virginia Tech

Cloud object stores are increasingly becoming the de facto storage choice for big data analytics platforms, mainly because they simplify the management of large blocks of data at scale. To ensure cost-effectiveness of the storage service, the object stores use hard disk drives (HDDs). However, the lower performance of HDDs affects tenants who have strict performance requirements for their big data applications. The use of faster storage devices such as solid state drives (SSDs) is thus desirable to tenants, but incurs significant maintenance costs for the provider. We design a tiered object store for the cloud, which comprises both fast and slow storage devices. The resulting hybrid store exposes the tiering to tenants with a dynamic pricing model that is based on the tenants' usage and the provider's desire to maximize profits. The tenants leverage knowledge of their workloads and current pricing information to select a data placement strategy that would meet the application requirements at the lowest cost. Our approach allows both a service provider and its tenants to engage in a pricing game, which our results show yields a win-win situation.

Available Media

The Cloud is Not Enough: Saving IoT from the Cloud

Ben Zhang, Nitesh Mor, John Kolb, Douglas S. Chan, Nikhil Goyal, Ken Lutz, Eric Allman, John Wawrzynek, Edward Lee, and John Kubiatowicz, University of California, Berkeley

The Internet of Things (IoT) represents a new class of applications that can benefit from cloud infrastructure. However, the current approach of directly connecting smart devices to the cloud has a number of disadvantages and is unlikely to keep up with either the growing speed of the IoT or the diverse needs of IoT applications.

In this paper we explore these disadvantages and argue that fundamental properties of the IoT prevent the current approach from scaling. What is missing is a well-architected system that extends the functionality of the cloud and provides seamless interplay among the heterogeneous components in the IoT space. We argue that raising the level of abstraction to a data-centric design, focused around the distribution, preservation and protection of information, provides a much better match to the IoT. We present early work on such a distributed platform, called the Global Data Plane (GDP), and discuss how it addresses the problems with the cloud-centric architecture.

Available Media