Technical Sessions

All sessions will take place in the Gold Room unless otherwise noted.

Full conference papers and the complete proceedings are available to conference registrants immediately, and to everyone beginning Wednesday, June 26, 2013. Abstracts and the proceedings front matter are available to everyone immediately.

Proceedings Front Matter: 
Cover Page | Title Page and List of Organizers | Table of Contents | Message from the Program Co-Chairs

 

Wednesday, June 26, 2013

8:15 a.m.–8:45 a.m. Wednesday

Continental Breakfast

Market Street Foyer

8:45 a.m.–9:00 a.m. Wednesday
9:00 a.m.–10:30 a.m. Wednesday

Keynote Address I

Session Chair: Xiaoyun Zhu, VMware

Perspectives on Virtualized Resource Management

Carl Waldspurger

In little more than a decade, virtualization has evolved from an exotic mainframe technology to become the cornerstone of enterprise data centers and modern cloud-computing infrastructures. Much of virtualization's power derives from the extra level of indirection it introduces, but this is a double-edged sword. Consolidation inherently complicates performance isolation, and the hypervisor faces a semantic gap in trying to understand guest behavior. Despite many innovations, practical end-to-end control over application performance remains elusive.

This talk will focus on key challenges of virtualized resource management, drawing on examples from my own experiences in both research and product development. I will also highlight several promising approaches and new techniques aimed at achieving more autonomic resource management in virtualized systems.

Carl Waldspurger has been innovating in the area of resource management for more than two decades. He is active in the systems research community, and served as the program co-chair for the 2011 USENIX Annual Technical Conference. Carl is currently working closely with several early-stage startups, including CloudPhysics and PrivateCore. For over a decade, he was responsible for core resource management and virtualization technologies at VMware. Carl led the design and implementation of processor scheduling and memory management for the ESX hypervisor, and was the architect for VMware's Distributed Resource Scheduler (DRS). Prior to VMware, he was a researcher at the DEC Systems Research Center. Carl holds a Ph.D. in computer science from MIT, for which he received the ACM Doctoral Dissertation Award.

Available Media
10:30 a.m.–11:00 a.m. Wednesday

Break with Refreshments

Market Street Foyer

11:00 a.m.–12:30 p.m. Wednesday

Cloud Management

Session Chair: Jia Rao, University of Colorado, Colorado Springs

Application Placement and Demand Distribution in a Global Elastic Cloud: A Unified Approach

Hangwei Qian, VMware, Inc.; Michael Rabinovich, Case Western Reserve University

Efficient hosting of applications in a globally distributed multi-tenant cloud computing platform requires policies to decide where to place application replicas and how to distribute client requests among these replicas in response to the dynamic demand. We present a unified method that computes both policies together based on a sequence of min-cost flow models. Further, since optimization problems are generally very large-scale in this environment, we propose a novel demand clustering approach to make them computationally practical. An experimental evaluation, both through large-scale simulation and a prototype in a testbed deployment, shows significant promise of our approach for the targeted environment.

Available Media

To Reserve or Not to Reserve: Optimal Online Multi-Instance Acquisition in IaaS Clouds

Wei Wang, Baochun Li, and Ben Liang, University of Toronto
Best Paper Award Finalist 

Infrastructure-as-a-Service (IaaS) clouds offer diverse instance purchasing options. A user can either run instances on demand, paying only for what is used, or prepay to reserve instances for a long period, during which the usage is discounted. An important problem facing a user is how these two instance options can be dynamically combined to serve time-varying demands at minimum cost. Existing strategies in the literature, however, require either exact knowledge or the distribution of long-term future demands, which significantly limits their use in practice. Unlike existing work, we propose two practical online algorithms, one deterministic and one randomized, that dynamically combine the two instance options without any knowledge of the future. We show that the proposed deterministic (resp., randomized) algorithm incurs no more than 2 − α (resp., e/(e − 1 + α)) times the minimum cost obtained by an optimal offline algorithm that knows the exact future a priori, where α is the discount entitled after reservation. Our online algorithms achieve the best possible competitive ratios in both the deterministic and randomized cases, and can easily be extended to cases where short-term predictions are reliable. Simulations driven by a large volume of real-world traces show that significant cost savings can be achieved at prevalent IaaS prices.
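The deterministic side of this problem generalizes the classic ski-rental break-even rule: keep paying on demand until the money already spent at a given demand level would have paid for a reservation. The Python sketch below illustrates that rule only; it is not the authors' algorithm, and all prices, the discount, and the demand trace are hypothetical.

```python
# Illustrative break-even ("ski-rental") heuristic only -- not the paper's
# algorithm. All prices, the discount, and the demand trace are hypothetical.

ON_DEMAND = 0.10      # $ per instance-hour
RESERVE_FEE = 50.0    # one-time reservation fee
RESERVE_SPAN = 1000   # hours a reservation stays active
ALPHA = 0.2           # discounted usage rate on reserved instances

def provision(demand_trace):
    """demand_trace[t] = number of instances needed in hour t."""
    spent = {}           # slot index -> accumulated on-demand spending
    reserved_until = {}  # slot index -> hour at which the reservation expires
    cost = 0.0
    for t, demand in enumerate(demand_trace):
        for slot in range(demand):
            if reserved_until.get(slot, -1) >= t:
                cost += ON_DEMAND * ALPHA                 # discounted reserved usage
            elif spent.get(slot, 0.0) >= RESERVE_FEE:
                reserved_until[slot] = t + RESERVE_SPAN   # break-even: reserve now
                cost += RESERVE_FEE + ON_DEMAND * ALPHA
            else:
                spent[slot] = spent.get(slot, 0.0) + ON_DEMAND
                cost += ON_DEMAND
    return cost

print(f"total cost: ${provision([3, 5, 4] * 500):.2f}")
```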

Available Media

Elasticity in Cloud Computing: What It Is, and What It Is Not

Nikolas Roman Herbst, Samuel Kounev, and Ralf Reussner, Karlsruhe Institute of Technology

Originating in physics and economics, the term elasticity is nowadays heavily used in the context of cloud computing, where it is commonly understood as the ability of a system to automatically provision and deprovision computing resources on demand as workloads change. However, elasticity still lacks a precise definition, as well as representative metrics coupled with a benchmarking methodology that would enable comparability of systems. Existing definitions of elasticity are largely inconsistent and unspecific, which leads to confusion in the use of the term and in its differentiation from related terms such as scalability and efficiency; the proposed measurement methodologies do not provide means to quantify elasticity without mixing it with efficiency or scalability aspects. In this short paper, we propose a precise definition of elasticity and analyze its core properties and requirements, explicitly distinguishing it from related terms such as scalability and efficiency. Furthermore, we present a set of appropriate elasticity metrics and sketch a new elasticity-tailored benchmarking methodology that addresses the special requirements of workload design and calibration.

Available Media

K-Scope: Online Performance Tracking for Dynamic Cloud Applications

Li Zhang, Xiaoqiao Meng, Shicong Meng, and Jian Tan, IBM T.J. Watson Research Center

Cloud computing is an ongoing technology evolution that reshapes every aspect of computing. The cloud provides on-demand, flexible, and easy-to-use resource provisioning. It is also an open platform on which cloud users can share software components, resources, and services. These features give rise to several emerging cloud application development and deployment paradigms, represented by continuous delivery and shared platform services.

Available Media

 

12:30 p.m.–2:00 p.m. Wednesday

FCW Luncheon

Market Street Foyer

2:00 p.m.–3:30 p.m. Wednesday

System Resource Management

Session Chair: Samuel Kounev, Karlsruhe Institute of Technology

Adaptive Performance-Aware Distributed Memory Caching

Jinho Hwang and Timothy Wood, The George Washington University

Distributed in-memory caching systems such as memcached have become crucial for improving the performance of web applications. However, memcached by itself does not control which node is responsible for each data object, and inefficient partitioning schemes can easily lead to load imbalances. Further, a statically sized memcached cluster can be insufficient or inefficient when demand rises and falls. In this paper we present an automated cache management system that both intelligently decides how to scale a distributed caching system and uses a new, adaptive partitioning algorithm that ensures that load is evenly distributed despite variations in object size and popularity. We have implemented an adaptive hashing system as a proxy and node control framework for memcached, and evaluate it on EC2 using a set of realistic benchmarks including database dumps and traces from Wikipedia.
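For context, the baseline partitioning that memcached deployments typically rely on is client-side consistent hashing, which the adaptive partitioner described above refines by shifting key ranges between servers as object size and popularity change. Below is a minimal, generic consistent-hashing sketch (not the paper's algorithm; server names are invented):

```python
import bisect
import hashlib

# Standard consistent hashing with virtual nodes -- the baseline partitioning
# scheme that an adaptive system would refine. Generic sketch; the paper's
# load-aware repartitioning is not shown.

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self._ring = sorted(
            (self._hash(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        i = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[i][1]   # first virtual node clockwise of the key

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.lookup("user:42"), ring.lookup("session:9"))
```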

Available Media

Exploiting Processor Heterogeneity in Interactive Services

Shaolei Ren, Florida International University; Yuxiong He, Sameh Elnikety, and Kathryn S. McKinley, Microsoft Research

To add processing power under power constraints, emerging heterogeneous processors include fast and slow cores on the same chip. This paper demonstrates that this heterogeneity is well suited to interactive data center workloads (e.g., web search, online gaming, and financial trading) by observing and exploiting two workload properties. (1) These workloads may trade response quality for responsiveness. (2) The request service demand is unknown and varies widely with both short and long requests. Subject to per-server power constraints, traditional homogeneous processors either include a few high-power fast cores that deliver high quality responses or many low-power slow cores that deliver high throughput, but not both.

This paper shows heterogeneous processors deliver both high quality and throughput by executing short requests on slow cores and long requests on fast cores with Fast Old and First (FOF), a new scheduling algorithm. FOF schedules new requests with unknown service demands on the fastest idle core and migrates requests from slower to faster cores. We simulate and implement FOF. In simulations modeling Microsoft's Bing index search, FOF on heterogeneous processors improves response quality and increases throughput by up to 50% compared to homogeneous processors. We confirm the simulation improvements with an implementation of an interactive finance server using Simultaneous Multithreading (SMT), configured as a dynamic heterogeneous processor. Both simulation and experimental results indicate that processor heterogeneity offers significant potential for interactive workloads.
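A minimal sketch of the FOF placement rule as described in the abstract: new requests of unknown demand go to the fastest idle core, and when a fast core frees up it pulls the oldest request off a strictly slower core. The Core abstraction and event handling are simplified assumptions; the real system accounts for preemption and migration costs.

```python
# Simplified FOF placement rule: new requests go to the fastest idle core;
# when a core frees up, it pulls the oldest request off a slower core. The
# Core abstraction is hypothetical and ignores migration cost.

class Core:
    def __init__(self, speed):
        self.speed = speed      # relative speed: 2.0 = fast, 1.0 = slow
        self.arrival = None     # arrival time of the running request, or None

def dispatch(cores, arrival_time):
    """Place a request of unknown service demand on the fastest idle core."""
    idle = [c for c in cores if c.arrival is None]
    if idle:
        max(idle, key=lambda c: c.speed).arrival = arrival_time

def on_completion(cores, freed):
    """On completion, migrate the oldest request from a strictly slower core."""
    freed.arrival = None
    slower = [c for c in cores if c.arrival is not None and c.speed < freed.speed]
    if slower:
        victim = min(slower, key=lambda c: c.arrival)   # oldest request
        freed.arrival, victim.arrival = victim.arrival, None

cores = [Core(2.0), Core(2.0), Core(1.0), Core(1.0)]
for t in range(4):
    dispatch(cores, t)          # requests arrive at t = 0..3
on_completion(cores, cores[0])  # a fast core finishes; oldest slow request moves up
print([(c.speed, c.arrival) for c in cores])
```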

Available Media

Autonomic Management of Dynamically Partially Reconfigurable FPGA Architectures Using Discrete Control

Xin An and Eric Rutten, Inria Grenoble Rhône-Alpes; Jean-Philippe Diguet and Nicolas le Griguer, Lab-STICC; Abdoulaye Gamatié, LIRMM

This paper targets the autonomic management of dynamically partially reconfigurable hardware architectures based on FPGAs. Discrete control, modelled with labelled transition systems, is employed to capture the relevant behaviours of the computing system and to derive a controller that enforces the control objectives. We consider system applications described as task graphs, and the FPGA as a set of reconfigurable areas that can be dynamically partially reconfigured to execute tasks. The computation of an autonomic manager is encoded as a Discrete Controller Synthesis problem w.r.t. multiple constraints and objectives, e.g., mutual exclusion of resource uses and power cost minimization.

Available Media

FMEM: A Fine-grained Memory Estimator for MapReduce Jobs

Lijie Xu, Institute of Software, Chinese Academy of Sciences, and University of Chinese Academy of Sciences; Jie Liu and Jun Wei, Institute of Software, Chinese Academy of Sciences

MapReduce is designed as a simple and scalable framework for big data processing. Due to the lack of resource usage models, its implementation Hadoop leaves resource planning and optimization to users, who find it difficult to specify the right resource-related, especially memory-related, configurations without good knowledge of a job's memory usage. Modeling memory usage is challenging because of the many influencing factors, such as the framework's dataflow, user-defined programs, the large configuration space, and the memory management mechanism of the JVM. To help both users and the framework analyze, predict, and optimize memory usage, we propose FMEM, a Fine-grained Memory Estimator for MapReduce jobs. FMEM contains a dataflow estimator that predicts the data volume flowing among map/reduce tasks. Based on the dataflow and on rules of memory utilization learned from a large number of jobs, FMEM uses a combined rule-based and statistical method to estimate fine-grained memory usage in each generation of a task's JVM. Representative benchmarks show that FMEM can predict the memory usage of diverse jobs within a 20% relative error. Furthermore, FMEM can be extended to find optimal dataflow and memory-related configurations.

Available Media

 

3:30 p.m.–4:00 p.m. Wednesday

Break with Refreshments

Market Street Foyer

4:00 p.m.–5:30 p.m. Wednesday

Virtual Machine Management

Session Chair: Ming Zhao, Florida International University

AGILE: Elastic Distributed Resource Scaling for Infrastructure-as-a-Service

Hiep Nguyen, Zhiming Shen, and Xiaohui Gu, North Carolina State University; Sethuraman Subbiah, NetApp Inc.; John Wilkes, Google Inc.

Dynamically adjusting the number of virtual machines (VMs) assigned to a cloud application to keep up with load changes and interference from other uses typically requires detailed application knowledge and an ability to know the future, neither of which is readily available to infrastructure service providers or application owners. The result is that systems either need to be over-provisioned (costly) or risk missing their performance Service Level Objectives (SLOs) and having to pay penalties (also costly). AGILE deals with both issues: it uses wavelets to provide a medium-term resource demand prediction with enough lead time to start up new application server instances before performance falls short, and it uses dynamic VM cloning to reduce application startup times. Tests using RUBiS and Google cluster traces show that AGILE can predict varying resource demands over the medium term with a true positive rate up to 3.42× better than, and a false positive rate only 0.34× that of, existing schemes. Given a target SLO violation rate, AGILE can efficiently handle dynamic application workloads, reducing both penalties and user dissatisfaction.
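AGILE's predictor is wavelet-based; as a rough illustration of medium-term prediction with lead time, the sketch below substitutes a much simpler multi-scale trend extrapolation. The scales, trace, and horizon are all hypothetical.

```python
import numpy as np

# Simplified stand-in for a wavelet-based predictor: extrapolate the demand
# trend at several time scales and provision for the largest estimate.
# Scales, trace, and horizon are hypothetical.

def predict(history, horizon, scales=(4, 16, 64)):
    """Estimate demand `horizon` steps ahead from per-scale trends."""
    history = np.asarray(history, dtype=float)
    estimates = []
    for s in scales:
        if len(history) < 2 * s:
            continue
        recent = history[-s:].mean()
        previous = history[-2 * s:-s].mean()
        trend = (recent - previous) / s        # per-step trend at this scale
        estimates.append(recent + trend * horizon)
    return max(estimates)                      # pessimistic across scales

load = 100 + 50 * np.sin(np.linspace(0, 6, 300))   # toy demand trace
print(f"predicted demand 30 steps ahead: {predict(load, horizon=30):.1f}")
```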

Available Media

PACMan: Performance Aware Virtual Machine Consolidation

Alan Roytman, University of California, Los Angeles; Aman Kansal, Microsoft Research; Sriram Govindan, Microsoft Corporation; Jie Liu and Suman Nath, Microsoft Research

Consolidation of multiple workloads, encapsulated in virtual machines (VMs), can significantly improve efficiency in cloud infrastructures. But consolidation also introduces contention in shared resources such as the memory hierarchy, leading to degraded VM performance. To avoid such degradation, the current practice is to not pack VMs tightly and to leave a large fraction of server resources unused. This is wasteful. We present a system that consolidates VMs such that performance degradation stays within a tunable bound while unused resources are minimized. The problem of selecting the most suitable VM combinations is NP-complete, and our system employs a practical method that performs provably close to optimal. In some scenarios resource efficiency may trump performance, and for this case our system implements a technique that maximizes performance while not leaving any resource unused. Experimental results show that the proposed system realizes over 30% savings in energy costs and up to 52% reduction in performance degradation compared to consolidation algorithms that do not consider degradation.

Available Media

Working Set-based Physical Memory Ballooning

Jui-Hao Chiang, Stony Brook University; Han-Lin Li and Tzi-cker Chiueh, Industrial Technology Research Institute

Minimizing the total physical memory consumption of a set of virtual machines (VMs) running on a physical machine is key to improving a hypervisor's consolidation ratio, which is defined as the maximum number of VMs that can run on a server without any performance degradation. To give each VM just enough physical memory, equal to its true working set (TWS), we propose a TWS-based memory ballooning mechanism that takes away all unneeded physical memory from a VM without affecting its performance. Compared with a state-of-the-art commercial hypervisor, this working set-based memory virtualization technique produces a noticeably more effective reduction in physical memory consumption under the same input workloads, and thus represents a promising addition to the repertoire of hypervisor-level optimization technologies.

Available Media

Coriolis: Scalable VM Clustering in Clouds

Daniel Campello and Carlos Crespo, Florida International University; Akshat Verma, IBM Research-India; Raju Rangaswami, Florida International University; Praveen Jayachandran, IBM Research-India

The growing popularity of virtualized data centers and clouds has led to virtual machine sprawl, significantly increasing system management costs. We present Coriolis, a scalable system that analyzes virtual machine images and automatically clusters them based on content and/or semantic similarity. Image similarity analysis can improve the planning of many management activities (e.g., migration, system administration, VM placement) and reduce their execution cost. However, clustering images based on similarity, whether content or semantic, requires large-scale data processing and does not scale well. Coriolis uses (i) asymmetric similarity semantics and (ii) a hierarchical clustering approach with a data access requirement that is linear in the number of images. This represents a significant improvement over conventional clustering approaches, which incur quadratic complexity and therefore become prohibitively expensive in a cloud setting.

Available Media

 

6:30 p.m.–8:00 p.m. Wednesday

Poster Session

Market Street Foyer

Session Chair: Samuel Kounev, Karlsruhe Institute of Technology

 

Thursday, June 27, 2013

8:30 a.m.–9:00 a.m. Thursday

Continental Breakfast

Market Street Foyer

9:00 a.m.–10:30 a.m. Thursday

Keynote Address II

Session Chair: Calton Pu, Georgia Institute of Technology

Recent Advances on Structured Data and the Web

Alon Halevy, Google

The World-Wide Web contains vast quantities of structured data on a variety of domains, such as hobbies, products and reference data. Moreover, the Web provides a platform that can encourage publishing more data sets from governments and other public organizations and support new data management opportunities, such as effective crisis response, data journalism and crowd-sourcing data sets. For the first time since the emergence of the Web, structured data is being used widely by search engines and is being collected via a concerted effort.

I will describe some of the efforts we are conducting at Google to collect structured data, filter the high-quality content, and serve it to our users. These efforts include providing Google Fusion Tables, a service for easily ingesting, visualizing and integrating data, mining the Web for high-quality HTML tables, and contributing these data assets to Google's other services.

Alon Halevy heads the Structured Data Management Research group at Google. Prior to that, he was a professor of Computer Science at the University of Washington in Seattle, where he founded the database group. In 1999, Dr. Halevy co-founded Nimble Technology, one of the first companies in the Enterprise Information Integration space, and in 2004 he founded Transformic, a company that created search engines for the deep web and was acquired by Google. Dr. Halevy is a Fellow of the Association for Computing Machinery, received the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2000, and was a Sloan Fellow (1999-2000). He received his Ph.D. in Computer Science from Stanford University in 1993 and his bachelor's degree from the Hebrew University in Jerusalem. Halevy is also a coffee culturalist and the author of The Infinite Emotions of Coffee (2011), as well as a co-author of Principles of Data Integration (2012).

10:30 a.m.–11:00 a.m. Thursday

Break with Refreshments

Market Street Foyer

11:00 a.m.–12:30 p.m. Thursday

MapReduce Workloads and Key-Value Stores

Session Chair: Christopher Stewart, The Ohio State University

iShuffle: Improving Hadoop Performance with Shuffle-on-Write

Yanfei Guo, Jia Rao, and Xiaobo Zhou, University of Colorado, Colorado Springs
Awarded Best Paper! 

Hadoop is a popular implementation of the MapReduce framework for running data-intensive jobs on clusters of commodity servers. Although Hadoop automatically parallelizes job execution with concurrent map and reduce tasks, we find that shuffle, the all-to-all input data fetching phase in a reduce task, can significantly affect job performance. We attribute the delay in job completion to the coupling of the shuffle phase and reduce tasks, which leaves the potential parallelism between multiple waves of map and reduce unexploited, fails to address data distribution skew among reduce tasks, and makes task scheduling inefficient. In this work, we propose to decouple shuffle from reduce tasks and convert it into a platform service provided by Hadoop. We present iShuffle, a user-transparent shuffle service that proactively pushes map output data to nodes via a novel shuffle-on-write operation and flexibly schedules reduce tasks considering workload balance. Experimental results with representative workloads show that iShuffle reduces job completion time by as much as 30.2%.

Available Media

AUTOPLACER: Scalable Self-Tuning Data Placement in Distributed Key-value Stores

João Paiva, Pedro Ruivo, Paolo Romano, and Luís Rodrigues, INESC-ID Lisboa, Instituto Superior Técnico, and Universidade Técnica de Lisboa
Best Paper Award Finalist 

This paper addresses the problem of autonomic data placement in replicated key-value stores. The goal is to automatically optimize replica placement in a way that leverages locality patterns in data accesses, such that inter-node communication is minimized. To do this efficiently is extremely challenging, as one needs not only to find lightweight and scalable ways to identify the right data placement, but also to preserve fast data lookup. The paper introduces new techniques that address each of the challenges above. The first challenge is addressed by optimizing, in a decentralized way, the placement of the objects generating most remote operations for each node. The second challenge is addressed by combining the usage of consistent hashing with a novel data structure, which provides efficient probabilistic data placement. These techniques have been integrated in Infinispan, a popular open-source key-value store. The performance results show that the throughput of the optimized system can be 6 times better than a baseline system employing the widely used static placement based on consistent hashing.
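A toy sketch of the hybrid lookup idea described above: the few hot objects that generate most remote operations get explicit placement entries, and all other keys fall back to hash-based default placement (shown here as simple modulo hashing; the real system uses consistent hashing, and replaces the explicit table with a compact probabilistic structure). Node names are invented.

```python
import hashlib

# Toy hybrid lookup: explicit placement for hot objects, hash-based default
# placement for the rest. Node names are invented; AUTOPLACER stores the
# exceptions in a compact probabilistic structure rather than a plain dict.

NODES = ["node-0", "node-1", "node-2"]
hot_placement = {}   # object key -> optimized owner, from access statistics

def default_owner(key):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def lookup(key):
    # Hot objects take their optimized placement; others use the default.
    return hot_placement.get(key) or default_owner(key)

hot_placement["cart:alice"] = "node-2"   # most remote ops came from node-2
print(lookup("cart:alice"), lookup("cart:bob"))
```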

Available Media

Adaptive Information Passing for Early State Pruning in MapReduce Data Processing Workflows

Seokyong Hong, Padmashree Ravindra, and Kemafor Anyanwu, North Carolina State University

MapReduce data processing workflows often consist of multiple cycles, where each cycle hosts the execution of some data processing operators, e.g., joins, defined in a program. A common situation is that many data items propagated along a workflow end up being "fruitless," i.e., they do not contribute to the final output. Given that the dominant costs associated with MapReduce processing (I/O, sorting, and network transfer) are very sensitive to the size of intermediate state, such fruitless data items add unnecessarily to workflow costs. Consequently, it may be possible to improve the performance of MapReduce data processing workflows by eliminating fruitless data items as early as possible. Achieving this requires maintaining extra information about the state (output) of each operator, and then passing this information to descendant operators in the workflow, which can use it to prune fruitless data items from their other inputs. However, this process is not without overhead, and in some cases its costs may outweigh its benefits. Consequently, a technique for adaptively including Information Passing in an execution plan is needed. This adaptivity must be driven by a cost model that accounts for MapReduce's partitioned execution model as well as its restricted model of communication between operators; these nuances of MapReduce limit the applicability of information passing techniques developed for traditional database systems.

In this paper, we propose an approach for implementing Adaptive Information Passing for MapReduce platforms. Our proposal includes a benefit estimation model, and an approach for collecting data statistics needed for benefit estimation, which piggybacks on operator execution. Our approach has been integrated into Apache Hive and a comprehensive empirical evaluation is presented.
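One common concrete form of information passing is semi-join-style pruning: the producer of one join input publishes a summary of the join keys it emitted, and the next cycle uses that summary to drop fruitless records from its other input before the expensive shuffle. The sketch below uses an exact Python set as the summary purely for illustration; a real implementation would typically ship a compact structure such as a Bloom filter, and the datasets are invented.

```python
# Semi-join-style information passing: cycle 1 records the join keys it
# produced; cycle 2 prunes its other input with that summary before the
# shuffle. An exact set stands in for the compact summary (e.g., a Bloom
# filter) a real system would pass; the datasets are invented.

orders = [("o1", "alice"), ("o2", "bob"), ("o3", "carol")]
clicks = [("alice", "/home"), ("dave", "/cart"), ("bob", "/pay")] * 1000

# Cycle 1: piggyback key collection on the producing operator.
order_keys = {customer for _, customer in orders}

# Cycle 2: drop fruitless click records early using the passed state.
surviving = [(u, url) for u, url in clicks if u in order_keys]
print(f"kept {len(surviving)} of {len(clicks)} click records")
```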

Available Media

 

12:30 p.m.–2:00 p.m. Thursday

FCW Luncheon

Market Street Foyer

2:00 p.m.–3:45 p.m. Thursday

Management of Big Data Systems Track

Session Chairs: Karsten Schwan, Georgia Institute of Technology, and Vanish Talwar, HP Labs

To Auto Scale or Not to Auto Scale

Nathan D. Mickulicz, Priya Narasimhan, and Rajeev Gandhi, YinzCam, Inc., and Carnegie Mellon University

YinzCam is a cloud-hosted service that provides sports fans with real-time scores, news, photos, statistics, live radio, streaming video, etc., on their mobile devices. YinzCam's infrastructure is currently hosted on Amazon Web Services (AWS) and supports over 7 million downloads of the official mobile apps of 40+ professional sports teams and venues. YinzCam's workload is necessarily multi-modal (e.g., pre-game, in-game, post-game, game-day, non-game-day, in-season, off-season) and exhibits large traffic spikes due to extensive usage by sports fans during the actual hours of a game, with normal game-time traffic being twenty times that on non-game days.

We discuss the system's performance in the three phases of its evolution: (i) when we initially deployed the YinzCam infrastructure and our users experienced unpredictable latencies and a large number of errors, (ii) when we enabled AWS' Auto Scaling capability to reduce the latency and the number of errors, and (iii) when we analyzed the YinzCam architecture and discovered opportunities for architectural optimization that allowed us to provide predictable performance with lower latency, fewer errors, and lower cost than with Auto Scaling enabled.

Available Media

Big Data Exploration Via Automated Orchestration of Analytic Workflows

Alina Beygelzimer, Anton Riabov, Daby Sow, Deepak S. Turaga, and Octavian Udrea, IBM T. J. Watson Research Center

Large-scale data exploration using Big Data platforms requires the orchestration of complex analytic workflows composed of atomic analytic components for data selection, feature extraction, modeling and scoring. In this paper, we propose an approach that uses a combination of planning and machine learning to automatically determine the most appropriate data-driven workflows to execute in response to a user-specified objective. We combine this with orchestration mechanisms and automatically deploy, adapt and manage such workflows across Big Data platforms. We present results of this automated exploration in real settings in healthcare.

Available Media

ThroughputScheduler: Learning to Schedule on Heterogeneous Hadoop Clusters

Shekhar Gupta, Christian Fritz, Bob Price, Roger Hoover, and Johan de Kleer, Palo Alto Research Center; Cees Witteveen, Delft University of Technology

Hadoop is the de facto standard for big data analytics applications. Presently available schedulers for Hadoop clusters assign tasks to nodes without regard to the capability of the nodes. We propose ThroughputScheduler, which reduces the overall job completion time on a cluster of heterogeneous nodes by actively scheduling tasks on nodes based on optimally matching job requirements to node capabilities. Node capabilities are learned by running probe jobs on the cluster. ThroughputScheduler uses a Bayesian, active learning scheme to learn the resource requirements of jobs on the fly. An empirical evaluation on a set of sample problems demonstrates that ThroughputScheduler can reduce total job completion time by almost 20% compared to the Hadoop FairScheduler and 40% compared to FIFOScheduler. ThroughputScheduler also reduces average mapping time by 33% compared to either of these schedulers.

Available Media

Real-Time User-Centric Management of Time-Intensive Analytics Using Convergence of Local Functions

Vinay Deolalikar, HP-Autonomy Research

The past decade has witnessed astonishing growth in unstructured information in enterprises. The commercial value locked in this information is being increasingly recognized. Accordingly, a range of textual document analytics (clustering, classification, taxonomy generation, provenance, etc.) has taken center stage as a potential means to manage the explosive growth of unstructured enterprise information and unlock its value.

Several analytics are time-intensive: the time taken to process today's increasingly large volumes of data is significantly longer than real time. However, users are increasingly demanding real-time services that rely on such time-intensive analytics. There is clearly a tension between these two developments.

In light of this, vendors increasingly realize that even though an analytic may take a long time to converge, they need to extract useful information from it in real time. Furthermore, this information has to be application-driven. In other words, it is often not an option to simply "wait until the analytic has finished running": they must start providing the user with information while the analytic is still running. In summary, there is an emerging emphasis in Enterprise Information Management (EIM) on extracting application-driven, real-time information from time-intensive analytics.

A priori, it is not clear what could be extracted from an analytic that has yet to complete, and whether any such information would be useful. As of the present, there is little or no research literature on this problem: it is generally assumed that all of the information from an analytic will be available upon its completion.

We present an approach to this problem that is based on decomposing the objective function of the analytic, which is a global function that determines the progress of the analytic, into multiple local, user-centric functions. How can we construct meaningful local functions? How can such functions be measured? How do these functions evolve with time? Do these functions encode useful information that can be obtained real-time? These are the questions we will address in this paper.

We demonstrate our approach using local functions on document clustering with the de facto standard algorithm, k-means. In this case, the multiple local user-centric functions transform k-means into a flow algorithm, with each local function measuring a flow. Our results show that these flows evolve very differently from the global objective function and, in particular, may often converge quickly at many local sites. Using this property, we are able to extract useful information considerably earlier than the time taken by k-means to converge to its final state.

We believe that such pragmatic approaches will have to be taken in order to manage systems performing analytics on large volumes of unstructured data.
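A minimal sketch of the local-function idea applied to k-means, as described above: track each centroid's movement (a per-cluster "flow") separately and surface clusters whose local function has already converged, well before the global objective settles. Thresholds and data are hypothetical.

```python
import numpy as np

# k-means with per-cluster "flow" monitoring: each centroid's movement is a
# local function that may converge long before the global objective does.
# Data, threshold, and iteration budget are hypothetical.

def kmeans_local(X, k, iters=50, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    settled = np.zeros(k, dtype=bool)
    for it in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        flow = np.linalg.norm(new - centers, axis=1)   # local functions
        for j in np.flatnonzero((flow < tol) & ~settled):
            settled[j] = True
            print(f"iteration {it}: cluster {j} ready to serve early")
        centers = new
        if settled.all():
            break
    return centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (200, 2)) for m in (0.0, 4.0, 9.0)])
kmeans_local(X, 3)
```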

Available Media

AutoTune: Optimizing Execution Concurrency and Resource Usage in MapReduce Workflows

Zhuoyao Zhang, University of Pennsylvania; Ludmila Cherkasova, Hewlett-Packard Labs; Boon Thau Loo, University of Pennsylvania

An increasing number of MapReduce applications are written using high-level SQL-like abstractions on top of MapReduce engines. Such programs are translated into MapReduce workflows where the output of one job becomes the input of the next job in the workflow. A user must specify the number of reduce tasks for each MapReduce job in a workflow. The reduce task setting may have a significant impact on the execution concurrency, processing efficiency, and completion time of the workflow. In this work, we outline an automated performance evaluation framework, called AutoTune, for guiding users' efforts in tuning the reduce task settings of sequential MapReduce workflows while achieving performance objectives. We evaluate the performance benefits of the proposed framework using a set of realistic MapReduce applications: TPC-H queries and custom programs mining a collection of enterprise web proxy logs.

Available Media

 

3:45 p.m.–4:00 p.m. Thursday

Break with Refreshments

Market Street Foyer

4:00 p.m.–5:45 p.m. Thursday

Self-Aware Internet of Things Track

Session Chair: Levent Gürgen, CEA-Leti

Self-healing and Optimizing of the HIP-based M2M Overlay Network

Amine Dhraief, HANA Research Group, University of Manouba; Khalil Drira, LAAS-CNRS, University of Toulouse; Abdelfettah Belghith, HANA Research Group, University of Manouba; Tarek Bouali and Mohamed Amine Ghorbali, HANA Research Group, University of Manouba, and LAAS-CNRS, University of Toulouse

The Machine-to-Machine (M2M) paradigm is a novel communication technology under standardization at both ETSI and 3GPP. It involves a set of sensors and actuators (M2M devices) communicating with M2M applications via M2M gateways, with no human intervention. Trust and privacy are key requirements for M2M communications, which drove us to propose a host identity protocol (HIP)-based M2M overlay network, called HBMON, to ensure private communications between M2M devices, M2M gateways, and M2M applications. In this paper, we first add self-healing capabilities to the M2M gateways: we enable at the gateway level the REAP protocol, a failure detection and locator-pair exploration protocol for IPv6 multihoming nodes, and we add mobility management capabilities to handle the mobility of M2M devices. We then add self-optimization capabilities: we modify the REAP protocol to continuously monitor the overlay paths and always select the best available one in terms of RTT. We implement our solution in the OMNeT++ network simulator. The results highlight the novel gateway capabilities: it recovers from failures, handles mobility, and always selects the best available path.

Available Media

Between Neighbors: Neighbor Discovery Analysis in EH-IoTs

Shruti Devasenapathy, Vijay S. Rao, R. Venkatesha Prasad, and Ignas Niemegeers, Delft University of Technology; Abdur Rahim, CreateNet

Devices in the future Internet of Things (IoT) will be scavenging energy from the ambiance for all their operations. They face challenges in various aspects of network organization and operation due to the nature of ambient energy sources such as solar insolation, vibration, and motion. In this paper, we analyze the classical two-way algorithm for neighbor discovery (ND) in an energy-harvesting IoT. Through analysis, we outline the parameters that play an important role in ND performance, such as node density, duty cycle, beamwidth, and energy profile. We also provide simulation results to understand the impact of the energy storage element of energy-harvesting devices on the ND process. We demonstrate that there exist trade-offs in the choice of antenna beamwidth and node duty cycle, given node density and energy arrival rate. We show that variations in energy availability impact ND performance, and that the right size of storage buffer can smooth the effects of energy variability.

Available Media

Towards a Generic Architecture and Methodology for Multi-goal, Highly-distributed and Dynamic Autonomic Systems

Sylvain Frey, EDF R&D and Télécom ParisTech, CNRS LTCI; Ada Diaconescu, Télécom ParisTech, CNRS LTCI; David Menga, EDF R&D; Isabelle Demeure, Télécom ParisTech, CNRS LTCI

Autonomic control is vital to the success of large-scale distributed and open IoT systems, which must simultaneously cater for the interests of several parties. However, developing and maintaining autonomic controllers is highly difficult and costly. To illustrate this problem, this paper considers a system that could be deployed in the future, integrating smart homes within a smart microgrid. The paper addresses this problem from a Software Engineering perspective, building on the authors' experience with devising autonomic systems and including recent work on integration design patterns. The contribution focuses on a generic architecture for multi-goal, adaptable and open autonomic systems, exemplified via the development of a concrete autonomic application for the smart micro-grid. Our long-term goal is to progressively identify and develop reusable artefacts, such as paradigms, models and frameworks for helping the development of autonomic applications, which are vital for reaching the full potential of IoT systems.

Available Media

Learning Deployment Trade-offs for Self-Optimization of Internet of Things Applications

Arun kishore Ramakrishnan, Nayyab Zia Naqvi, Zubair Wadood Bhatti, Davy Preuveneers, and Yolande Berbers, KU Leuven

The Internet of Things (IoT) is the next big wave in computing, characterized by large-scale, open-ended, heterogeneous networks of things with varying sensing, actuating, computing, and communication capabilities. Compared to the traditional field of autonomic computing, the IoT is characterized by an open-ended and highly dynamic ecosystem with variable workload and resource availability. These characteristics make it difficult to implement self-awareness capabilities for the IoT to manage and optimize itself. In this work, we introduce a methodology to explore and learn the trade-offs of different deployment configurations in order to autonomously optimize the QoS and other quality attributes of IoT applications. Our experiments demonstrate that the proposed methodology can automate the efficient deployment of IoT applications in the presence of multiple optimization objectives and variable operational circumstances.

Available Media

 

4:00 p.m.–5:30 p.m. Thursday

Management of Big Data Systems Panel

Empire Room

Moderators: Karsten Schwan, Georgia Institute of Technology; Vanish Talwar, HP Labs

Panelists: Lucy Cherkasova, HP Labs; Gregory Eitzmann, Google; Sameh Elnikety, Microsoft Research; Krishna Gade, Twitter; Nagapramod Mandagere, IBM Research; Sambavi Muthukrishnan, Facebook; Priya Narasimhan, Carnegie Mellon University; Dilma da Silva, Qualcomm

6:30 p.m.–8:00 p.m. Thursday

 

Friday, June 28, 2013

8:30 a.m.–9:00 a.m. Friday

Continental Breakfast

Market Street Foyer

9:00 a.m.–10:30 a.m. Friday

Keynote Address III

Session Chair: Jeffrey Kephart, IBM Research

Opportunities for Autonomic Behavior in Mobile Cloud Computing

Dilma da Silva, Qualcomm Research

Many mobile applications offer services that combine functionality from components in mobile devices, in cloud computing datacenters, and even across networking nodes in between. This talk discusses opportunities and techniques for applying autonomic computing to the mobile-cloud software stack, aiming to improve application efficiency (e.g., battery and network usage) and user experience.

Dilma da Silva is a Principal Engineer and Manager at Qualcomm Research in Santa Clara, California, where she leads the area of mobile cloud computing. Her prior work experience includes the IBM T. J. Watson Research Center in New York (2000-2012) and the University of Sao Paulo in Brazil (1996-2000). Her research activities have centered on scalable and adaptive system software, focusing in the past four years on cloud computing. She received her Ph.D. in computer science from Georgia Tech in 1997 and has published more than 70 technical papers. Dilma is an ACM Distinguished Scientist, an ACM Distinguished Speaker, a member of the board of CRA-W (Computer Research Association's Committee on the Status of Women in Computing Research) and of the CDC (Coalition for Diversifying Computing), a co-founder of the Latinas in Computing group, and treasurer/secretary for ACM SIGOPS. More information is available at www.dilmamds.com.

Available Media
10:30 a.m.–11:00 a.m. Friday

Break with Refreshments

Market Street Foyer

11:00 a.m.–12:45 p.m. Friday

Self-Protect/Self-Healing

Session Chair: Helen Gu, North Carolina State University

Autonomic Fail-over for a Software-Defined Container Computer Network

Chien-Yung Lee and Yu-Wei Lee, Industrial Technology Research Institute; Cheng-Chun Tu, Stony Brook University and Industrial Technology Research Institute; Pai-Wei Wang, Yu-Cheng Wang, and Chih-Yu Lin, Industrial Technology Research Institute; Tzi-cker Chiueh, Stony Brook University and Industrial Technology Research Institute

The ITRI container computer is a modular computer designed to be a building block for constructing cloud-scale data centers. Rather than using a traditional enterprise data center network architecture, which is typically based on a combination of Layer 2 switches and Layer 3 routers, the ITRI container computer’s internal interconnection fabric, called Peregrine, is a software-defined network specially architected to meet the scalability, fast fail-over and multi-tenancy requirements of these data centers. Peregrine uses as the underlying physical interconnect a mesh of commodity off-the-shelf Ethernet switches, and adopts a centralized network control architecture that operates these Ethernet switches as a coordinated distributed data plane. Compared with vanilla enterprise networks, Peregrine features a fast fail-over capability not only for network switch/link failures, but also for failures of its own control servers. This paper describes the design and implementation of Peregrine’s fault tolerance mechanisms, and shows their effectiveness using empirical performance measurements taken from a fully working Peregrine prototype under various failure scenarios.

Available Media

Fault Management in Map-Reduce Through Early Detection of Anomalous Nodes

Selvi Kadirvel, Jeffrey Ho, and José A. B. Fortes, University of Florida

Map-Reduce frameworks such as Hadoop have built-in fault-tolerance mechanisms that allow jobs to run to completion even in the presence of certain faults. However, these jobs can experience severe performance penalties under faulty conditions. In this paper, we present Fault-Managed Map-Reduce (FMR) which augments Hadoop with the functionality to mitigate job execution time penalties. FMR uses an anomaly detection algorithm based on sparse coding to anticipate a faulty slave node. This proposed technique has the following key advantages: (1) model training uses only normal-class data, (2) time taken for prediction is less than a second, and (3) confidence estimates are produced along with the anomaly prediction. FMR uses the result of anomaly detection to invoke a closed-loop recovery action, namely dynamic resource scaling. A scaling heuristic is proposed to determine the extent of scaling necessary to reduce impending performance penalty. FMR facilitates practical adoption by being implemented as a set of libraries and scripts that require no changes to the underlying source code of Hadoop. A set of realistic Map-Reduce applications were studied through a few thousand job executions on a 72-node Hadoop testbed. Detailed empirical evaluation shows that FMR successfully mitigates performance penalties from 119% down to 14%, averaged across experiments.

Available Media

Reliability and Timeliness Analysis of Fault-tolerant Distributed Publish/Subscribe Systems

Thadpong Pongthawornkamol and Klara Nahrstedt, University of Illinois at Urbana–Champaign; Guijun Wang, Boeing Research & Technology

The distributed publish/subscribe paradigm is a powerful data dissemination paradigm that offers both scalability and flexibility for time-sensitive applications. However, its high expressiveness makes it difficult to analyze or predict the performance of publish/subscribe systems, such as event delivery probability and end-to-end delivery delay, especially when the systems are deployed over distributed, large-scale networks. While several fault-tolerance techniques to increase reliability in distributed publish/subscribe systems have been proposed, the event delivery probability and timeliness of publish/subscribe systems with such reliability enhancements have not yet been analyzed. This paper proposes a generic model that abstracts the basic distributed publish/subscribe protocol, along with several commonly used fault-tolerance techniques, on top of distributed, large-scale networks. The overall goal of this model is to predict quality of service (QoS), in terms of event delivery probability and timeliness, based on statistical attributes of each component in the distributed publish/subscribe system. Evaluation via extensive simulations, with parameters computed from real-world traces, verifies the correctness of the proposed prediction model. The model can be used as a building block for automatic QoS control in distributed, time-sensitive publish/subscribe systems, e.g., subscriber admission control, broker deployment, and reliability optimization.

Available Media

Mitigating Anonymity Challenges in Automated Testing and Debugging Systems

Silviu Andrica and George Candea, École Polytechnique Fédérale de Lausanne (EPFL)

Modern software often provides automated testing and bug reporting facilities that enable developers to improve the software after release. Alas, this comes at the cost of user anonymity: reported execution traces may identify users. We present a way to mitigate this inherent tension between developer utility and user anonymity: automatically transform execution traces in a way that preserves their utility for testing and debugging while, at the same time, providing k-anonymity to users, i.e., a guarantee that the trace can at most identify the user as being part of a group of k indistinguishable users. We evaluate this approach in the context of an automated testing and bug reporting system for smartphone applications.

Available Media

 

12:45 p.m.–2:00 p.m. Friday

FCW Luncheon

Market Street Foyer

2:00 p.m.–3:30 p.m. Friday

Scheduling

Session Chair: Yuxiong He, Microsoft Research

Zoolander: Efficiently Meeting Very Strict, Low-Latency SLOs

Christopher Stewart and Aniket Chakrabarti, The Ohio State University; Rean Griffith, VMware

Internet services access networked storage many times while processing a request. Just a few slow storage accesses per request can raise response times substantially, making the whole service less usable and hurting profits. This paper presents Zoolander, a key-value store that meets strict, low-latency service level objectives (SLOs). Zoolander scales out using replication for predictability, an old but seldom-used approach that uses redundant accesses to mask outlier response times. Zoolander also scales out using traditional replication and partitioning. It uses an analytic model to efficiently combine these competing approaches based on systems data and workload conditions. For example, when workloads underutilize system resources, Zoolander's model often suggests replication for predictability, strengthening service levels by reducing outlier response times. When workloads use system resources heavily, causing large queuing delays, Zoolander's model suggests scaling out via traditional approaches. We used a diurnal trace to test Zoolander at scale (up to 40M accesses per hour). Zoolander reduced SLO violations by 32%.
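A small simulation of why replication for predictability works: if one access misses a latency SLO with probability p, issuing the access to two independent replicas and taking the first response misses it with probability roughly p^2. The latency distribution and SLO below are hypothetical.

```python
import numpy as np

# If one access misses the SLO with probability p, the first-of-two response
# misses with probability about p^2. Latencies and the SLO are hypothetical.

rng = np.random.default_rng(0)
SLO_MS = 10.0
N = 100_000

single = rng.lognormal(mean=1.0, sigma=1.0, size=N)            # one replica
first_of_two = np.minimum(single, rng.lognormal(1.0, 1.0, N))  # redundant access

print(f"single access SLO misses:    {(single > SLO_MS).mean():.3%}")
print(f"redundant access SLO misses: {(first_of_two > SLO_MS).mean():.3%}")
```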

Available Media

Preemptive ReduceTask Scheduling for Fair and Fast Job Completion

Yandong Wang, Auburn University; Jian Tan, IBM T.J. Watson Research; Weikuan Yu, Auburn University; Li Zhang and Xiaoqiao Meng, IBM T.J. Watson Research; Xiaobing Li, Auburn University

Hadoop MapReduce adopts a two-phase (map and reduce) scheme to schedule tasks among data-intensive applications. However, under this scheme, Hadoop schedulers do not work effectively for both phases. We reveal that there exists a serious fairness issue among jobs of different sizes, leading to prolonged execution for small jobs, which are starving for reduce slots held by large jobs. To solve this fairness issue and ensure fast completion for all jobs, we propose the Preemptive ReduceTask mechanism and the Fair Completion scheduler. Preemptive ReduceTask is a mechanism that corrects the monopolizing behavior of long reduce tasks from large jobs. The Fair Completion Scheduler dynamically balances the execution of different jobs for fair and fast completion. Experimental results with a diverse collection of benchmarks and tests demonstrate that these techniques together speed up the average job execution by as much as 39.7%, and improve fairness by up to 66.7%.

Available Media

QoS-Aware Admission Control in Heterogeneous Datacenters

Christina Delimitrou, Nick Bambos, and Christos Kozyrakis, Stanford University

Large-scale datacenters (DCs) host tens of thousands of diverse applications each day. Apart from determining where to schedule workloads, the cluster manager should also decide when to constrain application admission to prevent system oversubscription. At the same time, datacenter users care not only about fast execution time but also about low waiting time (fast scheduling). Recent work has addressed the first challenge in the presence of unknown workloads, but not the second one.

We present ARQ, a multi-class admission control protocol that leverages Paragon, a heterogeneity- and interference-aware DC scheduler. ARQ divides applications into classes based on the quality of resources they need and queues them separately. This improves utilization and system throughput, while maintaining per-application QoS. To enforce timely scheduling, ARQ diverts a workload to a queue of lower resource quality if no suitable server becomes available within the time window specified by its QoS. In an oversubscribed scenario with 8,500 applications on 1,000 EC2 servers, ARQ bounds performance degradation to less than 10% for 99% of workloads, while significantly improving utilization.
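A simplified sketch of the queueing policy described: per-class queues ordered by resource quality, with requests that exhaust their QoS waiting window diverted to the next-lower-quality queue rather than kept waiting. Class names and the window length are hypothetical.

```python
import collections, time

# Per-class admission queues with QoS aging: a request that waits past its
# window is diverted to the next-lower-quality class rather than kept waiting.
# Class names and the window are hypothetical.

CLASSES = ["high", "medium", "low"]              # descending resource quality
queues = {c: collections.deque() for c in CLASSES}
QOS_WINDOW = 5.0                                 # seconds of waiting budget

def enqueue(app, cls="high"):
    queues[cls].append((app, time.time()))

def age_out(now=None):
    """Move requests that exceeded their waiting budget down one class."""
    now = now if now is not None else time.time()
    for hi, lo in zip(CLASSES, CLASSES[1:]):
        kept = collections.deque()
        while queues[hi]:
            app, t_enq = queues[hi].popleft()
            if now - t_enq > QOS_WINDOW:
                queues[lo].append((app, now))    # accept lower-quality servers
            else:
                kept.append((app, t_enq))
        queues[hi] = kept

enqueue("web-frontend")
age_out(now=time.time() + 10)                    # simulate a long wait
print({c: len(q) for c, q in queues.items()})
```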

Available Media

Performance Inconsistency in Large Scale Data Processing Clusters

Mingyuan Xia and Nan Zhu, McGill University; Yuxiong He and Sameh Elnikety, Microsoft Research Redmond; Xue Liu, McGill University

A large shared computing platform is usually divided into several virtual clusters of fixed sizes, and each virtual cluster is used by a team. A cluster scheduler dynamically allocates physical servers to the virtual clusters depending on their sizes and current job demands. In this paper, we show that current cluster schedulers, which optimize for instantaneous fairness, cause performance inconsistency among the virtual clusters: Virtual clusters with similar loads see very different performance characteristics.

We identify this problem by studying a production trace obtained from a large cluster and performing a simulation study. Our results demonstrate that under an instantaneous-fairness scheduler, a large VC that contributes more resources during underload periods cannot be properly rewarded during its overload periods. These results suggest that ignoring resource sharing history is the root cause of the performance inconsistency.

Available Media

 

3:30 p.m.–4:00 p.m. Friday

Break with Refreshments

Market Street Foyer

4:00 p.m.–5:15 p.m. Friday

Power/Temperature-Aware Management

Session Chair: Anders Robertsson, Lund University

Temperature Aware Workload Management in Geo-distributed Datacenters

Hong Xu, Chen Feng, and Baochun Li, University of Toronto

For geo-distributed datacenters, a workload management approach that routes user requests to locations with cheaper and cleaner electricity has lately shown promise in reducing energy costs. We consider two key aspects that have not been explored before. First, the energy-gobbling cooling systems are often modeled with a location-independent efficiency factor. Yet, through empirical studies, we find that their actual energy efficiency depends directly on the ambient temperature, which exhibits a significant degree of geographical diversity. This temperature diversity can be used to reduce the overall cooling energy overhead. Second, datacenters run not only interactive workloads driven by user requests but also delay-tolerant batch workloads at the backend. The elastic nature of batch workloads can be exploited to further reduce energy consumption.

In this paper, we propose to make workload management for geo-distributed datacenters temperature aware. We formulate the problem as a joint optimization of request routing for interactive workloads and capacity allocation for batch workloads. We develop a distributed algorithm based on an m-block alternating direction method of multipliers (ADMM) algorithm that extends the classical 2-block algorithm. We prove the convergence of our algorithm under general assumptions. Through trace-driven simulations with real-world electricity prices, historical temperature data, and an empirical cooling efficiency model, we find that our approach is consistently capable of delivering a 15%–20% cooling energy reduction, and a 5%–20% overall cost reduction for geo-distributed clouds.
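The joint problem above is solved with the m-block ADMM algorithm; the toy below illustrates only the underlying intuition, namely that a temperature-dependent cooling overhead changes each site's effective price, so a router can prefer cooler or cheaper sites. The linear cooling model, the greedy policy, and all numbers are invented.

```python
# Greedy toy illustrating temperature-aware routing: a site's effective price
# is its electricity price inflated by a temperature-dependent cooling
# overhead. The linear cooling model and all numbers are invented; the paper
# solves the full joint problem with an m-block ADMM algorithm instead.

SITES = {  # name: (electricity $/kWh, ambient temperature C, request capacity)
    "oregon":   (0.060, 16.0, 500),
    "virginia": (0.068, 27.0, 500),
    "texas":    (0.055, 33.0, 500),
}

def effective_price(price, temp_c):
    cooling_overhead = 0.2 + 0.01 * max(0.0, temp_c - 15.0)  # toy cooling model
    return price * (1.0 + cooling_overhead)

def route(total_requests):
    """Fill the cheapest effective-price sites first, up to capacity."""
    plan, remaining = {}, total_requests
    ordered = sorted(SITES, key=lambda n: effective_price(*SITES[n][:2]))
    for name in ordered:
        take = min(SITES[name][2], remaining)
        plan[name], remaining = take, remaining - take
    return plan

# Texas has the cheapest electricity, but its heat makes Oregon win overall.
print(route(900))
```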

Available Media

Power-Aware Throughput Control for Database Management Systems

Zichen Xu and Xiaorui Wang, The Ohio State University; Yi-Cheng Tu, University of South Florida

Performance has traditionally been regarded as the most important design goal for database management systems (DBMSs). In recent years, however, rising energy costs have gradually rivaled the benefit of chasing after performance. There are therefore strong financial incentives to minimize the power consumption of a database system while maintaining its desired performance, so that the energy cost can be best amortized. Such a goal is challenging in practice because the power consumption of a database system varies significantly with the environment and workload. Much modern hardware provides multiple modes with different power/performance tradeoffs, but existing research has not used these power modes sufficiently to achieve the best tradeoff for database services, due to the lack of knowledge about database behavior under different power modes. In this paper, we present Power-Aware Throughput control (PAT), an online feedback control framework for energy conservation at the DBMS level. In contrast to the heuristic-based tuning techniques commonly used in database systems, the design of PAT is based on rigorous control-theoretic analysis for guaranteed control accuracy and system stability. We implement PAT as an integrated component of the PostgreSQL system and evaluate it with workloads generated from various database benchmarks. The results show that PAT achieves up to 51.3% additional energy savings, despite runtime workload dynamics and model errors, compared to other competing methods.
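PAT's controller is derived from a control-theoretic model of the DBMS; as a generic illustration of such a feedback loop (not PAT's actual design), the sketch below runs a textbook proportional-integral controller that adjusts a normalized CPU power mode until measured throughput tracks its target. Gains, targets, and the toy "plant" are hypothetical.

```python
# Generic proportional-integral (PI) throughput controller -- a textbook
# stand-in for a DBMS power control loop. Gains, targets, and the toy
# "plant" (throughput as a function of CPU power mode) are hypothetical.

KP, KI = 0.5, 0.1        # controller gains (hypothetical)
TARGET_TPS = 900.0

def run(periods=40):
    freq, integral = 1.0, 0.0        # normalized CPU frequency in [0.4, 1.0]
    for _ in range(periods):
        measured = 1000.0 * freq ** 0.9          # toy plant model
        error = (TARGET_TPS - measured) / TARGET_TPS
        integral += error
        freq = min(1.0, max(0.4, freq + KP * error + KI * integral))
    return freq, measured

freq, tps = run()
print(f"settled at frequency {freq:.3f}, throughput {tps:.0f} tps")
```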

Available Media

Wireless Inference-based Notification (WIN) without Packet Decoding

Kevin Chen and H. T. Kung, Harvard University

We consider ultra-energy-efficient wireless transmission of notifications in sensor networks. We argue that the usual practice, where a receiver decodes packets sent by a remote node to acquire its state or message, is suboptimal in energy use. We propose an alternative approach where a receiver first (1) performs physical-layer matched filtering on arriving packets without actually decoding them at the link layer or higher, and then (2) based on the matching results, infers the sender's state or message from the time-series pattern of packet arrivals. We show that hierarchical multi-layer inference can be effective in coping with channel noise. Because packets are not required to be decodable by the receiver, the sender can reach a farther receiver without increasing its transmit power or, equivalently, a receiver at the same distance with a lower transmit power. We call our scheme Wireless Inference-based Notification (WIN) without Packet Decoding. We demonstrate by analysis and simulation that WIN allows a sender to multiply its notification distance. We also show that senders can realize these energy-efficiency benefits with unchanged systems and protocols; only receivers, which are normally larger systems than senders and have ample computing and power resources, need to perform the WIN-related processing.
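A sketch of step (1): matched filtering detects the presence of a known packet waveform in noisy samples without decoding it, even at signal levels too weak for decoding. The preamble, noise level, and detection threshold below are hypothetical; a WIN receiver would feed the resulting detection times into the higher inference layers.

```python
import numpy as np

# Matched filtering without decoding: correlate raw samples against a known
# packet preamble and threshold the output. Preamble, SNR, and threshold are
# hypothetical; WIN feeds the detection times into higher inference layers.

rng = np.random.default_rng(2)
preamble = rng.choice([-1.0, 1.0], size=64)      # known transmit pattern

samples = rng.normal(0.0, 1.0, 4000)             # channel noise
samples[1500:1564] += 0.8 * preamble             # weak, undecodable packet

matched = np.correlate(samples, preamble, mode="valid") / len(preamble)
detections = np.flatnonzero(matched > 0.5)       # filter-output threshold
print("packet detected near sample(s):", detections)
```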

Available Media