THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression

Authors: 

Minghao Li, Harvard University; Ran Ben Basat, University College London; Shay Vargaftik, VMware Research; ChonLam Lao, Kevin Xu, Michael Mitzenmacher, and Minlan Yu, Harvard University

Abstract: 

Deep neural networks (DNNs) are the de facto standard for essential use cases such as computer vision (e.g., image classification) and natural language processing. As DNNs and datasets grow larger, they require distributed training on increasingly large clusters. A main bottleneck is the resulting communication overhead, in which workers exchange model updates (i.e., gradients) every round. To address this bottleneck and accelerate training, a widely deployed approach is compression. However, previous deployments often build bi-directional compression by simply applying a uni-directional gradient compression scheme in each direction. This incurs significant computational overhead at the parameter server (PS) and increases compression error, leading to longer training and lower accuracy.
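To make the overhead concrete, here is a minimal sketch of naive bi-directional compression at the PS. It assumes a simple uniform quantizer purely for illustration; deployed uni-directional schemes vary (sparsification, quantization, etc.), and none of the names below come from the paper:

```python
import numpy as np

# Hypothetical uniform quantizer, for illustration only.
def compress(grad, bits=4):
    scale = np.abs(grad).max() / (2 ** (bits - 1) - 1)
    return np.round(grad / scale).astype(np.int8), scale

def decompress(codes, scale):
    return codes.astype(np.float32) * scale

workers = [np.random.randn(1000).astype(np.float32) for _ in range(8)]

# Naive bi-directional compression at the PS:
# 1) decompress every worker's update (compute overhead),
# 2) aggregate in floating point,
# 3) re-compress the aggregate (a second round of compression error).
decompressed = [decompress(*compress(g)) for g in workers]
aggregate = np.mean(decompressed, axis=0)
codes, scale = compress(aggregate)
model_update = decompress(codes, scale)  # what workers receive back
```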

We introduce Tensor Homomorphic Compression (THC), a novel bi-directional compression framework that enables direct aggregation of compressed values, thus eliminating the aforementioned computational overheads. Moreover, THC is compatible with in-network aggregation (INA), which allows for further acceleration. Our evaluation shows that training representative vision and language models with THC reaches the target accuracy 1.40× to 1.47× faster using INA and 1.28× to 1.33× faster using a software PS, compared with state-of-the-art systems.
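By contrast, a homomorphic scheme lets the server operate on compressed values directly. The sketch below captures only the core idea under a strong simplifying assumption, namely that all workers quantize with one pre-agreed scale; THC's actual construction in the paper is more elaborate, and SHARED_SCALE and both function names are illustrative:

```python
import numpy as np

# Assumption for this sketch: one quantization scale agreed upon before
# the round, shared by all workers. Not the paper's actual mechanism.
SHARED_SCALE = 0.05

def quantize(grad):
    # Clipping keeps codes in a fixed integer range so summed codes stay bounded.
    return np.clip(np.round(grad / SHARED_SCALE), -127, 127).astype(np.int32)

def dequantize(codes):
    return codes.astype(np.float32) * SHARED_SCALE

workers = [np.random.randn(1000).astype(np.float32) for _ in range(8)]

# Because every worker uses the same scale, the PS (or an in-network
# aggregation switch) can sum the integer codes directly: no per-worker
# decompression, no re-compression, and no extra compression error.
summed_codes = sum(quantize(g) for g in workers)
aggregate = dequantize(summed_codes) / len(workers)
```

Since dequantize(sum of codes) equals the sum of the workers' dequantized gradients when the scale is shared, summing integer codes adds no second round of error, and the operation is simple enough to run on an INA switch.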



BibTeX
@inproceedings {295599,
author = {Minghao Li and Ran Ben Basat and Shay Vargaftik and ChonLam Lao and Kevin Xu and Michael Mitzenmacher and Minlan Yu},
title = {{THC}: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression},
booktitle = {21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24)},
year = {2024},
isbn = {978-1-939133-39-7},
address = {Santa Clara, CA},
pages = {1191--1211},
url = {https://www.usenix.org/conference/nsdi24/presentation/li-minghao},
publisher = {USENIX Association},
month = apr
}