Characterization of Large Language Model Development in the Datacenter

Authors: 

Qinghao Hu, Shanghai AI Laboratory and S-Lab, Nanyang Technological University; Zhisheng Ye, Shanghai AI Laboratory and Peking University; Zerui Wang, Shanghai AI Laboratory and Shanghai Jiao Tong University; Guoteng Wang, Shanghai AI Laboratory; Meng Zhang and Qiaoling Chen, Shanghai AI Laboratory and S-Lab, Nanyang Technological University; Peng Sun, Shanghai AI Laboratory and SenseTime Research; Dahua Lin, Shanghai AI Laboratory and CUHK; Xiaolin Wang and Yingwei Luo, Peking University; Yonggang Wen and Tianwei Zhang, Nanyang Technological University

Abstract: 

Large Language Models (LLMs) have demonstrated impressive performance across many transformative tasks. However, efficiently utilizing large-scale cluster resources to develop LLMs is non-trivial: the process is riddled with challenges such as frequent hardware failures, intricate parallelization strategies, and imbalanced resource utilization. In this paper, we present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme. Specifically, we investigate discrepancies between LLMs and prior task-specific Deep Learning (DL) workloads, explore resource utilization patterns, and identify the impact of various job failures. Our analysis summarizes the hurdles we encountered and uncovers potential opportunities to optimize systems tailored for LLMs. Furthermore, we introduce our system efforts: (1) fault-tolerant pretraining, which enhances fault tolerance through LLM-involved failure diagnosis and automatic recovery, and (2) decoupled scheduling for evaluation, which provides timely performance feedback via trial decomposition and scheduling optimization.
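
The sketch below is a minimal illustration of the first system effort: a pretraining driver that checkpoints periodically, diagnoses failures, and automatically resumes from the latest checkpoint after infrastructure faults. It is not the paper's implementation; all names here (train_steps, diagnose_failure, CHECKPOINT_EVERY, MAX_RESTARTS) are hypothetical stand-ins, and the paper's LLM-involved failure diagnosis is stubbed out as a simple keyword check on the error message.

# Hedged sketch of fault-tolerant pretraining; not the paper's system.
import random
import time

CHECKPOINT_EVERY = 100   # assumed checkpointing interval, in steps
MAX_RESTARTS = 3         # assumed budget of automatic recovery attempts

checkpoint_step = 0      # stands in for persistent checkpoint storage


def train_steps(start, count):
    """Stand-in for a training shard; occasionally fails like real hardware."""
    for _ in range(start, start + count):
        if random.random() < 0.001:
            raise RuntimeError("NCCL timeout")  # simulated infrastructure fault


def diagnose_failure(err):
    """Stand-in for diagnosis; the paper uses LLM-assisted log analysis."""
    return "infrastructure" if "NCCL" in str(err) else "user"


def run_pretraining(total_steps):
    """Run to total_steps, restarting from the last checkpoint on transient faults."""
    global checkpoint_step
    restarts = 0
    while restarts <= MAX_RESTARTS:
        step = checkpoint_step                  # resume from the last checkpoint
        try:
            while step < total_steps:
                train_steps(step, CHECKPOINT_EVERY)
                step += CHECKPOINT_EVERY
                checkpoint_step = step          # "save" a checkpoint
            return True                         # finished successfully
        except RuntimeError as err:
            if diagnose_failure(err) == "infrastructure":
                restarts += 1                   # retry transient faults
                time.sleep(1)                   # back off before restarting
            else:
                raise                           # surface user errors immediately
    return False                                # restart budget exhausted


if __name__ == "__main__":
    print("finished:", run_pretraining(1000))

In a real deployment the checkpoint would live in persistent storage and the restart would relaunch the distributed job, but the control flow (checkpoint, diagnose, selectively retry) is the core idea the paper describes.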

BibTeX
@inproceedings{295545,
author = {Qinghao Hu and Zhisheng Ye and Zerui Wang and Guoteng Wang and Meng Zhang and Qiaoling Chen and Peng Sun and Dahua Lin and Xiaolin Wang and Yingwei Luo and Yonggang Wen and Tianwei Zhang},
title = {Characterization of Large Language Model Development in the Datacenter},
booktitle = {21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24)},
year = {2024},
isbn = {978-1-939133-39-7},
address = {Santa Clara, CA},
pages = {709--729},
url = {https://www.usenix.org/conference/nsdi24/presentation/hu},
publisher = {USENIX Association},
month = apr
}