GRACE: Loss-Resilient Real-Time Video through Neural Codecs

Authors: 

Yihua Cheng, Ziyi Zhang, Hanchen Li, Anton Arapin, and Yue Zhang, The University of Chicago; Qizheng Zhang, Stanford University; Yuhan Liu, Kuntai Du, and Xu Zhang, The University of Chicago; Francis Y. Yan, Microsoft; Amrita Mazumdar, NVIDIA; Nick Feamster and Junchen Jiang, The University of Chicago

Abstract: 

In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements. To counter packet losses without retransmission, two primary strategies are employed—encoder-based forward error correction (FEC) and decoder-based error concealment. The former encodes data with redundancy before transmission, yet determining the optimal redundancy level in advance proves challenging. The latter reconstructs video from partially received frames, but dividing a frame into independently coded partitions inherently compromises compression efficiency, and the lost information cannot be effectively recovered by the decoder without adapting the encoder.
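To make the FEC trade-off concrete, below is a toy sketch of encoder-side redundancy using a single XOR parity packet. It is not any FEC scheme evaluated in the paper; the function names and the 25% redundancy ratio are illustrative assumptions. The point it demonstrates is that the redundancy level must be fixed before transmission, so a group survives exactly one lost packet and no more.

```python
# Toy XOR-parity FEC sketch (illustrative only, not the paper's baselines).
# One parity packet protects a group of data packets: any single loss in
# the group is recoverable, but two or more losses are not, and the
# redundancy ratio is committed before the actual loss rate is known.

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_with_parity(packets: list[bytes]) -> list[bytes]:
    """Append one parity packet: the XOR of all data packets."""
    parity = reduce(xor_bytes, packets)
    return packets + [parity]

def recover(received: dict[int, bytes], total: int) -> bytes | None:
    """Rebuild a single missing packet by XOR-ing everything received."""
    missing = [i for i in range(total) if i not in received]
    if not missing:
        return None  # nothing was lost
    if len(missing) > 1:
        raise ValueError("one parity packet cannot repair multiple losses")
    return reduce(xor_bytes, received.values())

data = [bytes([i] * 8) for i in range(4)]      # four 8-byte data packets
sent = encode_with_parity(data)                # 25% redundancy, chosen up front
received = {i: p for i, p in enumerate(sent) if i != 2}  # packet 2 is lost
assert recover(received, len(sent)) == data[2]
```

If the network drops two packets from the group, recovery fails outright; if it drops none, the parity packet was wasted bandwidth. This is the dilemma of choosing the redundancy level in advance that the abstract refers to.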

We present a loss-resilient real-time video system called GRACE, which preserves the user’s quality of experience (QoE) across a wide range of packet losses through a new neural video codec. Central to GRACE’s enhanced loss resilience is its joint training of the neural encoder and decoder under a spectrum of simulated packet losses. In lossless scenarios, GRACE achieves video quality on par with conventional codecs (e.g., H.265). As the loss rate escalates, GRACE exhibits a more graceful, less pronounced decline in quality, consistently outperforming other loss-resilient schemes. Through extensive evaluation on various videos and real network traces, we demonstrate that GRACE reduces undecodable frames by 95% and stall duration by 90% compared with FEC, while markedly boosting video quality over error concealment methods. In a user study with 240 crowdsourced participants and 960 subjective ratings, GRACE registers a 38% higher mean opinion score (MOS) than other baselines.
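The following is a minimal sketch of the joint-training idea described above: an encoder and decoder are optimized end to end while randomly sampled "packets" of the latent code are zeroed out between them, so the decoder learns to reconstruct frames from whatever subset arrives. The tiny convolutional architecture, the channel-grouped packetization, the sampled loss-rate range, and the plain MSE objective are all assumptions for illustration; GRACE's actual codec, packetization, and training details differ.

```python
# Minimal sketch of joint encoder/decoder training under simulated
# packet loss (illustrative assumptions, not GRACE's real architecture).

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

def simulate_packet_loss(z, loss_rate, packets=8):
    """Group latent channels into 'packets' and zero out the lost ones."""
    b, c, h, w = z.shape
    keep = (torch.rand(b, packets, 1, 1, 1, device=z.device) > loss_rate).float()
    z = z.view(b, packets, c // packets, h, w) * keep
    return z.view(b, c, h, w)

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)

for step in range(100):
    frame = torch.rand(4, 3, 64, 64)            # stand-in for video frames
    loss_rate = torch.rand(1).item() * 0.5      # sample a loss rate per step
    z = enc(frame)
    z_received = simulate_packet_loss(z, loss_rate)
    recon = dec(z_received)
    loss = nn.functional.mse_loss(recon, frame) # distortion under loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss rate is sampled afresh at every step, neither network can specialize to one loss level; this is what yields the graceful quality degradation across a wide range of packet losses, rather than the cliff behavior of a fixed-redundancy FEC scheme.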


BibTeX
@inproceedings{295525,
author = {Yihua Cheng and Ziyi Zhang and Hanchen Li and Anton Arapin and Yue Zhang and Qizheng Zhang and Yuhan Liu and Kuntai Du and Xu Zhang and Francis Y. Yan and Amrita Mazumdar and Nick Feamster and Junchen Jiang},
title = {{GRACE}: {Loss-Resilient} {Real-Time} Video through Neural Codecs},
booktitle = {21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24)},
year = {2024},
isbn = {978-1-939133-39-7},
address = {Santa Clara, CA},
pages = {509--531},
url = {https://www.usenix.org/conference/nsdi24/presentation/cheng},
publisher = {USENIX Association},
month = apr
}