The following Work-in-Progress Reports (WiPs) will be presented during the WiPs Session on Wednesday, February 26, 4:30 pm–5:30 pm, in the Santa Clara Ballroom.
Baton: Orchestrating GPU Memory for LLM Training on Heterogeneous Cluster
Yi Zhang, Shuibing He, and Ping Chen, Zhejiang University
A Mochi Playground for Novel High-Performance Storage Research
Philip Carns, Argonne National Laboratory
NetLSM: Enabling an In-Network Approach for Scheduling LSM-KVS Operations
Yibo Zhao, University of Maryland; Viraj Thakkar and Zhichao Cao, Arizona State University; Alan Zaoxing Liu, University of Maryland
EverCache: A Multi-Tier KVCache Engine for High-Performance and High-Efficiency LLMs Inferencing
Chang Guo, Zhenyu Zhang, and Zhichao Cao, Arizona State University
Shingle Magnetic Recording Storage System with Consolidated Write Atomicity and Data Integrity
Shu Li and Jeffrey Dong Li, Alibaba Group
Scaling GNN Sampling on Large-Scale Graphs with io_uring
Qixuan Chen, Yuhang Song, Melissa Martinez, and Vasiliki Kalavri, Boston University
Evolving XFS with Zoned Storage and Intelligent Data Placement
Hans Holmberg and Christoph Hellwig, Western Digital Research
PageANN: A Fully Out-of-Core Solution to High-Performance Approximate Nearest Neighbor Search
Dingyi Kang, The University of Texas at Dallas; Haoshen Yang and Hang Liu, Rutgers, The State University of New Jersey; Bingzhe Li, The University of Texas at Dallas
AnyTier: An LSM-Managed Dynamic Data Tiering Framework with High Generality and Efficiency
Jiajun Li, Carnegie Mellon University; Chang Guo and Zhichao Cao, Arizona State University