Volume 18, No. 2

cedar: Optimized and Unified Machine Learning Input Data Pipelines

Authors:
Mark Zhao, Emanuel Adamiak, Christos Kozyrakis

Abstract

The input data pipeline is an essential component of each machine learning (ML) training job. It is responsible for reading massive amounts of training data, processing batches of samples using complex transformations, and loading them onto training nodes at low latency and high throughput. Performant input data systems are becoming increasingly critical due to skyrocketing data volumes and training throughput demands. Unfortunately, current input data systems cannot fully leverage key performance optimizations, resulting in hugely inefficient infrastructures that require significant resources, or worse, underutilize expensive accelerators. To address these demands, we present cedar, an optimized and unified programming framework for ML input data pipelines. cedar allows users to define a training job's data pipeline using composable operators that support arbitrary ML frameworks and libraries. cedar's extensible optimizer systematically combines and applies performance optimizations to the pipeline. cedar then orchestrates pipeline processing across configurable local and distributed compute resources to efficiently meet the training job's data throughput demands. Across eight pipelines, cedar improves performance by up to 1.87× to 10.65× compared to state-of-the-art input data systems.
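To make the abstract's notion of "composable operators" concrete, here is a minimal Python sketch of what a composable input-pipeline API can look like. This is an illustrative assumption, not cedar's actual interface: the names Operator, Source, Map, Batch, and apply are hypothetical.

```python
# Hypothetical sketch of a composable input-pipeline API (not cedar's real
# API): each operator consumes an iterator of samples and yields another,
# so a pipeline is built by chaining operators. An optimizer could inspect
# this chain and reorder or fuse operators before execution.

from typing import Callable, Iterable, Iterator, List


class Operator:
    """Base operator: transforms one stream of samples into another."""

    def __init__(self) -> None:
        self.upstream: "Operator | None" = None

    def apply(self, op: "Operator") -> "Operator":
        # Compose: the new operator consumes this operator's output.
        op.upstream = self
        return op

    def __iter__(self) -> Iterator:
        raise NotImplementedError


class Source(Operator):
    """Reads raw training samples from some data source."""

    def __init__(self, items: Iterable) -> None:
        super().__init__()
        self.items = items

    def __iter__(self) -> Iterator:
        return iter(self.items)


class Map(Operator):
    """Applies a per-sample transformation function."""

    def __init__(self, fn: Callable) -> None:
        super().__init__()
        self.fn = fn

    def __iter__(self) -> Iterator:
        return (self.fn(x) for x in self.upstream)


class Batch(Operator):
    """Groups samples into fixed-size batches for the training nodes."""

    def __init__(self, size: int) -> None:
        super().__init__()
        self.size = size

    def __iter__(self) -> Iterator[List]:
        batch: List = []
        for x in self.upstream:
            batch.append(x)
            if len(batch) == self.size:
                yield batch
                batch = []
        if batch:
            yield batch


# Usage: declaratively compose read -> transform -> batch.
pipeline = Source(range(10)).apply(Map(lambda x: x * 2)).apply(Batch(4))
for b in pipeline:
    print(b)  # [0, 2, 4, 6], [8, 10, 12, 14], [16, 18]
```

Because the pipeline is declared as a chain of operators rather than opaque imperative code, a framework in this style can analyze the whole pipeline, apply optimizations such as operator fusion or reordering, and decide where each stage runs (locally or on distributed workers), which is the role the abstract assigns to cedar's optimizer and orchestrator.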
