Volume 15, No. 6

Hyper-Tune: Towards Efficient Hyper-parameter Tuning at Scale

Authors:
Yang Li (Peking University)*, Yu Shen (Peking University), Huaijun Jiang (Peking University), Wentao Zhang (Peking University), Jixiang Li (Kuaishou Inc.), Ji Liu (Kwai Inc.), Ce Zhang (ETH), Bin Cui (Peking University)

Abstract

The ever-growing demand and complexity of machine learning are putting pressure on hyper-parameter tuning systems: while the evaluation cost of models continues to increase, the scalability of state-of-the-art systems is becoming a crucial bottleneck. In this paper, inspired by our experience deploying hyper-parameter tuning in a real-world production application and by the limitations of existing systems, we propose Hyper-Tune, an efficient and robust distributed hyper-parameter tuning framework. Compared with existing systems, Hyper-Tune highlights multiple system optimizations, including (1) automatic resource allocation, (2) asynchronous scheduling, and (3) a multi-fidelity optimizer. We conduct extensive evaluations on both benchmark datasets and a large-scale real-world production dataset. Empirically, we show that, with the aid of these optimizations, Hyper-Tune outperforms competitive hyper-parameter tuning systems across a wide range of scenarios, including XGBoost, CNN, and RNN models, as well as architectural hyper-parameters for neural networks. Compared with the state-of-the-art BOHB and A-BOHB, Hyper-Tune achieves up to 11.2× and 5.1× speedups, respectively.
