Volume 18, No. 1
Chameleon: a Heterogeneous and Disaggregated Accelerator System for Retrieval-Augmented Language Models
Abstract
A Retrieval-Augmented Language Model (RALM) combines a large language model (LLM) with a vector database to retrieve context-specific knowledge during text generation. This strategy facilitates impressive generation quality even with smaller models, thus reducing computational demands by orders of magnitude. To serve RALMs efficiently and flexibly, we propose Chameleon, a heterogeneous accelerator system integrating both LLM and vector search accelerators in a disaggregated architecture. The heterogeneity ensures efficient serving for both inference and retrieval, while the disaggregation allows independent scaling of LLM and vector search accelerators to fulfill diverse RALM requirements. Our Chameleon prototype implements vector search accelerators on FPGAs and assigns LLM inference to GPUs, with CPUs as cluster coordinators. Evaluated on various RALMs, Chameleon exhibits up to 2.16× reduction in latency and 3.18× speedup in throughput compared to the hybrid CPU-GPU architecture. The promising results pave the way for adopting heterogeneous accelerators for not only LLM inference but also vector search in future RALM systems.
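The retrieve-then-generate flow the abstract describes can be sketched in a few lines. This is a toy illustration only: the embedding function, corpus, and "generation" step below are hypothetical stand-ins, not Chameleon's actual components (which run vector search on FPGAs and LLM inference on GPUs).

```python
# Toy sketch of a RALM pipeline: embed the query, run vector search
# over a small corpus, then prepend the retrieved context to the prompt.

def embed(text: str) -> list[float]:
    # Hypothetical embedding: normalized character-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Vector search: rank corpus passages by cosine similarity to the query.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: dot(q, embed(doc)), reverse=True)
    return ranked[:k]

def ralm_generate(query: str, corpus: list[str]) -> str:
    # Retrieved passages augment the prompt before generation; a real
    # system would run LLM inference on this prompt instead of returning it.
    context = " ".join(retrieve(query, corpus))
    return f"Context: {context}\nQuestion: {query}"

corpus = ["vector databases store embeddings", "fpgas accelerate search"]
print(ralm_generate("what stores embeddings?", corpus))
```

In a production RALM the two halves of this loop stress very different hardware: retrieval is memory- and bandwidth-bound while generation is compute-bound, which is the mismatch Chameleon's disaggregated accelerators target.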