Large language models (LLMs) with different architectures and sizes have been
developed. Serving each LLM on dedicated GPUs leads to resource waste and
service inefficiency because the demand for LLM requests varies over time. A
common practice is therefore to serve multiple LLMs on shared GPUs. However,
existing sharing systems either ignore the autoregressive pattern of LLM
services or focus only on improving throughput, which degrades sharing
performance, especially serving latency. We present SeaLLM, which enables
service-aware and latency-optimized LLM sharing. SeaLLM improves overall
sharing performance through (1) a latency-optimized scheduling algorithm that
exploits the characteristics of LLM services, (2) a placement algorithm that
determines the placement plan together with an adaptive replacement algorithm
that decides the replacement interval, and (3) a unified key-value cache that
shares GPU memory efficiently among LLM services. Our evaluation on real-world
traces and LLM services demonstrates that SeaLLM improves normalized latency
by up to $13.60\times$, tail latency by up to $18.69\times$, and SLO
attainment by up to $3.64\times$ compared with existing solutions.
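To make the idea of latency-optimized scheduling across co-located LLM services more concrete, the following is a minimal sketch (not SeaLLM's actual algorithm) of a least-slack-first ordering of pending requests; all names here, such as `Request`, `slo_ms`, `remaining_tokens`, and `per_token_ms`, are hypothetical assumptions for illustration.

```python
# Illustrative sketch only: order pending requests from co-located LLM
# services by how close each one is to missing its latency SLO, given an
# estimate of its remaining autoregressive decode time.
from dataclasses import dataclass
import time


@dataclass
class Request:
    service: str             # which LLM service the request belongs to
    arrival: float           # arrival timestamp in seconds
    remaining_tokens: int    # tokens still to decode (autoregressive)
    per_token_ms: float      # assumed decode cost per token for this service
    slo_ms: float            # latency SLO for this service


def slack_ms(req: Request, now: float) -> float:
    """Time budget left before the request would miss its SLO,
    after accounting for its estimated remaining decode time."""
    elapsed_ms = (now - req.arrival) * 1000.0
    remaining_ms = req.remaining_tokens * req.per_token_ms
    return req.slo_ms - elapsed_ms - remaining_ms


def schedule(pending: list[Request]) -> list[Request]:
    """Least-slack-first: requests closest to violating their SLO
    are served earliest."""
    now = time.time()
    return sorted(pending, key=lambda r: slack_ms(r, now))
```

A scheduler of this kind prioritizes latency rather than raw throughput, which is the trade-off the abstract attributes to prior throughput-oriented sharing systems.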