CVE-2026-22778

Published Feb 2, 2026

Overview

Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error, and vLLM returns the error message to the client, leaking a heap address. This leak reduces an ASLR brute-force attack from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in version 0.14.1.
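
The leaked address is consistent with a well-known Pillow behavior: when Image.open cannot identify an image, the UnidentifiedImageError message embeds the repr() of the file object, which includes the object's heap address. A minimal standalone sketch of that mechanism (an assumed illustration, not vLLM's actual endpoint code):

    import io
    from PIL import Image, UnidentifiedImageError

    # Bytes that are not a valid image, as an attacker could submit to a
    # multimodal endpoint.
    bogus = io.BytesIO(b"\x00 not an image")

    try:
        Image.open(bogus)
    except UnidentifiedImageError as exc:
        # The message embeds repr(bogus), for example:
        #   cannot identify image file <_io.BytesIO object at 0x7f3a9c41b5e0>
        # A server that relays str(exc) verbatim hands the client a live
        # heap address from its own process.
        print(str(exc))

The corresponding fix pattern is to catch decoding errors server-side and return a generic message rather than str(exc).
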
Source
security-advisories@github.com
NVD status
Analyzed
Products
vllm

Risk scores

CVSS 3.1

Type
Secondary
Base score
9.8
Impact score
5.9
Exploitability score
3.9
Vector string
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Severity
CRITICAL
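
The sub-scores above follow from the vector string via the CVSS v3.1 equations. A short Python sketch reproducing them (metric weights taken from the v3.1 specification; the roundup is simplified for illustration):

    import math

    # Weights for CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
    AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
    C = I = A = 0.56                         # High impact on each axis

    def roundup(x):
        # CVSS v3.1 Roundup: smallest one-decimal value >= x
        return math.ceil(x * 10) / 10

    iss = 1 - (1 - C) * (1 - I) * (1 - A)    # impact sub-score base
    impact = 6.42 * iss                      # scope unchanged
    exploitability = 8.22 * AV * AC * PR * UI
    base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0

    print(round(impact, 1))          # 5.9
    print(round(exploitability, 1))  # 3.9
    print(base)                      # 9.8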

Weaknesses

security-advisories@github.com
CWE-532: Insertion of Sensitive Information into Log File

Configurations