CVE-2026-34753

Published Apr 6, 2026

Last updated 12 days ago

Overview

Description
vLLM is an inference and serving engine for large language models (LLMs). Versions 0.16.0 up to, but not including, 0.19.0 contain a server-side request forgery (SSRF) vulnerability in download_bytes_from_url: any actor who can control batch input JSON can make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, because no URL validation or domain restrictions are applied. This can be used to target internal services (e.g. cloud metadata endpoints or internal HTTP APIs) reachable from the vLLM host. The vulnerability is fixed in 0.19.0.
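To illustrate the missing control, below is a minimal sketch of the kind of URL validation a caller of download_bytes_from_url would need. The names (is_url_allowed, ALLOWED_HOSTS) are hypothetical and this is not vLLM's actual patch; it simply shows scheme checking, a host allowlist, and rejection of internal IP ranges such as the 169.254.169.254 metadata endpoint.

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}
# Hypothetical allowlist of hosts the batch runner may fetch from.
ALLOWED_HOSTS = {"example.com", "files.example.com"}

def is_url_allowed(url: str) -> bool:
    """Return True only if the URL targets an approved external host.

    Rejects non-HTTP(S) schemes, hosts outside the allowlist, and any
    literal IP address in a private, loopback, or link-local range
    (link-local covers cloud metadata endpoints like 169.254.169.254).
    """
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Plain hostname: must be explicitly allowlisted.
        return host in ALLOWED_HOSTS
    # Literal IPs pointing at internal ranges are always rejected.
    if ip.is_private or ip.is_loopback or ip.is_link_local:
        return False
    return host in ALLOWED_HOSTS
```

Note that an allowlist alone does not defend against DNS rebinding; a fuller mitigation would also resolve the hostname and re-check the resulting address at connection time.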
Source
security-advisories@github.com
NVD status
Analyzed
Products
vllm

Risk scores

CVSS 3.1

Type
Secondary
Base score
5.4
Impact score
2.5
Exploitability score
2.8
Vector string
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:L
Severity
MEDIUM

Weaknesses

security-advisories@github.com
CWE-918

Configurations