- Description
- vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
- Source
- security-advisories@github.com
- NVD status
- Analyzed
- Products
- vllm
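The flaw described above is an instance of a common anti-pattern: a user-facing security flag is accepted at the top level but a sub-component loader hardcodes `trust_remote_code=True`, silently overriding the opt-out. The sketch below is illustrative only, not vLLM's actual code; `load_component`, `vulnerable_load`, and `patched_load` are hypothetical names.

```python
# Illustrative sketch of the vulnerability class (NOT vLLM's actual code).
# load_component() stands in for a loader such as a from_pretrained() call
# that can execute code shipped inside a model repository.

def load_component(repo_id: str, trust_remote_code: bool) -> str:
    """Hypothetical sub-component loader."""
    if trust_remote_code:
        return f"executed remote code from {repo_id}"  # RCE risk
    return f"loaded {repo_id} with local code only"


def vulnerable_load(repo_id: str, user_trust_remote_code: bool) -> str:
    # BUG: the user's explicit setting is ignored; True is hardcoded,
    # so --trust-remote-code=False has no effect on this code path.
    return load_component(repo_id, trust_remote_code=True)


def patched_load(repo_id: str, user_trust_remote_code: bool) -> str:
    # FIX: propagate the user's choice to every sub-component load.
    return load_component(repo_id, trust_remote_code=user_trust_remote_code)
```

Even when the caller passes `False`, the vulnerable path still executes repository code; the patched path honors the opt-out.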
- CVSS 3.1
- Type
- Secondary
- Base score
- 8.8
- Impact score
- 5.9
- Exploitability score
- 2.8
- Vector string
- CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
- Severity
- HIGH
- security-advisories@github.com
- CWE-693 (Protection Mechanism Failure)
- Hype score
- Not currently trending
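The listed base, impact, and exploitability scores follow from the CVSS 3.1 formula applied to the vector string above. A minimal sketch of that arithmetic, using the metric weights from the FIRST CVSS 3.1 specification (Scope Unchanged):

```python
import math

# CVSS 3.1 numeric weights for CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62  # Network / Low / None / Required
C = I = A = 0.56                         # High impact on C, I, and A

iss = 1 - (1 - C) * (1 - I) * (1 - A)    # Impact Sub-Score
impact = 6.42 * iss                      # Scope Unchanged multiplier
exploitability = 8.22 * AV * AC * PR * UI


def roundup(x: float) -> float:
    """CVSS 3.1 'Roundup': smallest value to one decimal place >= x."""
    return math.ceil(x * 10 - 1e-9) / 10


base = roundup(min(impact + exploitability, 10.0)) if impact > 0 else 0.0
```

The results reproduce the record: impact rounds to 5.9, exploitability to 2.8, and the base score is 8.8 (HIGH).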
Known affected software configurations (NVD CPE match criteria):

```json
[
  {
    "nodes": [
      {
        "cpeMatch": [
          {
            "criteria": "cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*",
            "matchCriteriaId": "2130385B-68E6-4854-AC42-0CBA1F30B487",
            "versionEndExcluding": "0.18.0",
            "versionStartIncluding": "0.10.1",
            "vulnerable": true
          }
        ],
        "negate": false,
        "operator": "OR"
      }
    ]
  }
]
```
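The match node above encodes a half-open version range: `versionStartIncluding` is in scope, `versionEndExcluding` is not. A minimal sketch of that check, assuming simple dotted numeric versions (real NVD/CPE comparison also handles suffixes and pre-release tags, which this helper ignores):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted numeric version like '0.10.1' into (0, 10, 1)."""
    return tuple(int(part) for part in v.split("."))


def is_vulnerable(version: str,
                  start_including: str = "0.10.1",
                  end_excluding: str = "0.18.0") -> bool:
    """Apply the CPE match node's bounds: [start_including, end_excluding)."""
    v = parse_version(version)
    return parse_version(start_including) <= v < parse_version(end_excluding)
```

For example, 0.10.1 and 0.17.9 fall inside the range, while 0.9.0 and the patched 0.18.0 fall outside it.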