CVE-2025-49847

Published Jun 17, 2025

Overview

Description
llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp’s vocabulary-loading code. Specifically, the helper _try_copy in llama_vocab::impl::token_to_piece() (llama.cpp/src/vocab.cpp) casts a very large size_t token length to int32_t, so the bounds check (if (length < (int32_t)size)) is bypassed; memcpy is then still called with the original oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potentially to code execution. The issue is patched in version b5662.
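The flaw is an integer-truncation bounds check. Below is a minimal sketch of the pattern described above, together with the unsigned-comparison fix; the function names, signatures, and surrounding logic are illustrative assumptions, not the actual llama.cpp source.

```cpp
#include <cstdint>
#include <cstring>

// Sketch of the flawed pattern: `length` is the capacity of `buf`; `size` is
// the length of an attacker-controlled token piece from the GGUF vocabulary.
static int32_t try_copy_flawed(char * buf, int32_t length,
                               const char * text, size_t size) {
    // BUG: if size exceeds INT32_MAX, (int32_t) size truncates and can wrap
    // negative, so this "does the piece fit?" check wrongly passes...
    if (length < (int32_t) size) {
        return -(int32_t) size; // signal "buffer too small"
    }
    // ...and memcpy still runs with the full, untruncated size_t value,
    // writing far past the end of buf.
    std::memcpy(buf, text, size);
    return (int32_t) size;
}

// Patched pattern: compare in the unsigned domain before any narrowing cast.
static int32_t try_copy_fixed(char * buf, int32_t length,
                              const char * text, size_t size) {
    if (length < 0 || size > (size_t) length) {
        return -1; // piece does not fit; reject before copying anything
    }
    std::memcpy(buf, text, size);
    return (int32_t) size;
}
```

The safe variant performs the comparison in the size_t domain, so an oversized vocabulary entry is rejected outright instead of being truncated into a length that slips past the check.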
Source: security-advisories@github.com
NVD status: Awaiting Analysis

Risk scores

CVSS 3.1

Type: Secondary
Base score: 8.8
Impact score: 5.9
Exploitability score: 2.8
Vector string: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Severity: HIGH
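For context, the sub-scores above are related by the CVSS v3.1 base formula. The following self-contained sketch (metric weights taken from the FIRST CVSS v3.1 specification) reproduces the displayed values from this vector string:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Roundup as defined by the CVSS v3.1 specification: the smallest number,
// to one decimal place, that is greater than or equal to the input.
static double roundup(double x) {
    const long v = (long) std::llround(x * 100000.0);
    return (v % 10000 == 0) ? v / 100000.0
                            : (std::floor(v / 10000.0) + 1.0) / 10.0;
}

int main() {
    // Weights for CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H.
    const double av = 0.85;                    // AV:N (Network)
    const double ac = 0.77;                    // AC:L (Low)
    const double pr = 0.85;                    // PR:N (None, scope unchanged)
    const double ui = 0.62;                    // UI:R (Required)
    const double c = 0.56, i = 0.56, a = 0.56; // C:H / I:H / A:H

    const double iss            = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    const double impact         = 6.42 * iss;  // S:U (scope unchanged)
    const double exploitability = 8.22 * av * ac * pr * ui;
    const double base           = impact <= 0.0
        ? 0.0
        : roundup(std::min(impact + exploitability, 10.0));

    // Prints: impact=5.9 exploitability=2.8 base=8.8, matching this record.
    std::printf("impact=%.1f exploitability=%.1f base=%.1f\n",
                impact, exploitability, base);
    return 0;
}
```

The UI:R component reflects that exploitation requires a user action, here loading the attacker-supplied GGUF model.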

Weaknesses

security-advisories@github.com: CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer)

Social media

Hype score: Not currently trending