CVE-2025-52566

Published Jun 24, 2025

Overview

Description
llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes an incorrect size comparison when copying tokens. This allows a heap overflow in the llama.cpp inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721.
Source
security-advisories@github.com
NVD status
Analyzed
Products
llama.cpp

Risk scores

CVSS 3.1

Type
Primary
Base score
8.8
Impact score
5.9
Exploitability score
2.8
Vector string
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Severity
HIGH

Weaknesses

security-advisories@github.com
CWE-119
