- Description
- llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) caused unintended behavior in the size comparison guarding the token copy. This allowed a heap buffer overflow in the llama.cpp inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721.
- Source
- security-advisories@github.com
- NVD status
- Analyzed
- Products
- llama.cpp
- CVSS 3.1
- Type
- Primary
- Base score
- 8.8
- Impact score
- 5.9
- Exploitability score
- 2.8
- Vector string
- CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
- Severity
- HIGH
- security-advisories@github.com
- CWE-119
- Hype score
- Not currently trending
```json
[
  {
    "nodes": [
      {
        "cpeMatch": [
          {
            "criteria": "cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:*",
            "matchCriteriaId": "DC465FEB-FFD7-42D5-8D81-F416C28985BD",
            "versionEndExcluding": "b5721",
            "vulnerable": true
          }
        ],
        "negate": false,
        "operator": "OR"
      }
    ]
  }
]
```