NVIDIA Triton Inference Server contains a vulnerability where a user may cause an out-of-bounds read issue by releasing a shared memory region while it is in use. A successful exploit of this vulnerability may lead to denial of service. |
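For context, the shared-memory lifecycle this issue concerns looks roughly like the sketch below: a client creates and registers a system shared-memory region, references it from inference requests, and only then unregisters and destroys it. This is a minimal illustration using the tritonclient Python package against a local server on the default HTTP port; the region name, size, and URL are assumptions, not values from the advisory.

```python
# Minimal sketch of the system shared-memory lifecycle: register, use, then release.
import tritonclient.http as httpclient
import tritonclient.utils.shared_memory as shm

client = httpclient.InferenceServerClient(url="localhost:8000")  # illustrative URL

# Create a 4 KiB region in the client process and register it with the server.
handle = shm.create_shared_memory_region("input_region", "/input_region", 4096)
client.register_system_shared_memory("input_region", "/input_region", 4096)

try:
    # ... build InferInput objects that point into "input_region" and submit requests ...
    pass
finally:
    # Releasing the region while a request is still reading from it is the
    # out-of-bounds-read scenario described above; release only after use.
    client.unregister_system_shared_memory("input_region")
    shm.destroy_shared_memory_region(handle)
```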
NVIDIA Triton Inference Server for Linux and Windows contains a vulnerability where a user can inject forged logs and executable commands by injecting arbitrary data as a new log entry. A successful exploit of this vulnerability might lead to code execution, denial of service, escalation of privileges, information disclosure, and data tampering. |
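A common mitigation for this class of log forging, sketched below in plain Python, is to neutralize carriage-return and line-feed characters in externally supplied values before they are written as a log entry; this is a generic illustration, not code from Triton itself.

```python
# Generic log-injection guard: escape CR/LF so attacker input cannot start a new log line.
import logging

def sanitize_for_log(value: str) -> str:
    return value.replace("\r", "\\r").replace("\n", "\\n")

logging.basicConfig(level=logging.INFO)
user_supplied = "resnet50\nINFO forged entry"  # hostile input (illustrative)
logging.info("loading model %s", sanitize_for_log(user_supplied))
```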
NVIDIA Triton Inference Server for Linux contains a vulnerability where a user may cause incorrect initialization of a resource via a network-based issue. A successful exploit of this vulnerability may lead to information disclosure.
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause a remote code execution by manipulating the model name parameter in the model control APIs. A successful exploit of this vulnerability might lead to remote code execution, denial of service, information disclosure, and data tampering. |
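The parameter in question is the model name passed to the model-control (repository) API. The sketch below, using the tritonclient Python package, shows the call involved and a simple client-side allow-list check; the server URL, model names, and the helper function are illustrative assumptions.

```python
# Sketch of the model-control call whose model-name parameter is at issue.
import tritonclient.http as httpclient

ALLOWED_MODELS = {"resnet50", "bert_base"}  # illustrative allow-list

def load_model_checked(client: httpclient.InferenceServerClient, model_name: str) -> None:
    # Reject names that are not known identifiers (e.g. traversal sequences or metacharacters).
    if model_name not in ALLOWED_MODELS:
        raise ValueError(f"refusing to load unexpected model name: {model_name!r}")
    client.load_model(model_name)

client = httpclient.InferenceServerClient(url="localhost:8000")  # illustrative URL
load_model_checked(client, "resnet50")
```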
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause an out-of-bounds write through a specially crafted input. A successful exploit of this vulnerability might lead to denial of service. |
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause memory corruption by identifying and accessing the shared memory region used by the Python backend. A successful exploit of this vulnerability might lead to denial of service. |
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause a denial of service by loading a misconfigured model. A successful exploit of this vulnerability might lead to denial of service. |
NVIDIA Triton Inference Server contains a vulnerability in the model loading API, where a user could cause an integer overflow or wraparound error by loading a model with an extra-large file size that overflows an internal variable. A successful exploit of this vulnerability might lead to denial of service. |
NVIDIA Triton Inference Server for Linux contains a vulnerability where a user can set the logging location to an arbitrary file. If this file exists, logs are appended to the file. A successful exploit of this vulnerability might lead to code execution, denial of service, escalation of privileges, information disclosure, and data tampering. |
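The setting involved is the log_file field of Triton's logging extension, which selects the path the server appends its log output to. The request below is a minimal illustration of that API surface, assuming the extension is enabled and the server listens on the default HTTP port; the URL and path are assumptions.

```python
# Minimal illustration of the logging-extension request that controls the log destination.
import requests

resp = requests.post(
    "http://localhost:8000/v2/logging",  # illustrative URL
    json={"log_file": "/var/log/triton/server.log", "log_info": True},
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```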
NVIDIA Triton Inference Server for Linux contains a vulnerability in the shared memory APIs, where a user can cause an improper memory access issue via a network API. A successful exploit of this vulnerability might lead to denial of service and data tampering.
NVIDIA Triton Inference Server for Linux contains a vulnerability in the tracing API, where a user can corrupt system files. A successful exploit of this vulnerability might lead to denial of service and data tampering. |
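The tracing API referenced here exposes a trace_file setting that selects where trace output is written, which is why an unvalidated value can end up overwriting files on disk. The request below is a minimal illustration of that API surface on versions where the setting is dynamically updatable; the URL and path are assumptions.

```python
# Minimal illustration of the trace-settings request that selects the trace output file.
import requests

resp = requests.post(
    "http://localhost:8000/v2/trace/setting",  # illustrative URL
    json={"trace_file": "/tmp/triton_trace.json"},
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```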
NVIDIA Triton Inference Server contains a vulnerability in the DALI backend, where an attacker may exploit an improper input validation issue. A successful exploit of this vulnerability may lead to code execution.
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause an out-of-bounds read by manipulating shared memory data. A successful exploit of this vulnerability might lead to information disclosure. |
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability in the Python backend, where an attacker could cause an out-of-bounds read by sending a request. A successful exploit of this vulnerability might lead to information disclosure. |
NVIDIA Triton Inference Server for Windows and Linux and the TensorRT backend contain a vulnerability where an attacker could cause an underflow via a specific model configuration and a specific input. A successful exploit of this vulnerability might lead to denial of service.
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where a user could cause a memory allocation with an excessive size value, leading to a segmentation fault, by providing an invalid request. A successful exploit of this vulnerability might lead to denial of service.
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause an integer overflow through specially crafted inputs. A successful exploit of this vulnerability might lead to denial of service and data tampering. |
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause an integer overflow through a specially crafted input. A successful exploit of this vulnerability might lead to denial of service. |
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where an attacker could cause uncontrolled recursion through a specially crafted input. A successful exploit of this vulnerability might lead to denial of service. |
NVIDIA Triton Inference Server for Windows and Linux contains a vulnerability where a user could cause an integer overflow or wraparound, leading to a segmentation fault, by providing an invalid request. A successful exploit of this vulnerability might lead to denial of service. |