In its design for automatic terminal command execution, AI Code offers two options: Execute safe commands and execute all commands. The description for the former states that commands determined by the model to be safe will be automatically executed, whereas if the model judges a command to be potentially destructive, it still requires user approval. However, this design is highly susceptible to prompt injection attacks. An attacker can employ a generic template to wrap any malicious command and mislead the model into misclassifying it as a 'safe' command, thereby bypassing the user approval requirement and resulting in arbitrary command execution.
History

Fri, 27 Mar 2026 14:30:00 +0000

Type Values Removed Values Added
Description In its design for automatic terminal command execution, AI Code offers two options: Execute safe commands and execute all commands. The description for the former states that commands determined by the model to be safe will be automatically executed, whereas if the model judges a command to be potentially destructive, it still requires user approval. However, this design is highly susceptible to prompt injection attacks. An attacker can employ a generic template to wrap any malicious command and mislead the model into misclassifying it as a 'safe' command, thereby bypassing the user approval requirement and resulting in arbitrary command execution.
References

cve-icon MITRE

Status: PUBLISHED

Assigner: mitre

Published:

Updated: 2026-03-27T14:12:04.210Z

Reserved: 2026-03-04T00:00:00.000Z

Link: CVE-2026-30304

cve-icon Vulnrichment

No data.

cve-icon NVD

Status : Received

Published: 2026-03-27T15:16:53.263

Modified: 2026-03-27T15:16:53.263

Link: CVE-2026-30304

cve-icon Redhat

No data.

cve-icon OpenCVE Enrichment

No data.