AI Security + Zero Trust

Practical LLM and Agent Threat Scenarios You Should Test Now

4 minVitruvius Cyber Research2026-03-02

Threat scenarios and test patterns for prompt injection, tool abuse, and identity escalation in agentic systems.

Agentic systems introduce compound risk because prompt handling, tool execution, and identity privileges combine in one chain.

Scenario 1: Prompt injection against connected tools

An attacker manipulates instructions so the agent calls tools outside intended boundaries.

Scenario 2: Cross-context policy bypass

Data from one context contaminates another, producing unsafe output or unauthorized data access.

Scenario 3: Identity escalation through automation accounts

Over-permissioned service identities allow lateral movement once a workflow is compromised.

Scenario 4: Logging blind spots

Without complete traceability from prompt to action, incident response cannot reconstruct impact.

Testing guidance

  • Treat every tool invocation as privileged execution.
  • Validate allow/deny lists under adversarial input.
  • Enforce least privilege for every non-human identity.
  • Capture structured logs for prompts, policy decisions, and actions.
Schedule Red Team Sprint