Agent Safety

Definition

The set of practices, mechanisms, and design patterns that ensure AI agents behave reliably, don't cause harm, and operate within defined boundaries.

In Depth

Agent safety encompasses everything from preventing prompt injection attacks to ensuring agents don't take unintended real-world actions. Key safety practices include:

- Principle of least privilege: agents only have access to the tools they need
- Action boundaries: explicit limits on what agents can do
- Input validation: rejecting malicious or malformed inputs
- Output monitoring: checking responses before delivery
- Rate limiting: preventing runaway agent loops
- Kill switches: the ability to immediately stop agent execution

Safety must be designed into agent systems from the start, not bolted on later; the sketch below shows how several of these practices can be combined in code.
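To make these practices concrete, here is a minimal sketch in Python. SafeToolRunner, KillSwitchTripped, and the allow-list interface are illustrative names invented for this example rather than part of any particular framework, and the toy input and output checks stand in for the schema validators and policy monitors a production system would use.

```python
import time


class KillSwitchTripped(Exception):
    """Raised when the kill switch has been activated."""


class SafeToolRunner:
    """Wrap tool execution with several agent-safety practices.

    `tools` is an allow-list mapping tool names to callables; anything
    not in it is refused (least privilege / action boundaries).
    """

    def __init__(self, tools, max_calls_per_minute=30):
        self.tools = dict(tools)
        self.max_calls = max_calls_per_minute
        self.call_times = []   # timestamps of recent calls, for rate limiting
        self.killed = False    # kill-switch flag

    def kill(self):
        """Kill switch: immediately stop any further tool execution."""
        self.killed = True

    def run(self, name, **kwargs):
        if self.killed:
            raise KillSwitchTripped("agent execution halted by kill switch")

        # Least privilege / action boundaries: only allow-listed tools run.
        if name not in self.tools:
            raise PermissionError(f"tool {name!r} is not in the agent's allow-list")

        # Rate limiting: keep only calls from the last 60 seconds and refuse
        # to exceed the budget, guarding against runaway agent loops.
        now = time.monotonic()
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls:
            raise RuntimeError("rate limit exceeded; possible runaway agent loop")
        self.call_times.append(now)

        # Input validation (toy version): reject missing or oversized arguments.
        # A real system would validate against a per-tool schema.
        for key, value in kwargs.items():
            if value is None or (isinstance(value, str) and len(value) > 10_000):
                raise ValueError(f"malformed argument {key!r} for tool {name!r}")

        result = self.tools[name](**kwargs)

        # Output monitoring (toy version): inspect the response before delivery.
        # A real system would run policy checks or classifiers here.
        if isinstance(result, str) and "ignore previous instructions" in result.lower():
            raise ValueError(f"suspicious output from tool {name!r} blocked")
        return result


# Illustrative usage: the agent can only call tools it was explicitly given.
runner = SafeToolRunner(tools={"search": lambda query: f"results for {query}"})
print(runner.run("search", query="agent safety"))  # allowed
runner.kill()
# runner.run("search", query="again")  # would raise KillSwitchTripped
```

Routing every tool call through one wrapper is the design point: each safety practice has a single enforcement site, so no individual tool can bypass it.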
