AI security guardrails for LLM applications
Open-source, modular guardrails to protect agents from hallucinations, prompt injection, and sensitive data leaks.
from defend import Client

guard = Client(api_key="dev", base_url="http://localhost:8000")

# Screen the user input before it reaches the model.
in_res = guard.input(user_text)
if in_res.blocked:
    raise RuntimeError(in_res.error_response())

raw = your_llm_call(user_text)

# Screen the model output before returning it, tied to the same session.
out_res = guard.output(raw, session_id=in_res.session_id)
if out_res.blocked:
    raise RuntimeError(out_res.error_response())

Pipeline
Guardrails before and after the model
Apply policy-driven checks on both sides of your LLM call.
Input: user prompt / tool input
  ↓ Input guard: topic control, PII, injection detection
LLM: provider call unchanged
  ↓ Output guard: prompt leak, PII, safety checks, constraints
Model response: return to user / tools
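The before-and-after pattern in the pipeline above can be sketched as a plain wrapper. Everything below is an illustrative stand-in, not the pydefend API: `GuardResult`, `input_guard`, `output_guard`, and `guarded_call` are hypothetical names, and the regex heuristics are toy placeholders for defend's real detectors.

```python
import re

class GuardResult:
    """Minimal stand-in for a guard verdict: blocked flag plus a reason."""
    def __init__(self, blocked, reason=None):
        self.blocked = blocked
        self.reason = reason

def input_guard(text):
    # Toy injection heuristic: flag attempts to override prior instructions.
    if re.search(r"ignore (all|previous)\b.*\binstructions", text, re.I):
        return GuardResult(True, "possible prompt injection")
    return GuardResult(False)

def output_guard(text):
    # Toy PII heuristic: flag anything that looks like an email address.
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        return GuardResult(True, "possible PII leak")
    return GuardResult(False)

def guarded_call(user_text, llm):
    """Run the input guard, the provider call, then the output guard."""
    pre = input_guard(user_text)
    if pre.blocked:
        return f"Blocked: {pre.reason}"
    raw = llm(user_text)          # provider call unchanged
    post = output_guard(raw)
    if post.blocked:
        return f"Blocked: {post.reason}"
    return raw
```

The model call itself is untouched; both guards wrap around it, which is what lets the same pattern apply to any provider.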
Getting started
A minimal setup
Pick a provider, choose a model, and get a one-line defend init command.
Your setup
defend init --token "defend_v1_eNp1jjEOwzAMA__C2UtXf6XooFRKYVSxDScKUAT6e-2gS4dsBHmUeOBl1HhFPJBytW2IpbCpdO_-CKit7ImlIYJllszwgGLbj5VMkwojzqSrhMtuqZ1McD8R0fHRL_BzTb-nEz3fiNlUR5gWap-_GTvizb8LS0Nz"
Add guardrails in minutes
Install with pip install pydefend and start guarding input and output.