This document explains agentsdk-go security mechanisms, configuration, and best practices.
The SDK uses a three-layer defense model:
- Sandbox – filesystem and network access control
- Validator – command and parameter validation
- Approval Queue – human-in-the-loop approvals
These layers work together with the six middleware hook points to enforce checks at each critical stage.

The sandbox layer provides:
- Filesystem allowlist
- Symlink resolution (prevents path traversal)
- Network allowlist
Source files:

- pkg/sandbox/ – sandbox manager
- pkg/security/sandbox.go – sandbox core
- pkg/security/resolver.go – path resolver

Example configuration:
{
"sandbox": {
"enabled": true,
"allowed_paths": [
"/tmp",
"./workspace",
"/var/lib/agent/data"
],
"network_allow": [
"*.anthropic.com",
"api.example.com"
]
}
}

Programmatic usage:

import (
    "fmt"
    "path/filepath"

    "github.com/cexll/agentsdk-go/pkg/security"
)
// Create sandbox
sandbox := security.NewSandbox(workDir)
// Allow paths
sandbox.Allow("/var/lib/agent/runtime")
sandbox.Allow(filepath.Join(workDir, ".cache"))
// Validate path
if err := sandbox.ValidatePath(targetPath); err != nil {
return fmt.Errorf("path denied: %w", err)
}
// Validate command
if err := sandbox.ValidateCommand(command); err != nil {
return fmt.Errorf("command denied: %w", err)
}

Best practices:

- Declare all allowed paths in configuration; avoid adding paths at runtime
- Use absolute paths to avoid the ambiguity of relative paths
- Review the sandbox configuration regularly and remove unused paths
- Call ValidatePath for every tool execution, not just at startup (see the sketch below)
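For illustration, the following standalone sketch shows the kind of check ValidatePath performs: symlinks are resolved before the allowlist comparison, so a link inside an allowed root cannot point back outside it. The helper name and logic are illustrative, not the SDK implementation.

import (
    "fmt"
    "path/filepath"
    "strings"
)

// validateUnderRoots is a hypothetical helper: it resolves symlinks, then
// requires the resolved path to sit under one of the allowed roots.
func validateUnderRoots(path string, roots []string) error {
    resolved, err := filepath.EvalSymlinks(path)
    if err != nil {
        return fmt.Errorf("resolve %q: %w", path, err)
    }
    abs, err := filepath.Abs(resolved)
    if err != nil {
        return err
    }
    for _, root := range roots {
        rootAbs, err := filepath.Abs(root)
        if err != nil {
            continue
        }
        rel, err := filepath.Rel(rootAbs, abs)
        if err == nil && rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
            return nil // inside an allowed root
        }
    }
    return fmt.Errorf("path %q is outside the allowed roots", abs)
}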
The command validator performs these checks before execution:

- Dangerous commands (dd, mkfs, fdisk, shutdown, …)
- Dangerous arguments (e.g., --no-preserve-root)
- Dangerous patterns (rm -rf, rm -r)
- Shell metacharacters (in Platform mode)
- Command length limits
Source files:

- pkg/security/validator.go – command validator
- pkg/security/validator_full_test.go – validator tests
Dangerous commands:

- dd – raw disk writes
- mkfs, mkfs.ext4 – filesystem formatting
- fdisk, parted – partition editing
- shutdown, reboot, halt, poweroff – power control
- mount – filesystem mounting
Dangerous patterns:

- rm -rf / rm -fr – recursive force delete
- rm -r / rm --recursive – recursive delete
- rmdir -p – recursive directory removal
Shell metacharacters (Platform mode):

- |, ;, & – command chaining
- >, < – redirection
- ` – command substitution
Programmatic usage:

import (
    "log"

    "github.com/cexll/agentsdk-go/pkg/security"
)
validator := security.NewValidator()
if err := validator.Validate(command); err != nil {
log.Printf("command blocked: %v", err)
return err
}
// Allow shell metachars (CLI only)
validator.AllowShellMeta(true)

// Custom blacklist entries
validator.BanCommand("kubectl", "cluster ops require approval")
validator.BanCommand("helm", "helm ops require approval")
validator.BanArgument("--force")
validator.BanArgument("--insecure")
validator.BanFragment("sudo rm")

Best practices:

- Combine with JSON Schema to validate tool parameters
- Run validation in BeforeTool middleware (see the sketch after this list)
- Audit-log blocked commands
- Sync with org blacklists regularly
- Enforce approvals for high-risk commands
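As a sketch of the BeforeTool recommendation above: the guard below assumes the shell tool is named "bash" and exposes the command string under a "command" parameter; adjust both to your actual tool definition. The ToolCall and Middleware shapes follow the middleware examples later in this document, and audit is the logging helper used there.

shellCommandGuard := middleware.Middleware{
    BeforeTool: func(ctx context.Context, call *middleware.ToolCall) (*middleware.ToolCall, error) {
        // Only shell-style tools need command validation (tool name is an assumption).
        if call.Name != "bash" {
            return call, nil
        }
        cmd, _ := call.Params["command"].(string)
        if err := validator.Validate(cmd); err != nil {
            audit.Log(ctx, "command_blocked", cmd)
            return nil, fmt.Errorf("command blocked: %w", err)
        }
        return call, nil
    },
}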
The approval queue provides:

- Creation and management of approval requests
- Session-level allowlist (TTL)
- Decision recording
- Approval event notifications
Source files:

- pkg/security/approval.go – approval queue
- pkg/security/approval_test.go – tests

Programmatic usage:
import (
    "fmt"

    "github.com/cexll/agentsdk-go/pkg/security"
)
queue, err := security.NewApprovalQueue("/var/lib/agent/approvals")
if err != nil {
return err
}
request, err := queue.Request(sessionID, command, []string{path})
if err != nil {
return err
}
if queue.IsWhitelisted(sessionID) {
return executeCommand(command)
}
return fmt.Errorf("waiting for approval: %s", request.ID)

// Approve (with whitelist TTL)
err := queue.Approve(requestID, approverID, 3600) // 1h whitelist
if err != nil {
return err
}
// Deny
err = queue.Deny(requestID, approverID, "policy violation")
if err != nil {
return err
}

Best practices:

- Create and back up the approval storage path before deployment
- Set short TTLs; avoid permanent bypass
- Log all approval actions
- Enforce approval timeouts; auto-deny expired
- Cap whitelist TTL and re-approve regularly
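The Approve and Deny calls shown above are all an operator front end needs. A minimal sketch of such an endpoint follows; the route, form fields, and net/http wiring are illustrative assumptions, not part of the SDK.

http.HandleFunc("/approvals/decide", func(w http.ResponseWriter, r *http.Request) {
    id := r.FormValue("id")
    approver := r.FormValue("approver")
    switch r.FormValue("decision") {
    case "approve":
        // Grant a one-hour session whitelist, as in the example above.
        if err := queue.Approve(id, approver, 3600); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
    case "deny":
        if err := queue.Deny(id, approver, r.FormValue("reason")); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
    default:
        http.Error(w, "unknown decision", http.StatusBadRequest)
        return
    }
    w.WriteHeader(http.StatusNoContent)
})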
The middleware layer exposes six checkpoints:

- BeforeAgent – request validation, rate limiting, blacklist
- BeforeModel – prompt injection detection, sensitive-word filter
- AfterModel – output review, secret redaction
- BeforeTool – tool permission check, parameter validation
- AfterTool – result review, error sanitization
- AfterAgent – audit logging, compliance checks
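The sections below define one example guard per checkpoint. How middleware is registered with the agent depends on your deployment; the sketch below only illustrates the intended ordering of the guards defined later in this document.

securityChain := []middleware.Middleware{
    beforeAgentGuard, // request validation, rate limiting, blacklist
    beforeModelScan,  // prompt injection detection, sensitive-word filter
    afterModelReview, // output review, secret redaction
    beforeToolGuard,  // tool permission check, parameter validation
    afterToolReview,  // result review, error sanitization
    afterAgentAudit,  // audit logging, compliance checks
}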
BeforeAgent threats: session abuse (DoS), overlong prompts, malicious IPs.
beforeAgentGuard := middleware.Middleware{
BeforeAgent: func(ctx context.Context, req *middleware.AgentRequest) (*middleware.AgentRequest, error) {
if blacklist.Contains(req.RemoteAddr) {
return nil, fmt.Errorf("IP blocked: %s", req.RemoteAddr)
}
if !rateLimiter.Allow(req.SessionID) {
return nil, fmt.Errorf("too many requests")
}
if len(req.Input) > maxInputLength {
return nil, fmt.Errorf("input too long")
}
return req, nil
},
}

BeforeModel threats: prompt injection, sensitive data leakage, control characters.
beforeModelScan := middleware.Middleware{
BeforeModel: func(ctx context.Context, msgs []message.Message) ([]message.Message, error) {
for i := range msgs {
content := msgs[i].Content
if containsInjection(content) {
audit.Log(ctx, "prompt_injection_detected", content)
return nil, fmt.Errorf("prompt injection detected")
}
if secrets := detectSecrets(content); len(secrets) > 0 {
audit.Log(ctx, "secrets_in_prompt", secrets)
return nil, fmt.Errorf("input contains secrets")
}
msgs[i].Content = filterSensitiveWords(content)
}
return msgs, nil
},
}

AfterModel threats: dangerous commands, secret leakage, malicious URLs.
afterModelReview := middleware.Middleware{
AfterModel: func(ctx context.Context, output *agent.ModelOutput) (*agent.ModelOutput, error) {
content := output.Content
if dangerous := detectDangerousCommand(content); dangerous != "" {
approvalQueue.Request(sessionID, dangerous, nil)
return nil, fmt.Errorf("model suggested dangerous command: %s", dangerous)
}
cleaned := redactSecrets(content)
if cleaned != content {
audit.Log(ctx, "model_output_redacted", "secrets_found")
output.Content = cleaned
}
return output, nil
},
}

BeforeTool threats: unauthorized tool use, parameter tampering, recursive bypass.
beforeToolGuard := middleware.Middleware{
BeforeTool: func(ctx context.Context, call *middleware.ToolCall) (*middleware.ToolCall, error) {
if !toolRegistry.Exists(call.Name) {
return nil, fmt.Errorf("unknown tool: %s", call.Name)
}
if !rbac.CanInvoke(identity, call.Name) {
audit.Log(ctx, "unauthorized_tool_call", call.Name)
return nil, fmt.Errorf("not authorized to call: %s", call.Name)
}
if err := validateParams(call); err != nil {
return nil, fmt.Errorf("param validation failed: %w", err)
}
if path, ok := call.Params["path"].(string); ok {
if err := sandbox.ValidatePath(path); err != nil {
return nil, fmt.Errorf("path denied: %w", err)
}
}
return call, nil
},
}

AfterTool threats: secret leakage, error information disclosure, oversized output.
afterToolReview := middleware.Middleware{
AfterTool: func(ctx context.Context, result *middleware.ToolResult) (*middleware.ToolResult, error) {
if secrets := detectSecrets(result.Output); len(secrets) > 0 {
result.Output = redactSecrets(result.Output)
audit.Log(ctx, "tool_output_redacted", "secrets_found")
}
if result.Error != nil {
logSecurityError(ctx, result.Error)
result.Error = errors.New("tool execution failed")
}
if len(result.Output) > maxOutputLength {
result.Output = result.Output[:maxOutputLength] + "...(truncated)"
}
return result, nil
},
}

AfterAgent threats: missing audit trails, compliance gaps, untraceable incidents.
afterAgentAudit := middleware.Middleware{
AfterAgent: func(ctx context.Context, resp *middleware.AgentResponse) (*middleware.AgentResponse, error) {
record := audit.Entry{
Timestamp: time.Now().UTC(),
SessionID: resp.SessionID,
Input: resp.Input,
Output: resp.Output,
ToolCalls: resp.ToolCalls,
Approved: approvalQueue.IsWhitelisted(resp.SessionID),
UserID: getUserID(ctx),
}
if err := audit.Store(record); err != nil {
log.Printf("audit write failed: %v", err)
return nil, fmt.Errorf("audit logging failed")
}
if err := compliance.Check(resp); err != nil {
audit.Log(ctx, "compliance_violation", err.Error())
return nil, fmt.Errorf("compliance failed: %w", err)
}
return resp, nil
},
}

Deployment checklist:

- Sandbox configured with all required allowed paths
- Command validator enabled and configured
- Approval queue storage path created with permissions
- Security handlers registered at all middleware hooks
- Middleware timeouts < request timeout
- Audit log path configured and writable
- Network allowlist configured
Run the security test suites:

go test ./pkg/security/... -v
go test ./pkg/middleware/... -v
go test ./test/integration/security/... -v
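Beyond the bundled suites, it is worth pinning organization-specific cases in your own tests. A sketch for a _test.go file (the blocked commands listed here are assumptions; mirror your actual blacklist):

func TestValidatorBlocksDangerousCommands(t *testing.T) {
    v := security.NewValidator()
    for _, cmd := range []string{
        "rm -rf /",
        "mkfs.ext4 /dev/sda1",
        "shutdown -h now",
    } {
        if err := v.Validate(cmd); err == nil {
            t.Errorf("expected %q to be blocked", cmd)
        }
    }
}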
Track metrics:

- middleware_stage_rejections_total{stage="before_agent"}
- middleware_stage_rejections_total{stage="before_tool"}
- approval_queue_pending_total
- sandbox_violations_total{type="path"}
- sandbox_violations_total{type="command"}
- audit_log_failures_total
Alert on: high rejection rate, approval backlog, audit write failures, sandbox violation spikes.
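The SDK does not mandate a metrics library; the sketch below assumes Prometheus via github.com/prometheus/client_golang and defines the counters named above.

var (
    stageRejections = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "middleware_stage_rejections_total",
            Help: "Requests rejected by security middleware, by stage.",
        },
        []string{"stage"},
    )
    sandboxViolations = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "sandbox_violations_total",
            Help: "Sandbox denials, by violation type.",
        },
        []string{"type"},
    )
    approvalPending = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "approval_queue_pending_total",
        Help: "Approval requests awaiting a decision.",
    })
    auditFailures = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "audit_log_failures_total",
        Help: "Failed audit log writes.",
    })
)

func init() {
    prometheus.MustRegister(stageRejections, sandboxViolations, approvalPending, auditFailures)
}

// Example: stageRejections.WithLabelValues("before_tool").Inc()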
Path traversal mitigation:
- Call Sandbox.ValidatePath on all path parameters
- Re-validate in BeforeTool
- Use absolute paths and resolve symlinks
- Restrict allowed prefixes
Test:
go test ./pkg/security -run TestSandbox_PathTraversal

Prompt injection mitigation:
- Detect injection patterns in BeforeModel
- Maintain an injection signature list
- Log suspected injections
- Require approval for high-risk inputs
Detection example:
func containsInjection(input string) bool {
patterns := []string{
"ignore previous instructions",
"ignore above",
"disregard all",
"system prompt",
}
lower := strings.ToLower(input)
for _, pattern := range patterns {
if strings.Contains(lower, pattern) {
return true
}
}
return false
}

Secret leakage mitigation:
- Scan for secrets in BeforeModel and AfterModel
- Clean tool output in AfterTool
- Match common secret patterns with regular expressions
- Keep pre-redaction data in encrypted storage
Patterns:
var secretPatterns = []*regexp.Regexp{
regexp.MustCompile(`sk-[a-zA-Z0-9]{48}`), // API Keys
regexp.MustCompile(`[0-9]{4}-[0-9]{4}-[0-9]{4}-[0-9]{4}`), // Credit Cards
regexp.MustCompile(`ghp_[a-zA-Z0-9]{36}`), // GitHub Tokens
regexp.MustCompile(`xox[baprs]-[a-zA-Z0-9-]+`), // Slack Tokens
}
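The detectSecrets and redactSecrets helpers referenced in the middleware examples are not spelled out in this document; a minimal sketch built on the patterns above might look like this:

func detectSecrets(s string) []string {
    var found []string
    for _, p := range secretPatterns {
        found = append(found, p.FindAllString(s, -1)...)
    }
    return found
}

func redactSecrets(s string) string {
    for _, p := range secretPatterns {
        s = p.ReplaceAllString(s, "[REDACTED]")
    }
    return s
}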
Command injection mitigation:

- Validate all commands with Validator.Validate
- Block shell metacharacters (Platform mode)
- Use parameterized execution, not string concatenation (see the sketch after this list)
- Limit command length
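Parameterized execution means passing arguments as separate values instead of building a shell string, so input cannot smuggle in extra commands. A sketch using os/exec; the tool and variables here are illustrative:

// Avoid: exec.Command("sh", "-c", "ls "+userDir), which turns input into shell syntax.
out, err := exec.CommandContext(ctx, "ls", "--", userDir).Output()
if err != nil {
    return fmt.Errorf("ls failed: %w", err)
}
log.Printf("listing: %s", out)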
Unauthorized tool use mitigation:

- Enforce RBAC in BeforeTool (sketched below)
- Require approval for privileged operations
- Limit recursion depth
- Log all authorization decisions
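The rbac.CanInvoke check used in the BeforeTool example is deliberately abstract; where the policy comes from (config file, IdP, database) is up to the deployment. A minimal in-memory sketch, with Identity as an assumed type:

type Identity struct {
    Roles []string
}

// RBAC maps roles to the set of tool names they may invoke.
type RBAC struct {
    allowed map[string]map[string]bool
}

func (r *RBAC) CanInvoke(id Identity, tool string) bool {
    for _, role := range id.Roles {
        if r.allowed[role][tool] {
            return true
        }
    }
    return false
}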
Middleware errors should trigger alerts:
if err != nil {
alert.Send(alert.SecurityEvent{
Stage: "before_tool",
Error: err.Error(),
SessionID: sessionID,
Timestamp: time.Now(),
})
})

Containment:

approvalQueue.RevokeAll()
approvalQueue.SetGlobalApprovalRequired(true)

Investigation:

audit-export --since 1h --output /tmp/audit.json
audit-analyze /tmp/audit.json --detect-anomalies

Recovery:

- Patch detection logic
- Run regression tests
- Gradually restore service
- Watch for anomaly metrics
Post-incident:

- Record the incident timeline
- Root-cause analysis
- Update security config
- Refine detection rules
- Update documentation
General best practices:

- Enable all security checks by default
- Define JSON Schemas for every tool
- Cover security cases in unit tests
- Use static analysis
- Manage policies via config files
- Enable all monitoring metrics
- Configure alert rules
- Prepare incident response playbooks
- Review audit logs regularly
- Update blacklists and validators
- Run red/blue exercises
- Stay current with security patches
- Use append-only storage for audit logs
- Link audit records to approval decisions
- Back up audit data regularly
- Enforce audit log integrity checks
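One way to make integrity checks enforceable is to hash-chain audit records so that altering or removing an entry breaks every later hash. A sketch, reusing the audit.Entry shape from the AfterAgent example (ChainedEntry and chainEntry are illustrative, not SDK APIs):

import (
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
)

type ChainedEntry struct {
    Entry    audit.Entry
    PrevHash string
    Hash     string
}

func chainEntry(prev ChainedEntry, e audit.Entry) (ChainedEntry, error) {
    payload, err := json.Marshal(e)
    if err != nil {
        return ChainedEntry{}, err
    }
    // Each hash covers the previous hash plus the serialized record.
    sum := sha256.Sum256(append([]byte(prev.Hash), payload...))
    return ChainedEntry{Entry: e, PrevHash: prev.Hash, Hash: hex.EncodeToString(sum[:])}, nil
}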