Using the dx --llm flag for AI agent automation and non-interactive workflows
The --llm flag helps solve the context-optimization problem for AI agents. Instead of manually writing hundreds of lines describing the available tools, an agent simply runs dx --llm and automatically discovers what it can do in any project.
```
# Agent runs this at start of every task
$ dx --llm
Use dx non-interactively. No TUI.
- dx aliases   # list alias table
- dx <alias>   # run leaf cmd; returns exit code
Rules: do not expect prompts; check exit codes.

# Then discovers actual available commands
$ dx aliases
ALIAS    NAME           TYPE   DETAILS
build    Build Release  cmd    cargo build --release
test     Run Tests      cmd    cargo test --all
deploy   Deploy K8s     cmd    kubectl apply -f k8s/
lint     Code Quality   cmd    cargo clippy -- -D warnings
```
Result: the agent always knows the project's current capabilities without any manual context management.
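For an agent that shells out to dx, the whole loop reduces to a few subprocess calls. Below is a minimal Python sketch of that pattern; the `dx()` helper and the way the alias table is parsed are illustrative assumptions, not part of dx itself.

```python
import subprocess

def dx(*args):
    """Run a dx subcommand non-interactively and capture its output."""
    return subprocess.run(["dx", *args], capture_output=True, text=True)

# 1. Fetch the usage contract once at the start of the task.
usage_notes = dx("--llm").stdout

# 2. Discover what this project can actually do. Taking the first column
#    of the `dx aliases` table is an assumption about its layout, not a
#    documented machine-readable format.
rows = dx("aliases").stdout.splitlines()
aliases = [row.split()[0] for row in rows[1:] if row.strip()]

# 3. Run a leaf command; rely on the exit code, never on interactive prompts.
if "test" in aliases:
    result = dx("test")
    if result.returncode != 0:
        print("tests failed:\n" + result.stderr)
```

The key point is step 3: because --llm mode guarantees no prompts, the agent only ever needs to branch on exit codes.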
Beyond discovery, dx can verify whether a command or script is safe for an AI agent to execute:

```
# Verify whether a command/script is safe for AI execution
dx ai verify ./deploy.sh
dx ai verify "rm -rf /"      # Obviously unsafe
dx ai verify "cargo build"   # Generally safe
```
The AI analyzes the command for several risk classes; a rough rule-based sketch of this kind of check follows the list.
- Destructive patterns: rm -rf, dd, format, etc.
- Network operations: external API calls, data uploads
- File system risks: writes to system directories
- Permission escalation: sudo, su, privilege requests
- Resource consumption: infinite loops, fork bombs
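These checks are essentially pattern recognition over the command string. The Python sketch below approximates the destructive-pattern, privilege-escalation, and resource-consumption categories with a fixed regex list; the patterns and category names are illustrative assumptions, and the actual dx ai verify analysis is AI-driven rather than a hard-coded rule set.

```python
import re

# Illustrative patterns only; the real analysis covers far more cases.
RISK_PATTERNS = {
    "destructive": re.compile(r"\brm\s+-rf\b|\bdd\s+if=|\bmkfs\b"),
    "permission escalation": re.compile(r"\bsudo\b|\bsu\b"),
    "resource consumption": re.compile(r":\(\)\s*\{.*\};\s*:"),  # classic fork bomb
}

def rough_risk_scan(command: str) -> list[str]:
    """Return the risk categories whose patterns match the command string."""
    return [name for name, rx in RISK_PATTERNS.items() if rx.search(command)]

print(rough_risk_scan("rm -rf /"))     # ['destructive']
print(rough_risk_scan("cargo build"))  # []
```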
Output examples:
```
$ dx ai verify "cargo build --release"
✅ SAFE: Standard build command, no destructive operations

$ dx ai verify "./deploy.sh"
⚠️ REVIEW NEEDED: Deploys to production, requires manual confirmation

$ dx ai verify "curl -X POST api.com/delete-all"
❌ UNSAFE: Destructive API operation detected
```
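An agent can use this as a gate before executing anything it did not author. The sketch below assumes the verdict keywords shown above (SAFE, REVIEW NEEDED, UNSAFE) appear in the command's output; whether dx ai verify also signals the verdict through its exit code is not specified here, so the check keys off the text.

```python
import subprocess

def verify_then_run(command: str) -> bool:
    """Gate a shell command on a `dx ai verify` verdict; run it only if SAFE."""
    verdict = subprocess.run(
        ["dx", "ai", "verify", command],
        capture_output=True, text=True,
    ).stdout

    # "SAFE" is a substring of "UNSAFE", so test the negative verdicts.
    if "UNSAFE" in verdict or "REVIEW NEEDED" in verdict:
        print(f"refusing to run automatically: {command}")
        return False

    return subprocess.run(command, shell=True).returncode == 0

verify_then_run("cargo build --release")
```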