The --llm flag addresses the context optimization problem for AI agents: instead of manually writing hundreds of lines describing available tools, agents run dx --llm and automatically discover what they can do in any project.

Context Optimization Problem

Before dx --llm: Manual context files that get outdated
<!-- agents.md - manually maintained -->
Available tools in this project:
- Build: cargo build --release
- Test: cargo test
- Deploy: kubectl apply -f k8s/
- Lint: cargo clippy
- Format: cargo fmt
<!-- Gets outdated when commands change -->
With dx --llm: Dynamic, always-current context
# Agent runs this at start of every task
$ dx --llm
Use dx non-interactively. No TUI.
- dx aliases  # list alias table
- dx <alias>  # run leaf cmd; returns exit code
Rules: do not expect prompts; check exit codes.
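In practice, an agent can wrap each alias invocation and branch on the exit code, per the rules above. A minimal sketch of that pattern (the `run_alias` helper is hypothetical, and `dx` is replaced by a stub here so the sketch runs without dx installed):

```shell
#!/bin/sh
# Stand-in for dx so this sketch is self-contained (assumption: real dx
# returns the leaf command's exit code, as the --llm instructions state).
dx() { [ "$1" = "test" ] && return 1; return 0; }

# Run one leaf command non-interactively; surface failures via exit code.
run_alias() {
  dx "$1"
  status=$?
  if [ "$status" -eq 0 ]; then
    echo "ok: $1"
  else
    echo "failed: $1 (exit $status)" >&2
  fi
  return "$status"
}

run_alias test || echo "agent switches to debugging" >&2
run_alias build   # prints "ok: build"
```

No prompts are expected at any point; the agent's only feedback channels are stdout/stderr and the exit code.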

# Then discovers actual available commands
$ dx aliases
ALIAS    NAME           TYPE  DETAILS
build    Build Release  cmd   cargo build --release
test     Run Tests      cmd   cargo test --all
deploy   Deploy K8s     cmd   kubectl apply -f k8s/
lint     Code Quality   cmd   cargo clippy -- -D warnings
Result: Agent always knows current project capabilities without manual context management.

Context & Cost Optimization

The dx --llm Flag

Dynamic context discovery in minimal tokens:
# Agent workflow (2 commands, ~50 tokens total)
1. dx --llm                     # Get usage instructions
2. dx aliases                   # Discover commands

Agent starts each task with 2 commands:

After dx --llm:
$ dx --llm && dx aliases
# Gets current, accurate list of ALL available commands

Auto-Updated Context

# When the project evolves, the dx context updates automatically:

# Developer adds new command to dx.yaml:
- name: "Deploy with Rollback"
  alias: "deploy-safe"
  cmd: "kubectl apply -f k8s/ && kubectl rollout status"

# Next time agent runs:
$ dx aliases
# Automatically includes new deploy-safe command
# No manual context file updates needed!
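Because the alias list is the single source of truth, an agent can even detect newly added commands by diffing the list between runs. A rough sketch (the table shape is assumed from the `dx aliases` output above, and `dx` is stubbed so the sketch is runnable):

```shell
#!/bin/sh
# Stub mimicking the table shape of `dx aliases` (assumption).
dx() {
  printf 'ALIAS    NAME           TYPE  DETAILS\n'
  printf 'build    Build Release  cmd   cargo build --release\n'
  printf 'deploy-safe  Deploy with Rollback  cmd  kubectl apply -f k8s/\n'
}

# Keep only the alias column, skipping the header row; comm needs sorted input.
dx aliases | awk 'NR > 1 { print $1 }' | sort > aliases.now
touch aliases.prev                  # cached list from the previous task
comm -13 aliases.prev aliases.now   # lines only in the new list = new aliases
mv aliases.now aliases.prev
```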

Agent-Friendly Design

No TUI Required

Commands run directly in the shell, making dx a natural fit for automated environments

Exit Code Aware

Proper exit codes enable AI agents to detect success/failure

Stdio Inheritance

Commands inherit stdin/stdout for seamless pipeline integration
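Because dx passes its streams through, an alias's output can feed an ordinary pipeline. A small illustration (`dx` is stubbed here with assumed build output so the sketch runs standalone):

```shell
#!/bin/sh
dx() { echo "Compiling myproject v0.1.0"; }  # stand-in output (assumption)

# dx inherits stdout, so its output flows straight into tee and grep:
dx build | tee build.log | grep -c Compiling   # prints 1; full log saved
```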

Recording Built-in

AI can record sessions for debugging and analysis

AI Safety & Verification

Command Safety Check

# Verify if command/script is safe for AI execution
dx ai verify ./deploy.sh
dx ai verify "rm -rf /"           # Obviously unsafe
dx ai verify "cargo build"        # Generally safe
AI analyzes:
  • Destructive patterns - rm -rf, dd, format, etc.
  • Network operations - External API calls, data uploads
  • File system risks - Writing to system directories
  • Permission escalation - sudo, su, privilege requests
  • Resource consumption - Infinite loops, fork bombs
Output examples:
$ dx ai verify "cargo build --release"
✅ SAFE: Standard build command, no destructive operations

$ dx ai verify "./deploy.sh"  
⚠️  REVIEW NEEDED: Deploys to production, requires manual confirmation

$ dx ai verify "curl -X POST api.com/delete-all"
❌ UNSAFE: Destructive API operation detected
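These verdicts come from dx's own analysis. As a purely illustrative comparison, a naive pattern-only pre-check (far weaker than what `dx ai verify` is described as doing, and not its implementation) might look like:

```shell
#!/bin/sh
# Naive illustration only; NOT dx's actual safety analysis (assumption).
is_obviously_unsafe() {
  case "$1" in
    *"rm -rf"*|*"mkfs"*|*"dd if="*) return 0 ;;   # known destructive patterns
    *) return 1 ;;
  esac
}

is_obviously_unsafe "rm -rf /" && echo "UNSAFE"
is_obviously_unsafe "cargo build" || echo "no known destructive pattern"
```

Pattern matching alone misses network side effects and obfuscated commands, which is exactly why the richer analysis above is needed.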

Safety Configuration

# config.toml
[ai.safety]
auto_verify = true              # Check all commands before execution
block_unsafe = true             # Block obviously dangerous commands  
require_confirmation = ["deploy", "delete", "rm"]  # Always confirm these
whitelist_patterns = ["cargo", "npm", "git"]       # Always allow these
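Interpreting those fields (semantics assumed from the field names: the whitelist wins first, then the confirmation list, then everything else is held for review), an agent-side gate might behave roughly like:

```shell
#!/bin/sh
# Sketch of how the config above might gate a command (assumed semantics).
gate() {
  cmd="$1"
  for p in cargo npm git; do                 # whitelist_patterns
    case "$cmd" in "$p"*) echo "allow"; return ;; esac
  done
  for p in deploy delete rm; do              # require_confirmation
    case "$cmd" in *"$p"*) echo "confirm"; return ;; esac
  done
  echo "review"                              # everything else gets reviewed
}

gate "cargo build"   # → allow
gate "dx deploy"     # → confirm
gate "curl api.com"  # → review
```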
AI safety checks are improving rapidly but are not perfect. Always review AI recommendations, especially for:
  • Production deployments
  • Data deletion operations
  • System configuration changes
  • External API calls with side effects
Use AI verification as a first line of defense, not the only one.
AI agents work best when dx aliases are semantically meaningful (build, test, deploy) rather than cryptic (cmd1, run-x).