Overview

Adjutant runs a local Bun server that fronts the Vercel AI Gateway. This gives Autopilot Desktop a single API surface for multiple model providers, with local control over routing and configuration. Note: the AI Gateway is only needed for Adjutant workflows; Codex-only usage does not require it.

Prerequisites

  • Bun runtime
  • AI Gateway API key (Vercel)

Environment Variables

Create or update .env in the repo root:
AI_GATEWAY_API_KEY=your_vercel_ai_gateway_api_key_here
AI_GATEWAY_BASE_URL=https://ai-gateway.vercel.sh/v1

AI_SERVER_PORT=3001
AI_SERVER_HOST=localhost

DEFAULT_LLM_PROVIDER=anthropic
DEFAULT_LLM_MODEL=claude-sonnet-4.5
FALLBACK_LLM_MODEL=openai/gpt-4o

DSPY_LM_ENDPOINT=http://localhost:3001
DSPY_MAX_TOKENS=4096
DSPY_TEMPERATURE=0.7
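As a sketch of how these variables might be consumed at startup, the snippet below reads each setting with the defaults shown above. The variable names come from the .env example; the `envOr` helper itself is hypothetical, not part of Adjutant's code.

```typescript
// Hypothetical config loader: read each .env setting, falling back to the
// documented default when the variable is unset or empty.
function envOr(name: string, fallback: string): string {
  const v = process.env[name];
  return v !== undefined && v !== "" ? v : fallback;
}

const config = {
  baseUrl: envOr("AI_GATEWAY_BASE_URL", "https://ai-gateway.vercel.sh/v1"),
  host: envOr("AI_SERVER_HOST", "localhost"),
  port: Number(envOr("AI_SERVER_PORT", "3001")),
  provider: envOr("DEFAULT_LLM_PROVIDER", "anthropic"),
  model: envOr("DEFAULT_LLM_MODEL", "claude-sonnet-4.5"),
  fallbackModel: envOr("FALLBACK_LLM_MODEL", "openai/gpt-4o"),
};
```

Numeric values such as the port arrive as strings from the environment, so they need explicit conversion before use.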

How It Starts

Autopilot Desktop starts the AI server during app setup. The server runs locally and exposes OpenAI-compatible endpoints:
  • POST /v1/chat/completions
  • POST /v1/embeddings
  • GET /health
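Because the endpoints are OpenAI-compatible, a chat request is a plain JSON POST. The sketch below builds such a request against the local server; the endpoint path is from the list above, while the model id and message content are illustrative. The `fetch` call is commented out since it requires the server to be running.

```typescript
// Illustrative chat-completions request against the local AI server.
const endpoint = "http://localhost:3001/v1/chat/completions";

// OpenAI-compatible request body: a model id plus a message list.
const body = {
  model: "claude-sonnet-4.5",
  messages: [{ role: "user", content: "Summarize this repo's build steps." }],
};

// To actually send it (server must be running):
// const res = await fetch(endpoint, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
// const completion = await res.json();
```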

Model Routing

Routing selects a primary model per task type (planning, exploration, or synthesis) and falls back to an alternate model when the primary provider fails. This keeps the fast path on the preferred model while allowing automatic recovery from provider outages.
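The primary-with-fallback pattern can be sketched as below. This is a minimal illustration, not Adjutant's actual router: the task buckets and routing table are assumptions, and `callModel` stands in for a real provider call. The model ids mirror the DEFAULT/FALLBACK split in the .env example.

```typescript
// Hypothetical task buckets for routing decisions.
type Task = "planning" | "exploration" | "synthesis";

// Assumed routing table: one primary/fallback pair per task type.
const routes: Record<Task, { primary: string; fallback: string }> = {
  planning: { primary: "anthropic/claude-sonnet-4.5", fallback: "openai/gpt-4o" },
  exploration: { primary: "anthropic/claude-sonnet-4.5", fallback: "openai/gpt-4o" },
  synthesis: { primary: "anthropic/claude-sonnet-4.5", fallback: "openai/gpt-4o" },
};

// Try the primary model; on any provider error, retry with the fallback.
async function route(
  task: Task,
  callModel: (model: string) => Promise<string>,
): Promise<string> {
  const { primary, fallback } = routes[task];
  try {
    return await callModel(primary); // fast path
  } catch {
    return await callModel(fallback); // provider failed: recover
  }
}
```

A real implementation would also distinguish retryable errors (timeouts, 5xx) from non-retryable ones (invalid key), rather than falling back on every failure.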

Troubleshooting

  • Port conflicts: change AI_SERVER_PORT
  • Invalid key: verify AI_GATEWAY_API_KEY
  • Health check: curl http://localhost:3001/health