Overview
Adjutant uses a local Bun server that fronts the Vercel AI Gateway. This provides one API surface for multiple model providers and enables local control inside Autopilot Desktop. Note: this is only needed for Adjutant workflows; Codex-only usage does not require the AI Gateway.
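Conceptually, the server is a thin proxy: requests arrive on local OpenAI-compatible endpoints and are forwarded to the gateway with the API key attached. The sketch below is only an illustration of that shape, not Adjutant's actual implementation; the gateway base URL is an assumption, while AI_SERVER_PORT and AI_GATEWAY_API_KEY are the variables described under Environment Variables below.

```ts
// Illustrative sketch of a local proxy in the spirit of Adjutant's AI server.
// The gateway base URL below is an assumption, not taken from Adjutant's code.
const GATEWAY = "https://ai-gateway.vercel.sh/v1";

Bun.serve({
  port: Number(process.env.AI_SERVER_PORT ?? 3001),
  async fetch(req) {
    const url = new URL(req.url);

    // Health checks are answered locally and never leave the machine.
    if (url.pathname === "/health") {
      return Response.json({ status: "ok" });
    }

    // Forward OpenAI-compatible requests (/v1/...) to the AI Gateway.
    const body = req.method === "GET" ? undefined : await req.text();
    return fetch(GATEWAY + url.pathname.replace(/^\/v1/, ""), {
      method: req.method,
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}`,
      },
      body,
    });
  },
});
```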
Prerequisites
- Bun runtime
- AI Gateway API key (Vercel)
Environment Variables
Create or update .env in the repo root:
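A minimal .env might look like the following. The variable names are the ones referenced under Troubleshooting below; the values are placeholders, and 3001 simply matches the health-check URL used later in this page.

```
# Vercel AI Gateway key used by the local AI server
AI_GATEWAY_API_KEY=your-gateway-key

# Port the local server listens on; change it if it conflicts with another service
AI_SERVER_PORT=3001
```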
How It Starts
Autopilot Desktop starts the AI server during app setup. The server runs locally and exposes OpenAI-compatible endpoints:
- POST /v1/chat/completions
- POST /v1/embeddings
- GET /health
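Because the endpoints are OpenAI-compatible, any OpenAI-style client can point at the local server. A quick smoke test is sketched below, assuming the server is on port 3001 as in the health check; the model identifier is only an example and depends on what your gateway routes.

```ts
// Smoke test against the local AI server (assumes port 3001).
// The model slug below is illustrative, not a value defined by Adjutant.
const res = await fetch("http://localhost:3001/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini",
    messages: [{ role: "user", content: "Say hello in one word." }],
  }),
});

console.log(await res.json());
```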
Model Routing
Routing can choose a primary model with fallbacks per task type (planning vs. exploration vs. synthesis), allowing fast paths and recovery when a provider fails.
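One way to picture this is a small routing table keyed by task type. This is a sketch of the idea only, not Adjutant's actual configuration; the field names and model slugs are assumptions.

```ts
// Hypothetical routing table: a primary model plus ordered fallbacks per task type.
// Field names and model slugs are illustrative, not Adjutant's real config.
type TaskKind = "planning" | "exploration" | "synthesis";

interface Route {
  primary: string;     // fast-path model tried first
  fallbacks: string[]; // tried in order if the primary's provider fails
}

const routes: Record<TaskKind, Route> = {
  planning:    { primary: "anthropic/claude-sonnet-4", fallbacks: ["openai/gpt-4o"] },
  exploration: { primary: "openai/gpt-4o-mini",        fallbacks: ["google/gemini-2.0-flash"] },
  synthesis:   { primary: "openai/gpt-4o",             fallbacks: ["anthropic/claude-sonnet-4"] },
};

// A caller would try each candidate in order until one responds successfully.
function candidates(task: TaskKind): string[] {
  const { primary, fallbacks } = routes[task];
  return [primary, ...fallbacks];
}
```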
Troubleshooting
- Port conflicts: change AI_SERVER_PORT
- Invalid key: verify AI_GATEWAY_API_KEY
- Health check: curl http://localhost:3001/health