Docker and Local AI Infrastructure: The Service Layer
You do not discover infrastructure problems. You design them away. The engineer who starts a new service by asking "what port is free right now?" is building a system that will break in ways that are embarrassing to explain.
Port conflicts. Services that cannot find each other. Manual startup sequences that someone memorized and nobody documented. An environment that works on the machine where it was built and nowhere else.
Docker Compose is the answer to all of these — not because containers are inherently elegant, but because a compose file forces you to make deliberate decisions about every service before you write a line of application code.
docker-compose.yml is your infrastructure as code. It is the single source of truth for your local AI platform. Port assignments, network topology, service dependencies, volume mounts — all in one file, all versioned, all reviewable.
The 8-Service Stack
Here is the current local AI infrastructure, defined in ~/Documents/Dev/docker-compose.yml:
| Service | Port | Function |
|---|---|---|
| excalidraw | :3000 | Diagram and visual MCP server |
| rewired-media | :8792 | Rewired Minds content pipeline API |
| mc-backend | :8001 | Mission Control backend (Python) |
| mc-frontend | :5174 | Mission Control dashboard (React) |
| invictus-backend | :8000 | Invictus Labs core backend |
| invictus-frontend | :5173 | Invictus Labs main frontend (React) |
| agent-one-on-one | :8765 | Agent coaching service |
| capcut-mcp | :9000 | CapCut MCP server for content pipeline |
Eight services. All on one bridge network called invictus-net. All defined in one file. All startable with docker compose up -d.
This is not incidental. This is the design. When a new service joins the platform, it gets a port assigned to it before the code is written, added to the compose file before the first commit, and connected to invictus-net as a first-class network citizen from day one.
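As a sketch, a new service's compose entry under this discipline might look like the following. The service name, port, and image are hypothetical, chosen only to illustrate the pattern:

```yaml
services:
  # Hypothetical new backend: port :8002 is reserved in the backend
  # tier and committed to the compose file before any code exists.
  example-backend:
    image: python:3.11-slim
    ports:
      - "8002:8002"
    networks:
      - invictus-net
```

The entry is deliberately boring. The point is that it exists before the service does.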
The Bridge Network
The invictus-net bridge network is the coordination layer. Every service on it is discoverable by hostname — not IP, not localhost, not "whatever docker inspect says right now."
mc-backend reaches invictus-backend at http://invictus-backend:8000. Not at http://127.0.0.1:8000. Not at a hardcoded IP. By the service name defined in the compose file.
This matters for AI agent integration. When capcut-mcp at :9000 needs to call the mission control backend, it does not need to know where that service is running. It knows the name. The network handles the resolution. Service discovery is automatic, not configured per-connection.
```yaml
networks:
  invictus-net:
    driver: bridge

services:
  mc-backend:
    networks:
      - invictus-net
  capcut-mcp:
    networks:
      - invictus-net
```
Two services, one network, discoverable by name. That is the entire network configuration.
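In application code, name-based discovery reduces to building URLs from service names instead of addresses. A minimal sketch, assuming nothing beyond the standard library; the function name and the environment-variable override are illustrative, not part of the actual codebase:

```python
import os

def service_url(name: str, port: int, scheme: str = "http") -> str:
    """Build a URL for a peer service on the shared bridge network.

    Inside a container on invictus-net, Docker's embedded DNS resolves
    the compose service name to the right container. Outside Docker, an
    optional environment variable lets the same code point at localhost.
    """
    env_key = name.upper().replace("-", "_") + "_HOST"
    host = os.environ.get(env_key, name)
    return f"{scheme}://{host}:{port}"

# Inside a container: resolved by Docker DNS, no IP anywhere in the code.
print(service_url("invictus-backend", 8000))  # http://invictus-backend:8000
```

The override exists only for local, non-containerized runs; in the container the service name alone is enough.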
Image Selection: The Decision Matrix
Choosing the right base image is not a philosophical question. It is an operational one with known tradeoffs.
python:3.11-slim — the standard choice for Python services. Smaller than the full Python image. Still has pip, setuptools, and the C libraries most packages need. No Alpine compatibility issues. This is the default for mc-backend and invictus-backend.
Alpine Linux — smallest available footprint. Significant tradeoffs. Alpine uses musl libc instead of glibc, which means some compiled Python packages fail to install. Critically: Alpine resolves localhost differently from Debian-based images on some network configurations. Always use 127.0.0.1 explicitly in Alpine containers, not localhost. Node services on Alpine: no wget by default — use curl or install wget explicitly via apk add wget.
node:20-slim — for Node/TypeScript services. Same philosophy as python:3.11-slim. Slim unless you need Alpine's footprint.
The rule: use python:3.11-slim or node:20-slim as the default. Drop to Alpine only when image size is a hard constraint and you have verified your dependencies compile cleanly against musl.
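A Dockerfile under this rule stays short. A sketch for a slim-based Python service; the paths and the requirements file are illustrative, not taken from any service in the stack:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker caches this layer
# independently of application-code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY src/ ./src/
CMD ["python", "-m", "src.main"]
```

The dependency layer changes rarely; the code layer changes constantly. Ordering them this way keeps rebuilds fast.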
The Port Assignment Strategy
Every service in the stack has a port assigned before it is built. The port is not chosen at runtime. It is not "whatever was available." It is a deliberate, documented decision.
The strategy: group ports by function tier. :3000–:3999 for tooling and MCP servers. :5000–:5999 for frontends. :8000–:8999 for backends and APIs. :9000–:9999 for content pipeline services.
This makes the port range meaningful. When someone sees :8001 in a log line, they know it is a backend service without checking the compose file. When a new service is added, its tier determines its range, and the next free port in that range is recorded in the compose file before any code runs. Conflicts are avoided by convention, not discovered at runtime.
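The convention is simple enough to encode. A sketch of a lookup that makes the tiers executable; the tier labels mirror the ranges above and are otherwise arbitrary:

```python
# Port tiers from the assignment strategy: the range a port falls in
# tells you what kind of service owns it.
PORT_TIERS = {
    range(3000, 4000): "tooling/MCP",
    range(5000, 6000): "frontend",
    range(8000, 9000): "backend/API",
    range(9000, 10000): "content pipeline",
}

def tier_for_port(port: int) -> str:
    """Return the functional tier a port belongs to, per the convention."""
    for ports, tier in PORT_TIERS.items():
        if port in ports:  # O(1) membership test on a range object
            return tier
    return "unassigned"

print(tier_for_port(8001))  # backend/API
```

A check like this can run in CI against the compose file, so a service added to the wrong range fails review automatically.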
Service Health Checks
Docker Compose supports health check definitions. Use them.
```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 10s
```
With health checks defined, dependent services can use condition: service_healthy in their depends_on configuration. The database is not just "started" — it is healthy before the application server connects to it.
This eliminates an entire class of startup-order race conditions that would otherwise require sleep calls, retry loops, or manual startup sequences.
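Wired together, the dependency looks like this in the compose file. The db service and its image are illustrative, not part of the 8-service stack:

```yaml
services:
  mc-backend:
    depends_on:
      db:
        condition: service_healthy   # wait for healthy, not merely started

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
```

With this in place, `docker compose up -d` sequences the startup itself; no sleep calls, no retry loops in application code.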
The MCP Server as a Docker Service
capcut-mcp at :9000 is a containerized MCP server. Claude Code connects to it via the local network exactly as it would connect to any other MCP endpoint. The container boundary is invisible to the client.
This is the pattern for any tool that needs to run as a persistent service available to AI agents: containerize it, assign it a port, put it on invictus-net, add it to the compose file. The agent connects to it the same way regardless of what language it is written in, what dependencies it requires, or what operating system it was designed for.
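A sketch of that pattern applied to capcut-mcp. The build context, health endpoint, and restart policy are assumptions for illustration, not copied from the actual file:

```yaml
services:
  capcut-mcp:
    build: ./capcut-mcp          # hypothetical build context
    ports:
      - "9000:9000"
    networks:
      - invictus-net
    restart: unless-stopped
    healthcheck:
      # 127.0.0.1 rather than localhost, per the Alpine note above.
      test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```

From the agent's side this is just an HTTP endpoint at a known name and port; the container boundary never appears in the client configuration.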
> In preparing for battle I have always found that plans are useless, but planning is indispensable.
>
> — General Dwight D. Eisenhower, *Crusade in Europe*
The compose file is the planning artifact. It forces every architecture decision — service count, port assignment, network topology, volume strategy — to be made explicitly before anything runs. The plan may change. The discipline of planning does not.
The docker-compose.yml is not just operational tooling. It is the architectural record of your local AI platform. New engineers, new agents, and future-you six months from now can read it and understand the full system topology in ten minutes.
Volume Strategy
Mount specific files or subdirectories, not entire project directories. Whole-directory mounts bring in node_modules, .git directories, build artifacts, and anything else that happens to be present. They slow down container startup and pollute the container filesystem with development artifacts.
The principle: containers should contain exactly what they need to run, mounted from exactly the paths that contain it. Surgical volume mounts are a reliability requirement, not a performance optimization.
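As an illustration of the difference, with hypothetical paths:

```yaml
services:
  mc-backend:
    volumes:
      # Surgical: mount only what the service needs to run.
      - ./mc-backend/src:/app/src:ro
      - ./mc-backend/config.yaml:/app/config.yaml:ro
      # Avoid: a whole-directory mount drags in node_modules, .git,
      # and build artifacts along with the code.
      # - ./mc-backend:/app
```

The `:ro` flag is a further tightening: the container can read its code and config but cannot write back into the host tree.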
Starting and Managing the Stack
```shell
# Start all services in the background
docker compose up -d

# Start a specific service
docker compose up -d mc-backend

# View logs for a service
docker compose logs -f mc-backend

# Rebuild a service after code changes
docker compose up -d --build mc-backend

# Stop everything
docker compose down
```
The entire 8-service stack starts with one command. Every service is healthy, network-connected, and discoverable within thirty seconds. That is the value of the compose file done correctly.
Drill
If you are running three or more local services manually — started in separate terminals, on ad-hoc ports, with no documented startup procedure — write a compose file today. Start with two services on one shared bridge network. Define ports explicitly. Write a health check for each service.
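A minimal starting point under those constraints might look like this. Both services, their images, and their ports are placeholders for your own:

```yaml
networks:
  local-net:
    driver: bridge

services:
  api:
    image: python:3.11-slim
    ports:
      - "8000:8000"
    networks:
      - local-net
    healthcheck:
      # slim images ship no curl; use the interpreter already present.
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/health')"]
      interval: 30s
      timeout: 10s
      retries: 3

  web:
    image: node:20-slim
    ports:
      - "5000:5000"
    networks:
      - local-net
```

Two services, one network, explicit ports, one health check: the full discipline at the smallest possible scale.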
You do not need all eight services on day one. You need the discipline of the compose file from day one. Every service you add later inherits the structure. Every service you run without a compose entry is technical debt that accumulates silently until it breaks in the middle of something that matters.
Bottom Line: Docker Compose is the infrastructure layer that separates a local AI operating system from a collection of terminal windows. One compose file. One bridge network. Explicit port assignments. Named service discovery. Health checks. Surgical volume mounts. Eight services start in thirty seconds and communicate by hostname, not IP. That is the architecture. Build it once. Run it everywhere.