Run Aura in Docker

A reproducible Aura environment for CI pipelines, ephemeral dev containers, and self-hosted Mothership hubs.

The official Aura image lives at ghcr.io/naridon-inc/aura. It is a slim Debian-based image with the aura binary on PATH, no build toolchain, and sane defaults for both one-shot CLI use and running as a long-lived Mothership daemon. Multi-arch manifests cover linux/amd64 and linux/arm64, so it runs natively on Apple Silicon laptops and on ARM cloud instances.

This guide covers three usage shapes: one-shot CLI invocations, a persistent dev container, and a Docker Compose deployment of a team Mothership.

Prerequisites

  • Docker Engine 24+ or Docker Desktop 4.30+.
  • docker compose (the plugin, not the legacy docker-compose binary).
  • 500 MB of free disk for the image plus whatever your repositories consume.

Image tags

The image follows standard tagging conventions:

  • ghcr.io/naridon-inc/aura:latest — the latest stable release. Moves.
  • ghcr.io/naridon-inc/aura:0.14.1 — pinned to a specific version. Immutable.
  • ghcr.io/naridon-inc/aura:0.14 — floats within a minor.
  • ghcr.io/naridon-inc/aura:edge — the tip of main. Breaks occasionally. Do not use in CI unless you know what you are doing.

For reproducible CI runs, always pin to a specific version.
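A version tag names a release; a digest names exact bytes. For the strictest reproducibility, you can resolve the pinned tag to its digest once and reference the image by digest in CI. A sketch (the digest printed on your machine is what you would pin; nothing below is Aura-specific):

```shell
# Pull the pinned tag, then read back the registry digest it resolved to.
docker pull ghcr.io/naridon-inc/aura:0.14.1
docker image inspect --format '{{index .RepoDigests 0}}' \
  ghcr.io/naridon-inc/aura:0.14.1
# Reference the printed value in CI as:
#   ghcr.io/naridon-inc/aura@sha256:<digest-from-above>
```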

One-shot CLI usage

The simplest pattern: mount your repo into the container, run an Aura subcommand, exit.

docker run --rm \
  -v "$PWD":/work \
  -w /work \
  ghcr.io/naridon-inc/aura:0.14.1 \
  pr-review --base main

The --rm flag ensures the container is discarded after the command exits. The bind mount at /work is where Aura reads and writes your .aura/ directory.

For operations that need Git remote access — aura team join, aura live-sync-pull — pass your Git credentials and SSH config through:

docker run --rm \
  -v "$PWD":/work \
  -v "$HOME/.ssh":/root/.ssh:ro \
  -v "$HOME/.gitconfig":/root/.gitconfig:ro \
  -w /work \
  ghcr.io/naridon-inc/aura:0.14.1 \
  live-sync-pull

On Linux hosts, running as root inside the container and then writing to a host-mounted directory produces root-owned files on your host. Match the UID explicitly:

docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/work \
  -w /work \
  ghcr.io/naridon-inc/aura:0.14.1 \
  status

The image's entrypoint is aura, so only the subcommand needs to be passed:

docker run --rm -v "$PWD":/work -w /work \
  ghcr.io/naridon-inc/aura:0.14.1 status

CI pipeline usage

In GitHub Actions, a typical pre-merge semantic review step looks like this:

jobs:
  aura-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Aura PR review
        run: |
          docker run --rm \
            -v "$PWD":/work -w /work \
            ghcr.io/naridon-inc/aura:0.14.1 \
            pr-review --base origin/${{ github.base_ref }} --format json \
            > aura-review.json
      - uses: actions/upload-artifact@v4
        with:
          name: aura-review
          path: aura-review.json

fetch-depth: 0 matters: Aura's semantic diff needs access to the merge-base commit. Shallow clones will produce partial results.
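If a pipeline hands Aura a checkout that is already shallow, it can be deepened in place rather than re-cloned. A self-contained demonstration in a throwaway directory (the repository, remote name, and commit messages are all illustrative):

```shell
# A depth-1 clone cannot see the merge-base; `git fetch --unshallow`
# restores the full history.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "first"
git -C "$tmp/upstream" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "second"
git clone -q --depth 1 "file://$tmp/upstream" "$tmp/shallow"
git -C "$tmp/shallow" rev-parse --is-shallow-repository   # prints "true"
git -C "$tmp/shallow" fetch -q --unshallow origin
git -C "$tmp/shallow" rev-parse --is-shallow-repository   # prints "false"
```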

In GitLab CI:

aura-review:
  image: ghcr.io/naridon-inc/aura:0.14.1
  script:
    - aura pr-review --base $CI_MERGE_REQUEST_DIFF_BASE_SHA

GitLab sets CI_MERGE_REQUEST_DIFF_BASE_SHA on merge request pipelines; it is the correct base for a semantic diff.

Persistent dev container

For developers who treat the host as disposable and keep the toolchain containerized, run the image with a volume for .aura/ state so it survives container restarts.

docker volume create aura-home

docker run -it \
  --name aura-dev \
  -v aura-home:/root/.aura \
  -v "$PWD":/work \
  -w /work \
  --entrypoint bash \
  ghcr.io/naridon-inc/aura:0.14.1

The named volume aura-home stores your MCP agent memory, handover payloads, session history, and credentials. Per-repo state (<repo>/.aura/) lives in the bind mount and persists on your host.

For a VS Code dev container, add to your .devcontainer/devcontainer.json:

{
  "image": "ghcr.io/naridon-inc/aura:0.14.1",
  "overrideCommand": true,
  "mounts": [
    "source=aura-home,target=/root/.aura,type=volume"
  ],
  "remoteUser": "root",
  "customizations": {
    "vscode": {
      "extensions": ["auravcs.aura-vscode"]
    }
  }
}

Running Mothership under Docker Compose

The Mothership daemon is a long-lived process that accepts TCP connections from teammates. Docker Compose is a clean way to run it on a team-hub host, with a volume for persistent state and optional TLS termination in front.

docker-compose.yml:

services:
  mothership:
    image: ghcr.io/naridon-inc/aura:0.14.1
    container_name: aura-mothership
    restart: unless-stopped
    command: ["team", "serve", "--bind", "0.0.0.0:7777"]
    ports:
      - "7777:7777"
    volumes:
      - mothership-data:/var/lib/aura
    environment:
      AURA_DATA_DIR: /var/lib/aura
    healthcheck:
      test: ["CMD", "aura", "team", "status"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  mothership-data:

Bring it up:

docker compose up -d
docker compose logs -f mothership

Generate a team join token against the running container:

docker compose exec mothership aura team invite --expires 24h

Hand that token to teammates; they run aura team join <token> on their laptops to connect. See connecting to a team for the client flow.

TLS termination

Mothership speaks its own authenticated, end-to-end-encrypted protocol on the wire. TLS is not required for security — the join token doubles as a keying secret — but many corporate networks prefer traffic that looks like HTTPS. Put Caddy or nginx in front by adding a service alongside mothership in the Compose file:

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
    depends_on:
      - mothership

volumes:
  mothership-data:
  caddy-data:

Caddyfile:

mothership.example.com {
  reverse_proxy mothership:7777
}

Clients join as usual with aura team join <token>; the token encodes the endpoint Caddy fronts.

Backups

The mothership-data volume contains:

  • Function-level change log (the "live sync" ledger).
  • Team zone claims.
  • Unread messages and sentinel inbox.
  • Team knowledge snapshots.

Back it up nightly:

docker run --rm \
  -v mothership-data:/data:ro \
  -v "$PWD":/backup \
  alpine tar -czf /backup/mothership-$(date +%F).tar.gz -C /data .

Restore by extracting into a fresh volume before the container starts.
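A restore is the mirror image of the backup command above: create the volume, then extract into it before bringing the stack up. A sketch (the dated archive name is an example; substitute the backup you want):

```shell
# Create the empty volume, then unpack the archive into it.
docker volume create mothership-data
docker run --rm \
  -v mothership-data:/data \
  -v "$PWD":/backup:ro \
  alpine tar -xzf /backup/mothership-2025-06-01.tar.gz -C /data
```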

Volume layout

The image expects two state directories, both configurable via environment variables:

  • AURA_HOME (default /root/.aura) — per-user state: MCP memory, handover payloads, credentials, session history. Mount a volume here for persistent CLI sessions.
  • AURA_DATA_DIR (default /var/lib/aura in the Compose example above) — per-daemon state: the live-sync ledger, team zones, messaging. Mount a volume here for Mothership deployments.

Per-repository state — .aura/ at the root of each repo — is not controlled by these variables. It lives next to the code and is carried by whatever bind mount you use for your work directory.
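These defaults assume root. If you remap the UID as in the one-shot examples, the default AURA_HOME of /root/.aura is typically not writable, so point it somewhere the remapped user owns. A sketch (the /work/.aura-home path is an arbitrary choice, kept inside the user-owned bind mount so the remapped UID can write it):

```shell
# Remap the UID and relocate per-user state into the bind mount.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -e AURA_HOME=/work/.aura-home \
  -v "$PWD":/work \
  -w /work \
  ghcr.io/naridon-inc/aura:0.14.1 \
  status
```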

Upgrading

Pull the new tag and recreate the container:

docker compose pull
docker compose up -d

Aura's on-disk format is compatible across patch releases within a minor version. Between minors, the daemon runs a migration on first start; back up the volume first.

Troubleshooting

"Permission denied" writing to /work/.aura — UID mismatch. Run with --user "$(id -u):$(id -g)".

"inotify watch limit reached" — a Linux host kernel setting, not a Docker setting. Raise fs.inotify.max_user_watches on the host.

Hook not firing inside a dev container — Aura installs hooks by writing .git/hooks/pre-commit in your repo. If your Git operations happen outside the container, the hook will invoke the host's aura binary, which may not exist. Install Aura on the host too, or run Git commands from inside the container.

See the full troubleshooting guide for more.

Next steps