How Aura Works (High Level)

A tour of the engine, from parser to pre-commit hook. Just enough mechanism to understand what you're using.

This page is a conceptual tour of Aura's architecture. It skips implementation details in favor of the shape of the system — what the components are, what they do, and how they fit together. For the rationale behind specific choices (Rust, tree-sitter, P2P, Apache 2.0, MCP), see Design Decisions.

The tour moves from the inside out: parser, logic graph, merge engine, intent, Sentinel, Mothership, MCP, hook.

The layered picture

Aura is a stack of layers, each building on the ones below:

┌───────────────────────────────────────────────────┐
│  MCP server  •  CLI  •  Pre-commit hook           │  interfaces
├───────────────────────────────────────────────────┤
│  Sentinel    •  Live Sync  •  Mothership          │  collaboration
├───────────────────────────────────────────────────┤
│  Intent log  •  Snapshots  •  Impact alerts       │  provenance
├───────────────────────────────────────────────────┤
│  Semantic merge engine  •  Diff  •  Rewind        │  operations
├───────────────────────────────────────────────────┤
│  Logic graph (AST-normalized, content-addressed)  │  model
├───────────────────────────────────────────────────┤
│  Tree-sitter parsers  •  Git interop              │  foundation
└───────────────────────────────────────────────────┘

Everything above the logic graph speaks semantics. Everything below speaks syntax. The layer boundary is the point of the entire system.

Layer 1: the parser

The bottom of the stack is a set of tree-sitter parsers. Tree-sitter is a battle-tested incremental parser framework with grammars for nearly every mainstream language. Aura uses it because:

  • It is language-agnostic. One framework, dozens of languages.
  • It is fast. Sub-millisecond parses on typical files.
  • It is incremental. Small edits don't re-parse the whole file.
  • Its trees are stable and canonical, which is a precondition for content addressing.

When Aura ingests a source file, it runs the appropriate tree-sitter grammar and produces a concrete syntax tree. That tree is then normalized — whitespace collapsed, comments stripped for hashing purposes, certain trivially-equivalent forms canonicalized — into an Aura AST. The Aura AST is the input to the layer above.
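
The parse-normalize-hash pipeline can be sketched in a few lines. This is not Aura's implementation; it uses Python's `ast` module as a stand-in for tree-sitter, since parsing already discards comments and whitespace, which is the property content addressing relies on:

```python
import ast
import hashlib

def aura_id(source: str) -> str:
    """Content-address a snippet: parse, normalize, hash.
    Python's ast module stands in for tree-sitter here; the parse tree
    already omits whitespace and comments, so its dump is a usable
    normalized form."""
    tree = ast.parse(source)
    normalized = ast.dump(tree)  # canonical text form of the AST
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

# Reformatting and comments do not change the hash...
a = aura_id("def f(x):\n    return x + 1\n")
b = aura_id("def f(x):  return x+1   # comment\n")
assert a == b

# ...but a semantic change does.
c = aura_id("def f(x):\n    return x + 2\n")
assert a != c
```

The real normalizer also canonicalizes trivially-equivalent forms, which plain `ast.dump` does not.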

Layer 2: the logic graph

The logic graph is Aura's core data structure. It consists of:

  • A set of logic nodes — functions, methods, classes, modules, and language-specific analogues.
  • A set of edges — calls, imports, inheritance, usage.
  • Per-node identity: an aura_id, the hash of the node's normalized AST.
  • Per-node location history (which file, which range), tracked separately from identity.
  • Per-node metadata: intent annotations, agent provenance, checkpoints, zone claims.

Crucially, identity is content, and location is metadata. Moving a function across files changes location; it does not change identity. This is the single most important property of the logic graph, and every interesting Aura feature derives from it.

The graph is persisted under .aura/ in the repository. It is rebuilt from source on initial ingest and incrementally updated as files change.
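
The identity-versus-location split can be made concrete with a small sketch. The field names follow the text (aura_id, location history); everything else is an assumption about layout, not Aura's real schema:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class LogicNode:
    """Sketch of a logic-graph node: identity is content, location is
    metadata. The serialized-AST string stands in for the real normalized
    tree."""
    content: str                                    # normalized AST, serialized
    locations: list = field(default_factory=list)   # (file, line-range) history

    @property
    def aura_id(self) -> str:
        return hashlib.sha256(self.content.encode()).hexdigest()[:12]

    def move(self, file: str, line_range: tuple) -> None:
        """Moving a node appends to its location history; identity is untouched."""
        self.locations.append((file, line_range))

node = LogicNode(content="FunctionDef(name='calculate_tax', ...)")
node.move("billing.py", (10, 42))
before = node.aura_id
node.move("tax.py", (5, 37))        # relocated to another file
assert node.aura_id == before       # identity unchanged by the move
assert len(node.locations) == 2     # but the history remembers both homes
```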

Layer 3: diff, merge, and rewind

With the logic graph in hand, the core VCS operations become operations on the graph.

Diff

An Aura diff is a delta between two graph states. It reports:

  • Nodes added (new aura_ids).
  • Nodes removed (identities no longer present).
  • Nodes modified (same identity lineage, different content after some edit).
  • Nodes moved (identity stable, location changed).
  • Nodes renamed (identity stable, name changed, location possibly unchanged).

This is strictly more informative than a text diff, and it is blind to the noise text diffs suffer from (reformatting, renaming, reordering).
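
The classification above can be sketched as a comparison of two graph states. Here each graph maps a stable lineage id to a (content hash, location) pair, a simplification of the real model; the rename case is folded out for brevity:

```python
def graph_diff(old: dict, new: dict) -> dict:
    """Classify node changes between two graph states. Keys are lineage ids
    (stable across edits); values are (content_hash, location) pairs."""
    report = {"added": [], "removed": [], "modified": [], "moved": []}
    for lid, (h, loc) in new.items():
        if lid not in old:
            report["added"].append(lid)
        else:
            old_h, old_loc = old[lid]
            if h != old_h:
                report["modified"].append(lid)   # same lineage, new content
            elif loc != old_loc:
                report["moved"].append(lid)      # same content, new home
    report["removed"] = [lid for lid in old if lid not in new]
    return report

old = {"f": ("h1", "a.py"), "g": ("h2", "a.py")}
new = {"f": ("h1", "b.py"), "g": ("h3", "a.py"), "k": ("h4", "a.py")}
d = graph_diff(old, new)
assert d["moved"] == ["f"] and d["modified"] == ["g"] and d["added"] == ["k"]
```

Note that a reformatted-but-unchanged function produces the same content hash and so appears in no bucket at all, which is exactly why these diffs are quiet where text diffs are noisy.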

Merge

Aura's semantic merge engine performs three-way merge on the graph, not on the text. The common ancestor, the two branch states, and the merge target are all graphs. The merge operates node by node:

  • Nodes modified on only one branch: accept the modification.
  • Nodes moved on only one branch: accept the move.
  • Nodes renamed on only one branch: accept the rename.
  • Nodes modified on both branches in compatible ways: merge the subtrees.
  • Nodes modified on both branches in conflicting ways: surface a node-level conflict, with a structured description of the disagreement.

Because identity is content, most noisy "conflicts" that Git produces disappear entirely. The conflicts that remain are the meaningful ones.
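
The per-node decision table reduces to classic three-way merge logic, applied to node content rather than text lines. A minimal sketch (the real engine additionally merges compatible subtree edits, which this collapses into the conflict case):

```python
def merge_node(base, left, right):
    """Three-way merge decision for one node's content.
    Returns (result, conflict)."""
    if left == right:
        return left, False      # identical on both branches
    if left == base:
        return right, False     # changed only on the right branch
    if right == base:
        return left, False      # changed only on the left branch
    return None, True           # changed on both: node-level conflict

assert merge_node("v0", "v0", "v1") == ("v1", False)  # one-sided edit wins
assert merge_node("v0", "v1", "v1") == ("v1", False)  # same edit on both sides
assert merge_node("v0", "v1", "v2") == (None, True)   # real disagreement
```

Moves and renames merge the same way, just over the location and name fields instead of content, which is why a move on one branch never conflicts with an edit on the other.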

Rewind

aura rewind is a surgical revert at the node level. Instead of reverting a commit (which reverts a whole text snapshot), rewind reverts a single function or class to its last safe state. Because every node is content-addressed and every edit is recorded, Aura can reconstruct any node's historical state and substitute it back into the current graph without disturbing the rest of the repository.

This operation has no clean analogue in Git. In Git, reverting one function is a manual editing exercise.
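
Conceptually, rewind is a single-key substitution into the current graph. The history layout below (an ordered list of content hashes per node) is an assumption for illustration, not Aura's storage format:

```python
def rewind(history: dict, graph: dict, node: str, steps: int = 1) -> dict:
    """Surgical revert: substitute one node's earlier content back into the
    current graph, leaving every other node untouched. `history` maps a node
    to the ordered list of content hashes it has had."""
    past = history[node][-(steps + 1)]   # the state `steps` edits ago
    return {**graph, node: past}         # everything else is preserved

history = {"calculate_tax": ["h1", "h2", "h3"]}
graph = {"calculate_tax": "h3", "format_invoice": "h9"}
reverted = rewind(history, graph, "calculate_tax")
assert reverted == {"calculate_tax": "h2", "format_invoice": "h9"}
```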

Layer 4: provenance — intent, snapshots, impact alerts

Above the operational layer sits the provenance layer, which is where Aura's opinions about why code changes live.

Intent log

Every commit carries a logged intent string (for machine consumption) and, optionally, richer structured fields. The intent log is a durable, append-only record, tied cryptographically to the AST diffs it describes. aura_log_intent is the primary interface; the CLI, MCP tools, and hooks all write to the same log.
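
"Durable, append-only, tied cryptographically to the diffs" suggests a hash chain. A sketch of that idea, with an assumed entry layout (the real log format is not specified here):

```python
import hashlib
import json

def append_intent(log: list, intent: str, ast_diff: dict) -> None:
    """Append one entry to a hash-chained intent log. Each entry commits to
    the previous entry's hash and to the AST diff it describes, so any
    tampering with history breaks the chain."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    body = json.dumps({"intent": intent, "diff": ast_diff, "prev": prev},
                      sort_keys=True)
    log.append({"intent": intent, "diff": ast_diff, "prev": prev,
                "entry_hash": hashlib.sha256(body.encode()).hexdigest()})

log: list = []
append_intent(log, "fix rounding in calculate_tax",
              {"modified": ["calculate_tax"]})
append_intent(log, "extract invoice formatting helper",
              {"added": ["format_line"]})
assert log[1]["prev"] == log[0]["entry_hash"]   # entries are chained
```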

Snapshots

Before a meaningful edit, Aura can take a snapshot — a durable, file-granular backup that survives even destructive local operations. Snapshots are what aura rewind sometimes uses as a fallback when it cannot reconstruct from the graph alone. Snapshots also participate in team zone checks: taking a snapshot of a file claimed by a teammate triggers a zone warning or block.
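
A sketch of the snapshot-plus-zone-check interaction, with an assumed store layout (content-hash keys, as elsewhere in the system):

```python
import hashlib
import time

def take_snapshot(store: dict, path: str, content: str,
                  claims: dict, agent: str) -> str:
    """Store a durable copy of a file, keyed by content hash, after running
    the zone check described above. Raising on a claimed file models the
    'block' configuration; a 'warn' configuration would log and continue."""
    holder = claims.get(path)
    if holder and holder != agent:
        raise PermissionError(f"{path} is claimed by {holder}")
    key = hashlib.sha256(content.encode()).hexdigest()[:12]
    store[key] = {"path": path, "content": content, "at": time.time()}
    return key

store, claims = {}, {"auth.py": "teammate"}
key = take_snapshot(store, "billing.py", "def tax(): ...", claims, "me")
assert store[key]["path"] == "billing.py"

try:                                       # claimed file: snapshot is blocked
    take_snapshot(store, "auth.py", "x", claims, "me")
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```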

Impact alerts

The logic graph tracks edges between nodes. When a node is modified on a branch, Aura can compute the set of other nodes that depend on it. If a teammate's branch modifies a function you depend on, Aura surfaces an impact alert — a structured notification that a function under your influence is being reshaped elsewhere. You can resolve it (after adjusting your code) or acknowledge it (if it doesn't affect you).

Impact alerts are how long-running branches stay informed about each other without requiring constant merging.
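
Computing the alert set is reverse reachability over the dependency edges. A minimal sketch, assuming `edges` maps each node to the nodes it calls or imports:

```python
def impacted(edges: dict, changed: str) -> set:
    """Every node that transitively depends on a changed node: invert the
    call/import edges, then walk outward from the changed node."""
    reverse: dict = {}
    for src, targets in edges.items():
        for dst in targets:
            reverse.setdefault(dst, set()).add(src)
    seen, stack = set(), [changed]
    while stack:
        for dep in reverse.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

edges = {"checkout": {"calculate_tax"},   # checkout calls calculate_tax
         "report": {"checkout"},          # report calls checkout
         "auth": set()}
assert impacted(edges, "calculate_tax") == {"checkout", "report"}
```

In the cross-branch case, `changed` comes from a teammate's branch, and each node in the result set owned by you becomes one alert.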

Layer 5: collaboration — Sentinel, Live Sync, Mothership

The collaboration layer turns single-user Aura into a team tool.

Mothership

Mothership is a self-hosted hub. It accepts inbound connections from team members' Aura instances, authenticated with JWT, over TLS. It stores:

  • The team's shared logic graph.
  • Intent logs across the team.
  • Zone claims, impact alerts, and agent messages.
  • Team presence and active-session state.

Mothership is designed to run on infrastructure you control. It is a single binary (plus a TLS certificate and a JWT secret). It can be deployed via Docker, Kubernetes, or a systemd unit on a VM. There is no third-party dependency.

Teams that prefer not to self-host can use a hosted Mothership, but the self-hosted path is the default and the one Aura optimizes for.

Live Sync

Live Sync is the real-time function-level synchronization between team members, refreshed every five seconds. Unlike commit-push-pull cycles, Live Sync operates at the level of individual function bodies. When a teammate (or their agent) modifies calculate_tax, your local Aura instance sees the new body within seconds, without anyone committing or pushing.

This enables the workflows Aura is known for:

  • Seeing that a function you depend on is being rewritten in real time.
  • Cross-branch impact alerts before any merge happens.
  • Agent collision detection across teammates' local sessions.

Live Sync is always function-granular, never file-granular. It never tries to apply live text to your working copy; it surfaces structured information your tools can react to.
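
A single Live Sync poll is, in essence, a per-function hash comparison between the local graph and the hub's. A sketch under that assumption (names and the delta shape are illustrative):

```python
def live_sync_delta(local: dict, remote: dict) -> dict:
    """One poll of the five-second refresh: compare function bodies by
    content hash and report per-function changes. Nothing here touches the
    working copy; the delta is structured information for tools to react to."""
    changed = {name: h for name, h in remote.items() if local.get(name) != h}
    removed = [name for name in local if name not in remote]
    return {"changed": changed, "removed": removed}

local = {"calculate_tax": "h1", "format_invoice": "h2"}
remote = {"calculate_tax": "h7", "format_invoice": "h2"}  # teammate edited tax
delta = live_sync_delta(local, remote)
assert delta == {"changed": {"calculate_tax": "h7"}, "removed": []}
```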

Sentinel

Sentinel is the agent coordination protocol. It runs on top of Mothership (or standalone, for local multi-agent setups). Sentinel gives agents:

  • Presence — who else is active in this repo.
  • Messages — point-to-point or broadcast communication between agents.
  • Zone claims — "I'm editing auth.py for the next 20 minutes, warn others."
  • Collision detection — "Another agent is modifying the same function right now."
  • Vendor-neutral routing — Claude, Cursor, Gemini, Copilot all participate.

Agents drive Sentinel through MCP tools: aura_sentinel_send, aura_sentinel_inbox, aura_sentinel_agents, aura_zone_claim. Humans can drive it through the CLI.
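
The zone-claim mechanics can be sketched as a registry with expiring leases. The class and method names here are illustrative, not the real Sentinel surface (agents reach it via `aura_zone_claim`, as above):

```python
import time

class ZoneRegistry:
    """Sketch of Sentinel zone claims: an agent claims a path for a
    duration; edits by other agents inside an active claim get a warning."""
    def __init__(self):
        self.claims = {}                 # path -> (agent, expires_at)

    def claim(self, path: str, agent: str, minutes: int) -> None:
        self.claims[path] = (agent, time.time() + minutes * 60)

    def check(self, path: str, agent: str):
        """Return a warning string if another agent holds an unexpired
        claim on this path, else None."""
        holder = self.claims.get(path)
        if holder and holder[1] > time.time() and holder[0] != agent:
            return f"{path} is claimed by {holder[0]}"
        return None

zones = ZoneRegistry()
zones.claim("auth.py", "claude-1", minutes=20)
assert zones.check("auth.py", "cursor-2") == "auth.py is claimed by claude-1"
assert zones.check("auth.py", "claude-1") is None   # your own claim is fine
assert zones.check("billing.py", "cursor-2") is None
```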

Layer 6: interfaces — MCP, CLI, pre-commit hook

At the top of the stack are the surfaces through which humans and agents actually interact with Aura.

The CLI

The aura binary exposes every core operation: status, diff, merge, log intent, snapshot, rewind, pr-review, doctor, session management, team operations. It is the primary interface for humans.

The MCP server

The MCP server exposes the same functionality — and a bit more — as Model Context Protocol tools. Any MCP-speaking agent can call them. There are roughly 29 tools covering:

  • State inspection (aura_status, aura_live_sync_status, aura_live_impacts).
  • Intent (aura_log_intent).
  • Planning (aura_plan_discover, aura_plan_lock, aura_plan_next).
  • Verification (aura_prove, aura_pr_review).
  • Snapshots and rewind (aura_snapshot, aura_snapshot_list, aura_rewind).
  • Team messaging (aura_msg_send, aura_msg_list).
  • Agent coordination (aura_sentinel_send, aura_sentinel_inbox, aura_sentinel_agents, aura_zone_claim).
  • Sync (aura_live_sync_pull, aura_live_sync_push).
  • Diagnostics (aura_doctor).
  • Context compression (aura_handover).

The MCP server is how agents become first-class citizens. It is not optional; it ships with every Aura installation.

The pre-commit hook

The hook is what makes intent validation load-bearing. On every commit, the hook:

  1. Reads the logged intent for the current commit.
  2. Computes the AST-level diff against the commit's parent.
  3. Compares the stated intent against the actual structural changes.
  4. Flags, warns, or blocks based on configured strictness.
  5. Optionally triggers auto-push to Mothership, informing teammates of your changes.

With strict mode on, commits whose intent does not match their changes are blocked. Teams can configure what "match" means and how strict the threshold is.
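
Step 3 is the interesting one. As a deliberately crude stand-in for the configurable matcher, the sketch below treats "match" as "every changed node is mentioned in the intent string":

```python
def precommit_check(intent: str, ast_diff: dict, strict: bool = True) -> bool:
    """Compare stated intent against the structural diff (step 3 above).
    Returns False to block the commit in strict mode. The real matcher is
    configurable; substring matching here is purely illustrative."""
    touched = [n for kind in ("added", "removed", "modified")
               for n in ast_diff.get(kind, ())]
    unmentioned = [n for n in touched if n not in intent]
    if unmentioned and strict:
        print(f"blocked: intent does not mention {unmentioned}")
        return False
    return True

ok = precommit_check("fix rounding in calculate_tax",
                     {"modified": ["calculate_tax"]})
bad = precommit_check("fix rounding in calculate_tax",
                      {"modified": ["calculate_tax", "delete_user"]})
assert ok and not bad   # the undeclared delete_user change blocks the commit
```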

The end-to-end flow

Follow a typical edit through the system:

  1. Agent or human opens a file. The file is parsed by tree-sitter; the logic graph is updated.
  2. Agent calls aura_snapshot to back up the file. Zone check runs; if another teammate has claimed this file, a warning or block is emitted.
  3. Agent edits. Tree-sitter reparses incrementally; the graph updates.
  4. Agent calls aura_log_intent with a statement of purpose.
  5. Agent commits. The pre-commit hook runs, validates intent vs AST diff, and either blocks or lets it through. On success, the changes are auto-pushed to Mothership.
  6. Mothership receives the changes. Teammates' Aura instances pull them via Live Sync within five seconds.
  7. Teammates with dependencies see an impact alert. They resolve or acknowledge.
  8. Other agents in the Sentinel network see the update and avoid collisions.
  9. At merge time, the semantic merge engine operates on the graph, producing node-level conflicts only where real semantic disagreement exists.

No part of this flow requires a third-party service. Everything runs on the team's infrastructure.

What this gets you

The layered architecture is not an aesthetic choice. It is what makes the user-facing promises true:

  • Semantic diff because the graph is the primary object.
  • Semantic merge for the same reason.
  • Intent validation because the diff is structured enough to compare.
  • Rename-stable identity because identity is content-addressed.
  • Real-time cross-branch awareness because Live Sync operates on graph deltas.
  • Agent coordination because Sentinel sits on the shared substrate.
  • Sovereignty because nothing in the stack requires a third party.

Remove any layer and the promise above it weakens or disappears. Replace any layer with a text-based alternative and the promise breaks entirely.

What this costs

The architecture is not free:

  • Parsing cost: every file has to be parsed. Tree-sitter makes this cheap but not zero.
  • Graph storage cost: .aura/ is not large, but it is not nothing (typically 5–15% of repo size).
  • Operational complexity: Mothership is another service to run. Small teams can skip it; large teams benefit.
  • Learning curve: intent, impact alerts, zone claims, Sentinel — new concepts to learn.

We believe the cost is worth it for the workloads Aura targets. If yours is not among them, Who Should Use Aura will say so frankly.

Next