# Key Design Decisions

> Every meaningful choice in Aura comes with a rationale. These are the five decisions that shape the rest of the system.

Software projects accumulate decisions. Some are principled; some are accidental; some are inherited from the tools available at the moment of first commit. For Aura, five decisions matter more than the rest because they are structural — they determine what the project can become. This page explains each one, what alternatives were considered, and why the chosen path won.

The five:

1. **Why Rust** — for the engine.
2. **Why tree-sitter** — for the parser.
3. **Why P2P** — for team collaboration.
4. **Why Apache 2.0** — for the license.
5. **Why MCP** — for agent integration.

## Decision 1: Why Rust

### The choice

The Aura engine, CLI, merge engine, Mothership, Sentinel, Live Sync, and MCP server are all written in Rust.

### The reasoning

Aura is infrastructure. It sits underneath editors, CI systems, agents, and review tools, and it runs on every engineer's machine continuously. Infrastructure has three non-negotiable requirements: **performance**, **safety**, and **portability**.

- **Performance**: a VCS primitive runs thousands of times per working day. `aura status`, `aura diff`, hash computations, AST normalizations — if any of these takes hundreds of milliseconds, the tool is unusable. Rust compiles to native code with no runtime, no garbage collector, and predictable latency. Tree-sitter itself is written in C and Rust for the same reason.
- **Safety**: Aura manages structured data that represents people's source code. A crash in the engine can corrupt the logic graph. Rust's ownership model and type system catch entire classes of bugs — use-after-free, data races, null dereferences — at compile time. The alternative languages that match Rust's performance (C, C++) do not match its safety.
- **Portability**: Rust produces single-binary executables for every major platform (Linux x86_64, Linux aarch64, macOS x86_64, macOS aarch64, Windows x86_64). Install is a download. No runtime dependencies. This is the right shape for a tool that must work on every engineer's laptop and every CI runner.

### Alternatives considered

- **Go** — also produces single binaries and compiles fast. But Go's type system is weaker (no sum types, no pattern matching), and its garbage collector introduces latency that matters for interactive tools. Go is excellent for services; Rust is better for engines.
- **C++** — performance is there, portability is there. Safety is not. For a tool that must be trusted with source code, C++'s memory-safety footprint is unacceptable.
- **TypeScript/Node** — excellent for ecosystem reach, but not for performance-critical engines. Node's startup time alone would ruin `aura status`.
- **OCaml, Haskell** — great type systems, but small communities and rough cross-platform stories. Hard to hire for. Hard to distribute.

Rust wins because it is the only language that meets all three requirements — performance, safety, portability — without meaningful compromise.

### Secondary benefits

- A strong ecosystem of libraries (serde, tokio, clap, reqwest) that let the project move fast.
- Cargo's dependency and build tooling is first-class.
- Rust attracts a workforce that cares about correctness and infrastructure, which is the culture the project wants.
- Binary reproducibility is straightforward, which matters for the [cryptographically verifiable principle](/core-principles).

## Decision 2: Why tree-sitter

### The choice

Aura uses tree-sitter as its parser framework, with the full set of tree-sitter grammars supplying per-language support.

### The reasoning

A semantic VCS has to understand source code. The question is how broadly.
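One way to see what "understanding source code" demands of the parser: whatever tree it produces must be canonical enough to content-address. A dependency-free sketch in plain Rust — the `Node` type is a hypothetical stand-in for a parsed syntax node, and `DefaultHasher` stands in for the stable cryptographic hash (e.g. SHA-256) a real engine would use:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for a parsed syntax node; in Aura this role
// would be played by the tree the parser framework returns.
#[derive(Hash)]
struct Node {
    kind: &'static str,  // e.g. "function_item"
    children: Vec<Node>, // child order must be stable for determinism
}

fn leaf(kind: &'static str) -> Node {
    Node { kind, children: vec![] }
}

// Content-address a tree: structurally identical trees always hash to
// the same value. A real engine would use a cryptographic hash; the
// std hasher keeps this sketch free of dependencies.
fn tree_hash(node: &Node) -> u64 {
    let mut hasher = DefaultHasher::new();
    node.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = Node { kind: "function_item", children: vec![leaf("identifier"), leaf("block")] };
    let b = Node { kind: "function_item", children: vec![leaf("identifier"), leaf("block")] };
    // Same structure, same address — the property a semantic VCS depends on.
    assert_eq!(tree_hash(&a), tree_hash(&b));
    println!("tree hash: {:x}", tree_hash(&a));
}
```

Any parser framework Aura adopts has to produce trees with this determinism property, for every language the project supports.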
Options ranged from writing bespoke parsers for a few key languages, to relying on compiler front-ends (Roslyn, Clang, go/parser), to using a generic parser generator. Tree-sitter won decisively because it solves every requirement at once:

- **Language-agnostic**: tree-sitter grammars exist for every mainstream language — Rust, Python, TypeScript, JavaScript, Go, Java, C, C++, C#, Ruby, PHP, Swift, Kotlin, Scala, Haskell, OCaml, Elixir, Erlang, Lua, Shell, HTML, CSS, SQL — and many less mainstream ones. Aura gains multi-language support for free.
- **Incremental**: tree-sitter is designed for editor integration, where parsers must reparse in sub-millisecond time after small edits. This performance profile suits Aura perfectly, because Aura is updating its logic graph continuously as files change.
- **Error-tolerant**: tree-sitter parses incomplete or syntactically invalid code into a partial tree rather than failing. This matters because real working copies are often syntactically invalid between keystrokes.
- **Battle-tested**: GitHub, Neovim, Zed, Atom (historically), and many other editors use tree-sitter in production. The bugs have been shaken out over years.
- **Stable trees**: the parse trees are canonical and deterministic, which is a prerequisite for content-addressing an AST. Two parses of the same source always produce the same tree.

### Alternatives considered

- **Bespoke parsers per language** — would take years. Would never reach the long tail of languages.
- **Compiler front-ends (LLVM, Roslyn)** — heavy dependencies, language-specific, slow to invoke, the wrong shape for a VCS.
- **Universal Abstract Syntax Network / semantic** — interesting research, small community, not production-ready for our use.
- **ast-grep** — a great tool that uses tree-sitter underneath. Good for certain workflows, but not a VCS primitive.

Tree-sitter's unique property is that it gives you **one framework, one API, and dozens of languages**.
For a project with a multi-language mandate from day one, it was the only viable choice.

### Tradeoffs

Tree-sitter parses concrete syntax, not resolved semantics. It does not know that `foo` in file A refers to `foo` in file B — name resolution is outside its scope. Aura handles name resolution separately, in the logic-graph layer, using heuristics and (for stronger cases) language-specific plugins. This is the right architectural split: tree-sitter owns the syntax layer, where it excels, and Aura owns the semantic layer.

## Decision 3: Why P2P

### The choice

Team collaboration in Aura is **peer-to-peer through a self-hosted Mothership**, not centralized through a third-party cloud.

### The reasoning

The collaboration story is where Aura's sovereignty principle translates most directly into architecture. The dominant model for team VCS — GitHub, GitLab.com, Bitbucket — centralizes everything on a US-hosted service. This model is convenient but increasingly incompatible with the workloads Aura targets:

- **Regulatory**: the EU AI Act, GDPR, UK data-residency rules, defense regulations, healthcare rules, finance rules — all push against third-party hosting.
- **Geopolitical**: sanctions, export controls, and sudden policy shifts have made single-provider dependence a board-level risk.
- **Contractual**: many defense and government customers contractually require self-hosted infrastructure.
- **Trust**: agents now route through third-party APIs; adding another third party for code hosting multiplies risk.

P2P — where each team member's Aura instance connects to a team-owned Mothership, and Motherships can federate — answers all of these.

- **Self-hosted**: the team owns the hardware, the jurisdiction, the keys.
- **No forced cloud**: the default workflow never requires an external service.
- **Federation-ready**: multi-region teams can run multiple Motherships that sync at the edge.
- **Open source**: the Mothership binary is the same open-source code as everything else.

### Alternatives considered

- **Centralized SaaS as the default** — would have been easier commercially. Violates the sovereignty principle. Non-starter.
- **True mesh P2P with no server** — elegant in theory, operationally painful (NAT traversal, presence, authentication, replay protection). Mothership is a pragmatic middle path: a single team-owned server that every team member connects to, with the option of federating across multiple.
- **Federated protocols (ActivityPub-style)** — overkill for team-internal collaboration. Would add complexity without proportionate benefit.

The Mothership model — one self-hosted hub per team, optional federation between hubs — is the right unit of granularity. Teams own their coordination layer. Multi-team organizations can federate. Nobody else is in the loop.

### Tradeoffs

Self-hosting is not free. Someone has to run the Mothership. Aura invests heavily in making this easy (single binary, Docker, Kubernetes manifests), but it is still an operational task. For teams that cannot self-host, a hosted Mothership offering exists — as a commercial product, on top of the same open-source engine. The engine does not change.

## Decision 4: Why Apache 2.0

### The choice

Every part of Aura's engine — CLI, merge engine, Mothership, Sentinel, Live Sync, MCP server — is licensed under Apache 2.0.

### The reasoning

The license was not a marketing decision. It was a structural one. The four candidate licenses were MIT, Apache 2.0, MPL 2.0, and AGPL. Each makes a different tradeoff.

- **MIT** — maximally permissive. Permits anything, including silent proprietary forks. No explicit patent grant, which is problematic for a project that touches novel algorithms and wants to avoid future patent entanglement.
- **Apache 2.0** — permissive with an **explicit patent grant**. Commonly accepted by enterprises and governments.
Compatible with almost every other license. The most widely adopted permissive license among infrastructure projects (Kubernetes, Kafka, most of the CNCF).
- **MPL 2.0** — weak copyleft at the file level. Would discourage certain forms of commercial integration.
- **AGPL** — strong copyleft extending to over-the-network use. Would discourage exactly the organizations we want to adopt Aura (large enterprises, government, regulated industries).

Apache 2.0 won because:

1. **Patent grant**: Aura's semantic merge techniques overlap with active research and commercial interest. An explicit patent grant protects contributors and users.
2. **Enterprise acceptance**: Apache 2.0 is on almost every enterprise's pre-approved license list. MIT usually is too; AGPL almost never is.
3. **Government acceptance**: Apache 2.0 is a standard license for government-adopted infrastructure in both the US and the EU.
4. **Ecosystem norm**: the infrastructure Aura builds on is permissively licensed — Kubernetes and most of the CNCF stack under Apache 2.0, Rust dual-licensed MIT/Apache 2.0, tree-sitter under MIT.

### What Apache 2.0 does and does not allow

- **Permits**: use, modification, redistribution, commercial use, integration into proprietary software.
- **Requires**: attribution, a copy of the license, notice of changes to modified files.
- **Grants**: an explicit patent license from contributors.
- **Does not require**: source disclosure, reciprocal licensing, or network-use provisions.

This is the correct shape for a substrate. Substrates win when they are broadly adoptable with minimum friction. Apache 2.0 minimizes friction.

### Why not "open core"

A common commercial model is open core: a permissive license on the basics, a proprietary license on the advanced features. Aura explicitly does not do this for the engine. Every semantic feature is open. This is a deliberate choice, tied to the [core principles](/core-principles): sovereignty and trust require that the substrate itself be fully open.
The company monetizes through offerings *around* the engine — hosted Mothership, enterprise support, web dashboard, forge integrations. The engine stays open. This is the same pattern as MongoDB, Elastic, GitLab, and many others.

## Decision 5: Why MCP

### The choice

Aura exposes its capabilities as **Model Context Protocol (MCP)** tools, making it natively drivable by any MCP-speaking AI agent.

### The reasoning

MCP is an open protocol for connecting AI agents to external tools and data sources. Introduced by Anthropic in late 2024, it has since been adopted by major AI vendors and agent frameworks. It plays the role for agents that HTTP plays for browsers or LSP plays for editors: a shared interface that lets many clients talk to many servers.

For Aura, MCP is the right integration surface because:

- **Vendor-neutral**: Claude, Cursor, Gemini, Copilot Agent, and custom frameworks all speak MCP. Aura gets cross-vendor support for free.
- **Structured**: MCP tools are strongly typed, documented, and introspectable. Agents discover them and use them correctly, without prompt engineering.
- **Stateful when useful**: MCP supports long-running sessions, useful for things like `aura_handover` and session resume.
- **Open standard**: not owned by any single vendor. Fits the [agent-native principle](/core-principles).

The alternative — writing a separate integration for each agent vendor (a Claude plugin, a Cursor extension, a Gemini integration, a Copilot bridge, a LangChain tool) — is a never-ending maintenance burden and fragments the ecosystem. MCP collapses N integrations into one.

### Alternatives considered

- **Per-vendor plugins** — unbounded maintenance. Always lags. Fragments the ecosystem.
- **Language Server Protocol (LSP)** — designed for editor tooling, not agent tooling. Wrong shape.
- **Custom JSON-RPC surface** — every agent framework would have to adapt. Same problem as per-vendor plugins.
- **CLI-only with agents shelling out** — possible, but loses the structure, discoverability, and session semantics that MCP provides.

MCP was the right choice as soon as it existed. Aura was an early adopter and ships with 29+ MCP tools covering the full feature surface.

### The knock-on effect

Because Aura is MCP-native, every feature that ships lands simultaneously for humans (via the CLI) and for agents (via MCP). There is no agent lag. This is what makes the [agent-native principle](/core-principles) operational rather than aspirational.

## How the five decisions interact

These decisions are not independent:

- **Rust + tree-sitter** give the engine performance and language reach. Either without the other would be a compromise.
- **P2P + Apache 2.0** make sovereignty real. A P2P architecture under a restrictive license would scare off enterprises; an Apache 2.0 project with a SaaS-only model would fail the sovereignty test.
- **MCP + the agent-native principle** make the tool relevant to where software development is going, not just where it has been.
- **Rust + Apache 2.0 + MCP** together are the reason Aura has a shot at becoming the default substrate: fast, safe, cross-vendor, broadly licensed.

Remove any one of these decisions and Aura is a different, weaker project. Together, they compose into a tool that has a plausible path to the substrate role.

## What we will not change

These five decisions are stable. They are not revisited lightly.

- The engine language will remain Rust.
- The parser framework will remain tree-sitter (additional language support may require per-language plugins; the framework stays).
- The collaboration architecture will remain self-hosted and P2P-capable by default.
- The license will remain Apache 2.0.
- The agent surface will remain MCP-native.

Changing any of these would represent a direction change for the project. They are not on the table.
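That last commitment is concrete enough to sketch. An MCP server advertises each tool as a name, a description, and a JSON Schema for its input — this is what lets agents discover and call tools without prompt engineering. A minimal dependency-free sketch in Rust (the `aura_status` tool and its schema are illustrative, not Aura's actual surface; a real server would build this JSON with serde and an MCP SDK):

```rust
// Sketch of an MCP-style tool declaration. The field names follow the
// MCP tool-listing shape (name / description / inputSchema); the
// specific tool and schema are illustrative, not Aura's real API.
struct ToolDescriptor {
    name: &'static str,
    description: &'static str,
    input_schema: &'static str, // JSON Schema, kept as a raw string here
}

// Render the descriptor as the JSON an MCP client would receive in a
// tools/list response. A real server would use serde_json rather than
// hand-formatting strings.
fn to_json(tool: &ToolDescriptor) -> String {
    format!(
        "{{\"name\":\"{}\",\"description\":\"{}\",\"inputSchema\":{}}}",
        tool.name, tool.description, tool.input_schema
    )
}

fn main() {
    let status = ToolDescriptor {
        name: "aura_status",
        description: "Summarize working-copy changes at the semantic level",
        input_schema: r#"{"type":"object","properties":{"path":{"type":"string"}}}"#,
    };
    println!("{}", to_json(&status));
}
```

Because the declaration is structured data, any MCP-speaking agent can introspect it; nothing in it is specific to one vendor.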
## What we will iterate on

Everything above the decisions layer is subject to iteration:

- Specific tool signatures.
- On-disk format details (within the stability guarantees).
- Workflow defaults.
- Naming conventions.
- UI affordances.

Implementation details will evolve. The spine will not.

## Next

- [Core Principles](/core-principles) — the principles these decisions serve.
- [How Aura Works](/how-aura-works) — the architecture they produce.
- [Mission and Roadmap](/mission-and-roadmap) — where the project is heading.
- [The Aura Philosophy](/aura-philosophy) — the beliefs underneath.