# Who Should Use Aura

> If your code is being edited faster than humans can read it, you need a VCS that reads it for you.

Aura is not for everyone, and pretending it is would be dishonest. A solo developer working on a static site in a one-file repo does not need semantic version control. A team of five humans writing Python at human speed, with no AI in the loop, will do fine on Git for another decade.

But Aura is built for a specific and rapidly growing set of engineering organizations. If any of the profiles below describes you, Aura is not a curiosity — it is a tool that solves a concrete, load-bearing problem in your workflow.

## Profile 1: teams with AI agents in the loop

This is the primary adopter profile and the one that motivated Aura's creation. You qualify if any of the following is true:

- At least one engineer on your team uses Claude Code, Cursor, Gemini CLI, Copilot Agent, or a similar AI coding agent for production work.
- AI agents open more than 10% of your merged pull requests.
- You run multi-agent workflows — two or more agents working on the same repository, possibly on the same branch.
- You use agent orchestration frameworks (LangGraph, CrewAI, AutoGen, Anthropic's Claude Teams) where multiple agents collaborate on code.
- Your commit velocity has increased by more than 2x in the last year without a proportional increase in headcount.

**The specific problems Aura solves for you:**

- Text-diff merges fail for roughly 27% of parallel agent patches. Aura's AST-level merge eliminates most of these failures by construction, because agents that renamed, moved, or reformatted code no longer produce conflicts against each other.
- Agents confidently commit changes that drift from stated intent. Aura's intent validation catches the drift before it lands.
- Agents running in parallel collide silently. Sentinel surfaces the collisions in real time.
- Agents have no common coordination substrate across tools (Claude and Cursor don't talk to each other by default).
  Aura's Sentinel protocol is tool-agnostic — any MCP-speaking agent can participate.

If this is you, adopt Aura. The payback is measurable within a week.

## Profile 2: high merge-conflict churn

You may not have AI agents yet, but you have the same symptom from a different cause: your team produces more merges than the merge engine can handle gracefully. You qualify if:

- You run a large monorepo where many teams touch shared modules.
- Your CI frequently rejects merges due to conflicts that require human reconciliation.
- You spend engineering hours every week on merge-conflict triage.
- You have adopted stacked diffs, trunk-based development, or other discipline-heavy workflows specifically to reduce conflict surface.
- Refactors are delayed or abandoned because the merge cost is too high.

Aura's semantic merge does not eliminate conflicts — two people changing the same logic still conflict — but it dramatically reduces false conflicts caused by renames, moves, reformatting, and reordering. For a team already feeling merge pain, this is a direct productivity gain independent of any AI consideration.

## Profile 3: EU, UK, and APAC data-sovereignty teams

Regulatory pressure has made US-hosted forges a compliance problem for a growing share of the world's engineering organizations. You qualify if:

- You operate under EU AI Act compliance obligations that require traceable provenance of AI-touched code.
- You are in a regulated industry (finance, healthcare, defense, critical infrastructure, legal) with data-residency requirements.
- Your legal team has raised concerns about code flowing through US-hosted AI assistants.
- You are a government contractor, a national lab, or a sovereign cloud customer.
- You are based in a jurisdiction where US export controls or sanctions create dependency risk.

**Why Aura:**

- [Mothership Mode](/mothership-overview) gives you a fully self-hosted P2P hub. TLS, JWT, no third-party cloud.
- Apache 2.0 license: no vendor can revoke your access.
- All data — commits, intent logs, agent coordination — stays on infrastructure you control.
- Cryptographic audit trail on every logic change, suitable for regulatory evidence.
- No telemetry, no phone-home, no hosted dependency.

If sovereignty is a contractual or regulatory requirement, Aura is one of a very short list of viable toolchains. It may be the only one that also meets your AI-coordination needs.

## Profile 4: security-conscious and defense-adjacent orgs

Separate from sovereignty is the concern about **what AI agents actually did** to your codebase. For teams with a high security bar, commit messages written by an LLM are not sufficient audit evidence. You qualify if:

- You work in defense, intelligence, aerospace, or critical infrastructure.
- You produce software subject to DO-178C, FDA 21 CFR Part 11, SOC 2, ISO 27001, or similar regimes.
- You have formal change-management processes that require documented intent for every change.
- You need to prove, for compliance, that an AI-authored commit did what it said it did.

**Why Aura:**

- Every commit has a structured, validated intent field.
- The pre-commit hook compares intent against actual AST changes and blocks mismatches.
- Every logic node has a cryptographic hash, so tampering is detectable.
- Full audit trail of which agent (and which human) signed which change.
- `aura_prove` can demonstrate, mathematically, that a required behavior is implemented and wired.

For regulated environments, this is not a nice-to-have. It is an evidentiary advantage.

## Profile 5: open-source maintainers drowning in AI PRs

A quieter but rapidly growing adopter profile: the open-source maintainer whose inbox is filling with low-quality AI-generated pull requests. You qualify if:

- You maintain an open-source project that attracts contributions.
- You have received multiple PRs that were clearly LLM-generated and failed on subtle issues (hallucinated APIs, drifted intent, silent behavior changes).
- You spend more time triaging than coding.
- You have considered limiting contributions to filter noise.

**Why Aura:**

- Intent validation filters PRs whose claims don't match their code.
- AST-level diffs make it faster to see *what actually changed* versus *what the PR claims*.
- Semantic diff surfaces meaningful changes and hides formatting noise.
- `aura_pr_review` produces structured findings that accelerate review.

Aura does not turn a bad PR into a good one, but it surfaces the mismatch in seconds instead of minutes.

## Profile 6: research labs and advanced AI teams

If you are on the frontier — training models, building agent frameworks, running autonomous coding systems — you are the most affected by the substrate problem. You qualify if:

- You run autonomous code-writing systems that commit without human review.
- You produce training data that includes code changes, and you want that data to include semantic structure rather than text diffs.
- You are researching multi-agent collaboration and need a coordination substrate.
- You are building AI dev tools and want to expose structured code state to your models.

**Why Aura:**

- MCP-native: your agents get structured access to the logic graph.
- AST-level data is better training material than text diffs.
- Sentinel provides a research-grade multi-agent coordination testbed.
- Open source end-to-end: no black-box limits on what you can measure or extend.

## Profile 7: enterprises with long-lived codebases and institutional memory

Aura's intent log and semantic history produce a new kind of artifact: an AI-readable story of why the code is the way it is. You qualify if:

- Your codebase is old enough that most original authors are gone.
- New engineers (human or AI) need context to work safely.
- Institutional knowledge is documented poorly or not at all.
- Onboarding AI agents into your codebase is a recurring cost.

**Why Aura:**

- Every commit has a durable, validated intent statement.
- The logic graph is queryable: "who calls this? what depends on it? what has changed in the last quarter?"
- `aura_handover` compresses the current state into a dense context artifact agents can ingest.
- New agents arrive to a codebase that has a story, not just files.

## Who should *not* use Aura (yet)

To be direct:

- **Solo developers on tiny projects.** The overhead isn't worth it.
- **Static-content repos** (docs, dotfiles, data files). No meaningful AST.
- **Teams fully satisfied with Git, with no AI, no sovereignty concern, and no merge pain.** You are not the target. Git is fine. Come back when the situation changes.
- **Teams with a hard dependency on a Git feature Aura doesn't mirror.** Submodules, LFS, and certain exotic workflows are Git-only. Aura coexists with Git, so most of these still work — but make sure before you bet on it.

We would rather you not adopt Aura than adopt it for the wrong reason.

## The sizing question

Aura scales from solo-with-agent (one human, three Claudes) to enterprise (thousands of developers with a self-hosted Mothership). The sweet spot for early adoption is the **5–50 engineer team with active AI usage**, because the pain is acute, the coordination cost is high, and the sovereignty argument often matters.

At the solo-with-agent end, Aura is still useful: intent validation and AST diff catch agent mistakes that would otherwise reach production. At the enterprise end, Aura's Mothership deployment, zone claims, and team-wide impact alerts become load-bearing infrastructure.

## A simple decision tree

- **Do you have AI agents writing any production code?** → Strong yes, try Aura.
- **Do you have a data-sovereignty requirement?** → Strong yes, try Aura + Mothership.
- **Do you have merge-conflict pain at scale?** → Yes, try Aura's semantic diff in observation mode first.
- **Do you care about intent auditability?** → Yes, enable `aura log intent` and the hook.
- **None of the above?** → Not yet. Git is fine.
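The semantic-diff idea the decision tree points at (and that Profiles 1 and 2 rely on) can be illustrated in a few lines. This is a minimal sketch using Python's standard `ast` module, not Aura's implementation: once line and column attributes are stripped, two versions that differ only in formatting or comments compare equal, while a genuine logic change does not.

```python
import ast

def normalized(source: str) -> str:
    """Dump the AST without attributes (line/column numbers),
    so formatting and comments cannot affect the comparison."""
    return ast.dump(ast.parse(source), include_attributes=False)

# Same logic, different formatting: every line differs textually,
# but the normalized ASTs are identical.
v1 = "def total(xs):\n    return sum(xs)\n"
v2 = "def total( xs ):  # add them up\n    return sum( xs )\n"

# A genuine logic change still shows up.
v3 = "def total(xs):\n    return sum(xs) + 1\n"

print(normalized(v1) == normalized(v2))  # True  -> no semantic change
print(normalized(v1) == normalized(v3))  # False -> logic changed
```

A production semantic diff goes much further (matching moved and renamed nodes across versions, for instance), but this is the core property that makes formatting-only noise disappear from a diff.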
## Next

- [What is Aura?](/what-is-aura) — canonical definition.
- [Aura vs Git](/aura-vs-git) — the comparison.
- [How Aura Works](/how-aura-works) — architecture.
- [Mothership Overview](/mothership-overview) — the sovereign hub.