Threat Model

"Security is not a feature. It is the absence of assumptions you cannot defend."

Overview

This document is the canonical threat model for Aura, the semantic version control engine developed by Naridon, Inc. (Zürich, Switzerland) and released under the Apache 2.0 license. It enumerates the adversaries Aura is designed to resist, the specific defenses in place, and — equally important — the classes of attack Aura does not attempt to mitigate. If you are a CISO, a platform engineer, or a member of a regulated team evaluating Aura for adoption, this page is the single most important one in the documentation set.

Aura is self-hosted by default. The Mothership — the coordination daemon that brokers peer introductions, issues join tokens, and maintains the immutable audit trail — runs on infrastructure you control. There is no mandatory cloud service, no phone-home telemetry, and no closed blob in the core binary. This architectural posture determines the shape of the threat model: we concern ourselves with the boundary between your network and the outside world, the boundary between trusted peers within your network, and the boundary between human and AI actors inside your repository.

Threat Scope

The threat model divides adversaries into five categories. Each is treated as a distinct actor with a distinct capability set.

1. Passive network eavesdropper

An attacker who can observe traffic between peers or between a peer and the Mothership. This includes ISPs, compromised Wi-Fi, SOHO routers, and adversaries operating at intermediate network hops.

Capabilities: Read packets in transit. Cannot modify traffic without detection, since TLS 1.3 authenticates and integrity-protects every record; an attacker who tampers with release downloads instead falls under category 5. Cannot decrypt TLS without possession of a private key.

Aura's response: All peer-to-peer traffic is protected by TLS 1.3 with peer-certificate pinning. Mothership-issued join tokens are JWTs signed with Ed25519; even if captured, they are time-bounded and revocable. Function bodies and intent payloads never traverse cleartext channels. See end-to-end encryption for the detailed transport specification.
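
The time-bounding and integrity checks on join tokens can be sketched in a few lines. This is a runnable illustration under stated assumptions, not Aura's implementation: the real tokens are Ed25519-signed JWTs, while this stand-in signs with stdlib HMAC-SHA256, and the hypothetical `MAX_AGE_SECONDS` mirrors the `join_token_max_age_minutes = 60` setting shown in the configuration section.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical constant mirroring join_token_max_age_minutes = 60.
MAX_AGE_SECONDS = 60 * 60

def issue_token(key: bytes, peer_id: str, issued_at: float) -> str:
    payload = base64.urlsafe_b64encode(
        json.dumps({"peer": peer_id, "iat": issued_at}).encode())
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(key: bytes, token: str, now: float) -> bool:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return now - claims["iat"] <= MAX_AGE_SECONDS  # captured tokens age out
```

The point of the second check is the eavesdropper scenario above: a captured token becomes useless once its issuance timestamp ages past the configured window, and revocation can invalidate it sooner.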

2. Compromised peer

A legitimate peer machine whose operating system, user account, or Aura installation has been taken over by an attacker. This is a realistic scenario: developer laptops are lost, stolen, or malware-infected with regularity.

Capabilities: Full access to the local Aura identity key, all logic nodes the peer has synced, and the capability to push malicious intents signed with the peer's valid key.

Aura's response: The compromise cannot be prevented by Aura — it originates below our trust boundary. What Aura can do is contain and detect. Every intent is cryptographically signed, so after a compromise is identified, operators can enumerate exactly which intents originated from the compromised key, revoke the key via aura mothership revoke-peer <id>, rotate the identity, and surgically rewind the malicious changes with aura rewind. See incident response for the full playbook.
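
The enumeration step of that containment workflow can be sketched as follows. The log schema (a list of dicts with a `signer` field) is hypothetical and stands in for Aura's signed intent records.

```python
# Illustrative containment step after a peer compromise: enumerate every
# intent signed by the compromised key so each can be reviewed and, if
# malicious, rewound. The log schema here is hypothetical.
def intents_from_key(log, key_id):
    return [intent for intent in log if intent["signer"] == key_id]

intent_log = [
    {"id": "a1", "signer": "peer-7", "summary": "refactor parser"},
    {"id": "b2", "signer": "peer-3", "summary": "add retry loop"},
    {"id": "c3", "signer": "peer-7", "summary": "add exfil helper"},
]
suspect = intents_from_key(intent_log, "peer-7")  # a1 and c3 need review
```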

3. Rogue AI agent

An AI coding agent — whether through prompt injection, a supply-chain compromise of its tooling, or simple misbehavior — that attempts to perform actions outside its intended scope.

Capabilities: Whatever the human operator's session grants it, which may include file edits, commits, pushes, and messages to other agents. Constrained by the capability token issued at session start.

Aura's response: Every AI agent operates under a scoped capability token. See agent permissions for the capability matrix. An agent without push cannot push. An agent without claim_zone cannot lock out other participants. Revocation (aura agent revoke <agent-id>) propagates in seconds. Every agent action is annotated in the intent log with its agent-id and the specific capability invoked, producing an auditable trail of who — human or machine — did what.
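
The enforcement pattern can be sketched as a capability gate in front of every tool invocation. The token shape and the `CapabilityError`/`invoke` names below are hypothetical, chosen to match the doc's `push` and `claim_zone` examples, not Aura's actual API.

```python
# Illustrative capability gate: each tool invocation is checked against
# the agent token's capability set before it runs.
class CapabilityError(PermissionError):
    pass

def invoke(token: dict, capability: str, action):
    if capability not in token["capabilities"]:
        raise CapabilityError(f"{token['agent_id']} lacks '{capability}'")
    return action()

token = {"agent_id": "agent-42", "capabilities": {"edit", "commit"}}
invoke(token, "commit", lambda: "committed")  # permitted
try:
    invoke(token, "push", lambda: "pushed")   # denied: token has no push
except CapabilityError:
    pass
```

Because the check happens at the gate rather than inside each tool, revoking the token cuts off every capability at once.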

4. Insider with valid join token

A legitimate team member, or a former team member whose access has not yet been revoked, who chooses to exfiltrate code or sabotage the repository.

Capabilities: Everything their role permits. If they are a contributor they can commit. If they are an admin they can revoke others.

Aura's response: Aura does not attempt to make insiders harmless — that is a policy and HR problem, not a cryptographic one. What Aura does is make insider actions visible and reversible. The intent log is append-only; it cannot be rewritten. Zone-level RBAC limits blast radius by role. When a team member departs, aura mothership revoke-peer invalidates their identity in seconds, and aura trace <function> lets auditors reconstruct every change that identity touched. Code exfiltration — the insider clones the repo and walks away — is not preventable by any VCS. It is a problem for endpoint DLP, not Aura.
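
What an audit query like aura trace <function> reconstructs can be sketched as a filter over the replicated log. The schema below (a `touched` list per intent) is illustrative, not Aura's record format.

```python
# Hypothetical sketch of a trace query: every logged intent whose change
# set touched the named function, in log order.
def trace(log, function):
    return [intent for intent in log if function in intent["touched"]]

intent_log = [
    {"id": "a1", "author": "peer-2", "touched": ["parse_header"]},
    {"id": "b2", "author": "peer-9", "touched": ["parse_header", "flush"]},
    {"id": "c3", "author": "peer-2", "touched": ["flush"]},
]
history = trace(intent_log, "parse_header")  # a1, then b2
```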

5. Supply-chain attacker

An adversary who compromises the build toolchain, a dependency, or the distribution channel for Aura binaries.

Capabilities: Inject malicious code into the binary users run, substitute a tampered build for a signed release, or tamper with Cargo dependencies.

Aura's response: Releases are signed with minisign; signatures are published alongside the binary and on the Naridon release channel. Aura publishes a full SBOM per release. Builds are deterministic: given the same source and toolchain fingerprint, the hash of the resulting binary is reproducible and can be independently verified. aura self-verify checks the running binary against its published signature and toolchain fingerprint. Dependency audit runs in CI via cargo-audit. See supply-chain integrity for verification procedures.
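
The hash-comparison half of that verification can be sketched as follows. This is conceptual: SHA-256 stands in for whatever digest the release channel publishes, and real verification also checks the minisign signature rather than the hash alone.

```python
import hashlib
import hmac

# Conceptual sketch of release verification: hash the downloaded artifact
# and compare against the published fingerprint. Function name is ours.
def artifact_matches(data: bytes, published_hex: str) -> bool:
    digest = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(digest, published_hex)

release = b"\x7fELF...release bytes..."
published = hashlib.sha256(release).hexdigest()  # from the release page
assert artifact_matches(release, published)
assert not artifact_matches(release + b"\x00", published)  # tampered build
```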

Explicitly out of scope

Aura does not claim to defend against:

  • Nation-state physical-access adversaries who can compel key disclosure or seize hardware.
  • Side-channel attacks against the host CPU (Spectre-family, cache timing, RowHammer). These are OS- and silicon-level concerns.
  • Social engineering of legitimate users to disclose passphrases or approve malicious intents.
  • Endpoint malware that snoops input or screen contents below the Aura process. Your EDR is responsible here.
  • Coercion of the Mothership operator to disclose the Mothership signing key.

Being explicit about these exclusions is itself a security property: we do not overclaim, and you do not overtrust.

Mechanism

STRIDE mapping

| Threat category | Aura primary defense | Residual risk |
|---|---|---|
| Spoofing | Ed25519 peer identity keys; JWT join tokens signed by the Mothership; TLS peer-cert pinning | Stolen device with unlocked key |
| Tampering | BLAKE3 content-addressed logic nodes; Merkle-chained intent log; signed intents | Compromise before hashing (upstream of Aura) |
| Repudiation | Every intent carries author identity + signature; log is append-only | Insider who disputes a signature requires forensic key review |
| Information disclosure | TLS 1.3 transport; no cloud relay by default; federated mode encrypts peer-to-peer with Mothership blind to content | Endpoint disk exfiltration |
| Denial of service | Rate-limited join endpoints; bounded sync queues; zone claims time out | A motivated attacker inside your network can still flood the Mothership |
| Elevation of privilege | Capability-scoped agent tokens; zone RBAC; admin actions require admin-capability tokens | Admin key compromise defeats this — treat admin keys like CAs |

Trust boundaries

Aura recognizes four explicit trust boundaries:

  1. Network boundary — between your infrastructure and everything else. Crossed by: peer-to-peer sync (TLS), Mothership API (TLS), signed release downloads.
  2. Peer boundary — between peer machines on your network. Crossed by: function-body sync, intent propagation, agent messages.
  3. Process boundary — between the Aura daemon and the host OS. Crossed by: file reads/writes, signing-key access, hook invocations.
  4. Agent boundary — between an AI agent and the Aura MCP server. Crossed by: tool invocations. Governed by agent permissions.

Every defense in the system is anchored to one of these four boundaries. If a proposed change does not strengthen a boundary or contain a boundary crossing, it is not a security feature — it is cosmetic.

What tamper-evidence gives you

Tamper-evidence is not tamper-prevention. An attacker who owns a peer can produce new, validly signed intents. What they cannot do is:

  • Rewrite prior intents without breaking the Merkle chain.
  • Forge an intent from a different author without that author's private key.
  • Hide an intent from the log, since peers replicate the log and any missing entry is detectable on sync.

This matters because incident response begins with knowing what happened. With tamper-evidence, the log becomes admissible evidence: every malicious action has a signature and a parent hash. Without it, forensics is guesswork.
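
The chaining property behind all three guarantees can be shown in a minimal sketch. Aura chains with BLAKE3; stdlib SHA-256 stands in here, and the entry shape is illustrative.

```python
import hashlib
import json

# Minimal tamper-evident log: each entry commits to its parent's hash,
# so rewriting any prior entry breaks every hash after it.
GENESIS = "0" * 64

def entry_hash(parent: str, payload: dict) -> str:
    body = parent + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append(log, payload):
    parent = log[-1]["hash"] if log else GENESIS
    log.append({"parent": parent, "payload": payload,
                "hash": entry_hash(parent, payload)})

def verify(log) -> bool:
    parent = GENESIS
    for entry in log:
        if (entry["parent"] != parent
                or entry["hash"] != entry_hash(parent, entry["payload"])):
            return False
        parent = entry["hash"]
    return True

chain = []
append(chain, {"author": "peer-3", "intent": "add retry loop"})
append(chain, {"author": "peer-7", "intent": "refactor parser"})
assert verify(chain)
chain[0]["payload"]["author"] = "peer-9"  # attempt to rewrite history
assert not verify(chain)                  # the chain detects it
```

Note what the sketch does not prevent: appending a new, validly signed entry. That is exactly the tamper-evidence versus tamper-prevention distinction above.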

Configuration

Most threat-model-relevant settings live in aura.toml at the Mothership root and in ~/.aura/identity.toml on each peer. The security-relevant knobs:

```toml
[mothership]
# Bind only to the interface in the jurisdiction you control
bind = "10.0.5.3:7443"
# Disallow WAN peers unless they come via explicit CIDR allowlist
peer_cidr_allowlist = ["10.0.0.0/8", "192.168.0.0/16"]

[security]
# Require TLS peer-cert pinning for all peer connections
require_peer_pinning = true
# Reject join tokens older than this many minutes
join_token_max_age_minutes = 60
# Secret detection enforcement level: off | warn | block
secret_detection = "block"
# Enforce strict mode: block commits when stated intent contradicts AST delta
strict_mode = true
# Once enabled, only a human with the passcode can disable
strict_mode_locked = true

[federation]
# "directory-only" = Mothership never sees plaintext function bodies
mode = "directory-only"
```

See data residency for the jurisdictional binding flags and secret detection for the full secret-scanning ruleset.
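
How a CIDR allowlist like peer_cidr_allowlist is evaluated can be sketched with the stdlib ipaddress module: a connecting peer's address must fall inside at least one allowed range. The ranges mirror the example config; the function name is ours.

```python
import ipaddress

# Illustrative allowlist check, mirroring peer_cidr_allowlist above.
ALLOWLIST = [ipaddress.ip_network(cidr)
             for cidr in ("10.0.0.0/8", "192.168.0.0/16")]

def peer_allowed(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in network for network in ALLOWLIST)

assert peer_allowed("10.0.5.99")        # on the internal /8
assert not peer_allowed("203.0.113.7")  # WAN peer rejected
```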

Limitations

The honest list of what this threat model does not cover:

  • We do not audit your OS. If the kernel is rooted, nothing in userspace can save you.
  • We do not audit your humans. Clearance, HR offboarding, and access review are out of scope.
  • We are not SOC 2 Type II audited as a managed service — Aura is open source software you deploy yourself. Your own deployment, if hardened appropriately, may be used as a component in a SOC 2 program, but Aura itself does not carry the attestation.
  • Cryptographic primitives age. Ed25519 and BLAKE3 are current-best choices, but a 20-year-horizon deployment must plan for agility. Key rotation is supported; algorithm rotation is a manual upgrade.
  • DoS resistance is best-effort. Aura assumes the network between peers is not actively hostile. Defending against an on-network flood requires infrastructure outside Aura's scope.

See Also