Knowledge Sharing
The shared memory layer. An agent learns once; the team benefits forever.
Why This Exists
Every AI agent, on every session, starts from zero. It reads the code. It maybe reads the README. It runs some tools. It builds a working model of your codebase. Then the session ends and that model evaporates.
Now imagine four agents working on the same repo, each building its own working model from scratch. Three of them independently make the same discovery — "oh, the rate limiter trips if you reduce the retry delay below 200ms." In the fourth session, an agent makes that exact mistake anyway, because the first three sessions' learning was never recorded. Four incidents. One piece of tribal knowledge that was never shared.
Knowledge sharing exists because every one of those discoveries is expensive to produce and cheap to replay. If an agent has learned something useful about your codebase, there is no reason the next agent should have to learn it again.
The team knowledge base is a structured, queryable memory of things agents have figured out. It lives next to the code, travels with the repo, and is consulted before acting.
How It Works
Knowledge lives in .aura/knowledge/ and is indexed by the Mothership for cross-repo queries. An entry looks like this:
{
  "id": "k_01HW...",
  "topic": "retry policy",
  "summary": "retry_logic uses exponential backoff with 200ms floor",
  "detail": "Reducing base delay below 200ms trips upstream rate limiter. See incident-0042. Do not lower.",
  "refs": [
    {"kind": "node", "id": "py:src/net/retry.py::retry_logic"},
    {"kind": "incident", "id": "0042"}
  ],
  "author": {"session": "claude-7f3a2b", "kind": "claude-code"},
  "confidence": "high",
  "tags": ["retry", "rate-limit", "net"],
  "created_at": "2026-03-14T09:22:00Z",
  "confirmed_by": ["gemini-0091", "human:ashiq"]
}
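For a concrete feel of the schema, the entry above can be modeled in Python roughly like this (a sketch for illustration only; the field names follow the JSON example, not an official Aura API):

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class Ref:
    kind: Literal["node", "incident"]
    id: str

@dataclass
class KnowledgeEntry:
    id: str
    topic: str
    summary: str                  # one line; shown in query result lists
    detail: str                   # full reasoning; loaded on demand
    refs: list[Ref]               # anchors to logic nodes and incidents
    author: dict                  # e.g. {"session": "...", "kind": "..."}
    confidence: Literal["low", "medium", "high"]
    tags: list[str] = field(default_factory=list)
    created_at: str = ""
    confirmed_by: list[str] = field(default_factory=list)

entry = KnowledgeEntry(
    id="k_01HW...",
    topic="retry policy",
    summary="retry_logic uses exponential backoff with 200ms floor",
    detail="Reducing base delay below 200ms trips upstream rate limiter.",
    refs=[Ref(kind="node", id="py:src/net/retry.py::retry_logic")],
    author={"session": "claude-7f3a2b", "kind": "claude-code"},
    confidence="high",
)
```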
Entries are typed. They have refs that anchor them to specific logic nodes, so when the node changes, the entry can be invalidated or flagged for re-review. They have a confidence rating and a confirmed_by list — other agents and humans can endorse an entry, which raises its weight in query results.
The knowledge store is append-mostly. Entries are not edited in place; they are superseded by new entries, and the old ones remain in history. This is deliberate — it lets you audit what was believed true at a given point in time.
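The append-mostly discipline can be sketched with a toy in-memory store (hypothetical code; Aura's actual storage is the .aura/knowledge/ directory plus the Mothership index):

```python
import uuid
from datetime import datetime, timezone

class KnowledgeStore:
    """Append-mostly store: entries are superseded, never edited in place."""

    def __init__(self):
        self.entries = {}  # id -> entry dict; history is never dropped

    def write(self, **fields):
        entry_id = f"k_{uuid.uuid4().hex[:8]}"
        self.entries[entry_id] = {
            "id": entry_id,
            "created_at": datetime.now(timezone.utc).isoformat(),
            "superseded_by": None,
            **fields,
        }
        return entry_id

    def supersede(self, old_id, **fields):
        # The old entry stays in history; it just points at its successor,
        # so you can audit what was believed true at any point in time.
        new_id = self.write(**fields)
        self.entries[old_id]["superseded_by"] = new_id
        return new_id

    def current(self):
        return [e for e in self.entries.values() if e["superseded_by"] is None]
```

Superseding leaves both entries in `entries` but only the newest in `current()`.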
Writing entries
An agent that has learned something useful calls the knowledge write tool:
aura_memory_write
  topic: "retry policy"
  summary: "retry_logic uses exponential backoff with 200ms floor"
  detail: "Reducing base delay below 200ms trips upstream rate limiter."
  refs: [{kind: "node", id: "py:src/net/retry.py::retry_logic"}]
  tags: ["retry", "rate-limit"]
  confidence: "high"
The tool returns an id. The entry is immediately visible to all other agents in the repo.
Querying
Before acting, an agent queries the knowledge base with the topic or free text:
aura_memory_read
  query: "retry backoff"
  limit: 5
Results are ranked by a blend of text relevance, recency, confidence, and confirmed_by count. The agent then reads the top entries — a few kilobytes, cheap to load — and adjusts its plan.
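The exact weights are internal to Sentinel, but a blend of those four signals could look like this illustrative scorer (the weights and half-life are assumptions, not Sentinel's actual formula):

```python
from datetime import datetime, timezone

CONFIDENCE_WEIGHT = {"low": 0.3, "medium": 0.6, "high": 1.0}

def rank_score(entry, text_relevance, half_life_days=90):
    """Blend text relevance, recency, confidence, and confirmation count.

    text_relevance is a 0..1 score from the text index (BM25,
    embeddings, or similar).
    """
    created = datetime.fromisoformat(entry["created_at"])
    age_days = (datetime.now(timezone.utc) - created).total_seconds() / 86400
    recency = 0.5 ** (max(age_days, 0) / half_life_days)     # exponential decay
    confidence = CONFIDENCE_WEIGHT[entry["confidence"]]
    confirms = 1 - 1 / (1 + len(entry["confirmed_by"]))      # saturates, never dominates
    return 0.5 * text_relevance + 0.2 * recency + 0.2 * confidence + 0.1 * confirms
```

A fresh, high-confidence, well-confirmed entry will outrank a stale unconfirmed one at equal text relevance, which is the behavior the doc describes.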
Decay
Knowledge without refs, or whose refs point to nodes that have since been deleted, is marked stale. Stale entries still appear in queries, flagged with a stale: true marker. A human or an agent can re-confirm, update, or retire them.
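That staleness pass is conceptually simple; a sketch, assuming the indexer can supply the set of currently-live node ids:

```python
def mark_stale(entries, live_node_ids):
    """Flag entries that have no refs, or whose node refs no longer resolve."""
    for entry in entries:
        refs = entry.get("refs", [])
        node_refs = [r for r in refs if r["kind"] == "node"]
        dangling = any(r["id"] not in live_node_ids for r in node_refs)
        entry["stale"] = not refs or dangling
    return entries
```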
Best Practices
Write when you are surprised
The best trigger for a knowledge entry is the moment you learn something you did not expect. "Oh, that's why" is the internal signal. If the same surprise could happen to the next agent, write it down.
Bad knowledge entry: "The function takes a string."
Good knowledge entry: "The function expects a normalized string — it fails silently if passed unicode with combining characters. See issue-089."
Anchor to nodes, not to prose
An entry that says "in the auth module, we..." is brittle. An entry that says "node py:src/auth.py::login_user requires the session token to be pre-hashed" is durable. When the module moves, the ref moves with it. When the node is deleted, the entry is auto-flagged.
Keep summaries terse, details rich
The summary is what shows up in the query result list. It must fit one line. The detail is where the reasoning lives. Agents pre-screen with summaries and load details only for the entries they care about.
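That two-phase read can be pictured as: scan summaries first, then pay for details only where it matters. Illustrative code, not the tool's internals (the term-overlap screen here is a stand-in for the real relevance check):

```python
def two_phase_read(entries, query, detail_budget=2):
    """Phase 1: screen one-line summaries (cheap, all entries).
    Phase 2: load full details for at most detail_budget entries."""
    terms = set(query.lower().split())
    candidates = [
        e for e in entries
        if terms & set(e["summary"].lower().split())
    ]
    return [e["detail"] for e in candidates[:detail_budget]]
```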
Confirm useful entries
When an entry helped you avoid a mistake, confirm it:
aura_memory_confirm id=k_01HW...
Confirmations are how the team elevates truly useful entries above the noise floor. An entry with ten confirmations across three agents is more trustworthy than an entry with zero confirmations from one agent two months ago.
Retire obsolete entries
When you change code in a way that invalidates an entry, retire it:
aura_memory_forget id=k_01HW... reason="Rate limiter replaced in v0.15"
The entry is moved to history, not deleted. A future agent asking about retry policy will see the retirement note.
Examples
The classic retry example
Three months ago, Claude was debugging a flaky integration test. It discovered, after an hour, that setting retry_delay=100 caused the upstream to trip its rate limiter, which cascaded into auth failures. Claude wrote:
aura_memory_write
  topic: "retry policy"
  summary: "retry_logic base delay must be ≥200ms"
  detail: "Upstream returns 503 under sustained <200ms retries; cascades
           to auth rate limiter. Took 1h to diagnose. See incident-0042."
  refs: [{kind: "node", id: "py:src/net/retry.py::retry_logic"}]
  tags: ["retry", "rate-limit"]
  confidence: "high"
This week, Gemini was working on latency optimization. Before reducing delays it ran:
aura_memory_read query="retry delay"
The top result was Claude's entry. Gemini kept the floor at 200ms, shipped the optimization, and the flake never came back.
That is the entire point of the system.
Querying with context
Knowledge queries can be scoped to a node to pull every entry that references it:
aura_memory_read
  scope: {kind: "node", id: "py:src/payments.py::charge"}
Useful at the start of editing a node — it gives you every lesson anyone has ever recorded about that function, in descending confidence order.
Human-written entries
Humans can write knowledge entries too, from the CLI:
aura knowledge write \
  --topic "auth" \
  --summary "Never log the raw session token" \
  --detail "Logged tokens were exfiltrated in incident-0031. Use redact_token()." \
  --ref node:py:src/auth.py::login_user \
  --confidence high
The entry appears alongside agent-authored ones. Humans and agents share the same store; that is the point.
Migrating knowledge across repos
When knowledge applies broadly — say, your internal auth conventions — you can mark an entry as shareable, and the Mothership will replicate it across all repos in your organization:
aura_memory_write
  ...
  scope: "org"
  organization: "acme"
Org-scoped entries are queryable from any Aura-enabled repo in the org. New services start with the accumulated wisdom of every prior service.
Configuration
[knowledge]
enabled = true
location = ".aura/knowledge/"
sync_to_mothership = true
default_confidence = "medium"
decay_window = "90d"
[knowledge.query]
include_stale = false
max_results = 10
rerank_with_ai = true # use an LLM to rerank top-K for relevance
rerank_with_ai = true adds a small latency cost but meaningfully improves the quality of query results on large knowledge bases. For repos with fewer than a hundred entries it is unnecessary.
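The rerank step itself is a thin wrapper: take the top-K hits from the lexical ranking and re-order them by a model-supplied relevance score. A sketch, with the LLM call abstracted behind score_fn (hypothetical structure; the real call is internal to Sentinel):

```python
def rerank(hits, query, score_fn, k=10):
    """Re-order the top-k hits by score_fn(query, summary); leave the tail untouched."""
    head = sorted(hits[:k], key=lambda h: score_fn(query, h["summary"]), reverse=True)
    return head + hits[k:]
```

In production score_fn would wrap an LLM prompt; any function returning a comparable relevance value works for testing.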
Governance
A knowledge base is only as good as its signal-to-noise ratio. Sentinel supports a few governance primitives:
- Review queues. Entries written during low-confidence sessions land in a queue for a human or senior agent to review before going live.
- Retirement audits. A weekly job flags entries whose referenced nodes have changed since the entry was written.
- Authorship stats. The dashboard shows which agents are writing useful knowledge and which are just narrating.
The knowledge base is not a journal. It is a filtered, curated, durable record of what the team has learned. Treat it like a codebase — review, refactor, retire.
Integrating Knowledge Into The Workflow
The question that matters most is: when do agents actually read the knowledge base? If the answer is "rarely," the system is ornamental. If the answer is "before every significant edit," it is doing its job.
The protocol files enforce the pattern. CLAUDE.md includes aura_memory_read as a mandatory step before planning. .cursorrules includes it before each wave of inline edits. GEMINI.md includes it as part of pre-expedition context loading. The Copilot integration, which cannot query the knowledge base from the client, surfaces relevant entries as PR comments instead.
The query is cheap — a few tens of milliseconds, a few hundred tokens — and the payoff when a relevant entry exists is very large. The asymmetry is what makes the habit sustainable.
Automatic surface-up
Sentinel also automatically surfaces relevant knowledge when the editing context makes it obvious. When an agent calls aura_snapshot on a file, Sentinel checks the knowledge base for entries whose refs intersect that file and includes a summary in the response:
[Cursor] → aura_snapshot file=src/net/retry.py
← ok, no zone conflicts
📘 KNOWLEDGE: 2 entries relevant to this file
— retry base delay must be ≥200ms (high, 3 confirms)
— retry budget applies across subcalls (medium, 1 confirm)
Call aura_memory_read to expand.
The agent doesn't have to remember to fetch relevant context — Sentinel brings it along.
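Under the hood this is a path intersection: node ids embed the file they live in, so Sentinel only has to match each entry's node refs against the snapshotted file. A sketch (the node-id format follows this doc's examples):

```python
def entries_for_file(entries, file_path):
    """Return entries with at least one node ref anchored in file_path.

    Node ids look like "py:src/net/retry.py::retry_logic":
    language prefix, then file path, then the symbol.
    """
    hits = []
    for entry in entries:
        for ref in entry.get("refs", []):
            if ref["kind"] != "node":
                continue
            _, _, rest = ref["id"].partition(":")   # strip language prefix
            if rest.split("::", 1)[0] == file_path:
                hits.append(entry)
                break
    return hits
```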
Knowledge vs Documentation
A recurring question: how is the knowledge base different from a docs/ folder or a team wiki?
The honest answer is that the two overlap. Both are team memory. The differences that matter in practice:
- Granularity. Knowledge entries are node-scoped. Docs are page-scoped. Entries survive refactors because their refs move with the AST.
- Machine accessibility. The knowledge base is designed to be queried by an LLM in two hundred milliseconds. Docs are designed to be read by humans.
- Decay tracking. Entries are auto-flagged stale when the code they reference changes. Docs rot silently.
- Authorship. Knowledge is written incidentally during work. Docs are written deliberately after work.
Teams typically keep both. Docs for the high-level narrative, knowledge for the surprises and the gotchas. The protocol files ensure agents consult knowledge, not docs, when making edit-time decisions.
See Also
- The Sentinel Overview — the coordination layer knowledge plugs into.
- Agent Inbox — where discoveries start as conversations before graduating to durable entries.
- Multi-Agent Workflows — how knowledge changes long-running workflows.
- Claude Code Integration — how the CLAUDE.md protocol teaches Claude to query before acting.