Why Your AI Agents Need a Shared Knowledge Base
AI agents in the same workspace can't learn from each other. We built a shared knowledge base using Obsidian-compatible vaults and MCP — here's why and how.
You deploy three agents in a workspace. One figures out that your deploy process needs a database migration first. Another discovers a JWT bug that took two hours to trace. A third learns the team convention for naming API endpoints.
None of them share what they learned. The next time any agent hits the same problem, it starts from zero.
This is where most multi-agent setups are right now. Each agent has its own memory, its own context. They work in the same workspace but they don't talk to each other. It's like hiring three people and putting them in separate rooms with no shared documents.
We got tired of watching agents rediscover the same things, so we built something to fix it.
The problem: knowledge silos
When you run multiple AI agents, each one operates in its own VM with its own local memory. After a task completes, the agent's discoveries stay on that VM.
The pattern:
- Agent A runs a deployment, discovers the NFS mount needs a specific IP
- Agent A finishes, memory stays on its VM
- Agent B gets a deployment task next week
- Agent B hits the same NFS issue, wastes 20 minutes figuring out what Agent A already knew
This keeps happening. Every gotcha, every bug pattern, every team convention gets rediscovered instead of shared. The workspace never gets smarter.
What we built
A shared knowledge base for every workspace. Think of it as an Obsidian vault that all agents can read and write.
        Proxmox Host
  /data/vaults/{workspaceId}/
      ↑               ↑
 NFS  |               |  NFS
┌─────┼─────┐   ┌─────┼─────┐
│  ~/kb/    │   │  ~/kb/    │
│  MCP srv  │   │  MCP srv  │
│  Agent A  │   │  Agent B  │
└───────────┘   └───────────┘
Each workspace gets a directory on the host. Every agent VM mounts it via NFS. A lightweight MCP server on each VM exposes the vault through 7 tools: search, list, read, write, delete, list tags, and find links.
Agents decide when to use it. Before a task, an agent might search for relevant notes. After finishing, it might write down what it learned. No forced behavior. Just tools available when needed.
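Under the hood, those tools are plain file operations. Here's a minimal sketch of two of the seven, with the vault path passed in explicitly; the tool names match the ones above, but the signatures and the substring-match search are our simplification, not the real server's code:

```python
from pathlib import Path

def search_notes(vault: Path, query: str) -> list[str]:
    """Case-insensitive substring match across every note in the vault."""
    query = query.lower()
    return sorted(
        str(note.relative_to(vault))
        for note in vault.rglob("*.md")
        if query in note.read_text(encoding="utf-8").lower()
    )

def write_note(vault: Path, rel_path: str, content: str) -> None:
    """Create or overwrite a note, making parent folders as needed."""
    target = vault / rel_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
```

Because everything is markdown on disk, the whole tool surface stays this small; there's no schema to migrate when agents invent new note types.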
Why files instead of a database?
We thought about storing knowledge in a database with an API in front of it. We went with files.
Agents already work in files. Claude Code, OpenClaw, and Hermes all read and write markdown natively. A database would mean each agent learning a custom API. Files have no learning curve.
Users can browse the vault in Obsidian. The format is standard: YAML frontmatter, [[wiki-links]], #tags. Point Obsidian at the directory and you get the graph view, search, and editing for free.
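A note in that format might look like this; the date and tags are invented for illustration, and the link target is the lessons-learned note from the example later in this post:

```markdown
---
tags: [deploy, auth]
created: 2025-01-15
---

# Deploy pattern: auth service

Always run migrations before deploying. See
[[lessons-learned/nfs-timeout-during-deploy]] for the mount gotcha. #deploy
```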
And there are no credentials on the VM. The MCP server does file I/O and nothing else. No database connection strings, no API tokens. If a VM gets compromised, the attacker can read and write markdown in one workspace. That's the whole blast radius.
What's in the vault
Every workspace vault starts with this structure:
_workspace/          Platform-managed (read-only to agents)
  agents.md          Who's active right now
  task-history.md    What happened and when
  workspace.md       Workspace metadata
skills/              Reusable procedures and runbooks
memories/            Things agents learned about the project
feedback/            What worked, what didn't
lessons-learned/     Gotchas and patterns to avoid
issues/              Bugs encountered
fixes/               Solutions (linked to issues)
The _workspace/ directory is written by the platform. When a task completes, the platform appends to the task history. When an agent comes online, the platform updates the agents list. Agents can read this but can't touch it.
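The platform-side append can be as small as this sketch (the function name and line format are ours, not the platform's actual code); it appends rather than rewrites, so earlier history survives whatever happens to the current task:

```python
from datetime import datetime, timezone
from pathlib import Path

def record_task(vault: Path, summary: str) -> None:
    """Append one timestamped line to the platform-owned task history."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    history = vault / "_workspace" / "task-history.md"
    history.parent.mkdir(parents=True, exist_ok=True)
    with history.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp}: {summary}\n")
```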
Everything else is fair game. The issues/ and fixes/ directories are meant to link to each other: an agent writes a fix, wiki-links back to the original issue. The next agent searching for that problem finds the solution through the link graph.
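A fix note closing that loop might be as short as this (both filenames and the bug itself are invented for illustration):

```markdown
---
tags: [auth, bug]
---

# Fix: JWT validation failing intermittently

Allow a small clock-skew leeway when validating token timestamps.
Resolves [[issues/jwt-validation-fails]].
```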
How agents actually use it
An agent running a deployment might do this:
"I need to deploy the auth service. Let me check if there are conventions."
→ search_notes("deploy")
← skills/deploy-pattern.md: "Always run migrations before deploy..."
← _workspace/task-history.md: "Deploy auth fix — success"
Reads the pattern, follows it, completes the task.
Discovers a new gotcha about NFS timeouts.
→ write_note("lessons-learned/nfs-timeout-during-deploy.md", "...")
← Written
Next agent to deploy finds this note automatically.
Nobody told the agent to check the knowledge base. It has the MCP tools and decides to search before acting. We think that's the right approach: agents pull knowledge when they need it, rather than getting force-fed context they might not care about.
Security
The knowledge base runs on agent VMs, which we treat as untrusted. Three layers of defense.
The MCP server resolves every file path and verifies it stays inside the vault directory. ../../etc/passwd gets rejected before touching the filesystem.
The _workspace/ directory is read-only to agents. We actually found a bypass during our security review where ./_workspace/ would skip the check because the path wasn't normalized first. Fixed that before shipping.
Agents can only write .md files. No scripts, no configs, no executables. The NFS mount itself has noexec set. There's no way to go from writing a note to executing code.
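The first two layers can be sketched as a single validation function; this is our reconstruction of the checks described above, not the server's actual code. Normalizing with resolve() before comparing is what closes the ./_workspace/ bypass: the prefix check runs against the canonical path, not the raw string:

```python
from pathlib import Path

def validate_write(vault: Path, rel_path: str) -> Path:
    """Reject paths outside the vault, inside _workspace/, or not ending in .md."""
    vault = vault.resolve()
    target = (vault / rel_path).resolve()  # normalize first: ../.. and ./ collapse here
    if vault not in target.parents:        # ../../etc/passwd escapes, gets rejected
        raise PermissionError("path escapes the vault")
    rel = target.relative_to(vault)
    if rel.parts[0] == "_workspace":       # platform-owned, read-only to agents
        raise PermissionError("_workspace/ is read-only to agents")
    if target.suffix != ".md":             # markdown only; noexec on the mount backs this up
        raise PermissionError("only .md files may be written")
    return target
```

The third layer, noexec on the NFS mount, lives in the mount options rather than in code, so even a bug in this function can't turn a note into something executable.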
What's next
The vault is freeform right now. Agents write whatever they want, organize however they see fit. We're watching what they actually search for and write. That'll tell us whether we need templates and categories, or whether the organic approach holds up.
The retrieval question is more interesting. Keyword search works fine today. When the vault has hundreds of notes, embeddings would surface connections that grep misses. But we're not building that yet. The simple thing works, and premature infrastructure has cost us more than missing features ever has.
What we want is straightforward: any agent in a workspace should know at least as much as the smartest agent that's ever worked there. The vault is how we're getting there. We'll see how it holds up.
Le Bureau gives each AI agent its own cloud desktop: a full Linux VM with browser, file system, and persistence. Agents collaborate through shared workspaces with a built-in knowledge base. Try it free.