Mod
The data layer for agentic computing
The next decade of computing won’t be driven by humans typing. It will be driven by agents running in parallel, across environments, continuously. Hundreds of them, coordinating on the same workspace, exploring branches of a problem simultaneously, merging the best results forward.
That future is already arriving. And it's running on Git and filesystems: primitives designed for one human, working sequentially, remembering to commit.
Mod is the data layer built for agentic computing. Executable like a filesystem, queryable like a database, versioned like Git. Designed from the ground up for agents working in parallel, across environments, at any scale.

Problem
The data primitives agents rely on today weren’t built for “millions of agents”.
- Most knowledge workers need version control. But they won’t learn Git to get it.
- Parallel exploration is expensive. Worktrees are inefficient and compound merge complexity.
- Ad-hoc commits lose work. Change tracking is an afterthought, not a guarantee.
- Review without the why is hard. Reasoning and intent aren't captured by Git.
- Merging is broken for agents. Git surfaces false conflicts and lacks the intent and decision context needed to resolve real ones.
- Siloed data limits agents. Hard to query, harder to evolve. Scattered across Notion, Docs, Linear, Jira, GitHub.
- Files and Git have no schema. Without structure, relationships, and indexes, agents can retrieve text, but can’t reason over the workspace.
- Generation is fast, integration is hard. Agent-generated workflows demo well, but persistence, auth, and collaboration reintroduce the same silos as the SaaS tools they replace.
- Evolution breaks compatibility. Agents can reshape workflows and schemas to fit users, but that evolution is hard to keep compatible with historical data and collaborators.
Solution
Mod is a versioned data layer built for storing and syncing data for the emerging agentic computing paradigm.
- Structured data. Files as structured, schema-aware content with tracked relationships and embedded metadata, not opaque blobs.
- Parallel branching. Git requires a separate worktree per agent. Mod lets multiple agents share a working directory, each writing to its own branch, tracked by session.
- Continuous changelog. Every change tracked automatically, not just when someone remembers to commit. Checkpoints made by agent or user.
- Full agent context. Plans, reasoning, and decisions captured on the branch. Review is fast and conflicts have the context to resolve them.
- Smart merge. CRDT-powered merges with full agent context. Fewer false conflicts, faster resolution of real ones.
- Schemas that evolve safely. Workflows and data structures adapt with the user. History stays intact. Collaborators stay unbroken.
- Queryable workspace. Ask questions across your work. Decisions, history, relationships. Not just search for text.
- Generative workflows. Agent-generated workflows built on a collaborative structured data layer.
- Headless infrastructure. Storage, auth, sync, and collaboration in one SDK.
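Mod's merge internals aren't spelled out here, but the claim behind "fewer false conflicts" can be illustrated with the simplest CRDT: a last-writer-wins map. This is a hedged sketch of the general technique, not Mod's actual algorithm. Edits to different keys merge automatically; only concurrent edits to the same key need a resolution rule, and here it resolves deterministically by timestamp:

```python
# Minimal sketch of CRDT-style merging (a last-writer-wins map).
# Illustrative only -- this is not Mod's merge implementation.

def merge(a: dict, b: dict) -> dict:
    """Merge two branches of {path: (timestamp, value)} entries.

    Paths edited on only one branch merge cleanly (no false conflict);
    paths edited on both resolve deterministically by latest timestamp.
    """
    merged = dict(a)
    for path, (ts, value) in b.items():
        if path not in merged or ts > merged[path][0]:
            merged[path] = (ts, value)
    return merged

# Two agents branch from the same workspace and edit different files.
base = {"plan.md": (1, "v1"), "notes.md": (1, "v1")}
agent_a = {**base, "plan.md": (2, "agent A's plan")}
agent_b = {**base, "notes.md": (3, "agent B's notes")}

result = merge(agent_a, agent_b)
# Both edits survive; neither is flagged as a conflict.
```

A Git textual merge would also handle this case, but the CRDT approach generalizes: every state has a defined merge, so agents never hit an unresolvable merge state.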
Product
We’re starting with three ways users and developers can adopt this agentic data layer.
1. CLI: Git replacement for local agent orchestration
Work on multiple tasks in parallel, each in its own branch, with agents running alongside you. No worktrees, no clones. Session-based branching on your working directory.
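As a mental model for session-based branching, consider each agent session as an overlay on one shared base directory: writes land on the session's own branch, and each session sees the base plus only its own edits. This is a hypothetical sketch of the concept, not Mod's implementation:

```python
# Sketch of session-based branching: many sessions write over one shared
# base with no worktrees or clones. Hypothetical model for illustration.

class Workspace:
    def __init__(self, base: dict):
        self.base = base                      # the single shared working directory
        self.branches: dict[str, dict] = {}   # per-session overlay of edits

    def write(self, session: str, path: str, content: str) -> None:
        # Each session's writes land on its own branch, keyed by session id.
        self.branches.setdefault(session, {})[path] = content

    def view(self, session: str) -> dict:
        # A session sees the shared base plus only its own edits.
        return {**self.base, **self.branches.get(session, {})}

ws = Workspace({"main.py": "print('hi')"})
ws.write("agent-1", "main.py", "print('refactored')")
ws.write("agent-2", "tests.py", "assert True")
# Branches stay isolated even though there is one working directory.
```

Contrast with Git, where the same isolation requires a full worktree or clone per agent.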
2. Extensible workspace
A flexible workspace with canvases for real-time collaboration on text, code, kanban, and code review. Extensible with custom workflows that sit on users’ existing data, schemas, and permissions.

3. Headless infrastructure
Storage, auth, sync, collaboration: all in a single SDK. Use it to power sandboxes running background agents, collaborative apps built by AI, or local developer tooling. Same primitives, any environment.
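To make "all in a single SDK" concrete, a unified client might bundle auth, storage, and branch operations behind one object. Every name here (`ModClient`, `put`, `get`, `branch`) is invented for illustration, backed by an in-memory store; this is not Mod's published API:

```python
# Hypothetical sketch of a unified SDK surface. All names are invented
# for illustration; an in-memory dict stands in for remote storage.

class ModClient:
    def __init__(self, token: str):
        self.token = token                   # auth handled at construction
        self._store: dict[str, bytes] = {}   # stands in for synced storage

    def put(self, path: str, data: bytes) -> None:
        self._store[path] = data             # storage + sync primitive

    def get(self, path: str) -> bytes:
        return self._store[path]

    def branch(self, session: str) -> "ModClient":
        # A branch op hands a session its own isolated view.
        child = ModClient(self.token)
        child._store = dict(self._store)
        return child

client = ModClient(token="dev-token")
client.put("docs/plan.md", b"# Plan")
agent = client.branch("agent-1")
agent.put("docs/plan.md", b"# Revised plan")
```

The point of the sketch: the same primitives work identically in a sandbox, a collaborative app, or local tooling, because the environment only ever talks to the client.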
Market
We’re building infrastructure for the agent era. The market is every agent that needs to read, write, and operate on files — which is all of them.
AI inference spend is projected to reach $500B+ annually by 2028. A significant portion of that compute will be agents operating on filesystems: writing code, editing documents, processing data, executing workflows. We capture value at the infrastructure layer where this work happens.
- Agent compute. Every agent session that reads, writes, or branches a workspace flows through our infrastructure. As agent usage scales, so does our footprint.
- Developer infrastructure. Companies building agents need filesystem primitives: branching, sync, storage. This is foundational infrastructure they build on top of.
- End-user inference. Users running agents in their workspaces consume inference through us. We become the interface between users and AI compute.
Business Model
Two customer segments, both usage-based.
End users. Tiered subscriptions with usage-based pricing. Users pay for agent compute in their workspaces: running tasks, generating content, background operations. Free tier to start, paid tiers scale with usage.
Developers. Usage-based infrastructure pricing. Companies building on Mod pay for what they use: storage per GB, sync operations, branch operations, inference through our API. Pricing scales with their products.
The model compounds: more users means more inference, more developers means more infrastructure, and the ecosystem grows on top of both.