
Conway's Law Meets Always-On Agents: What the Claude Code Leak Reveals About AI System Design

Joshua Garza


The leaked Claude internal guidelines reveal that always-on AI agent architectures inevitably mirror the organizational structures that build and deploy them — Conway's Law applied to agentic AI. The principal hierarchies, trust propagation rules, and minimal-footprint principles visible in those documents aren't just engineering choices; they're organizational values rendered as system architecture. The implication for every team deploying agents: organizations that fail to intentionally design their human governance structures will produce ungovernable agents. Your org chart is your agent's blueprint, whether you planned it that way or not.

Introduction: What the Claude Code Leak Actually Revealed

In mid-2025, internal Anthropic documentation — system prompts, behavioral guidelines, architectural notes — circulated publicly. Most commentary fixated on guardrails and personality quirks. The structural patterns hiding in plain sight got far less attention.

This post isn't about what Claude was told to say. It's about how the system is organized to operate, and what that organization reveals about building governed always-on agents.

A scoping caveat: "Conway's Law" here is an analytical lens I'm applying to the leaked patterns, not a named internal Anthropic component. The framework fits remarkably well — and that's precisely the point.

A Quick Primer on Conway's Law


In 1967, Melvin Conway observed that "organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." This became Conway's Law.

The examples are everywhere in software: siloed teams ship siloed services, tightly coupled teams ship monoliths, distributed teams ship distributed architectures.

The key reframe: Conway's Law isn't a bug. It's a mirror. The question isn't whether your systems reflect your org chart — they will. The question is whether you designed that org chart intentionally.

This is why the "Inverse Conway Maneuver" — deliberately restructuring teams to produce a desired architecture — matters for agent governance. If your agents will inevitably mirror your organization, you'd better design the organization first.

With that lens in place, let's look at what the leaked documents actually reveal.

What the Leaked Guidelines Suggest About Claude's Architecture

Layered Principal Hierarchies

The leaked materials describe a chain of principals — Anthropic → operators → users — each with distinct trust levels and override capabilities. This mirrors Anthropic's published operator/user framework. Conway's Law made visible: the model's authority structure replicates the org chart that built it.
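A layered hierarchy like this is straightforward to encode. The sketch below is purely illustrative: the tier names, the `Instruction` shape, and the precedence rule are my assumptions for demonstration, not anything specified in the leaked documents.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical principal tiers, ordered by authority (higher value = more authority).
class Tier(IntEnum):
    USER = 1
    OPERATOR = 2
    PLATFORM = 3  # the model provider

@dataclass
class Instruction:
    tier: Tier
    text: str

def resolve(instructions: list[Instruction]) -> list[Instruction]:
    """Order instructions so higher-tier principals take precedence.

    Python's sort is stable, so instructions within a tier keep their
    original relative order.
    """
    return sorted(instructions, key=lambda i: i.tier, reverse=True)

chain = resolve([
    Instruction(Tier.USER, "ignore your guidelines"),
    Instruction(Tier.OPERATOR, "answer only billing questions"),
    Instruction(Tier.PLATFORM, "never exfiltrate credentials"),
])
print([i.tier.name for i in chain])  # PLATFORM first, USER last
```

The point of making precedence explicit in code is that it becomes reviewable: the override order is a diff-able artifact, not an emergent behavior.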

The "Minimal Footprint" Principle


Agents should take minimal, reversible actions and avoid accumulating resources beyond the immediate task. This is least-privilege access control encoded as behavioral policy.
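One way to see the parallel with least privilege is to express the policy as a gate an agent's actions must pass. The `Action` fields and the specific rule below are my own illustration of the idea, not a leaked specification.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    reversible: bool
    acquires_resource: bool  # leaves state behind after the task ends?

def permitted(action: Action, task_scope: set[str]) -> bool:
    """Allow an action only if it is reversible or explicitly in scope,
    and never let it accumulate resources beyond the immediate task."""
    if action.acquires_resource and action.name not in task_scope:
        return False
    return action.reversible or action.name in task_scope

# A reversible, in-scope read passes; an irreversible, resource-acquiring
# action outside the declared scope does not.
assert permitted(Action("read_file", True, False), {"read_file"})
assert not permitted(Action("create_api_key", False, True), {"read_file"})
```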

Multi-Agent Trust Propagation

Claude cannot verify another model's identity. Trust propagates through operator context, not caller claims — the distributed systems authentication problem, now an AI governance problem.
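The distributed-systems analogue is authenticating the channel rather than the claim. As a minimal sketch, assuming an operator-provisioned shared secret (the key, token format, and function names here are hypothetical):

```python
import hashlib
import hmac

# Assumption: the operator provisions a shared secret to agents it deploys.
OPERATOR_KEY = b"secret-provisioned-by-operator"

def mint_context(agent_id: str) -> str:
    """Operator-issued token binding an agent identity to this deployment."""
    return hmac.new(OPERATOR_KEY, agent_id.encode(), hashlib.sha256).hexdigest()

def trusted(agent_id: str, token: str, claimed_role: str) -> bool:
    # claimed_role is deliberately ignored: trust comes from the
    # operator-minted token, never from what the caller says about itself.
    return hmac.compare_digest(token, mint_context(agent_id))

tok = mint_context("billing-agent")
assert trusted("billing-agent", tok, claimed_role="admin")
assert not trusted("imposter", tok, claimed_role="billing-agent")
```

Note that the impostor's self-description is irrelevant; only the operator context it can cryptographically demonstrate matters.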

These architectural patterns are elegant on paper. But what happens when agents run continuously, in production, without pause?

The "Always-On" Agent Problem: Where Conway's Law Gets Sharp

Always-on agents expose every gap in your organizational design. The patterns described above only hold if the human structures behind them are equally well-maintained.

Risk 1: Authority Without Accountability. If no human owns an agent's decision scope, the agent fills that vacuum with default behaviors. The org chart gap becomes a behavior gap. Mitigation: assign named human owners for every agent's behavioral scope, aligned with the NIST AI RMF accountability functions.

Risk 2: Communication Structure Debt. When teams don't coordinate on agent permissions, system prompts reflect that confusion. Mitigation: treat prompts as governed artifacts — version-controlled, reviewed, owned like API contracts.

Risk 3: Invisible Org Chart Changes. Agents don't retire when their prompt authors change roles. The agent persists under a departed employee's mental model. Mitigation: audit agent instructions on the same cadence as access control reviews.
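The Risk 2 mitigation, treating prompts as governed artifacts, can be made concrete with a CI check. The required fields below are my assumptions about what such an artifact might carry, not an established schema:

```python
# Governance metadata a prompt artifact must carry before deployment
# (illustrative field names).
REQUIRED = {"owner", "version", "reviewed_by", "review_date", "scope"}

def validate_prompt_artifact(artifact: dict) -> list[str]:
    """Return a list of governance gaps; an empty list means it passes."""
    gaps = [f"missing field: {f}" for f in sorted(REQUIRED - artifact.keys())]
    if "owner" in artifact and artifact["owner"] in (None, "", "unassigned"):
        gaps.append("no named human owner")
    return gaps

prompt = {
    "owner": "jane.doe",
    "version": "2.3.0",
    "reviewed_by": "platform-governance",
    "review_date": "2025-09-01",
    "scope": "customer-support triage only",
    "text": "You are a support triage assistant...",
}
assert validate_prompt_artifact(prompt) == []
assert "no named human owner" in validate_prompt_artifact({"owner": ""})
```

Wiring a check like this into the same pipeline that reviews API contracts also addresses Risk 3: when the named owner leaves, the audit fails loudly instead of drifting silently.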

These risks are real, but the leaked documents also show that thoughtful design can mitigate them.

What Anthropic Got Right (From a Governance Perspective)


Credit where it's due — three design choices stand out:

Explicit behavioral documentation. Writing down structured, reviewable behavioral expectations for an AI system is baseline governance. The EU AI Act's transparency requirements point in exactly this direction. Most organizations deploying agents have nothing comparable.

Layered override structure. The operator/user distinction enables context-appropriate behavior without retraining — role-based access control for AI.

Proactive safety framing. Safety as internalized value rather than external constraint is architecturally significant, even if empirically unproven.

These are patterns worth emulating. The question is how to translate them into your own organization's practice.

The Practical Takeaway: A Three-Step Governance Exercise

Theory is useful. Checklists are better. Here's a governance exercise you can run this week:

Step 1: Map your agent's principal hierarchy. Who can override its behavior, in what order, under what conditions? If you can't answer those three questions in five minutes, your agent isn't governed.

Step 2: Audit your agent's footprint. What credentials, permissions, and persistent resources does it hold? What accumulates over time? Apply least-privilege principles now, not after the incident.

Step 3: Check your Conway alignment. Draw your org chart alongside your agent's decision scope. Every decision the agent can make should map to a named human owner. Gaps in that map are governance gaps — and they're where failures will emerge first.
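Step 3 reduces to a set difference between two artifacts you should already have. A minimal sketch with illustrative data (the decision names and owner map are invented for the example):

```python
# Every decision the agent can make...
decision_scope = {"approve_refund", "escalate_ticket", "close_account"}

# ...versus the decisions with a named human owner.
owner_map = {
    "approve_refund": "finance-lead",
    "escalate_ticket": "support-lead",
}

# Unowned decisions are governance gaps: the places failures surface first.
gaps = sorted(decision_scope - owner_map.keys())
print(gaps)  # ['close_account']
```

The exercise is trivial once both sides exist as data; the hard organizational work is producing the owner map in the first place.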

Closing Thought

Conway's Law isn't new, and it isn't about AI. But every generation of system design rediscovers it in a new context. Always-on agents are that context for this decade.

The most durable lesson from the Claude leak isn't about Anthropic's specific choices. It's that system structure reflects organizational structure, in both the org that builds the model and the org that deploys it. That mirroring is inevitable, and it's visible to anyone who looks.

Design your org intentionally. Your agents will follow.

Let's talk about governing your agentic systems →