A response to The New Stack's excellent taxonomy. They got six right. Here's the pattern nobody's building yet, and a practical blueprint for how to build it.


If you've been following this newsletter, you know I've been making the case that governance is the missing layer in agentic AI. In previous posts, I explored how the biggest platforms (Amazon, LinkedIn, Google, Microsoft, and Salesforce) are all converging on the same conclusion from different directions, and I argued that without governance, AI agents are guessing with confidence.

The New Stack recently published a sharp piece that brings this discussion forward in an important way: "6 Agentic Knowledge Base Patterns Emerging in the Wild." It documents how real organizations are building knowledge infrastructure for AI agents, with concrete examples and on-the-ground practitioners describing what they've built. It's the kind of taxonomy the industry needs.

But in naming these six patterns, the article made the gap I've been describing even more visible. There's a seventh pattern that cuts across all six, and this time, instead of just arguing that it matters, I want to show you exactly what it looks like to build it.

The Six Patterns (Quick Summary)

The New Stack identifies six approaches organizations are taking:

  1. The Playbook for Coding Assistants: LinkedIn's CAPT system, encoding coding standards and debugging workflows as executable playbooks agents can act on — not just documents to search through.

  2. The Integration Knowledge Center: Adeptia's institutional integration patterns, teaching agents how systems connect in practice, so they produce more valid, less generic integrations.

  3. The Multi-Agent Home Base: R Systems' centralized knowledge base that gives every agent the same rules, voice, and playbook, so they don't "improvise policy on the fly."

  4. The Shared Well of Business Context: Epicor embedding ERP, financial, and implementation data into a knowledge base so agents can answer "Show me X metric for last quarter" without tickets, dashboards, or waiting.

  5. The Source of Truth for Data Intelligence: Amazon's BI engineer Anusha Kovi describing how semantic layers prevent the chaos of "three teams with three different SQL definitions of 'revenue.'"

  6. The MCP-Powered Capability Layer: Vendia's MCP gateway, letting agents access governed capabilities through a structured protocol, rather than prompt-stuffing with RAG.

All six are real, necessary, and shipping. Every organization building agents at scale is building at least three of them.

What All Six Share and What's Missing

Here's what I notice: across all six patterns, every organization is solving the same underlying problem in their own domain. How do we ensure agents operate from current, approved, authoritative organizational knowledge instead of stale, draft, or conflicting information?

R Systems' Abhyankar says it directly: "When a rule or template in the knowledge base is tweaked, the improvement shows up everywhere at once." Kovi at Amazon frames it as enforcement: "The knowledge base isn't there to help the agent be creative. It's there to keep it inside the lines."

These are governance statements. But they're embedded inside individual pattern implementations. None of the six patterns names governance as its own architectural concern: the cross-cutting layer that ensures all the knowledge, across all six patterns, stays current, owned, versioned, and enforced.

That's the missing seventh pattern: governed organizational context.

Why Calling It Out Matters

You might argue it's implicit: that any well-built knowledge base naturally includes versioning and ownership. But if you've actually deployed agents at scale, you know the difference between governance-by-accident and governance-by-design.

Here's what governance-by-accident looks like:

  • Marketing updates the brand voice in a Google Doc. The support agent still uses the old version because no one re-synced the prompt.
  • Pricing changes. Sales ops updates one system. Three agents across two platforms keep quoting the old tier for a week.
  • A compliance rule changes. Legal sends an email. Two of four agents get the update. The other two generate responses that violate the new regulation.
  • Something goes wrong in a customer interaction. Nobody can trace which version of which rule the agent was supposed to follow.

Every organization building the six patterns eventually hits these failure modes. And every time, the solution looks the same.

The Seventh Pattern: A Practical Blueprint

Instead of describing this abstractly, here's what a governed organizational context layer looks like in practice — broken into five capabilities you can implement incrementally.

Capability 1: Structured Knowledge Domains With Ownership

The problem: Knowledge is scattered across prompts, docs, databases, and people's heads. No one knows who's responsible for what.

What to build: Define explicit knowledge domains (pricing, brand voice, compliance rules, product specifications, escalation procedures), each with a designated owner (person or team) and a defined scope.

In practice: This isn't a database schema. It's an organizational decision. Sales ops owns pricing. Marketing owns brand voice and approved messaging. Legal owns compliance boundaries. Engineering owns technical standards. Each owner maintains their domain's content and approves changes before they reach any agent.

Start here: Pick the one domain that has caused the most agent errors in the last 90 days. Define its scope, assign an owner, and centralize its content. You can expand from there.
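The ownership decision can literally be written down as a small registry. This is a minimal sketch; the domain names, owners, and scopes below are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeDomain:
    name: str    # e.g. "pricing"
    owner: str   # the person or team who approves changes
    scope: str   # one line: what's in (and implicitly out of) the domain

# Hypothetical registry: the organizational decision, made explicit.
DOMAIN_REGISTRY = {
    "pricing": KnowledgeDomain("pricing", "sales-ops",
                               "Tiers, discounts, contract terms"),
    "brand_voice": KnowledgeDomain("brand_voice", "marketing",
                                   "Tone, approved messaging"),
    "compliance": KnowledgeDomain("compliance", "legal",
                                  "Regulatory boundaries, required disclaimers"),
}

def owner_of(domain: str) -> str:
    """Who must approve content in this domain before any agent sees it."""
    return DOMAIN_REGISTRY[domain].owner
```

The registry is deliberately boring: the value is not the data structure but the fact that every domain has exactly one accountable owner on record.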

Capability 2: Content Status and Versioning

The problem: There's no difference between a draft proposal and a board-approved policy. Agents retrieve whatever vector similarity scores highest.

What to build: Every piece of organizational knowledge gets a status: draft, in review, approved, deprecated, archived. Agents only receive content marked approved. When content is updated, the previous version is preserved with a timestamp, the author, and a changelog entry.

In practice: When your pricing team updates the enterprise tier, they change the status of the old pricing to deprecated and publish the new version as approved. Every agent, across every platform, immediately serves the new pricing. The old version exists in history but never reaches an agent.

What this prevents: The "stale pricing for a week" problem. The "which version was the agent using?" audit failure. The "I thought we changed that" conversation in the post-mortem.
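The status-and-versioning mechanics can be sketched with a simple in-memory store; the class and field names here are hypothetical, and a real system would back this with a database:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContentVersion:
    domain: str
    body: str
    status: str     # "draft" | "in_review" | "approved" | "deprecated" | "archived"
    author: str
    timestamp: str
    changelog: str

class VersionedStore:
    """Sketch: one approved version per domain, full history preserved."""

    def __init__(self) -> None:
        self.history: list[ContentVersion] = []

    def publish(self, domain: str, body: str, author: str, changelog: str) -> None:
        # Deprecate the currently approved version, then approve the new one.
        for v in self.history:
            if v.domain == domain and v.status == "approved":
                v.status = "deprecated"
        self.history.append(ContentVersion(
            domain, body, "approved", author,
            datetime.now(timezone.utc).isoformat(), changelog))

    def current(self, domain: str) -> ContentVersion:
        # Agents only ever see the approved version; history stays queryable.
        return next(v for v in reversed(self.history)
                    if v.domain == domain and v.status == "approved")
```

Note that `publish` and `current` together enforce the invariant from the text: the old version still exists, but it can never reach an agent.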

Capability 3: Deterministic Access Control

The problem: Agents retrieve knowledge probabilistically. A well-tuned query finds the right content most of the time. An edge-case query surfaces deprecated docs, draft proposals, or content from the wrong domain.

What to build: A governance layer that sits between the knowledge store and the agent. The agent never queries the raw knowledge base directly. Instead, the governance layer serves only content that is (a) approved, (b) current, (c) within the agent's authorized scope. The wrong version doesn't reach the agent by design, not by retrieval quality.

In practice: This is the architectural pattern Amazon's AgentCore uses for policy enforcement: deterministic, not probabilistic. The enforcement happens outside the LLM reasoning loop. The agent doesn't evaluate whether it has the right rule. The governance layer ensures it does.

The key insight: This is where governed context diverges from the six knowledge base patterns. The six patterns are about what knowledge agents need. Deterministic access control is about ensuring agents can only receive the right version of that knowledge. It's an enforcement concern, not a content concern.
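The gate itself can be a few lines of deterministic filtering. This sketch assumes a flat store of `(domain, status, body)` records; the sample content and function name are illustrative:

```python
# The raw store may contain drafts and deprecated content.
# The gate guarantees none of it reaches an agent.
STORE = [
    ("pricing", "deprecated", "Enterprise tier: $800/mo"),
    ("pricing", "approved",   "Enterprise tier: $950/mo"),
    ("pricing", "draft",      "Proposed 2027 pricing"),
    ("roadmap", "approved",   "Internal roadmap, restricted"),
]

def serve_to_agent(domain: str, agent_scopes: set[str]) -> str:
    """Deterministic gate between the knowledge store and the agent."""
    if domain not in agent_scopes:                         # (c) authorized scope
        raise PermissionError(f"agent not authorized for {domain!r}")
    approved = [body for d, status, body in STORE
                if d == domain and status == "approved"]   # (a) approved only
    if not approved:
        raise LookupError(f"no approved content for {domain!r}")
    return approved[-1]                                    # (b) most recent approved
```

The point of the sketch is what is absent: there is no similarity score, no ranking, no LLM judgment. The wrong version is excluded by a filter, not outcompeted by retrieval quality.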

Capability 4: Validation Gates

The problem: Even when agents receive the right knowledge, they don't always follow it. A brand voice guideline in the context window doesn't guarantee the output matches the guideline.

What to build: Validation rules that check agent outputs against the active organizational rules before delivery. Does this email match our approved tone? Is this discount within the current pricing policy? Does this technical recommendation follow our standards?

In practice: The validation layer uses the same governance content that informed the agent, but applies it as a post-generation check. This creates a closed loop: the knowledge base both informs the agent and verifies the agent.

A practical note: Not every agent action needs validation. Start with the high-stakes outputs: customer-facing communications, pricing and contract terms, compliance-sensitive content. Expand as the system matures.
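A post-generation check can be sketched as a function that returns violations. The two rules below (a discount cap and a banned-phrase list) are illustrative assumptions; in practice the rules would be loaded from the same governed store that informed the agent:

```python
import re

# Hypothetical active rules; real ones come from the governance layer.
ACTIVE_RULES = {
    "pricing":     {"max_discount_pct": 15},
    "brand_voice": {"banned_phrases": ["best in class", "synergy"]},
}

def validate_output(text: str, proposed_discount_pct: float) -> list[str]:
    """Post-generation gate: return violations. Empty list means deliver."""
    violations = []
    if proposed_discount_pct > ACTIVE_RULES["pricing"]["max_discount_pct"]:
        violations.append("discount exceeds current pricing policy")
    for phrase in ACTIVE_RULES["brand_voice"]["banned_phrases"]:
        if re.search(re.escape(phrase), text, re.IGNORECASE):
            violations.append(f"banned phrase: {phrase!r}")
    return violations
```

Simple rule checks like these won't catch everything (tone, for example, may need an LLM-as-judge step), but they close the loop for the rules that can be checked deterministically.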

Capability 5: Change Propagation and Audit

The problem: Knowledge changes. Rules evolve. But there's no mechanism to ensure every dependent agent gets the update, and no trail to trace when something goes wrong.

What to build: Two things. First, a notification system: when a knowledge domain owner updates approved content, every agent (and every team) that depends on that domain is notified. The update propagates immediately, not when someone remembers to re-sync. Second, an audit trail: every agent interaction records which version of which knowledge it used. When a customer escalation happens, you can trace back to the exact rule version, when it was last updated, and who approved it.

In practice: This is how you answer the question that every compliance and risk team asks: "When this agent told the customer X, what rule was it following, and was that rule current?" Without this capability, the answer is a shrug. With it, the answer takes thirty seconds.
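The audit side needs very little machinery to start. A minimal sketch, assuming an append-only in-memory log (field names and the `"pricing@v12"` version ID format are illustrative):

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_interaction(agent_id: str, domain: str,
                       version_id: str, output: str) -> None:
    """Append-only: each interaction records exactly which rule version it used."""
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "domain": domain,
        "version": version_id,   # e.g. "pricing@v12"
        "output": output,
    })

def trace(agent_id: str, domain: str) -> list[dict]:
    """The thirty-second answer: which version was this agent following, and when?"""
    return [e for e in AUDIT_LOG
            if e["agent"] == agent_id and e["domain"] == domain]
```

A production version would write to durable, queryable storage, but even this shape turns "a shrug" into a lookup.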

How the Seven Patterns Fit Together

Think of it as infrastructure layers:

Layer 1 — The Six Patterns (Content): What knowledge do agents need?

  • Coding playbooks, integration patterns, multi-agent rules, business context, data intelligence definitions, MCP capabilities

Layer 2 — The Seventh Pattern (Governance): How do we ensure agents always receive the current, approved, authoritative version of that knowledge?

  • Ownership, versioning, deterministic access, validation gates, change propagation, audit trails

The six patterns without the seventh give you knowledge bases that work... until something changes, someone publishes a draft, or two agents get different versions of the same rule.

The seventh pattern without the six gives you governance infrastructure with nothing to govern.

You need both. The six patterns are the content. The seventh is what makes the content trustworthy.

A Maturity Model: Where Are You?

Level 0 (Ad Hoc): Knowledge lives in system prompts, uploaded docs, and individual heads. Each agent has its own version of reality. No ownership, no versioning, no audit trail.

Level 1 (Centralized): You've built some of the six patterns. Knowledge exists in structured, queryable form. But there's no governance layer, content may be stale, ownership is unclear, and there's no validation.

Level 2 (Owned): Each knowledge domain has a designated owner. Content has status (draft, approved, deprecated). Agents can query structured knowledge, and there's some version control.

Level 3 (Governed): Agents receive only approved content through a deterministic governance layer. Validation gates check outputs against active rules. Changes propagate automatically. Audit trails exist.

Level 4 (Continuous): The governance layer includes feedback loops. Quality metrics identify which knowledge domains are causing the most agent errors. Schema and rubric evaluation refine extraction quality over time. The system improves with use.

Most organizations are at Level 0 or Level 1. The companies I described in previous posts (LinkedIn, Amazon, Salesforce) are at Level 2 or 3 within their own platforms. Level 4 is where the industry needs to go.

What to Do This Week

If you're building agentic AI and you recognize yourself in the first two levels:

  1. Pick your highest-pain domain. Which type of organizational knowledge has caused the most agent errors? Pricing? Brand voice? Compliance rules? That's where you start.

  2. Assign an owner. One person or team responsible for maintaining and approving the content in that domain.

  3. Add status to your content. Even if it's a simple approved / deprecated flag in metadata, this alone prevents most stale-content problems.

  4. Put a gate between your knowledge store and your agents. The agent should never query raw docs. Even a simple middleware that filters by status and domain is transformative.

  5. Log which content version each agent used. This is the audit trail. It doesn't need to be sophisticated yet. It needs to exist.
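Steps 3 through 5 can start as one small piece of middleware. A sketch under stated assumptions (document IDs, field names, and the in-memory list are all illustrative): even this much gets you a status flag, a gate, and an audit trail.

```python
from datetime import datetime, timezone

# Hypothetical documents with a simple status flag (step 3).
DOCS = [
    {"id": "pricing-v2", "domain": "pricing", "status": "approved",
     "body": "Enterprise tier: $950/mo"},
    {"id": "pricing-v1", "domain": "pricing", "status": "deprecated",
     "body": "Enterprise tier: $800/mo"},
]

USAGE_LOG: list[dict] = []  # the minimal audit trail (step 5)

def fetch_for_agent(agent_id: str, domain: str) -> dict:
    """The gate (step 4): agents call this, never the raw docs."""
    doc = next(d for d in DOCS
               if d["domain"] == domain and d["status"] == "approved")
    USAGE_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "agent": agent_id, "doc": doc["id"]})
    return doc
```

None of this is sophisticated, and that is the point: the first version of the governance layer is a filter and a log, shipped this week.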

You don't need to build all five capabilities at once. Start with the domain that hurts most, implement capabilities 1 and 2, and expand from there. The governance layer grows with your agent ecosystem, not ahead of it.

The six agentic knowledge base patterns are the foundation. The seventh pattern — governed organizational context — is what makes them safe, scalable, and trustworthy. The New Stack documented what organizations are building. The question is whether you're also building the layer that keeps it all correct.


References

  • The New Stack: "6 Agentic Knowledge Base Patterns Emerging in the Wild" (Feb 2026): https://thenewstack.io/agentic-knowledge-base-patterns/
  • LinkedIn Engineering: "Contextual Agent Playbooks and Tools" (Jan 2026): https://www.zenml.io/llmops-database/contextual-agent-playbooks-and-tools-enterprise-scale-ai-coding-agent-integration
  • AWS: "Amazon Bedrock AgentCore Adds Quality Evaluations and Policy Controls" (Dec 2025): https://aws.amazon.com/blogs/aws/amazon-bedrock-agentcore-adds-quality-evaluations-and-policy-controls-for-deploying-trusted-ai-agents/
  • Salesforce: "Welcome to the Agentic Enterprise: Agentforce 360" (Oct 2025): https://www.salesforce.com/news/press-releases/2025/10/13/agentic-enterprise-announcement/
  • Microsoft: "Security and Governance in Copilot Studio" (2025): https://learn.microsoft.com/en-us/microsoft-copilot-studio/security-and-governance
  • Deloitte: "The State of AI in the Enterprise, 2026 AI Report": https://www.deloitte.com/global/en/issues/generative-ai/state-of-ai-in-enterprise.html
  • Singapore IMDA: "New Model AI Governance Framework for Agentic AI" (Jan 2026): https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai