Introducing Reactive Composite Memory (RCM) - An Open Pattern for AI Agent Memory

Hi everyone,

We’ve been working on a challenge many of us face when building production AI agents: how to give them reliable, explainable, cost-effective memory.

The Problem

Today, most agents do one of three things:

  • Rebuild context on every request (expensive, slow, no audit trail)

  • Use static caches (stale data, no lineage, thundering herds)

  • Cobble together custom pipelines (fragile, hard to govern, each team reinvents)

When you ask “why did the agent recommend X?” or “what did it know at time T?”, there’s often no good answer. Costs scale with traffic instead of with actual change. Privacy and retention policies are applied inconsistently.

The Solution: Reactive Composite Memory (RCM)

  1. Reactive: Context updates automatically when sources change (rather than via polling or per-request assembly)
  2. Composable: Views layer over sources and other views - build complex contexts from simple, reusable pieces
  3. Versioned: Every state has a monotonic version - you can replay decisions and debug with confidence
  4. Governed: Policies for budgets, retention, classification, and access control apply uniformly through extension points
  5. Explainable: Every context artifact carries lineage - which sources, which transforms, which time window
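To make the reactive, composable, and versioned points concrete, here is a minimal sketch. Nothing below is from the spec: the class and attribute names (`Node`, `Source`, `View`, `lineage`) are hypothetical, chosen only to illustrate views that layer over sources and other views, recompute on change rather than on request, and carry a monotonic version plus lineage.

```python
# Hypothetical sketch of RCM's reactive/composable/versioned ideas.
# All names here are illustrative, not the spec's normative vocabulary.
from typing import Callable, List


class Node:
    """Common interface for sources and views: a named, versioned value."""

    def __init__(self, name: str):
        self.name = name
        self.version = 0          # monotonic: bumps on every state change
        self.value = None
        self._subs: List[Callable[[], None]] = []

    def subscribe(self, callback: Callable[[], None]) -> None:
        self._subs.append(callback)

    def _notify(self) -> None:
        for callback in self._subs:
            callback()


class Source(Node):
    """A raw input; updating it pushes change downstream (no polling)."""

    def update(self, value) -> None:
        self.version += 1
        self.value = value
        self._notify()


class View(Node):
    """Derived context: layers over sources and/or other views."""

    def __init__(self, name: str, inputs: List[Node], transform):
        super().__init__(name)
        self.inputs = inputs
        self.transform = transform
        self.lineage = [i.name for i in inputs]   # explainability: cite inputs
        for i in inputs:
            i.subscribe(self._recompute)          # reactive: recompute on change

    def _recompute(self) -> None:
        self.version += 1
        self.value = self.transform([i.value for i in self.inputs])
        self._notify()


docs = Source("docs")
prefs = Source("prefs")
summary = View("summary", [docs], lambda v: f"sum({v[0]})")
context = View("context", [summary, prefs], lambda v: tuple(v))

docs.update("d1")    # ripples through summary into context
prefs.update("p1")   # only context recomputes; summary is untouched
```

Note the composition: `context` layers over another view (`summary`) and a source (`prefs`), and each change produces a new version with lineage attached, which is what makes replay and “what did it know at time T?” answerable.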

Think of it as “Event Sourcing for reads” or “Redux for durable, multi-consumer memory” - but with time semantics, governance, and delivery contracts built in.

Key Benefits

:white_check_mark: Freshness you can measure: Track staleness via watermarks; set SLOs like “p95 lag < 2 seconds”
:white_check_mark: Explainability by design: Every frame cites its inputs and transform - audit trails come free
:white_check_mark: Predictable costs: Recompute on change (data velocity), not traffic (request volume)
:white_check_mark: Shared substrate: One set of contexts serves agents, dashboards, analytics - no parallel pipelines
:white_check_mark: Safe evolution: Test plan changes via replay; roll out declaratively without breaking consumers
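The “freshness you can measure” claim can be sketched as a monitoring check. This is an assumption about how one might operationalize it, not spec text: staleness is treated as the gap between wall-clock time and a view’s watermark, and the SLO “p95 lag < 2 seconds” becomes a percentile check over collected lag samples.

```python
# Hypothetical staleness check: lag sample = now_ms - watermark_ms,
# recorded each time a context frame lands at the consumer.

def p95(samples_ms):
    """Nearest-rank p95 over a window of lag samples (milliseconds)."""
    ordered = sorted(samples_ms)
    rank = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[rank]


lag_samples_ms = [80, 90, 95, 110, 120, 300, 400, 700, 1500, 1900]
SLO_MS = 2_000  # "p95 lag < 2 seconds"

print("SLO met:", p95(lag_samples_ms) < SLO_MS)  # prints: SLO met: True
```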

What It Standardizes

  • Portable envelope format (identity, version, time window, provenance, TTL)

  • Event-time semantics with watermarks and deterministic window closure

  • At-least-once, per-key ordered delivery to subscribers

  • Extension hooks for governance (admission control, budgets, security) without coupling to core logic
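The first and third items above can be sketched together. The field names below are assumptions that merely cover the listed concerns (identity, version, time window, provenance, TTL); the normative names live in the spec. The consumer-side helper shows why at-least-once plus per-key ordering is workable: a high-water mark per key makes redeliveries harmless.

```python
# Illustrative envelope shape; field names are assumptions, not spec text.
from dataclasses import dataclass


@dataclass(frozen=True)
class Envelope:
    context_id: str       # identity of the logical context
    version: int          # monotonic per context_id
    window_start_ms: int  # event-time window this frame covers
    window_end_ms: int
    watermark_ms: int     # no pending events older than this
    provenance: tuple     # sources and transforms that produced the frame
    ttl_seconds: int      # retention hint for downstream stores


# At-least-once delivery with per-key ordering: the consumer keeps a
# high-water mark per context_id and drops duplicate or stale versions.
seen: dict = {}


def apply(env: Envelope) -> bool:
    if seen.get(env.context_id, -1) >= env.version:
        return False              # duplicate redelivery: safe to ignore
    seen[env.context_id] = env.version
    return True


env = Envelope("customer/alice", 42, 0, 60_000, 60_000,
               ("crm.orders", "summarize@v3"), 86_400)
print(apply(env), apply(env))  # first delivery applies, redelivery is dropped
```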

Who It’s For

  • Architects evaluating memory patterns for production AI systems

  • Platform teams building shared context infrastructure

  • Framework developers (LangChain, Semantic Kernel, etc.) wanting a governed memory substrate

  • Enterprises needing explainable, compliant agent systems

Status & Participation

RCM v1.0 is a Community Specification under CC-BY-4.0 (spec) / Apache 2.0 (code). It’s designed to complement—not replace—existing patterns like Event Sourcing, CQRS, and edge protocols like MCP.

:open_book: Full Spec: https://github.com/critical-insight-ai/rcm-spec
:speech_balloon: Discussion: GitHub Discussions (linked in repo)

We’re looking for:

  • Feedback on the pattern and spec clarity

  • Use cases and edge cases we should document

  • Implementers interested in building conformant systems

  • Contributors for test vectors and crosswalks

Vision

Our goal is to make memory a first-class, portable abstraction for intelligent systems - the way REST standardized web APIs or Reactive Streams standardized backpressure. When AI agents become core infrastructure, they need memory infrastructure that’s:

  • Trustworthy (lineage, replay, audit)

  • Scalable (reactive updates, governed resources)

  • Interoperable (vendor-neutral semantics)

Curious to hear your thoughts! Has anyone solved similar problems differently? What patterns are you using for agent memory today?

YouTube Video (7 minutes): https://youtu.be/Sf32vqAgLr8?si=lCPKvqJaDghscDwQ