
How to Build a Memory System with LangGraph

LangGraph provides built-in state persistence through its checkpointer system, which saves the full graph state after each node execution. This gives agents automatic recovery from interruptions, but it does not provide semantic search over past observations or cross-session memory retrieval. To build a complete memory system, combine LangGraph's checkpointer for execution state with an external memory store (vector database or memory API) for semantic retrieval of accumulated knowledge.

What LangGraph Gives You Out of the Box

LangGraph models agent execution as a directed graph where each node performs an operation and edges define the flow between operations. The graph state is a TypedDict that accumulates information as execution moves through nodes. The checkpointer feature serializes this state to durable storage after each node, so if the process crashes, you can resume from the last completed node.

This is powerful for within-session persistence. If your agent runs a 10-node graph and crashes after node 7, the checkpointer has saved the state through node 7 and the agent resumes from node 8. You get this by adding a few lines of configuration; no custom checkpointing code is needed.
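As a mental model of checkpoint-and-resume, here is the behavior in plain Python, with no LangGraph dependency. The node names, state shape, and checkpoint format are invented for illustration; LangGraph's real checkpointer serializes the full graph state per thread ID.

```python
def run_graph(nodes, state, checkpoint=None):
    """Run nodes in order, checkpointing after each; skip nodes already done."""
    completed = list(checkpoint["completed"]) if checkpoint else []
    for name, fn in nodes:
        if name in completed:
            continue  # finished before the crash; resume skips it
        state = fn(state)
        completed.append(name)  # "checkpoint" after each node
    return state, {"completed": completed}

nodes = [
    ("plan", lambda s: s + ["planned"]),
    ("act", lambda s: s + ["acted"]),
    ("report", lambda s: s + ["reported"]),
]

# First run "crashes" after the second node:
state, ckpt = run_graph(nodes[:2], [])
# Resume: only the remaining node executes.
state, ckpt = run_graph(nodes, state, checkpoint=ckpt)
```

The key property is that already-completed nodes are never re-executed on resume, which is what makes long-running graphs safe to interrupt.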

What LangGraph does not give you is cross-session semantic memory. The checkpointer saves the state of a specific graph execution (identified by a thread ID). It does not search across all past executions to find relevant knowledge. If the agent learned something useful in thread A last week, that knowledge is not automatically available in thread B this week unless you build a mechanism to extract and store it in a searchable format.

Step-by-Step Implementation

Step 1: Set up LangGraph with a persistent checkpointer.
Replace the default in-memory checkpointer with SqliteSaver for single-machine deployments or PostgresSaver for distributed deployments. This is the foundation that gives you execution recovery.
```python
from typing import TypedDict, Annotated
import operator

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.sqlite import SqliteSaver

# For production, use PostgresSaver instead
checkpointer = SqliteSaver.from_conn_string("agent_memory.db")

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    task: str
    plan: list
    completed_steps: list
    discoveries: Annotated[list, operator.add]
    memory_context: str

graph = StateGraph(AgentState)
# ... add nodes and edges ...
app = graph.compile(checkpointer=checkpointer)
```
Step 2: Define memory in your graph state.
Add fields to your state TypedDict that capture what the agent learns during execution. The discoveries field uses an Annotated list with operator.add so that each node can append findings without overwriting previous ones. The memory_context field holds retrieved memories from past sessions that inform the current execution.
```python
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    task: str
    plan: list
    completed_steps: list
    # Each node can append discoveries
    discoveries: Annotated[list, operator.add]
    # Retrieved from external memory at start
    memory_context: str
    # Current step index
    current_step: int
```
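To see what the Annotated[list, operator.add] reducer does, here is the merge behavior in plain Python (the discovery strings are made up for the example). When a node returns a partial state update, LangGraph applies the field's reducer to the old and new values instead of overwriting.

```python
import operator

# State before the node runs
old = {"discoveries": ["API rate limit is 100 req/min"]}
# Partial update returned by a node
update = {"discoveries": ["retries must honor Retry-After"]}

# Fields annotated with a reducer are merged, not replaced:
merged = operator.add(old["discoveries"], update["discoveries"])
# A plain (un-annotated) field would simply be overwritten by the update.
```

This is why multiple nodes can each contribute findings without clobbering one another's output.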
Step 3: Add a memory retrieval node.
Create a node, run at the beginning of the graph (or before each decision point), that queries an external memory store for context relevant to the current task. This is where LangGraph's built-in checkpointer and external memory connect: the checkpointer handles within-execution state, and the memory retrieval node pulls in knowledge from all past executions.
```python
from anthropic import Anthropic

client = Anthropic()

def retrieve_memory(state: AgentState) -> dict:
    """Query external memory for context."""
    task = state["task"]
    # Search external memory for relevant past knowledge
    memories = memory_client.recall(task, top_k=10)
    context = "\n".join(
        f"[{m.metadata.get('source', 'unknown')}] "
        f"{m.content}"
        for m in memories
    )
    return {"memory_context": context}

graph.add_node("retrieve_memory", retrieve_memory)
```
Step 4: Add a memory storage node.
Create a node, run after task completion (or after significant findings), that extracts key observations from the current execution and writes them to external memory. Use the LLM to extract structured findings rather than dumping the raw state, since selective storage produces a cleaner, more retrievable memory store.
```python
def store_memories(state: AgentState) -> dict:
    """Extract and store key findings in external memory."""
    discoveries = state.get("discoveries", [])
    for discovery in discoveries:
        # Check for duplicates before storing
        existing = memory_client.recall(discovery, top_k=1)
        if existing and existing[0].similarity > 0.92:
            continue
        memory_client.store(
            content=discovery,
            metadata={
                "source": f"langgraph:{state['task'][:50]}",
                "agent": "task-executor",
                "confidence": 0.8,
            },
        )
    return {}

graph.add_node("store_memories", store_memories)
```
Step 5: Connect external memory for semantic search.
Wire the memory retrieval and storage nodes into the graph flow. The retrieval node should be one of the first nodes executed (before planning or reasoning), and the storage node should be one of the last (after the task completes or when significant findings emerge). The LLM nodes in between receive the memory context as part of their state and use it to inform their decisions.
```python
# Define the reasoning node that uses memory context
def reason(state: AgentState) -> dict:
    """Agent reasoning with memory context."""
    system_prompt = f"""You are an AI agent with access to past knowledge.

Relevant memories from past sessions:
{state.get('memory_context', 'No previous memories.')}

Use these memories to inform your approach. If memories contradict
current observations, trust current observations but note the
discrepancy."""
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=4096,
        system=system_prompt,
        messages=state["messages"],
    )
    # Extract any new discoveries
    new_discoveries = extract_findings(response.content[0].text)
    return {
        "messages": [
            {"role": "assistant", "content": response.content[0].text}
        ],
        "discoveries": new_discoveries,
    }

# Wire the graph
graph.add_node("reason", reason)
graph.set_entry_point("retrieve_memory")
graph.add_edge("retrieve_memory", "reason")
graph.add_edge("reason", "store_memories")
graph.add_edge("store_memories", END)
app = graph.compile(checkpointer=checkpointer)
```
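The reason node calls an extract_findings helper that is not defined in the step above. A minimal stand-in might look like the following; the line-prefix convention here is invented for this sketch, and a production version would instead prompt the LLM to emit structured JSON findings.

```python
def extract_findings(text: str) -> list:
    """Collect lines tagged 'FINDING:' as discoveries (illustrative only)."""
    findings = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("FINDING:"):
            findings.append(stripped.split(":", 1)[1].strip())
    return findings

sample = (
    "I checked the docs.\n"
    "FINDING: the API paginates at 100 items.\n"
    "Done."
)
found = extract_findings(sample)
```

Whatever extraction scheme you choose, keeping it deterministic and testable makes the downstream store_memories node much easier to debug.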

When LangGraph's Built-In Memory Is Enough

For agents that work on short, independent tasks with no cross-session context needs, LangGraph's checkpointer alone may be sufficient. The checkpointer gives you crash recovery within a single graph execution, and if each execution is independent, there is nothing to carry across sessions.

External memory becomes necessary when: agents benefit from knowledge accumulated in past sessions, multiple agents need to share discoveries, tasks span multiple sessions (the user starts something today and continues tomorrow), or the agent needs to learn patterns from its own performance history.

Adaptive Recall integrates with LangGraph as the external memory layer. The retrieval node calls the recall tool, the storage node calls the store tool, and Adaptive Recall handles embedding, cognitive scoring, knowledge graph construction, and memory lifecycle management. This gives LangGraph agents production-grade memory without building a custom retrieval pipeline.

Add production memory to your LangGraph agents. Adaptive Recall provides the semantic search, cognitive scoring, and lifecycle management that LangGraph's checkpointer does not cover.
