How to Handle Memory Conflicts in Multi-Agent Systems
Why Conflicts Are Inevitable
In multi-agent systems, conflicts arise from four sources, and each requires a different resolution approach.
Temporal differences. Agent A checked the system at 2pm and found low latency. Agent B checked at 3pm during peak traffic and found high latency. Both observations are correct for their respective timestamps. Resolution: preserve both with timestamps and let the consuming agent weight by recency.
Scope differences. Agent A measures end-to-end latency including network transit. Agent B measures server-side processing time only. They report different numbers for "the same" metric because they are actually measuring different things. Resolution: enrich metadata with measurement context so the difference is visible.
Genuine disagreement. Agent A concludes that the root cause is a memory leak. Agent B concludes that the root cause is a misconfigured load balancer. One or both may be wrong. Resolution: preserve both with their supporting evidence and escalate for investigation.
Stale information. Agent A stored a fact last month that was true at the time. Agent B discovers the fact has changed. The old memory is not wrong per se, it is outdated. Resolution: update the old memory's temporal scope and confidence, and store the new observation as the current truth.
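The four conflict types above map naturally to a small policy table. A minimal sketch of that mapping follows; the type names and policy strings are illustrative labels, not a fixed API:

```python
# Illustrative mapping of conflict type to resolution policy.
# The policy names are placeholders for the behaviors described above.
RESOLUTION_POLICY = {
    "temporal": "keep_both_weight_by_recency",
    "scope": "keep_both_enrich_metadata",
    "genuine": "keep_both_escalate",
    "stale": "supersede_and_lower_confidence",
}

def policy_for(conflict_type: str) -> str:
    """Return the resolution policy for a conflict type.

    Unknown types fall back to escalation, the safest default.
    """
    return RESOLUTION_POLICY.get(conflict_type, "keep_both_escalate")
```

Keeping this mapping explicit makes the resolution behavior auditable: when a conflict is mishandled, you can trace the decision back to a single table entry.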
Step-by-Step Implementation
Before storing a new memory, search the memory store for existing memories about the same topic. If semantically similar memories exist, use an LLM evaluator to check whether the new information contradicts them. This is more reliable than simple string comparison because contradictions often involve different phrasing of incompatible claims.
from anthropic import Anthropic
client = Anthropic()
CONTRADICTION_CHECK = """Compare these two statements and
determine if they contradict each other.
Existing memory: {existing}
New observation: {new}
Respond with one of:
- CONSISTENT: statements agree or cover different aspects
- CONTRADICTORY: statements make incompatible claims
- UPDATE: new statement supersedes old (newer information)
- UNCLEAR: cannot determine relationship
Then briefly explain why."""
def check_contradiction(existing_text, new_text):
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=100,
        messages=[{"role": "user",
                   "content": CONTRADICTION_CHECK
                   .replace("{existing}", existing_text)
                   .replace("{new}", new_text)}]
    )
    result = response.content[0].text
    for label in ["CONTRADICTORY", "UPDATE",
                  "CONSISTENT", "UNCLEAR"]:
        if label in result:
            return label, result
    return "UNCLEAR", result

When a contradiction is detected, store the new memory with a reference to the conflicting existing memory. Update the existing memory with a reference to the new conflict. Include the conflict type (temporal, scope, genuine, stale) and the evidence supporting each version. Never silently overwrite: the information that resolves the conflict may come from a third source later.
from datetime import datetime

def store_with_conflict_tracking(memory_client, content,
                                 metadata, agent_id):
    # Find semantically similar existing memories
    similar = memory_client.recall(content, top_k=5)
    for mem in similar:
        if mem.similarity < 0.80:
            continue
        label, explanation = check_contradiction(
            mem.content, content
        )
        if label == "CONTRADICTORY":
            # Store new memory with conflict reference
            metadata["conflict"] = {
                "conflicts_with": mem.id,
                "type": "contradiction",
                "detected_at": datetime.utcnow().isoformat(),
                "explanation": explanation
            }
            new_id = memory_client.store(content,
                                         metadata=metadata)
            # Update existing memory with cross-reference
            mem_meta = mem.metadata or {}
            conflicts = mem_meta.get("conflicts", [])
            conflicts.append({
                "memory_id": new_id,
                "agent": agent_id,
                "detected_at": datetime.utcnow().isoformat()
            })
            memory_client.update(mem.id,
                metadata={**mem_meta, "conflicts": conflicts})
            return new_id
        elif label == "UPDATE":
            # New info supersedes old, adjust confidence
            old_meta = mem.metadata or {}
            memory_client.update(mem.id, metadata={
                **old_meta,
                "superseded_by": content[:100],
                "confidence": max(0.3,
                    old_meta.get("confidence", 0.5) - 0.3)
            })
    # No contradiction (or old memory superseded): store normally
    return memory_client.store(content, metadata=metadata)

Choose a resolution strategy based on the stakes of the decision. For low-stakes decisions, confidence-weighted resolution (trust the higher-confidence memory) is fast and usually correct. For high-stakes decisions, surface the conflict to a human or a supervisor agent. For time-sensitive information, recency-based resolution (trust the newer observation) works well because the system state is more likely to match the most recent measurement.
def resolve_conflict(memory_a, memory_b, strategy="confidence"):
    """Resolve a conflict between two memories."""
    if strategy == "confidence":
        conf_a = memory_a.metadata.get("confidence", 0.5)
        conf_b = memory_b.metadata.get("confidence", 0.5)
        winner = memory_a if conf_a >= conf_b else memory_b
        return winner, f"confidence {max(conf_a, conf_b):.2f}"
    elif strategy == "recency":
        time_a = memory_a.metadata.get("stored_at", "")
        time_b = memory_b.metadata.get("stored_at", "")
        winner = memory_a if time_a >= time_b else memory_b
        return winner, f"more recent: {max(time_a, time_b)}"
    elif strategy == "evidence":
        # Use LLM to evaluate which has stronger evidence
        response = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=200,
            messages=[{"role": "user",
                       "content": f"Which statement has stronger "
                       f"evidence?\n\nA: {memory_a.content}\n"
                       f"B: {memory_b.content}\n\n"
                       f"Respond A or B with brief reasoning."}]
        )
        text = response.content[0].text
        winner = memory_a if text.startswith("A") else memory_b
        return winner, text

Run a background process that periodically reviews unresolved conflicts. Conflicts that have been open for more than a configurable threshold get escalated to a reconciliation workflow that gathers additional evidence, checks which memory has been accessed more (indicating usefulness), and either resolves the conflict by adjusting confidence scores or flags it for human review.
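The periodic sweep can be sketched as follows. This assumes a hypothetical `find_conflicts` method on the memory client that returns memories carrying the `conflict` metadata stored earlier; the `max_age` threshold and the escalation step are placeholders for your own reconciliation workflow:

```python
from datetime import datetime, timedelta

def reconcile_stale_conflicts(memory_client,
                              max_age=timedelta(days=7)):
    """Collect conflicts that have stayed unresolved too long.

    Assumes memory_client.find_conflicts() (hypothetical) returns
    memories whose metadata includes a "conflict" entry with the
    "detected_at" timestamp written at storage time.
    """
    now = datetime.utcnow()
    escalated = []
    for mem in memory_client.find_conflicts():
        detected = datetime.fromisoformat(
            mem.metadata["conflict"]["detected_at"])
        if now - detected > max_age:
            # Hand off to the reconciliation workflow: gather
            # evidence, compare access counts, adjust confidence,
            # or flag for human review.
            escalated.append(mem.id)
    return escalated
```

Running this sweep on a schedule keeps the conflict backlog bounded: anything the automatic strategies cannot settle within a week surfaces for deeper review instead of lingering indefinitely.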
When an agent retrieves memories and the results include conflicting entries, annotate the retrieval response to make the conflict visible. The consuming agent should know that two sources disagree so it can investigate further rather than treating one version as authoritative.
def recall_with_conflict_awareness(memory_client, query,
                                   top_k=10):
    """Retrieve memories and flag any active conflicts."""
    results = memory_client.recall(query, top_k=top_k)
    annotated = []
    for mem in results:
        meta = mem.metadata or {}
        conflict_info = meta.get("conflict", None)
        conflicts_list = meta.get("conflicts", [])
        if conflict_info or conflicts_list:
            mem.conflict_warning = (
                "This memory has conflicting information "
                "from another source. Review both before "
                "relying on either."
            )
        annotated.append(mem)
    return annotated

How Adaptive Recall Handles Conflicts
Adaptive Recall implements conflict handling through its contradiction detection and confidence scoring systems. When a new memory contradicts an existing one, both are preserved with their respective confidence scores. The consolidation process reviews contradictions during its regular cycle and adjusts confidence based on corroborating evidence from other memories, access patterns (memories that are consistently retrieved and useful gain confidence), and temporal relevance (newer observations about changing state get a recency boost). The result is that conflicts resolve organically as evidence accumulates, without requiring manual intervention for most cases.
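The confidence dynamics described above can be illustrated with a simple update rule. The weights here are made-up values for illustration, not Adaptive Recall's actual parameters:

```python
def adjusted_confidence(base, corroborations=0, useful_accesses=0,
                        is_recent=False):
    """Illustrative confidence update rule (weights are invented).

    Corroborating memories and useful retrievals raise confidence;
    recent observations about changing state get a small boost.
    The result is clamped to [0, 1].
    """
    conf = base
    conf += 0.05 * corroborations   # corroborating evidence
    conf += 0.02 * useful_accesses  # retrieved-and-useful signal
    if is_recent:
        conf += 0.1                 # recency boost
    return max(0.0, min(1.0, conf))
```

Under a rule like this, the losing side of a conflict fades gradually as evidence accumulates for the other, rather than being deleted outright.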
Let your memory system handle conflicts intelligently. Adaptive Recall detects contradictions, preserves evidence, and resolves conflicts through confidence scoring and consolidation.
Get Started Free