How to Score Memory Importance for Retention
Before You Start
Importance scoring requires three data sources that should already exist in a well-instrumented memory system: access history (timestamps of every retrieval event), confidence metadata (a score reflecting corroboration and contradiction history), and entity connections (the list of entities extracted from each memory that link it to others in the knowledge graph). If you have all three, you can build a robust importance scorer. If you only have access history, you can still build a useful scorer that uses access patterns alone, but adding confidence and entity data significantly improves the quality of retention decisions.
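The examples in this guide assume those three signals live on each memory record. As a minimal sketch of that shape, using plain dicts with illustrative field names (an assumption for these examples, not a required schema):

example_memory = {
    "id": "mem-001",
    "content": "The billing service uses PostgreSQL with PgBouncer for pooling.",
    "access_times": [1712000000.0, 1714600000.0],  # Unix timestamps of retrieval events
    "confidence": 6.5,                              # 0-10, adjusted by corroboration and contradiction
    "entities": ["billing-service", "postgresql", "pgbouncer"],
}

# Entity graph: entity name -> ids of the memories that mention that entity.
example_entity_graph = {
    "postgresql": ["mem-001", "mem-014", "mem-027"],
    "pgbouncer": ["mem-001", "mem-027"],
    "billing-service": ["mem-001", "mem-033"],
}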
Step-by-Step Implementation
Every time a memory appears in a retrieval result that is used by the application or presented to a user, append the current timestamp to the memory's access history. This gives you a complete record of when the memory was useful. From this history, compute two values: the total access count (how many times the memory has been retrieved) and the recency of the last access (how recently the memory was useful). Both are strong indicators of importance: a memory retrieved 50 times over the past month is demonstrably more valuable than one retrieved once three months ago.
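Recording is the simpler half of this step. A minimal sketch, assuming the dict-shaped records above (record_access is a hypothetical helper, not an Adaptive Recall API); the scoring function that follows derives both signals from this recorded history.

import time

def record_access(memory):
    # Append the current timestamp whenever this memory is actually used
    # in a retrieval result, not merely matched as a candidate.
    memory.setdefault("access_times", []).append(time.time())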
import time

def access_importance(access_times, max_count=50):
    if not access_times:
        return 0.0
    # Frequency signal: saturates once the memory has been retrieved max_count times.
    count = len(access_times)
    count_score = min(count / max_count, 1.0)
    # Recency signal: decays smoothly with days since the last retrieval, reaching 0.5 at 30 days.
    now = time.time()
    last_access = max(access_times)
    days_since = (now - last_access) / 86400
    recency_score = 1.0 / (1.0 + days_since / 30.0)
    # Frequency is weighted slightly above recency.
    return count_score * 0.6 + recency_score * 0.4

Confidence in Adaptive Recall ranges from 0 to 10, starting at a default of 5.0 for new memories. Memories gain confidence when corroborated by independent sources and lose confidence when contradicted. A confidence score of 8.0 or above indicates well-established knowledge that has been confirmed multiple times across different contexts. Normalize the confidence score to a 0-1 range for use in the composite importance calculation.
def confidence_importance(confidence, max_confidence=10.0):
    return confidence / max_confidence

Entity centrality captures how structurally important a memory is in the knowledge graph. A memory that connects to ten different entities, and through those entities to dozens of other memories, serves as a knowledge hub. Removing it would break graph connections and reduce spreading activation pathways for many related queries. Count the number of unique entities the memory is connected to and the number of other memories reachable through those entities. Memories with high centrality are valuable even if they have not been directly retrieved recently, because they support the retrieval of other memories through spreading activation.
def centrality_importance(memory, entity_graph, max_connections=20):
    entities = memory.get('entities', [])
    if not entities:
        return 0.0
    # Collect every other memory reachable through this memory's entities.
    connected_memories = set()
    for entity in entities:
        neighbors = entity_graph.get(entity, [])
        connected_memories.update(neighbors)
    # Don't count the memory itself as one of its own connections.
    connected_memories.discard(memory['id'])
    # Saturate once the memory links to max_connections other memories.
    return min(len(connected_memories) / max_connections, 1.0)

Weight the three signals and sum them into a single importance value between 0 and 1. Access patterns are the strongest signal because they directly reflect demonstrated utility. Confidence is the second strongest because it reflects verified accuracy. Entity centrality is the third because it captures structural value that access patterns might not directly reveal. A typical weighting gives access 50%, confidence 30%, and centrality 20%. Adjust these weights for your domain.
def importance_score(memory, entity_graph,
                     w_access=0.50, w_conf=0.30, w_central=0.20):
    access = access_importance(memory.get('access_times', []))
    confidence = confidence_importance(memory.get('confidence', 5.0))
    centrality = centrality_importance(memory, entity_graph)
    # Weighted sum of the three normalized signals, in [0, 1].
    return (w_access * access +
            w_conf * confidence +
            w_central * centrality)

Use the importance score to modify each memory's effective decay rate. A memory with an importance score of 1.0 (maximum importance) should decay at a fraction of the base rate, perhaps 20% of normal. A memory with an importance score of 0.0 should decay at the full base rate. This creates a spectrum where highly important memories persist for months or years while unimportant memories fade within weeks.
def adjusted_decay_rate(memory, entity_graph, base_decay=0.5):
    imp = importance_score(memory, entity_graph)
    # Importance of 1.0 reduces decay to 20% of base;
    # importance of 0.0 uses the full base decay rate.
    min_decay_fraction = 0.2
    scale = 1.0 - imp * (1.0 - min_decay_fraction)
    return base_decay * scale

Evidence-Gated Importance
A key principle in importance scoring is that importance must be earned, not assumed. A memory does not become important because someone marked it as important at creation time. It becomes important because the system has evidence of its value: it has been retrieved and used repeatedly, it has been corroborated by independent sources, and it connects to other valuable knowledge in the graph. This evidence-gated approach prevents the common failure mode where aggressively stored low-quality memories crowd out genuinely valuable knowledge by gaming manual importance tags.
Adaptive Recall implements this principle through its cognitive scoring model. New memories start at moderate confidence with no access history, giving them neutral importance. As they are used, corroborated, and connected, their importance rises naturally. Memories that are stored but never retrieved or corroborated fade through normal decay. The system learns what matters from usage patterns rather than relying on human annotation.
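To see the evidence gating in numbers, here is a short usage sketch of the functions above, comparing a freshly stored memory with one that has accumulated evidence; the records and entity graph are invented for illustration.

import time

now = time.time()

# A freshly stored memory: default confidence, no retrievals, no entity links yet.
fresh = {"id": "m-new", "access_times": [], "confidence": 5.0, "entities": []}

# A memory with accumulated evidence: retrieved ~40 times over the past weeks,
# corroborated (confidence 9.0), and linked to several other memories.
established = {
    "id": "m-established",
    "access_times": [now - d * 86400 for d in range(40)],
    "confidence": 9.0,
    "entities": ["postgresql", "pgbouncer"],
}
entity_graph = {
    "postgresql": ["m-established", "m-2", "m-3", "m-4"],
    "pgbouncer": ["m-established", "m-5", "m-6"],
}

# With the default weights, the fresh memory scores low (roughly 0.15) and keeps
# most of the base decay rate, while the established one scores well above 0.6
# and earns a substantially slowed decay rate.
print(importance_score(fresh, entity_graph), adjusted_decay_rate(fresh, entity_graph))
print(importance_score(established, entity_graph), adjusted_decay_rate(established, entity_graph))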
Handling Edge Cases
Some memories are important but rarely retrieved directly. A foundational architectural decision made once and never queried again is still critical context for understanding the codebase. These memories are protected by the entity centrality signal, because they tend to connect to many other memories through shared entities. Even without direct access, their structural importance in the knowledge graph preserves them.
Conversely, some memories are retrieved frequently but are not genuinely important, such as a common query that happens to match a generic memory with high text similarity. These can be identified by low confidence and low corroboration count: the memory gets retrieved because of text similarity, but it has never been confirmed as accurate or useful. Over time, consolidation will either corroborate the memory (increasing its importance) or identify it as generic noise and merge it with more specific alternatives.
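If you want to surface that second case explicitly for consolidation review, a simple heuristic might look like the sketch below; the thresholds and the corroboration_count field are illustrative assumptions rather than part of the scoring model above.

def flag_generic_noise(memory, min_accesses=10, max_confidence=5.0, max_corroborations=0):
    # Frequently retrieved, yet never corroborated and still at or below default
    # confidence: likely matched on text similarity rather than genuine usefulness.
    # corroboration_count is assumed to be tracked alongside confidence.
    frequently_retrieved = len(memory.get("access_times", [])) >= min_accesses
    unverified = (memory.get("confidence", 5.0) <= max_confidence and
                  memory.get("corroboration_count", 0) <= max_corroborations)
    return frequently_retrieved and unverified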
Importance scoring, evidence-gated learning, and automatic retention are all built into every Adaptive Recall account.