
How to Calculate Base-Level Activation

Base-level activation is the core equation in ACT-R that determines how accessible a memory is based on when and how often it has been accessed. The calculation takes an array of access timestamps, computes a recency-weighted sum using power-law decay, and returns a single activation value. Higher values mean the memory is more likely to be retrieved.

The Base-Level Learning Equation

The equation comes directly from John Anderson's ACT-R theory. For a memory chunk with n prior accesses at times t1 through tn, the base-level activation B at the current time t is:

B(t) = ln( sum from i=1 to n of (t - ti)^(-d) )

Here, (t - ti) is the time elapsed since the i-th access, d is the decay parameter (typically 0.5), and ln is the natural logarithm. The equation captures two intuitions: memories accessed more recently are more active (recency), and memories accessed more often are more active (frequency). The logarithm compresses the scale so that the difference between 10 accesses and 100 accesses is smaller than the difference between 1 access and 10 accesses, which matches how human memory actually behaves.
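To see both effects in numbers, here is a minimal sketch in Python (times in seconds, using the standard d = 0.5). A single recent access can outweigh many old ones, while repeated access raises activation at any age:

import math

d = 0.5  # standard ACT-R decay

# Recency: a single access one hour ago
print(math.log(3600 ** (-d)))         # ≈ -4.09

# Frequency: ten accesses, each a week old
print(math.log(10 * 604800 ** (-d)))  # ≈ -4.35

# One access a week old, for comparison
print(math.log(604800 ** (-d)))       # ≈ -6.66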

Step-by-Step Calculation

Step 1: Collect the access history.
Every time a memory is stored, retrieved, updated, or referenced by another operation, record the timestamp. Store these as Unix timestamps (seconds since epoch) in an ordered array. The creation time counts as the first access. If a memory was created on April 15 and retrieved on April 18 and May 1, the access history is [1744710600, 1744983720, 1746090900].
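One way to maintain this history is a small wrapper that appends a timestamp on every access. The MemoryChunk name and touch method below are illustrative, not part of any specific library:

import time

class MemoryChunk:
    def __init__(self):
        # creation counts as the first access
        self.access_times = [time.time()]

    def touch(self):
        # record every store, retrieval, update, or reference
        self.access_times.append(time.time())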
Step 2: Choose the decay parameter.
The decay parameter d controls how quickly old accesses lose their contribution. The standard ACT-R value is d = 0.5, which produces a square-root decay curve validated against human memory experiments. Lower values (0.3) make memory longer-lasting, useful for stable knowledge domains like legal references. Higher values (0.7) make memory more transient, useful for rapidly changing information like customer support tickets. Start with 0.5 and adjust based on retrieval testing.
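To see what this choice changes in practice, the sketch below compares the contribution of a single 30-day-old access under the three values discussed above:

age = 30 * 24 * 3600  # a 30-day-old access, in seconds
for d in (0.3, 0.5, 0.7):
    print(d, age ** (-d))
# 0.3 -> ~0.0119   (still noticeable)
# 0.5 -> ~0.000621
# 0.7 -> ~0.0000324 (nearly gone)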
Step 3: Compute the recency contribution of each access.
For each access timestamp ti in the history, calculate the elapsed time in seconds (now - ti), and raise it to the power of negative d. This produces a value that is large for recent accesses and small for old ones. An access from 1 hour ago (3600 seconds) with d = 0.5 contributes 3600^(-0.5) = 0.0167. An access from 30 days ago (2,592,000 seconds) contributes 2592000^(-0.5) = 0.000621.
import math
import time

def compute_recency_contributions(access_times, decay=0.5):
    now = time.time()
    contributions = []
    for t_access in access_times:
        age_seconds = now - t_access
        if age_seconds < 1.0:
            age_seconds = 1.0  # floor at 1 second to avoid a zero or negative age
        contribution = age_seconds ** (-decay)
        contributions.append(contribution)
    return contributions
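A quick check against the numbers above (exact output varies with the current time):

now = time.time()
history = [now - 2592000, now - 3600]  # 30 days ago, 1 hour ago
print(compute_recency_contributions(history))
# -> [~0.000621, ~0.0167]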
Step 4: Sum and take the logarithm.
Add all the recency contributions together and take the natural logarithm of the sum. The sum reflects total accumulated activation from all accesses, and the logarithm compresses it into a manageable range. A memory with three accesses (contributions 0.0167, 0.0084, 0.000621) has a sum of 0.0257 and a base-level activation of ln(0.0257) = -3.66. A negative value is normal and means the memory has moderate accessibility.
def base_level_activation(access_times, decay=0.5):
    now = time.time()
    if not access_times:
        return -float('inf')  # no accesses: completely inaccessible
    total = 0.0
    for t_access in access_times:
        age = max(now - t_access, 1.0)  # floor the age at 1 second
        total += age ** (-decay)
    return math.log(total)
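Running it on the three-access example from above (ages of roughly one hour, four hours, and 30 days) reproduces the hand calculation:

now = time.time()
history = [now - 2592000, now - 4 * 3600, now - 3600]
print(base_level_activation(history))  # ≈ -3.66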
Step 5: Handle edge cases.
Floor the time difference to 1 second to avoid division by zero when a memory was just created. If the access history is empty (which should not happen in practice since creation counts as an access), return negative infinity to indicate the memory is completely inaccessible. For very long access histories (thousands of entries), consider keeping only the most recent N entries to bound computation time, since very old accesses contribute negligibly to the sum.
def base_level_activation_optimized(access_times, decay=0.5, max_history=500):
    now = time.time()
    if not access_times:
        return -float('inf')
    # keep only the most recent accesses for performance
    recent = access_times[-max_history:]
    total = 0.0
    for t_access in recent:
        age = max(now - t_access, 1.0)
        total += age ** (-decay)
    return math.log(total)
Step 6: Normalize for score blending.
Raw base-level activation values typically range from about -10 (very inactive) to +2 (very active). To blend with vector similarity scores (which range from 0 to 1), pass the activation through a sigmoid function. This maps any real-valued activation to the 0-1 range while preserving the relative ordering of memories.
def normalize_activation(bla):
    # sigmoid maps any real-valued activation into the 0-1 range
    return 1.0 / (1.0 + math.exp(-bla))
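A combined retrieval score can then mix the two signals. The 70/30 weighting below is an illustrative assumption, not a recommendation:

def blended_score(similarity, bla, weight=0.7):
    # similarity: a 0-1 vector similarity score from the index
    # weight: assumed 70/30 split between similarity and activation; tune per system
    return weight * similarity + (1 - weight) * normalize_activation(bla)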

Interpreting Activation Values

Raw activation values have specific meaning in the ACT-R framework. A value of 0 corresponds to ACT-R's default retrieval threshold, the point at which a memory has about a 50% chance of being successfully recalled. Positive values indicate above-threshold memories that are readily accessible. Negative values indicate below-threshold memories that are increasingly difficult to retrieve.

In practice, a memory accessed once an hour ago has a base-level activation of about -4.1. A memory accessed ten times spread evenly over the past week sits around -3.9. A memory accessed fifty times over the past month with the most recent access an hour ago reaches about -2.6. These values become intuitive once you work with them: more negative means less accessible, and because the scale is logarithmic, each unit of activation represents an e-fold (roughly 2.7x) change in the underlying activation sum.
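These figures can be reproduced directly; the even spacing of accesses is an assumption made for the sake of a concrete calculation:

import math

def bla(ages, d=0.5):
    # ages: elapsed seconds since each access
    return math.log(sum(max(a, 1.0) ** (-d) for a in ages))

hour, day = 3600, 86400
print(bla([hour]))                                             # ≈ -4.09
print(bla([day * (i + 1) * 0.7 for i in range(10)]))           # ≈ -3.9
print(bla([hour] + [day * (i + 1) * 0.6 for i in range(49)]))  # ≈ -2.6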

Why Power-Law Decay Matters

The choice of power-law decay (t^(-d)) rather than exponential decay (e^(-lambda*t)) is not arbitrary. Decades of psychological research have shown that human forgetting follows a power law, not an exponential. The practical difference is that power-law decay has a long tail: old memories never fully reach zero activation, they just become progressively harder to retrieve. Exponential decay drops to effectively zero after a few time constants, which would cause old-but-important memories to vanish completely.
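The contrast is easy to see numerically. In this sketch the exponential rate is chosen, arbitrarily, so that both curves agree at one hour:

import math

lam = 4.094 / 3600  # rate chosen so e^(-lam*t) matches t^(-0.5) at t = 3600

for age in (3600, 86400, 2592000):  # 1 hour, 1 day, 30 days
    print(age, age ** (-0.5), math.exp(-lam * age))
# 1 hour:  power ≈ 0.0167,   exponential ≈ 0.0167
# 1 day:   power ≈ 0.0034,   exponential ≈ 2e-43
# 30 days: power ≈ 0.00062,  exponential underflows to 0.0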

For AI memory systems, the long tail means that a critical piece of information stored months ago still has a small but nonzero activation. If the user queries about that topic and vector similarity pulls it into the candidate set, its activation, while low, is not zero. Combined with spreading activation from entity connections and a high confidence score, it can still rank well. With exponential decay, it would be gone.

Tuning the Decay Parameter

The decay parameter d is the single most impactful parameter in the base-level equation. Start with 0.5 and adjust based on your domain:

d = 0.3: slow decay for stable knowledge domains such as legal references
d = 0.5: the ACT-R standard, a sensible default for general-purpose memory
d = 0.7: fast decay for rapidly changing information such as customer support tickets

Test by running a set of queries and checking whether the expected results rank in the top positions. If old, irrelevant results appear too high, increase d. If useful historical context is missing from results, decrease d.
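A minimal evaluation loop might look like the following, where search_with_decay, test_queries, and test_expected are hypothetical stand-ins for your retrieval pipeline and a labeled test set:

def hit_rate(queries, expected_ids, decay, k=5):
    # fraction of queries whose expected memory appears in the top k results
    hits = 0
    for query, expected_id in zip(queries, expected_ids):
        results = search_with_decay(query, decay=decay)  # hypothetical search call
        if expected_id in [r.id for r in results[:k]]:
            hits += 1
    return hits / len(queries)

for d in (0.3, 0.5, 0.7):
    print(d, hit_rate(test_queries, test_expected, decay=d))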

Adaptive Recall computes base-level activation automatically on every retrieval. No manual implementation needed.
