How to Learn User Preferences Across Sessions
Before You Start
Cross-session learning builds on top of a preference storage system. If you do not already have one, start with the preference engine guide which covers schema design and confidence scoring. You also need a way to identify returning users across sessions, whether through authentication, device fingerprinting, or API keys. Without user identity persistence, cross-session learning is impossible because the system cannot associate observations from different sessions with the same person.
The approach described here works with any LLM backend and any memory store. The examples use Adaptive Recall because its memory lifecycle features (confidence scoring, consolidation, decay) handle much of the cross-session complexity automatically, but the architectural pattern applies regardless of your storage choice.
Step-by-Step Implementation
Cross-session learning depends on knowing when sessions begin and end. Add hooks at both boundaries. The session start hook loads the user's preference profile and injects it into the AI context. The session end hook summarizes what was learned during the session and persists new observations.
In practice, "session end" is not always a clean event. Users close browser tabs, network connections drop, and mobile apps get killed by the OS. Design your session end processing to be fault-tolerant: if the end hook does not fire, the next session start should check whether the previous session's observations were processed and handle them retroactively. A simple approach is to store raw session data temporarily and process it either at session end or at the next session start, whichever comes first.
```javascript
class SessionManager {
  async onSessionStart(userId) {
    // Check for unprocessed previous session
    const pending = await this.getPendingSession(userId);
    if (pending) {
      await this.extractAndStorePreferences(userId, pending);
      await this.clearPendingSession(userId);
    }
    // Load current preference profile
    const preferences = await this.loadPreferences(userId);
    return this.formatPreferencesForContext(preferences);
  }

  async onSessionEnd(userId, conversationLog) {
    // Store raw session for processing
    await this.storePendingSession(userId, conversationLog);
    // Attempt immediate processing
    try {
      await this.extractAndStorePreferences(userId, conversationLog);
      await this.clearPendingSession(userId);
    } catch (e) {
      // Will be processed on next session start
    }
  }
}
```

At session end (or next session start), process the conversation to extract preference signals. This is the most important step because the quality of your extraction determines the quality of everything downstream. Use the AI itself to analyze the conversation, looking for explicit preference statements, implicit behavioral patterns, corrections to the AI's behavior, and negative signals (rejected suggestions, ignored recommendations).
The extraction prompt should be specific about what to look for. A vague instruction like "find user preferences" produces inconsistent results. Instead, enumerate the preference categories you care about and ask the AI to score each observation by confidence.
```javascript
const SESSION_EXTRACTION_PROMPT = `Analyze this conversation between an AI assistant and a user.

Extract preference observations in the following categories:
1. Communication: tone preferences, detail level, explanation style
2. Domain: technologies mentioned, expertise signals, project context
3. Behavioral: how they like to work (iterative vs one-shot, code vs explanation)
4. Negative: things they rejected, corrected, or explicitly asked to avoid

For each observation, provide:
- category: one of the four above
- key: the specific preference dimension
- value: what they prefer
- confidence: 0.0-1.0 (1.0 = explicitly stated, 0.5 = moderately implied, 0.2 = weak signal)
- evidence: the specific message or pattern that supports this observation

Return JSON array. Only include observations with confidence >= 0.3.
Omit obvious facts that are not preferences (e.g., "user asked about databases" is not a preference).

Conversation:
{conversation_log}`;
```

When a new session begins, retrieve the user's accumulated preferences and format them for injection into the AI's context. The loader should retrieve preferences sorted by confidence, filter out low-confidence preferences that are not yet reliable enough to act on, group preferences by category for clean prompt formatting, and include relevant negative preferences to prevent repeating past mistakes.
The amount of context you spend on preferences depends on your application and model. A reasonable starting point is 200-400 tokens for preference injection, which accommodates 10-20 preferences. If you are using a model with a large context window, you can afford more. If context is tight, prioritize high-confidence preferences and negative preferences (which prevent bad experiences) over moderate-confidence preferences (which provide nice-to-have customization).
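The loader in the next snippet ends by calling a formatting helper. A minimal sketch of what `formatAsSystemPromptBlock` might look like is below; the record fields (`category`, `key`, `value`, `confidence`) are assumptions matching the extraction prompt's output shape, and the 0.6 hedging threshold is illustrative:

```javascript
// Sketch of a preference formatter. Field names (category, key, value,
// confidence) are assumed to match the extraction prompt's JSON output.
function formatAsSystemPromptBlock(preferences) {
  // Group by category so related preferences render together
  const byCategory = {};
  for (const pref of preferences) {
    (byCategory[pref.category] ??= []).push(pref);
  }
  const lines = ['User preferences (learned across sessions):'];
  for (const [category, prefs] of Object.entries(byCategory)) {
    lines.push(`${category}:`);
    for (const p of prefs) {
      // Flag lower-confidence entries so the model treats them as tentative
      const hedge = p.confidence < 0.6 ? ' (tentative)' : '';
      lines.push(`- ${p.key}: ${p.value}${hedge}`);
    }
  }
  return lines.join('\n');
}
```

Marking moderate-confidence preferences as tentative lets the model apply them softly rather than asserting them as fact, which reduces the cost of a wrong guess.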
```javascript
async function loadSessionPreferences(userId) {
  // Retrieve all preferences above minimum confidence threshold
  const allPrefs = await memoryStore.query({
    userId: userId,
    type: 'preference',
    minConfidence: 0.4,
    orderBy: 'confidence',
    limit: 20
  });

  // Always include negative preferences even at lower confidence
  const negativePrefs = await memoryStore.query({
    userId: userId,
    type: 'preference',
    category: 'negative',
    minConfidence: 0.25,
    limit: 10
  });

  // Deduplicate and format
  const combined = deduplicateById([...allPrefs, ...negativePrefs]);
  return formatAsSystemPromptBlock(combined);
}
```

Do not wait until session end to capture strong preference signals. When a user explicitly states a preference ("from now on, always use TypeScript") or explicitly corrects the AI ("I said no frameworks, just vanilla JS"), capture that observation immediately. This ensures the preference takes effect within the current session and is not lost if the session end hook fails.
Mid-session extraction should have a higher confidence threshold than session-end extraction. Only capture signals that are clearly and explicitly stated. Implicit patterns require the full session context to identify reliably and are better left to the session summary extractor.
```javascript
async function checkForImmediatePreference(userId, userMessage) {
  // Quick check: does this message contain a preference signal?
  const indicators = [
    'always', 'never', 'prefer', 'from now on', 'stop',
    'don\'t', 'do not', 'instead of', 'rather than'
  ];
  const hasIndicator = indicators.some(i =>
    userMessage.toLowerCase().includes(i)
  );
  if (!hasIndicator) return null;

  // Use a fast model to extract the preference
  const extraction = await extractPreference(userMessage);
  if (extraction && extraction.confidence >= 0.7) {
    await memoryStore.store({
      userId: userId,
      type: 'preference',
      ...extraction,
      source: 'mid-session-explicit'
    });
    return extraction;
  }
  return null;
}
```

Individual session observations are raw data. The real value comes from analyzing patterns across multiple sessions. Run a periodic analysis (daily, or after every N sessions) that looks for consistent patterns, emerging trends, and confirmed preferences.
Pattern detection should promote observations that repeat across sessions to high-confidence preferences, let observations that appeared once and never recurred decay, consolidate observations that cluster around a common theme into a composite preference, and flag contradictions between sessions that indicate preference drift. If you are using Adaptive Recall, the consolidation pipeline handles much of this automatically: repeated observations increase confidence through the standard activation model, and memories that are not reinforced decay naturally.
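If you are rolling this yourself, the promotion-and-decay pass can be sketched as follows. The observation shape (`category`, `key`, `confidence`, `sessionId`) and the thresholds (three sessions to confirm, 0.9 decay factor) are illustrative assumptions, not a prescribed policy:

```javascript
// Sketch of a cross-session consolidation pass. Observations are grouped by
// preference key; signals seen in several sessions gain confidence, while
// signals seen only once decay on each pass.
function consolidateObservations(observations) {
  const byKey = new Map();
  for (const obs of observations) {
    const key = `${obs.category}:${obs.key}`;
    if (!byKey.has(key)) byKey.set(key, []);
    byKey.get(key).push(obs);
  }

  const consolidated = [];
  for (const [key, group] of byKey) {
    const base = Math.max(...group.map(o => o.confidence));
    const sessions = new Set(group.map(o => o.sessionId)).size;
    if (sessions >= 3) {
      // Repeated across sessions: promote toward high confidence
      consolidated.push({
        key,
        confidence: Math.min(1, base + 0.1 * (sessions - 1)),
        status: 'confirmed'
      });
    } else if (sessions === 1) {
      // Seen once and never again: let it decay
      consolidated.push({ key, confidence: base * 0.9, status: 'decaying' });
    } else {
      consolidated.push({ key, confidence: base, status: 'observed' });
    }
  }
  return consolidated;
}
```

Run this from the same scheduler as your periodic analysis; preferences whose confidence decays below your loader's minimum threshold simply stop being injected.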
Users change. A developer who preferred React six months ago might be all-in on Svelte today. A user who wanted detailed explanations as a beginner now prefers concise answers as an intermediate. Your system needs to detect and accommodate these shifts without losing the ability to reference historical context when it is relevant.
The simplest approach to preference drift is temporal weighting: recent observations count more than old ones when calculating preference confidence. If a user's last ten interactions show TypeScript preference but their first twenty interactions showed Python preference, the TypeScript preference should dominate because it is more current. Adaptive Recall handles this through base-level activation decay, where memories (including preference memories) that are not accessed or reinforced gradually lose activation strength.
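If you are not using Adaptive Recall's built-in decay, temporal weighting can be sketched as an exponential recency discount applied before aggregation. The 30-day half-life here is an arbitrary illustrative value; tune it to how quickly preferences actually shift in your domain:

```javascript
// Sketch of recency-weighted confidence: each observation's confidence is
// discounted by its age using an exponential half-life, then the strongest
// surviving signal wins. HALF_LIFE_DAYS = 30 is an illustrative assumption.
const HALF_LIFE_DAYS = 30;

function recencyWeightedConfidence(observations, nowMs = Date.now()) {
  let best = 0;
  for (const obs of observations) {
    const ageDays = (nowMs - obs.timestamp) / (24 * 60 * 60 * 1000);
    const weight = Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
    best = Math.max(best, obs.confidence * weight);
  }
  return best;
}
```

With this weighting, ten recent TypeScript observations outrank twenty Python observations from six months ago, because the older signals have been discounted through several half-lives.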
For applications that benefit from historical awareness, keep superseded preferences in an archived state rather than deleting them. An AI that says "I see you've switched from React to Svelte recently, should I adjust all my examples?" demonstrates useful historical awareness. An AI that never mentions the transition is less useful but also less likely to apply outdated preferences by accident. The right choice depends on your application.
Measuring Cross-Session Learning
Track three metrics to measure whether your cross-session learning is working. First, preference convergence: how many sessions does it take before the preference profile stabilizes (confidence scores stop changing significantly)? A healthy system converges within 5-10 sessions for most preference categories. Second, correction rate: how often does the user correct the AI's personalized behavior? This rate should decrease over time as the preference model improves. Third, session start quality: compare the AI's first response in session N to its first response in session 1. If cross-session learning is working, session N's first response should be noticeably more tailored to the user.
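The correction-rate metric can be tracked with a simple per-session counter. This is a sketch; the `isCorrection` flag is assumed to be set upstream by your extraction pipeline (e.g., whenever a message is classified as correcting personalized behavior):

```javascript
// Sketch of correction-rate tracking. A "correction" is any user message
// flagged by extraction as correcting the AI's personalized behavior; that
// classification is assumed to happen upstream.
function correctionRate(sessionLog) {
  const corrections = sessionLog.filter(m => m.isCorrection).length;
  const userMessages = sessionLog.filter(m => m.role === 'user').length;
  return userMessages === 0 ? 0 : corrections / userMessages;
}

// Trend across sessions: the series should fall as the preference model improves.
function correctionTrend(sessions) {
  return sessions.map(correctionRate);
}
```

Plot the trend over a user's session history; a rate that plateaus high signals that extraction is capturing the wrong preferences, while a rate that never falls below baseline suggests the injected preferences are not influencing behavior at all.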
Adaptive Recall makes cross-session learning automatic. Store observations as memories, and cognitive scoring handles confidence, decay, and consolidation across sessions.