
How to Build Customer Preference Profiles

Customer preference profiles are structured records of how each customer likes to interact, learned from their behavior across multiple conversations rather than from a single survey or settings page. A well-built preference profile tells the AI whether this customer wants concise or thorough answers, technical or plain language, email follow-ups or chat-only resolution, and proactive suggestions or only answers to direct questions. These profiles improve automatically over time as the system observes more interactions.

Before You Start

You need a memory system that stores interaction summaries linked to customer IDs, and at least three interactions per customer before preference learning produces reliable results. Preference profiles built from a single interaction are guesses, not learned preferences. The evidence-gated approach in this guide requires a minimum observation count before any preference is acted on, so plan for a learning period during which the system gathers signals without yet personalizing based on them.

Step-by-Step Implementation

Step 1: Define the preference dimensions to track.
Not every possible preference matters for customer service. Focus on dimensions that meaningfully change how the AI should respond. Each dimension should have a clear default value (used when no preference has been learned yet) and at least two distinct states that produce different AI behavior.
PREFERENCE_DIMENSIONS = {
    "response_detail": {
        "options": ["concise", "moderate", "thorough"],
        "default": "moderate",
        "min_signals": 3,
        "description": "How much detail the customer wants"
    },
    "technical_depth": {
        "options": ["non_technical", "intermediate", "technical"],
        "default": "intermediate",
        "min_signals": 2,
        "description": "Technical vocabulary and code examples"
    },
    "communication_tone": {
        "options": ["formal", "friendly", "direct"],
        "default": "friendly",
        "min_signals": 3,
        "description": "Conversational tone preference"
    },
    "followup_channel": {
        "options": ["email", "chat", "none"],
        "default": "none",
        "min_signals": 2,
        "description": "Preferred follow-up method"
    },
    "proactive_help": {
        "options": ["welcome", "only_when_asked"],
        "default": "welcome",
        "min_signals": 3,
        "description": "Whether to offer unsolicited suggestions"
    }
}

Keep the list short. Five to eight dimensions is the practical maximum for preference profiles that the AI can meaningfully use. Each additional dimension adds complexity to the system prompt and increases the chance that preferences conflict with each other (for example, a customer who wants "concise" responses but also wants "thorough" technical detail). Start with the dimensions that have the largest impact on customer satisfaction and add more only when you have evidence that they improve outcomes.
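Conflicts like the concise-versus-thorough example can be caught mechanically. A minimal sketch, assuming a hand-maintained table of incompatible dimension-value pairs (`CONFLICTING_PAIRS` and `find_conflicts` are hypothetical names, not part of the dimension config above):

```python
# Hypothetical pairs of (dimension, value) settings that pull the AI's
# response style in opposite directions; extend for your own dimensions.
CONFLICTING_PAIRS = [
    (("response_detail", "concise"), ("technical_depth", "technical")),
]

def find_conflicts(preferences):
    """Return the conflicting pairs present in a learned profile dict."""
    conflicts = []
    for (dim_a, val_a), (dim_b, val_b) in CONFLICTING_PAIRS:
        if preferences.get(dim_a) == val_a and preferences.get(dim_b) == val_b:
            conflicts.append(((dim_a, val_a), (dim_b, val_b)))
    return conflicts
```

Running this check whenever a profile changes gives you a list of customers whose profiles need a resolution rule (for example, letting the explicit preference win) rather than silently contradictory prompt instructions.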

Step 2: Extract preference signals from interactions.
Preference signals come in two forms: explicit statements where the customer tells you what they want, and implicit signals where their behavior reveals a preference. Explicit signals are stronger and should be weighted more heavily, but implicit signals are far more common because most customers do not articulate their preferences directly.
class PreferenceDetector:
    def extract_signals(self, conversation):
        signals = []

        # Explicit signals (high confidence)
        explicit = self.detect_explicit_preferences(conversation)
        for signal in explicit:
            signal['confidence'] = 0.9
            signals.append(signal)

        # Implicit signals (moderate confidence)
        implicit = self.detect_implicit_preferences(conversation)
        for signal in implicit:
            signal['confidence'] = 0.5
            signals.append(signal)

        return signals

    def detect_explicit_preferences(self, conversation):
        signals = []
        text = conversation['full_text'].lower()

        # Direct requests about communication style
        if 'keep it brief' in text or 'short answer' in text:
            signals.append({
                "dimension": "response_detail",
                "value": "concise",
                "evidence": "Customer explicitly asked for brevity"
            })
        if 'send me an email' in text or 'email summary' in text:
            signals.append({
                "dimension": "followup_channel",
                "value": "email",
                "evidence": "Customer requested email follow-up"
            })
        return signals

    def detect_implicit_preferences(self, conversation):
        signals = []

        # Customer uses technical jargon consistently
        tech_terms = count_technical_terms(conversation)
        if tech_terms > 5:
            signals.append({
                "dimension": "technical_depth",
                "value": "technical",
                "evidence": f"Used {tech_terms} technical terms"
            })

        # Customer's messages are consistently short
        avg_length = average_message_length(conversation)
        if avg_length < 30:
            signals.append({
                "dimension": "response_detail",
                "value": "concise",
                "evidence": f"Average message length {avg_length} "
                            f"words suggests preference for brevity"
            })
        return signals

The confidence weighting matters for the evidence-gating step. An explicit request ("please be more concise") carries 0.9 confidence because the customer directly stated their preference. An implicit signal such as short message length carries only 0.5 confidence because there are other possible explanations: a customer writing short messages may simply have been busy, not expressing a general preference for brevity. The evidence-gating step accumulates these weighted signals before committing to a preference.
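Because gating (Step 3) compares the accumulated weighted sum against the min_signals threshold, signal counts and thresholds do not map one to one: no single signal carries full weight, so a threshold of 3 always needs more than three raw signals. The arithmetic, using the weights above:

```python
EXPLICIT, IMPLICIT = 0.9, 0.5   # confidence weights from extract_signals
THRESHOLD = 3                   # min_signals for response_detail

# Six implicit signals are the minimum to cross on their own:
assert 6 * IMPLICIT >= THRESHOLD        # 3.0
assert 5 * IMPLICIT < THRESHOLD         # 2.5

# Even three explicit requests fall just short...
assert 3 * EXPLICIT < THRESHOLD         # 2.7

# ...but two explicit requests plus three implicit signals cross it.
assert 2 * EXPLICIT + 3 * IMPLICIT >= THRESHOLD   # 3.3
```

Keep this in mind when setting min_signals values: the threshold is an evidence budget, not an interaction count.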

Step 3: Apply evidence-gated thresholds.
Do not update a preference based on a single signal. Require multiple consistent signals before the system acts on a learned preference. This prevents the profile from thrashing between states based on individual interactions where the customer might be behaving atypically. Evidence gating also protects against misinterpretation of signals, where a single ambiguous behavior could be read multiple ways.
class PreferenceProfile:
    def update(self, customer_id, signals):
        profile = self.load_profile(customer_id)

        for signal in signals:
            dim = signal['dimension']
            value = signal['value']
            confidence = signal['confidence']

            # Accumulate evidence
            if dim not in profile['evidence']:
                profile['evidence'][dim] = {}
            if value not in profile['evidence'][dim]:
                profile['evidence'][dim][value] = 0
            profile['evidence'][dim][value] += confidence

            # Check if the threshold is met
            dimension_config = PREFERENCE_DIMENSIONS[dim]
            threshold = dimension_config['min_signals']
            if profile['evidence'][dim][value] >= threshold:
                old_value = profile['preferences'].get(
                    dim, dimension_config['default']
                )
                profile['preferences'][dim] = value
                if old_value != value:
                    self.log_preference_change(
                        customer_id, dim, old_value, value,
                        profile['evidence'][dim]
                    )

        self.save_profile(customer_id, profile)

The threshold is defined per dimension because some preferences are easier to detect reliably than others. Technical depth can be determined from two interactions (the vocabulary a customer uses is a strong signal), while response detail preference needs three interactions because message length and detail expectations vary based on the issue, not just the person. Logging preference changes creates an audit trail that helps you tune the thresholds: if preferences are changing too frequently, raise the thresholds; if customers are getting generic responses for too many interactions before personalization kicks in, lower them.
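The audit trail only helps with tuning if it is summarized. As a sketch, assuming log entries shaped like `(customer_id, dimension, old_value, new_value)` (the entry shape and `change_frequency` are illustrative, not a fixed format):

```python
from collections import Counter

def change_frequency(change_log):
    """Count preference flips per dimension from an audit log of
    (customer_id, dimension, old_value, new_value) entries."""
    return dict(Counter(entry[1] for entry in change_log))

log = [
    ("c1", "response_detail", "moderate", "concise"),
    ("c1", "response_detail", "concise", "thorough"),
    ("c2", "communication_tone", "friendly", "direct"),
]
# response_detail flipping twice for one customer is a hint that
# its threshold may be too low for that dimension.
```

Dimensions that flip frequently relative to how often they are observed are the ones whose thresholds deserve a raise.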

Step 4: Store preferences as updatable semantic memories.
Store the preference profile as a semantic memory linked to the customer ID. Unlike episodic memories, which accumulate, the preference profile is a single memory updated in place as new signals arrive. This keeps retrieval clean: one profile per customer rather than dozens of preference-observation fragments scattered across their memory history.
def save_preference_memory(customer_id, profile):
    preference_text = build_preference_summary(profile)
    # Example: "Customer prefers concise, technical responses
    # in a direct tone. Likes email follow-ups for complex
    # issues. Welcomes proactive suggestions about their
    # account."

    existing = memory_api.recall(
        query="preference profile",
        filter={
            "customer_id": customer_id,
            "type": "preference_profile"
        },
        limit=1
    )

    if existing:
        # Update the existing preference memory
        memory_api.update(
            memory_id=existing[0]['id'],
            text=preference_text,
            metadata={
                "customer_id": customer_id,
                "type": "preference_profile",
                "last_updated": datetime.now().isoformat(),
                "signal_count": profile['total_signals']
            }
        )
    else:
        # Create a new preference memory
        memory_api.store({
            "text": preference_text,
            "metadata": {
                "customer_id": customer_id,
                "type": "preference_profile",
                "created": datetime.now().isoformat(),
                "signal_count": profile['total_signals']
            }
        })

The preference summary should be written in natural language that the AI can directly use in its system prompt, not in a structured format that requires parsing. "Customer prefers concise, technical responses in a direct tone" is immediately actionable for an LLM. A JSON object with dimension keys and value codes requires the system prompt to include instructions for interpreting those codes, which adds complexity without improving the AI's ability to personalize.
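One way to produce that natural-language summary is a small template table keyed by dimension and value, voicing only the non-default preferences. A sketch, where `build_preference_summary` and the `PHRASES` table are illustrative (the phrasing is an assumption, though the dimension names match Step 1):

```python
# Hypothetical phrasing templates; only learned, non-default
# preferences appear in the summary text.
PHRASES = {
    ("response_detail", "concise"): "prefers concise responses",
    ("response_detail", "thorough"): "prefers thorough, detailed responses",
    ("technical_depth", "technical"): "is comfortable with technical language",
    ("technical_depth", "non_technical"): "prefers plain, non-technical language",
    ("communication_tone", "direct"): "likes a direct tone",
    ("communication_tone", "formal"): "likes a formal tone",
    ("followup_channel", "email"): "likes email follow-ups",
    ("proactive_help", "only_when_asked"): "wants suggestions only when asked",
}

def build_preference_summary(profile):
    """Render a profile's learned preferences as one prompt-ready sentence."""
    parts = [PHRASES[(dim, val)]
             for dim, val in profile.get("preferences", {}).items()
             if (dim, val) in PHRASES]
    if not parts:
        return "No learned preferences yet; use default communication style."
    return "Customer " + ", ".join(parts) + "."
```

The fallback sentence matters: a customer still in the learning period should get the default style, and the prompt should say so plainly rather than leaving the preferences section empty.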

Step 5: Use preferences to shape responses.
When the customer contacts support, retrieve their preference profile along with their interaction history and inject both into the system prompt. The preference profile guides how the AI communicates, while the interaction history guides what the AI communicates about.
def build_system_prompt(customer_id, current_message):
    # Get preference profile
    preferences = get_preference_profile(customer_id)

    # Get interaction history
    history = memory_api.recall(
        query=current_message,
        filter={"customer_id": customer_id},
        limit=8
    )

    prompt = f"""You are a support agent for Acme Corp.

CUSTOMER PREFERENCES:
{preferences['summary_text']}

CUSTOMER HISTORY:
{format_history(history)}

Adjust your response style to match the customer's preferences
above. Do not mention that you have a preference profile.
Simply communicate in the way they prefer naturally.
"""
    return prompt

The instruction "do not mention that you have a preference profile" is important. Customers should experience personalization as natural, attentive service, not as a system announcing that it has been profiling them. Saying "I see from your preference profile that you like concise answers" is unsettling. Just giving a concise answer is good service.

Maintaining Profile Accuracy

Preferences can change over time. A customer who was non-technical when they signed up might become more technically sophisticated after using your product for a year. A customer who preferred email follow-ups might switch to preferring chat once your chat interface improves. Allow preference profiles to decay slowly, reducing the evidence weight of old signals so that recent behavior has more influence than historical behavior. A reasonable decay rate is reducing signal confidence by 10 to 20% per quarter, so a preference learned a year ago needs to be reinforced by recent behavior to remain active.

Build customer profiles that learn from every conversation. Adaptive Recall's cognitive scoring and consolidation handle the evidence accumulation, so your AI gets smarter about each customer over time.
