You built a beautiful AI agent in n8n, Make.com, or Zapier. It answers questions, generates content, calls APIs. Works great in testing.
Then a user asks: "Can you remember what we talked about last time?"
Your agent has no idea what they're talking about. It starts every run like it just woke up with amnesia.
This is one of the most common complaints on the n8n community forum — and the same complaint echoes across the Make.com and Zapier communities. The root cause is simple: all three platforms are stateless by design.
Why Automation Platforms Are Stateless
n8n, Make.com, and Zapier run workflows as discrete executions. When a workflow finishes, all in-memory data disappears. The next run starts completely clean.
For most automation tasks — syncing data, sending emails, moving files — this is fine. But for AI agents that are supposed to learn and remember, it's a fundamental mismatch.
Think about what a real customer service AI needs:
- Remember what a user complained about last week
- Know that a user is on the Pro plan without asking every time
- Continue a multi-step task that started three runs ago
- Build a customer profile over dozens of interactions
None of this is possible with a stateless agent.
n8n's AI Agent node has a "memory" option — but it only persists within a single workflow execution. Make.com and Zapier have no built-in memory mechanism at all. Close the browser or trigger the workflow again, and everything is gone.
The Common (Bad) Workarounds
Before solving this properly, let's look at what people try:
Using n8n's built-in static data: Fragile, not queryable, resets on workflow edits. Not for production.
Writing to a Google Sheet: Works until you have 10,000 rows. No semantic search. Breaks on concurrent runs.
Storing in Airtable: Better, but expensive at scale. Still no vector search — you can't ask "find everything related to billing issues for this user."
Using a separate database: Correct approach, but you need to set up Postgres, write queries, handle embeddings, build a search API. Most people give up here.
The Right Solution: Vector Memory with a REST API
What you actually need is:
- Persistent storage — survives workflow restarts
- Vector search — find semantically similar memories, not just exact matches
- Session isolation — each user's memories stay separate
- Simple API — no code, just HTTP calls in any platform
This is exactly what retainr provides — and it works identically across n8n, Make.com, Zapier, or any direct API integration.
Setting Up retainr: Choose Your Platform
The pattern is the same everywhere: search before the AI step → store after it. The steps below walk through n8n; Make.com (via the retainr marketplace app) and Zapier (via Webhooks by Zapier) follow the same sequence with the same parameters.
Step 1: Install the community node
In n8n, go to Settings → Community Nodes → Install → type n8n-nodes-retainr → Install.
For self-hosted n8n: npm install n8n-nodes-retainr then restart.
Step 2: Add your API credentials
Get a free API key at retainr.dev/dashboard. In n8n Credentials, create a new "retainr API" credential and paste your key.
Step 3: Store a memory after each interaction
Add a retainr → Store Memory node at the end of your AI conversation flow:
{
"operation": "storeMemory",
"scope": "user",
"user_id": "={{ $json.userId }}",
"content": "={{ $json.userMessage + ' -> ' + $json.aiResponse }}"
}

Step 4: Recall relevant memories at the start
Before your AI node, add a retainr → Search Memory node:
{
"operation": "searchMemory",
"scope": "user",
"user_id": "={{ $json.userId }}",
"query": "={{ $json.userMessage }}",
"limit": 5
}

Step 5: Inject memories into your AI prompt
Pass the search results into your AI node's system prompt:
You are a helpful assistant. Here is relevant context from past conversations:
{{ $json.memories.map(m => m.content).join('\n\n') }}
Current message: {{ $json.userMessage }}
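The prompt assembly in step 5 can be sketched in plain Python. This mirrors the n8n expression above; the `memories` list and its `content` field follow the search-response shape used there, and the example data is invented for illustration:

```python
def build_system_prompt(memories, user_message):
    """Join retrieved memories into the system prompt.

    `memories` is a list of dicts with a `content` key, matching the
    shape referenced by the n8n expression `$json.memories`.
    """
    context = "\n\n".join(m["content"] for m in memories)
    return (
        "You are a helpful assistant. "
        "Here is relevant context from past conversations:\n"
        f"{context}\n\n"
        f"Current message: {user_message}"
    )

# Toy data, purely illustrative:
memories = [
    {"content": "User reported a billing error -> suggested re-issuing the invoice"},
    {"content": "User upgraded to the Pro plan"},
]
print(build_system_prompt(memories, "Is my billing issue fixed?"))
```

The join with blank lines keeps each memory visually separate in the prompt, which helps most models treat them as distinct facts rather than one run-on sentence.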
Real-World Example: Customer Support AI
Here's a complete workflow pattern for a support bot:
- Webhook trigger — receives user message + user_id
- retainr Search Memory — finds past issues for this user
- HTTP Request → your CRM — fetches current plan, account status
- AI Agent node — generates response with memory context injected
- retainr Store Memory — saves the interaction for next time
- Respond to Webhook — sends reply back to user
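The six steps above can be sketched as one function. The four callables stand in for the retainr nodes, the CRM request, and the AI node — their names and signatures are illustrative, not retainr's actual API:

```python
def handle_support_message(user_id, message,
                           search_memory, fetch_account, ask_ai, store_memory):
    """One request through the support-bot pattern.

    The injected callables are hypothetical stand-ins for the
    workflow nodes described above.
    """
    past = search_memory(user_id=user_id, query=message, limit=5)  # step 2
    account = fetch_account(user_id)                               # step 3
    context = "\n\n".join(m["content"] for m in past)
    reply = ask_ai(                                                # step 4
        system=f"Past context:\n{context}\nPlan: {account['plan']}",
        user=message,
    )
    store_memory(user_id=user_id,                                  # step 5
                 content=f"{message} -> {reply}")
    return reply                                                   # step 6
```

Note that the store happens after the AI call, so the saved memory captures both the question and the answer — exactly the `userMessage + ' -> ' + aiResponse` pattern from the Store Memory node above.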
The user never has to repeat themselves. Your AI remembers their billing issue from three weeks ago, knows they're on the Pro plan, and can reference the fix you suggested last time.
What Changes for Your Agent
Before retainr:
- Every conversation starts from zero
- Users repeat context constantly
- No personalization possible
- Can't build on previous interactions
After retainr:
- Agent recalls relevant past context automatically
- Vector search finds semantically related memories
- Each user's memory is isolated and private
- Works across unlimited workflow runs — on any platform
Give your AI agents a real memory
Free plan includes 1,000 memory operations/month. No credit card required.
Add memory to your AI agent →

Memory Types in retainr
retainr supports two memory patterns:
Session memory — scoped to a workflow session. Useful for multi-step tasks where you want a clean slate per task.
User memory — persists forever for a user_id. This is what you want for customer-facing agents.
You can mix both in the same workflow. Store the current task context as session memory, while saving important user preferences as persistent user memory.
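Mixing the two scopes comes down to which ID field you send. A minimal sketch of the two payload shapes — field names follow the node parameters shown earlier, and the assumption that session scope uses a `session_id` field (as in the FAQ below) is mine:

```python
def store_payload(scope, owner_id, content):
    """Build a Store Memory request body for either scope.

    Assumes session-scoped memories key on `session_id` the same way
    user-scoped memories key on `user_id`.
    """
    key = "user_id" if scope == "user" else "session_id"
    return {"operation": "storeMemory", "scope": scope, key: owner_id,
            "content": content}

# Persistent preference vs. throwaway task state in the same workflow:
pref = store_payload("user", "user-42", "Prefers email over phone")
task = store_payload("session", "run-1093", "Step 2 of 3 complete: data exported")
```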
Performance Considerations
retainr uses pgvector with an HNSW index for approximate nearest-neighbor search. In practice, searching 100,000 memories takes under 50ms. For any workflow platform, the bottleneck is almost always the AI model, not the memory search.
The free plan supports 1,000 memory operations per month — enough for testing and small workflows. Builder plan (€29/month) covers 20,000 ops, which is enough for most production bots.
Frequently Asked Questions
Does this work on all three platforms? Yes. retainr has a native n8n community node, a Make.com app in the marketplace, and a simple REST API for Zapier (via Webhooks by Zapier) and any other platform.
How do I choose between n8n, Make.com, and Zapier? The memory capability is identical across all three. Choose your platform based on your existing workflow and team preferences — retainr works the same everywhere.
What's the difference between user memory and session memory? User memory (user_id) persists forever, great for customer relationships. Session memory (session_id) is temporary, great for multi-step tasks that should start fresh each time.
How does vector search work? retainr converts your stored content to embeddings (numerical representations of meaning). When you search, it finds memories that are semantically similar — not just exact keyword matches. "billing problem" will find memories about "invoice dispute" or "charge error."
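As a toy illustration of why "billing problem" matches "invoice dispute": embeddings place similar meanings close together, and search ranks memories by cosine similarity. The 3-dimensional vectors below are invented for the example — real embedding models use hundreds of dimensions:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Invented toy "embeddings" — the numbers only illustrate the ranking.
vecs = {
    "billing problem": [0.9, 0.1, 0.0],
    "invoice dispute": [0.8, 0.2, 0.1],
    "shipping delay":  [0.1, 0.9, 0.2],
}
query = vecs["billing problem"]
ranked = sorted(vecs, key=lambda k: cosine(query, vecs[k]), reverse=True)
# "invoice dispute" ranks above "shipping delay" despite sharing no keywords
```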
Is memory per workspace or per user? Per workspace (your API key), with user-level isolation via the user_id field. All your users' memories are stored in one workspace but scoped so they can't access each other's data.