Make.com's OpenAI module is one of the best tools in the automation ecosystem. You can wire up GPT-4o to anything — Slack, Gmail, Airtable, webhooks, your CRM — without writing a single line of code.
But there's a catch that trips everyone up eventually: it has no memory.
Every time your scenario runs, the OpenAI module sees only what you pass it in that single run. It doesn't know what happened last time. It doesn't remember the user. It starts completely fresh.
This tutorial shows you exactly how to fix that — with specific field values, module configurations, and a complete scenario blueprint for Make.com. Equivalent setups for n8n and Zapier are included.
The Root Cause of the Stateless Problem
Make.com scenarios are event-driven. A trigger fires, the scenario processes data, and it terminates. Any data created during the run exists only in that run's memory.
This design is intentional. It makes scenarios reliable, reproducible, and easy to debug. But it means the OpenAI module has no access to:
- What a user said in previous runs
- What responses the AI gave before
- Any context accumulated over time
The typical workaround — storing conversation history in a Google Doc and passing it all to the AI every time — collapses once history grows long. You'll hit OpenAI's context limits, slow down your scenario, and pay for tokens that aren't relevant to the current question.
The correct solution is retrieval-augmented memory: store interactions, then search for only the relevant ones.
What We're Building
A scenario that:
- Receives a user message via webhook
- Searches for relevant past interactions for that user
- Passes the relevant context to GPT-4o
- Generates a personalized, context-aware response
- Stores the new interaction for future use
- Responds to the webhook
This pattern works for customer support, sales assistance, personal AI assistants, content generation, project management bots — any AI use case.
Module-by-Module Setup
Module 1: Webhooks — Custom Webhook
Trigger type: Instant | Method: POST
Expected JSON structure:
{
  "user_id": "string",
  "message": "string",
  "session_id": "string (optional)"
}

Use a stable, unique identifier for user_id — the user's email address, their CRM ID, or a UUID you generate on first contact. Never use a temporary session token that changes per visit.
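If you're posting to this webhook from your own backend, here's a minimal Python sketch of building the payload. The stable_user_id helper is illustrative (not part of Make.com); it just encodes the rule above — reuse a durable ID, mint a UUID only once:

```python
import json
import uuid

def stable_user_id(known_id):
    """Return a durable identifier, minting a UUID only on first contact.

    Never use a per-visit session token here; the ID must stay the same
    across runs or the memory lookups will miss.
    """
    return known_id if known_id else str(uuid.uuid4())

def build_payload(user_id, message, session_id=None):
    # Matches the expected JSON structure of the webhook trigger.
    payload = {"user_id": user_id, "message": message}
    if session_id:
        payload["session_id"] = session_id  # optional field
    return json.dumps(payload)
```

Persist the minted UUID (in your CRM, database, or cookie store) so the same user always maps to the same memory pool.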
Module 2: retainr — Search Memory
Search "retainr" in the apps panel and connect with your API key from retainr.dev/dashboard.
Operation: Search Memory
| Field | Value | Notes |
|---|---|---|
| Query | [1.message] | The incoming user message |
| Scope | User | Per-user memory pool (vs. session or global) |
| User ID | [1.user_id] | Scopes search to this user only |
| Limit | 5 | 3-5 is usually right |
| Tags Filter | (leave empty) | Optional — filter by tag |
Module 3: Tools — Set Variable (Format Memories)
Add a Set Variable module (under Tools):
Variable name: memoryContext
Variable value (formula editor):
{{if(length(2.memories) > 0, "RELEVANT CONTEXT FROM PAST INTERACTIONS:\n\n" + join(map(2.memories, "content"), "\n\n---\n\n"), "No previous interaction history for this user.")}}
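If you want to sanity-check the formula's logic outside Make.com, here is the same transformation as plain Python (memory objects are assumed to expose a content field, as the formula does):

```python
def format_memory_context(memories):
    # Mirrors the Set Variable formula: join memory contents with a
    # separator, or fall back to a no-history notice.
    if not memories:
        return "No previous interaction history for this user."
    joined = "\n\n---\n\n".join(m["content"] for m in memories)
    return "RELEVANT CONTEXT FROM PAST INTERACTIONS:\n\n" + joined
```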
Module 4: OpenAI — Create a Chat Completion
Model: gpt-4o (or gpt-4o-mini for lower cost)
System Message:
You are a helpful assistant.
{{3.memoryContext}}
Use the above context to personalize your response. Reference specific past interactions when relevant.
User Message: [1.message]
Temperature: 0.7 | Max Tokens: 500
Do not set Max Tokens too high. Start at 500 and increase only if your use case requires longer outputs.
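The system message above is just string assembly. A quick sketch of the equivalent logic, with the memory context slotted between the role line and the usage instruction:

```python
def build_system_message(memory_context):
    # Assemble the Module 4 system message: base role, injected
    # memory context, then the usage instruction.
    return (
        "You are a helpful assistant.\n\n"
        + memory_context + "\n\n"
        + "Use the above context to personalize your response. "
          "Reference specific past interactions when relevant."
    )
```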
Module 5: retainr — Store Memory
| Field | Value | Notes |
|---|---|---|
| Content | User: [1.message]\n\nAssistant: [4.choices[1].message.content] | Full exchange |
| Scope | User | Same as search |
| User ID | [1.user_id] | Same as search |
| Session ID | [1.session_id] | Optional — session-scoped memory |
| Tags | conversation | Add topic tags if known |
Module 6: Webhooks — Respond to Webhook
Status: 200 | Body type: JSON
{
  "response": "{{4.choices[1].message.content}}",
  "memoriesUsed": "{{length(2.memories)}}"
}

Complete scenario flow:
[Webhook] → [Search Memory] → [Format Memories] → [GPT-4o] → [Store Memory] → [Respond]
Total modules: 6. Build time: 20-30 minutes.
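The whole six-module flow condenses to a few lines of logic. Here is a Python sketch with the retainr and OpenAI calls stubbed out as callables (swap them for real API calls in production):

```python
def handle_message(payload, search_memory, call_gpt4o, store_memory):
    # Module 2: retrieve only the relevant past interactions for this user
    memories = search_memory(payload["message"], payload["user_id"], 5)
    # Module 3: format retrieved memories for the system prompt
    if memories:
        context = ("RELEVANT CONTEXT FROM PAST INTERACTIONS:\n\n"
                   + "\n\n---\n\n".join(m["content"] for m in memories))
    else:
        context = "No previous interaction history for this user."
    # Module 4: generate a reply with the context injected
    reply = call_gpt4o(context, payload["message"])
    # Module 5: persist the new exchange for future runs
    store_memory(payload["user_id"],
                 f"User: {payload['message']}\n\nAssistant: {reply}")
    # Module 6: webhook response body
    return {"response": reply, "memoriesUsed": len(memories)}
```

Note the ordering: store happens after generation, so the current exchange never pollutes its own retrieval.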
Real-World Configuration: Support Bot
You're building a support bot for a SaaS product. Adapt the configuration:
Webhook payload from your frontend:
{
  "user_id": "[email protected]",
  "message": "I keep getting a 403 error when trying to export reports",
  "plan": "pro",
  "accountAge": 47
}

System prompt (Make.com Module 4, updated):
You are a support agent for Acme SaaS.
USER PLAN: {{1.plan}}
ACCOUNT AGE: {{1.accountAge}} days
{{3.memoryContext}}
When handling issues:
1. Check if this is a recurring problem (look at the history)
2. Reference specific past interactions if relevant
3. Give a direct fix, not a generic response
4. If you cannot resolve it, escalate with specific context
Memory content (Module 5, updated):
Issue: {{1.message}} | Plan: {{1.plan}} | Resolution: {{4.choices[1].message.content}}
Now when this user reports the same 403 error three months later, your bot knows they've had this before and can skip straight to deeper troubleshooting.
Cost Analysis
For 1,000 conversations per month:
| Component | Monthly Cost |
|---|---|
| Make.com (Core plan) | ~€11 |
| OpenAI GPT-4o (avg 1,000 tokens/call) | ~$15 |
| retainr (Builder plan) | €29 |
| Total | ~€55 |
A single avoided support ticket pays for the month.
Give your AI agents a real memory
Free plan includes 1,000 memory operations/month. No credit card required.
Get your free retainr API key →

Frequently Asked Questions
Do I need to install anything in Make.com? Just connect the retainr app from the Make.com app marketplace. No downloads required.
Will this work with Claude instead of GPT-4o? Yes. Swap Module 4 for the Anthropic module. Memory injection is model-agnostic — it's just text in the system prompt.
Does the same setup work in n8n and Zapier? Yes. See the n8n and Zapier tabs above. The same API key works across all three platforms.
How do I handle multiple AI assistants with the same users? Use a prefix in your user_id: [email protected] vs [email protected]. This keeps memory pools separate even for the same underlying user.
Can I export or delete all memories for a user? Yes — DELETE /v1/memories with user_id as a filter parameter removes all memories for that user. Important for GDPR compliance.