Intermediate
How to build AI agents with persistence in n8n
Quick Answer
Build AI agents with persistence in n8n by combining AI nodes with memory storage solutions like PostgreSQL or Google Sheets. Use workflow variables and database connections to maintain conversation history and agent state across multiple interactions.
Prerequisites
- Basic n8n workflow knowledge
- OpenAI API key or similar AI service
- Understanding of JSON data structures
- Familiarity with HTTP requests
1. Set up the database for persistence
Create a new workflow and add a PostgreSQL or Google Sheets node for data storage. Configure the connection with your database credentials under Credentials → Add Credential. Create a table with columns for
session_id, role, message, timestamp, and agent_state to store conversation history and agent memory.

Tip: Use PostgreSQL for high-volume applications, or Google Sheets for simpler setups with easier data visualization.
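As a concrete starting point, the PostgreSQL tables might look like this. This is a sketch: the column names follow the tutorial's queries, and splitting agent state into its own table (rather than a column) is one reasonable layout, not the only one.

```javascript
// Hedged sketch: DDL for the two tables used in this tutorial.
// Run these once against your database (e.g. via an Execute Query node).
const createConversations = `
  CREATE TABLE IF NOT EXISTS conversations (
    id         SERIAL PRIMARY KEY,
    session_id TEXT NOT NULL,
    role       TEXT NOT NULL,        -- 'user' or 'assistant'
    message    TEXT NOT NULL,
    timestamp  TIMESTAMPTZ NOT NULL DEFAULT NOW()
  );
`;

const createAgentState = `
  CREATE TABLE IF NOT EXISTS agent_state (
    session_id TEXT PRIMARY KEY,
    state      JSONB NOT NULL DEFAULT '{}'  -- flexible key/value agent memory
  );
`;
```

An index on (session_id, timestamp) is worth adding once traffic grows, since every request filters and sorts on those two columns.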
2. Configure the webhook trigger
Add a Webhook node as your workflow trigger and set the HTTP Method to
POST. Set the Respond option to Using 'Respond to Webhook' Node so the workflow can return the AI's reply from the final node. This creates an endpoint where users can send messages to your AI agent and receive responses.

Tip: Copy the webhook URL from the node; you'll need it to test your agent from external applications.
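To sanity-check the endpoint, you can send it a test message from any client. The payload shape below (session_id plus user_message) is the one this tutorial's later expressions assume, and the webhook URL is a placeholder you replace with your own:

```javascript
// Hypothetical test call to the n8n webhook (URL is a placeholder).
const payload = { session_id: "demo-123", user_message: "Hello, agent!" };

async function sendMessage(webhookUrl) {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json(); // the agent's reply, returned by Respond to Webhook
}
```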
3. Retrieve conversation history
Add your database node after the webhook trigger and configure it to Execute Query with a SELECT statement:
SELECT * FROM conversations WHERE session_id = '{{$json.session_id}}' ORDER BY timestamp DESC LIMIT 10
This retrieves the 10 most recent messages for context; reverse them into chronological order before passing them to the model. Use an IF node to handle new conversations where no history exists yet.

Tip: Limit the number of retrieved messages to control token usage and response time in your AI model.
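Because the query uses ORDER BY timestamp DESC, the rows arrive newest-first, but the model needs them oldest-first. A small Code-node snippet like this restores chronological order (field names follow the tutorial's schema):

```javascript
// Rows come back newest-first (DESC); the model needs them oldest-first.
function toChronological(rows) {
  return rows.slice().reverse(); // copy first so the input isn't mutated
}

const rows = [
  { role: "assistant", message: "Hi! How can I help?", timestamp: "2024-01-01T10:01:00Z" },
  { role: "user",      message: "Hello",               timestamp: "2024-01-01T10:00:00Z" },
];
const history = toChronological(rows);
// history[0] is now the oldest message
```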
4. Format context for the AI model
Use a Code node with JavaScript to format the conversation history and the current message into a proper context string. Create a system prompt that defines the agent's role and combine it with the previous messages:
const context = items[0].json.history.map(msg => `${msg.role}: ${msg.content}`).join('\n')
Structure this as the input for your AI model.

Tip: Include clear role indicators (user/assistant) in your context to help the AI maintain conversation flow.
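A fuller version of that Code-node logic might look like this. It is a sketch: the system prompt text is illustrative, and it builds a structured messages array (which chat models accept directly) rather than a single string:

```javascript
// Build an OpenAI-style messages array from stored history plus the new message.
function buildMessages(history, userMessage) {
  const system = {
    role: "system",
    content: "You are a helpful assistant with persistent memory.", // illustrative prompt
  };
  const prior = history.map((msg) => ({ role: msg.role, content: msg.message }));
  return [system, ...prior, { role: "user", content: userMessage }];
}

const messages = buildMessages(
  [{ role: "user", message: "My name is Ada." },
   { role: "assistant", message: "Nice to meet you, Ada!" }],
  "What is my name?"
);
```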
5. Process with AI model
Add an OpenAI node (or your preferred AI service) and configure it for Chat completion. Set the Model to
gpt-3.5-turbo or gpt-4, pass your formatted context as the Messages, and set Temperature between 0.3 and 0.7 for consistent responses. The AI will generate responses grounded in the persistent conversation history.

Tip: Use lower temperature values (0.3–0.5) for more consistent agent behavior across conversations.
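If you call the API from an HTTP Request node instead of the OpenAI node, the request body would look roughly like this (a sketch of a Chat Completions payload; the messages shown are placeholders for your formatted history):

```javascript
// Chat Completions request body (model and temperature per the step above).
const requestBody = {
  model: "gpt-4",
  temperature: 0.4, // lower end of 0.3–0.7 for consistent agent behavior
  messages: [
    { role: "system", content: "You are a helpful assistant with persistent memory." },
    { role: "user", content: "Hello" },
  ],
};
// POST this to https://api.openai.com/v1/chat/completions with your API key
// in the Authorization header; the reply text is at
// response.choices[0].message.content.
```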
6. Save the interaction to the database
Add another database node to insert both the user message and the AI response into your persistence layer. Use an INSERT query:
INSERT INTO conversations (session_id, role, message, timestamp) VALUES ('{{$json.session_id}}', 'user', '{{$json.user_message}}', NOW())
Repeat for the AI response to maintain a complete conversation history.

Tip: Add error handling (for example, enable the node's Continue On Fail setting or use an Error Trigger workflow) so the user message is still saved even if the AI request fails.
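Because the message text is user-supplied, parameterized inserts are safer than interpolating values into the SQL string. A sketch of the two inserts as query/parameter pairs (the helper function is illustrative, not an n8n API):

```javascript
// Parameterized inserts for the user message and the AI reply.
// Values travel separately from the SQL, so user text cannot break the query.
function buildInserts(sessionId, userMessage, aiMessage) {
  const sql = `INSERT INTO conversations (session_id, role, message, timestamp)
               VALUES ($1, $2, $3, NOW())`;
  return [
    { sql, params: [sessionId, "user", userMessage] },
    { sql, params: [sessionId, "assistant", aiMessage] },
  ];
}

const inserts = buildInserts("demo-123", "Hello", "Hi there!");
```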
7. Implement agent state management
Use additional database columns or a separate table to store agent-specific state such as user preferences, task progress, or learned information. Add a Code node that updates agent state based on conversation context, using conditions like
if (message.includes('remember')) { updateState('preference', extractedValue) }
This enables the agent to remember important details across sessions.

Tip: Store state as JSON objects in your database for flexibility in adding new agent capabilities without schema changes.
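The state-update idea can be sketched as a pure function over a JSON state object. The "remember" trigger phrase and the regex-based extraction are illustrative; a production agent might instead ask the model itself to emit facts worth storing:

```javascript
// Merge new facts into the agent's JSON state when the user asks it to remember.
function updateState(state, message) {
  // Illustrative pattern: "remember (that) my <key> is <value>"
  const match = message.match(/remember (?:that )?my (\w+) is (\w+)/i);
  if (match) {
    return { ...state, [match[1]]: match[2] }; // e.g. { color: "blue" }
  }
  return state; // nothing to remember; state unchanged
}

const next = updateState({}, "Please remember that my color is blue");
// Persist `next` as JSONB in your state table, keyed by session_id.
```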
8. Return formatted response
Add a final Respond to Webhook node to send the AI response back to the user. Structure the response with the AI message, session ID, and any relevant metadata:
{ "response": "{{$json.ai_message}}", "session_id": "{{$json.session_id}}", "timestamp": "{{$now}}" }
Set the Response Code to 200 and the Content-Type to application/json.

Tip: Include session management information in responses to help client applications maintain conversation continuity.
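Rendered with real values, that response body looks like this (a sketch; the field names match the expression above, built here in a Code node rather than via template expressions):

```javascript
// Final JSON body returned to the caller by the Respond to Webhook node.
function buildResponse(aiMessage, sessionId) {
  return {
    response: aiMessage,
    session_id: sessionId,
    timestamp: new Date().toISOString(), // equivalent of {{$now}}
  };
}

const body = buildResponse("Hi there!", "demo-123");
```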
Troubleshooting
Database connection timeouts during high traffic
Implement connection pooling in your database settings and add Wait nodes with
1–2 second delays between database operations. Consider using Redis for session storage instead of PostgreSQL for better performance.

AI responses are inconsistent across conversation sessions
Check your context formatting in the Code node and ensure the conversation history is properly retrieved. Verify that your
session_id is consistent and that the history reaches the model in chronological order (timestamp ASC, or reversed after a DESC fetch).

Webhook responses timing out
Add Error Trigger nodes to handle failures gracefully and implement async processing by storing requests in a queue table. Use the HTTP Request node to send responses via callback URLs for long-running AI operations.
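One way to sketch that queue pattern: the webhook workflow enqueues the request and responds immediately with a job id, while a second scheduled workflow processes pending jobs and delivers results to a callback URL. The table and column names here are illustrative:

```javascript
// Enqueue: run in the webhook workflow, then respond right away.
function enqueue(sessionId, userMessage) {
  const jobId = `job-${Date.now()}`;
  const sql = `INSERT INTO job_queue (job_id, session_id, message, status)
               VALUES ($1, $2, $3, 'pending')`;
  return { jobId, sql, params: [jobId, sessionId, userMessage] };
}

// Immediate webhook response; the real answer arrives later via callback.
const job = enqueue("demo-123", "Summarize our chat");
const ack = { status: "accepted", job_id: job.jobId };
```

The scheduled worker then selects pending rows, runs the AI step, marks each job done, and posts the result to the caller's callback URL.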
Memory usage growing too large with conversation history
Implement automatic cleanup by adding a scheduled workflow that deletes conversations older than 30 days:
DELETE FROM conversations WHERE timestamp < NOW() - INTERVAL '30 days'
Limit context retrieval to essential recent messages only.

Ready to get started with n8n?
Put this tutorial into practice. Visit n8n and follow the steps above.