This repository exists to run a dedicated n8n instance on Render for the AI agent project.
Use this repo when you want a stable, separate n8n deployment that receives webhook calls from the AI Agent API and executes your workflows in production.
In short:

- `ai-agent-n8n` handles the chat API and UI.
- `n8n-render` hosts the automation/workflow engine (n8n).
- The API sends requests to an n8n webhook URL, and n8n returns the processed response.
Architecture flow:
- User sends a message in the AI agent chat.
- The Node.js API (`ai-agent-n8n`) receives `POST /agent`.
- The service layer calls an n8n webhook endpoint hosted from this repo.
- n8n runs the workflow and returns JSON.
- The API responds back to the user.
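For illustration, the payloads exchanged along this flow might look like the following (the field names are hypothetical — the actual shapes are defined by the `ai-agent-n8n` API and your workflow):

```json
{
  "request":  { "message": "What is the weather in Berlin?", "sessionId": "abc-123" },
  "response": { "reply": "It is sunny in Berlin." }
}
```

Here `request` is what the API would POST to the n8n webhook, and `response` is the JSON the workflow returns.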
Related app repo:
Project link:
Keeping n8n in its own repository makes it easier to:
- deploy/redeploy workflows independently from API code,
- isolate n8n environment variables and secrets,
- avoid mixing Node app runtime concerns with workflow runtime concerns.
- `Dockerfile`: uses the official `n8nio/n8n` image and exposes port `5678`.
- `.env.example`: sample environment variables used to configure n8n.
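A minimal `Dockerfile` for this setup could look like the following sketch (the official image already defines the entrypoint and user; the `latest` tag is an assumption — pin the version you actually run):

```dockerfile
# Build on the official n8n image; it already sets up the n8n entrypoint.
FROM n8nio/n8n:latest

# n8n listens on 5678 by default; Render maps this to the public port.
EXPOSE 5678
```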
Main variables used here:
- `N8N_HOST`
- `N8N_PORT`
- `GENERIC_TIMEZONE`
- `N8N_BASIC_AUTH_ACTIVE`
- `N8N_BASIC_AUTH_USER`
- `N8N_BASIC_AUTH_PASSWORD`
- `N8N_SECURE_COOKIE`
- `N8N_ENCRYPTION_KEY`
- `WEBHOOK_URL`
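A `.env.example` covering these variables might look like this (every value below is a placeholder, not a working credential):

```env
N8N_HOST=your-n8n.onrender.com
N8N_PORT=5678
GENERIC_TIMEZONE=Europe/Berlin
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=change-me
N8N_SECURE_COOKIE=true
N8N_ENCRYPTION_KEY=change-me-32-bytes
WEBHOOK_URL=https://your-n8n.onrender.com/
```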
For production, set strong values for auth password and encryption key.
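One common way to produce a strong value for `N8N_ENCRYPTION_KEY` is a random hex string from `openssl` (any stable, sufficiently long random string works; n8n uses this key to encrypt stored credentials):

```shell
# Print a 64-character hex string (32 random bytes) suitable as an encryption key.
openssl rand -hex 32
```

Generate it once and keep it — changing the key later makes previously saved credentials unreadable.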
- Create a new Web Service from this repository in Render.
- Use Docker deployment (Render will build from `Dockerfile`).
- Set environment variables from `.env.example`.
- Save and deploy.
- Copy your n8n webhook URL and configure it in `ai-agent-n8n`.
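If you prefer configuration-as-code over dashboard clicks, Render also supports a `render.yaml` Blueprint; a sketch for this service might look like the following (field names follow my understanding of Render's Blueprint format — verify against Render's current documentation):

```yaml
services:
  - type: web
    name: n8n-render
    env: docker            # build from the Dockerfile in this repo
    plan: free
    envVars:
      - key: N8N_BASIC_AUTH_PASSWORD
        sync: false        # set the secret in the dashboard, not in Git
      - key: N8N_ENCRYPTION_KEY
        sync: false        # must stay stable across redeploys
```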
To reduce first-message failures on Render free tier:
- Create a dedicated warmup workflow in n8n:
  - Trigger: `Webhook` (method `GET`, path `agent-warmup`)
  - Action: `Respond to Webhook` with `200` and a simple JSON body.
- Keep your main agent workflow on a separate webhook path.
- In `ai-agent-n8n`, set:
  - `N8N_WARMUP_URL=https://your-n8n.onrender.com/webhook/agent-warmup`
  - `N8N_WARMUP_METHOD=GET`
  - `N8N_REQUEST_TIMEOUT_MS=45000` (or higher if needed)
- Optionally use an external uptime ping on the warmup endpoint every 5 minutes.
This keeps n8n warm more often and lets the API hold back requests until the workflow is ready to respond.
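The gating idea can be sketched as a small retry helper; `warmup` is a hypothetical name, and in production the probe command would be something like `curl -fsS "$N8N_WARMUP_URL"`:

```shell
# Retry a probe command until it succeeds or the attempt budget runs out.
# Usage: warmup <attempts> <command...>
warmup() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0             # the instance answered; safe to send the real request
    fi
    i=$((i + 1))
    sleep 1                # brief pause before the next probe
  done
  return 1                 # still cold after all attempts
}

# Example probe (against the warmup webhook in production):
# warmup 5 curl -fsS "$N8N_WARMUP_URL" >/dev/null
```

The API-side equivalent would run this gate before forwarding the first chat message after an idle period.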
- Set `N8N_PROTOCOL=https`.
- Set `N8N_PROXY_HOPS=1`.
- Set `N8N_EDITOR_BASE_URL` to your Render n8n URL.
- Set `WEBHOOK_URL` to the same public URL.
- Keep `N8N_ENCRYPTION_KEY` stable across redeploys.
- Do not commit real `.env` credentials to Git.
This repository is infrastructure support for the AI agent project, not the chat API itself.