n8n · self-hosted · memory · crash · fix

n8n Out of Memory: Why It Happens and How to Fix It

n8nautomation Team · April 7, 2026
TL;DR: n8n out-of-memory crashes are almost always caused by execution history bloat, large binary file storage, or running too many concurrent workflows on an undersized server. Enable execution pruning, limit binary data retention, and right-size your server to fix it permanently.

n8n is a Node.js application. Like all Node.js processes, it runs inside a single process with a fixed memory ceiling. When that ceiling is breached — usually because of accumulated execution data or large file processing — the process either crashes with a "JavaScript heap out of memory" error or is killed by the operating system's OOM killer. If you have no process manager or Docker restart policy, the instance goes down and stays down until someone manually restarts it.
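If you run n8n in Docker, you can raise Node's heap ceiling with the standard NODE_OPTIONS flag. A minimal docker-compose.yml sketch (the service layout and the 2 GB figure are assumptions; match them to your setup and actual server size):

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      # Raise V8's old-space heap limit to ~2 GB (value is in MB).
      # This only helps if the host actually has that much free RAM.
      - NODE_OPTIONS=--max-old-space-size=2048
```

Raising the ceiling buys headroom but does not stop the underlying growth; the fixes below are still needed.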

Why n8n Runs Out of Memory

There are four main causes, roughly in order of frequency:

  • Execution history bloat: n8n stores every execution's input and output data in its database. Without pruning, this grows indefinitely. A busy instance can accumulate gigabytes of execution data within weeks.
  • Binary file storage: Workflows that download, process, or convert files store binary data in n8n's storage volume. Each file is kept per execution. Hundreds of executions mean hundreds of copies of the same files.
  • Memory leak in long-running workflows: Workflows with Wait nodes or very long loops hold data in memory for the duration of the execution. Many concurrent long-running workflows add up fast.
  • Undersized server: The default 1GB RAM DigitalOcean droplet is borderline for a single-user n8n instance. For teams running 10+ active workflows, it is not enough.

Fix 1: Enable Execution Pruning

This is the most important fix and should be enabled on every production n8n instance. Add these environment variables to your n8n configuration:

EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336

EXECUTIONS_DATA_MAX_AGE=336 sets the retention period in hours — 336 hours is 14 days. Executions older than 14 days are automatically deleted. Adjust this to your needs: set it lower (48–72 hours) if memory is tight, higher if you need longer history for debugging.

Note: Pruning removes execution data from the database but does not remove binary files stored in the storage volume. You need Fix 2 as well to reclaim disk space used by file-processing workflows.
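In a Docker Compose setup, these variables go under the n8n service's environment block. A sketch, assuming a typical docker-compose.yml:

```yaml
services:
  n8n:
    environment:
      # Delete old execution data automatically
      - EXECUTIONS_DATA_PRUNE=true
      # Retention in hours: 336 = 14 days
      - EXECUTIONS_DATA_MAX_AGE=336
```

After editing, run docker compose up -d so the container is recreated with the new variables.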

Fix 2: Limit Binary Data Retention

If your workflows process files — downloads, PDFs, images, spreadsheets — n8n stores a copy of each file per execution in /home/node/.n8n/storage/. This directory can grow to tens of gigabytes without any warning.
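You can check how much space this directory is actually using from the host. A small helper sketch (the docker exec line in the comment assumes the container name n8n-n8n-1 used later in this post):

```shell
# Report the total size of a directory tree in human-readable form.
# For a Dockerized n8n, point it at the mounted data volume, or run du
# inside the container, e.g.:
#   docker exec n8n-n8n-1 du -sh /home/node/.n8n/storage/
dir_size() {
  du -sh "$1" | awk '{print $1}'
}
```

If the number is already in the tens of gigabytes, binary data retention is the first thing to fix.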

Two ways to address this:

  • Add a cleanup step to your workflow: After processing a file, use a Code node or HTTP Request node to delete it. This is the cleanest solution — files are removed immediately after they are no longer needed.
  • Set binary data TTL: Add N8N_BINARY_DATA_TTL=60 to your environment variables. This sets a time-to-live in minutes for binary data. Files older than 60 minutes are automatically deleted by n8n's internal cleanup process.

Warning: Setting a binary data TTL means workflows that reference binary data after the TTL window will fail. Only set this if your workflows process and discard files within a short time window.

Fix 3: Right-Size Your Server

Minimum server recommendations for self-hosted n8n in 2026:

  • Personal / low volume (<5 workflows, <1,000 executions/day): 1GB RAM, 1 vCPU — just about workable with pruning enabled
  • Small team (5–20 workflows, <10,000 executions/day): 2GB RAM, 2 vCPU — recommended minimum
  • Production / high volume: 4GB RAM, 2–4 vCPU — required for workflows with large payloads or concurrent executions

If you are on managed hosting, resizing is usually a one-click operation from your dashboard. On n8nautomation.cloud, you can resize your instance without data loss or downtime from the Instances page.

Fix 4: Use Queue Mode for High Volume

By default, n8n runs workflows in the main process. Under heavy load, this means all workflow execution competes for the same memory pool as n8n's core. Queue mode separates execution into worker processes, giving you better isolation and horizontal scalability.

Queue mode requires Redis and PostgreSQL (not SQLite). It is the right architecture for teams running hundreds of executions per hour. For most small teams, Fixes 1–3 are sufficient.
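The basic queue-mode wiring looks like this — a sketch of the relevant environment variables, assuming Redis is reachable at the hostname redis (names here are illustrative; check the n8n docs against your setup):

```
# Main instance and workers both need these
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
```

Workers are then started as separate processes or containers with the `n8n worker` command, and can be scaled independently of the main instance.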

How to Monitor Memory Usage

If you have SSH access to your server, check current memory usage with:

free -h
docker stats n8n-n8n-1 --no-stream

The first command shows total and available system memory. The second shows real-time memory usage of the n8n container specifically. If the container is consistently above 80% of its memory limit, a resize is overdue.
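To catch creeping memory pressure before the OOM killer does, a small cron-able script can watch available memory. A sketch, assuming a Linux host (the 20% threshold is an arbitrary starting point):

```shell
#!/bin/sh
# Hypothetical watchdog: warn when available memory falls below a threshold.
# Assumes a Linux host with /proc/meminfo.
THRESHOLD=${THRESHOLD:-20}

# Print available memory as a whole-number percentage of total.
mem_available_pct() {
  awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2} END {printf "%d\n", a * 100 / t}' "${1:-/proc/meminfo}"
}

pct=$(mem_available_pct)
if [ "$pct" -lt "$THRESHOLD" ]; then
  echo "WARNING: only ${pct}% of memory available - n8n may be OOM-killed soon"
fi
```

Pipe the warning into whatever alerting you already have (email, Slack webhook, etc.).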

For ongoing monitoring without SSH, use n8n's built-in health endpoint: /healthz returns a 200 when n8n is running, and your reverse proxy will typically return a 502 when it has crashed.
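One lightweight way to act on that endpoint is a cron job on the host. A sketch, with a hypothetical URL and the container name used above:

```
# Hypothetical crontab entry: probe /healthz every 5 minutes and
# restart the container if the check fails (e.g. a 502 from the proxy)
*/5 * * * * curl -sf --max-time 10 https://n8n.example.com/healthz > /dev/null || docker restart n8n-n8n-1
```

This is a blunt instrument — it restarts on any failed probe — but it turns a hard outage into a few minutes of downtime.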

Tired of managing memory, disk, and server size yourself?

n8nautomation.cloud handles server sizing, execution pruning config, and monitoring — so your workflows just run.

Get Managed n8n →

Ready to automate with n8n?

Get affordable managed n8n hosting with 24/7 support.