java.lang.OutOfMemoryError: Java heap space is the single most common production failure we see in Mirth Connect deployments — by a significant margin. It is also one of the most misdiagnosed. Teams often blame "a memory leak" when the real cause is an under-sized heap, a single misbehaving channel processing an unusually large message, or a transformer holding references it shouldn't. Correctly identifying which of these is the cause matters, because the fix is different in each case.
This guide walks through the full diagnostic procedure, the four most common root causes with evidence-based fixes, emergency recovery when you need the engine up in 15 minutes, and the long-term prevention practices that stop the problem recurring. It is written by engineers who have debugged this exact error more than 100 times in production. If you need live help while reading, our 24/7 Mirth helpdesk has an under-15-minute response SLA for emergencies.
1. What the Error Actually Means
Mirth Connect runs on the Java Virtual Machine (JVM), which allocates a fixed pool of memory — the heap — for Java objects. Every HL7 message, every transformer variable, every channel configuration, every queued message waiting to be delivered lives in the heap.
When the heap fills up and the garbage collector cannot free enough space to satisfy a new allocation, the JVM throws java.lang.OutOfMemoryError: Java heap space. Depending on which thread hit the limit, the consequences range from a single failed message (best case) to a completely crashed Mirth Connect server (worst case).
The error itself is never the real problem. It is a symptom. The real problem is one of four underlying conditions, covered in sections 4–7 below.
What the log looks like
You'll typically see this in /logs/mirth.log or wherever your log aggregation lives:
```
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:119)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
    at com.mirth.connect.server.util.TemplateValueReplacer.replaceValues(...)
    ...
```

The stack trace tells you where the last allocation failed, which is a clue — but rarely the cause. The cause is earlier: what was running at that moment, what had been accumulating for the last hour, what the JVM had been doing under pressure.
2. Emergency Recovery Procedure
If production is down right now, do these steps in order.
Step 1 — Restart the Mirth Connect service (2 minutes)
```
# On Linux (systemd):
sudo systemctl restart mirth-connect

# Or, if installed as a standalone service:
sudo /opt/mirthconnect/mcservice restart
```

A restart clears the heap and in 90% of cases restores service. If Mirth doesn't recover after restart, you have a configuration or disk issue, not just an OOM.
Step 2 — Verify the service is up and channels are running (1 minute)
Open the Administrator. Check the Dashboard. Any channels still red? Start them manually.
Step 3 — Check downstream system backlog (2 minutes)
If Mirth was down for any meaningful time, upstream senders may now be hammering Mirth with retries, and downstream consumers may be backlogged. Expect elevated queue depths and proactively monitor for the next hour.
Step 4 — Increase heap allocation immediately (5 minutes)
Edit the JVM arguments in Mirth's startup configuration. On Linux:

```
sudo nano /opt/mirthconnect/mcserver.vmoptions
```

Increase the max heap size by at least 2x. For example, if currently:

```
-Xms512m
-Xmx1024m
```

Change to:

```
-Xms2048m
-Xmx4096m
```

Save and restart the service again. This is a temporary mitigation, not a fix — you still need to identify the root cause. But it gives you breathing room to diagnose without another outage.
Step 5 — Snapshot evidence before it is lost (5 minutes)
Copy these files to a safe location (they are often overwritten on restart):
- /logs/mirth.log (last 1000+ lines)
- /logs/mirth.log.1, .2, etc. if rotation is recent
- Any heap dump files (typically named java_pidXXXXX.hprof) in the Mirth install directory or /tmp/
- GC logs if enabled (/logs/gc.log or similar)
With the service back up and evidence in hand, proceed to diagnosis.
3. Collecting Diagnostic Evidence
Before guessing at the cause, collect data. The more evidence you have, the faster you'll find the real problem.
3.1 Check current heap configuration
```
# Find the Mirth process
ps -ef | grep mirthconnect

# Check JVM flags
jcmd <PID> VM.flags | grep -i heap
```

Look for -Xms (initial heap) and -Xmx (max heap). A production Mirth should have -Xmx of at least 2 GB for small deployments, 4–8 GB for mid-size, and 16+ GB for high-throughput reference labs and HIEs.
3.2 Enable GC logging if not already on
Add to mcserver.vmoptions:
```
-Xlog:gc*:file=/opt/mirthconnect/logs/gc.log:time,uptime:filecount=10,filesize=100M
```

(For older JVMs, -XX:+PrintGCDetails is the same idea; the newer -Xlog syntax is preferred on Java 11+.)
This lets you see the shape of memory pressure over time: are heap usage curves climbing steadily (leak), or spiking occasionally (large messages), or consistently at 95%+ (undersized heap)?
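One way to automate that triage is a small script that scans the post-GC heap occupancy in gc.log and reports which of the three shapes it follows. The sketch below (Node.js) assumes G1 pause records shaped like `512M->128M(1024M) 5.432ms`; real log formats vary by JVM version, and the 0.9 / 1.5x / 2x thresholds are illustrative starting points, not fixed rules.

```javascript
// Sketch: classify heap-pressure shape from unified GC log lines (-Xlog:gc*).
// Assumption: pause records look like "... 512M->128M(1024M) 5.432ms".
const PAUSE_RE = /(\d+)M->(\d+)M\((\d+)M\)/;

function classifyHeap(logLines) {
  // Heap occupancy *after* each collection approximates the live set.
  const after = [];
  let total = 0;
  for (const line of logLines) {
    const m = PAUSE_RE.exec(line);
    if (m) {
      after.push(Number(m[2]));
      total = Number(m[3]);
    }
  }
  if (after.length < 2 || total === 0) return "not enough data";
  const first = after[0];
  const last = after[after.length - 1];
  const peak = Math.max(...after);
  if (last / total > 0.9) return "undersized heap (live set near -Xmx)";
  if (last > first * 1.5) return "possible leak (post-GC floor climbing)";
  if (peak > last * 2) return "spiky (suspect large individual messages)";
  return "stable";
}
```

Feed it the last few hours of gc.log lines; a climbing post-GC floor points at section 6 (leaks), a floor pinned near -Xmx at section 4 (under-sized heap), and isolated spikes at section 5 (large messages).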
3.3 Capture a heap dump
Two ways to get a heap dump for analysis:
Automatic on OOM (recommended for production):
```
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/opt/mirthconnect/logs/
```

Add these flags and restart. The next OOM will leave a .hprof file behind.
On-demand from a running JVM:

```
jcmd <PID> GC.heap_dump /opt/mirthconnect/logs/mirth-$(date +%s).hprof
```

3.4 Analyze the heap dump
Open the .hprof in Eclipse Memory Analyzer Tool (MAT) — free, widely used, and purpose-built for this.
Look at:
- Dominator tree — which single objects retain the most memory.
- Leak Suspects report — MAT's automated analysis of what's holding memory.
- Histogram — class-level memory usage.
If one class is retaining hundreds of megabytes of java.lang.String or byte[], you likely have a transformer issue. If hundreds of small objects add up to GB, you may have a queue backlog.
3.5 Look at channel statistics
From the Mirth Administrator Dashboard, check:
- Queued — messages sitting in source/destination queues. Large numbers (10K+) indicate backlog.
- Error — channels with elevated error counts may be retrying large messages repeatedly.
- Avg msg/sec throughput — a channel suddenly running 10x its normal rate can exhaust heap.
4. Root Cause 1 — Under-Sized Heap
The simplest cause and the most common. Mirth was installed with default JVM settings (typically a 256 MB or 512 MB max heap), then the deployment gradually grew: more channels, higher volumes, bigger messages, until the heap couldn't keep up.
Symptoms
- OOMs occur during peak traffic hours.
- GC logs show heap hitting the -Xmx ceiling repeatedly.
- Restart temporarily resolves the problem, only for it to recur.
- Heap dump shows no single dominator — memory is spread across legitimate working-set objects.
Fix
Increase -Xms and -Xmx to appropriate values. Our production sizing guide:
| Deployment | Recommended -Xmx |
|---|---|
| Development / testing | 1–2 GB |
| Small hospital, <20 channels | 4 GB |
| Mid-size hospital, 20–60 channels | 8 GB |
| Large hospital, 60–200 channels | 16 GB |
| Reference lab / high throughput | 16–32 GB |
| Large HIE | 32+ GB, consider clustering |
Always set -Xms equal to -Xmx in production. This avoids heap-growth pauses and keeps GC behavior predictable.
Also increase:
- Metaspace — -XX:MaxMetaspaceSize=512m (Java 8+). Mirth loads many classes; the default is often too small.
- Code cache — -XX:ReservedCodeCacheSize=256m for heavily-used deployments.
5. Root Cause 2 — Large Individual Messages
A single HL7 message with a base64-embedded PDF, or a CDA document with hundreds of OBX segments, or an oversized batch file can exceed the working heap needed for normal traffic. Mirth processes the message, transformers build intermediate copies, the heap fills in one massive spike.
Symptoms
- OOMs occur at unpredictable times, sometimes during off-hours.
- One or two specific channels consistently appear in the stack trace.
- Heap dump shows a single giant String or byte[] retaining hundreds of megabytes.
- Channel statistics show intermittent very large message sizes.
Find the offending channel
```
-- Search Mirth's per-channel content table for large messages
-- (Example PostgreSQL query — adapt to your schema version;
--  d_mc<channel_id> is the content table for one channel)
SELECT message_id, LENGTH(content) AS size
FROM d_mc<channel_id>
ORDER BY size DESC
LIMIT 10;
```

Look for messages larger than 1 MB. In HL7 that is almost always a base64-encoded attachment in an OBX or ED segment.
Fixes
Short-term: Increase heap size as in Root Cause 1 and monitor.
Medium-term: Configure per-channel queuing thresholds and apply message size limits on source connectors. MLLP listeners in Mirth support max message size configuration.
Long-term: Restructure how your team handles large payloads:
- Offload binary content — write attachments to object storage (S3, Azure Blob) and keep only references in the HL7 message.
- Stream, don't buffer — for large file-based inputs, use Mirth's file reading with batch splitting rather than loading entire files into memory.
- Reject-and-alert on oversize messages at the MLLP listener rather than processing them.
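The reject-and-alert idea can be sketched as a size gate you would call from a source filter. The helper below is plain JavaScript so it runs anywhere; MAX_BYTES and the commented Mirth usage are illustrative assumptions, not Mirth APIs, and the 1 MB limit should be tuned to your traffic profile.

```javascript
// Sketch of a size gate for rejecting oversize messages at the source.
const MAX_BYTES = 1024 * 1024; // assumed 1 MB limit; tune per deployment

// Returns true when the raw message should be rejected as oversize.
// Byte length matters more than character count: base64 payloads are
// ASCII, but multi-byte encodings can inflate size.
function isOversize(raw, maxBytes) {
  if (raw == null) return false;
  // Buffer.byteLength gives the UTF-8 byte size in Node; inside Mirth's
  // Rhino engine you would approximate with raw.length or Java getBytes().
  return Buffer.byteLength(String(raw), "utf8") > maxBytes;
}

// In a Mirth source filter this might look like (not runnable outside Mirth):
//   if (isOversize(connectorMessage.getRawData(), MAX_BYTES)) {
//     logger.error("Rejecting oversize message on channel " + channelId);
//     return false; // filter the message out instead of processing it
//   }
//   return true;
```

Pair the rejection with an alert so the sender can be told to route the attachment through a different path rather than silently losing data.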
For broader channel optimization patterns, see our companion Mirth Connect performance tuning guide.
6. Root Cause 3 — Transformer Memory Leaks
A transformer (JavaScript or Groovy) holds references to objects in a way that prevents garbage collection. Over hours or days, heap usage creeps up. Eventually OOM.
Symptoms
- Heap usage grows steadily during operation, with GC unable to recover it.
- Restart fixes the problem for exactly as long as it takes to leak up again — often 4 hours to several days.
- Heap dump shows many retained objects that look like they should have been freed.
- The leak typically traces to one specific channel or one custom transformer.
Common leak patterns
1. Global variables that grow unbounded:
```
// BAD — this stays in the global map forever
globalMap.put(msg['MSH']['MSH.10'].toString(), msg.toString());
```

2. Closures capturing channel-scope data:
```
// BAD — callback holds reference to large msg object permanently
someAsyncHelper.register(function() { return msg; });
```

3. Custom Java helpers with static caches:
If a custom Java library is dropped into /custom-lib/ with a static Map or List that grows, it leaks heap forever.
Fix
Step 1 — Identify the leaking channel: Capture a heap dump during peak leak pressure. In MAT, use the Dominator Tree and Leak Suspects report. Look for object trees rooted in a specific channel name or transformer.
Step 2 — Review the transformer code:
- Do you put anything in globalMap, globalChannelMap, sourceMap, or connectorMap that grows over time?
- Are you holding references in long-lived closures?
- Are you using custom static data structures?
Step 3 — Refactor:
- Never put message-scoped data in globalMap. Use channelMap for single-message state.
- Bound any cache with an LRU eviction policy.
- Release large objects explicitly when done (msg = null; at the end of the transformer).
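Bounding a cache with LRU eviction, as suggested above, can be as small as the sketch below. It is plain JavaScript (Map preserves insertion order, which makes recency tracking trivial); the idea of storing the instance in globalChannelMap is a suggestion for Mirth usage, not a Mirth API requirement.

```javascript
// Minimal bounded LRU cache sketch, usable from a Mirth transformer.
// Re-inserting on access keeps the most recently used keys at the tail
// of the Map's insertion order, so the head is always the eviction victim.
function makeLruCache(maxEntries) {
  const entries = new Map();
  return {
    get(key) {
      if (!entries.has(key)) return undefined;
      const value = entries.get(key);
      entries.delete(key);      // refresh recency by re-inserting
      entries.set(key, value);
      return value;
    },
    put(key, value) {
      if (entries.has(key)) entries.delete(key);
      entries.set(key, value);
      if (entries.size > maxEntries) {
        // Evict the least recently used entry (first key in order)
        entries.delete(entries.keys().next().value);
      }
    },
    size() { return entries.size; }
  };
}
```

In Mirth you might construct this once (for example in a deploy script) and keep it in globalChannelMap so all messages share it; the important property is that it can no longer grow without limit.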
Step 4 — Validate the fix: Run with the fix in staging for at least 24 hours under load. Observe heap via GC logs. Heap baseline should remain stable between major GC cycles.
For the full transformer hygiene guide, see Mirth Connect Groovy vs JavaScript transformers.
7. Root Cause 4 — Queue Backlog
A downstream destination is slow or unavailable, so messages accumulate in the channel's destination queue. Each queued message holds memory. Eventually the queue alone consumes multiple gigabytes of heap.
Symptoms
- OOMs occur after a known downstream incident (LIS went down, EHR slowed, partner API timed out).
- Channel statistics show Queued counts in the tens of thousands.
- Heap dump shows many thousands of similar objects — typically queued message payloads.
- Dashboard may show "Channel slow / queue growing" warnings.
Fix
Short-term:
- Stop the affected channel to halt queue growth while downstream recovers.
- Allow downstream to catch up or manually drain/replay queued messages.
- Restart Mirth to clear heap once queue is at safe size.
Medium-term:
- Configure queue size thresholds per destination. Mirth can reject or dead-letter new messages once a queue hits a limit, rather than accumulating indefinitely.
- Enable destination queue retry limits with backoff — don't infinitely retry a broken downstream.
Long-term:
- Move from in-memory queueing to durable external queuing (Kafka, RabbitMQ, SQS) for high-volume or business-critical flows. This decouples Mirth's heap from downstream availability.
- Implement circuit breakers on destination connectors — stop sending after N consecutive failures and alert.
- Monitor queue depth as a first-class alerting metric, not an afterthought.
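The circuit-breaker counting logic mentioned above fits in a few lines. This is a plain-JavaScript sketch, not a Mirth API: in Mirth the breaker state could live in globalChannelMap, with allowSend() consulted in a destination filter and the outcome recorded in the response transformer. The injectable clock exists only to make the sketch testable.

```javascript
// Circuit-breaker sketch: stop sending after N consecutive failures,
// then allow a single probe once a cool-down period has elapsed.
function makeBreaker(maxFailures, coolDownMs, now) {
  const clock = now || Date.now;  // injectable clock for testing
  let failures = 0;
  let openedAt = null;            // null means the breaker is closed
  return {
    allowSend() {
      if (openedAt === null) return true;          // closed: send normally
      return clock() - openedAt >= coolDownMs;     // open: probe after cool-down
    },
    recordSuccess() { failures = 0; openedAt = null; },
    recordFailure() {
      failures += 1;
      if (failures >= maxFailures) openedAt = clock();
    },
    isOpen() { return openedAt !== null; }
  };
}
```

Fire an alert whenever the breaker opens — that event is exactly the "downstream is broken" signal that queue-depth monitoring catches only after memory is already accumulating.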
For cloud-native queueing patterns, see the deployment patterns in our Mirth Connect complete guide.
8. JVM and GC Tuning for Production
Beyond sizing the heap correctly, production Mirth deployments should use modern JVM flags to avoid GC-induced pauses and maximize throughput.
Recommended baseline (Java 11+)
```
# Heap sizing
-Xms8g
-Xmx8g

# Garbage collector — G1 is the right default for most Mirth workloads
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200

# Metaspace
-XX:MetaspaceSize=256m
-XX:MaxMetaspaceSize=512m

# Code cache
-XX:ReservedCodeCacheSize=256m

# Automatic heap dump on OOM
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/opt/mirthconnect/logs/

# GC logging
-Xlog:gc*,safepoint:file=/opt/mirthconnect/logs/gc.log:time,uptime:filecount=10,filesize=100M

# Graceful OOM handling
-XX:+ExitOnOutOfMemoryError
```

Notes
- Setting -Xms equal to -Xmx is non-negotiable in production — it keeps the heap at a fixed size and avoids expensive growth operations during peak load.
- G1 GC is the default in Java 11+ and is the right choice for most Mirth workloads. ZGC or Shenandoah are worth evaluating for very large heaps (>32 GB) where pause times become critical.
- -XX:+ExitOnOutOfMemoryError makes the JVM exit on OOM rather than running in a partially broken state. Paired with systemd's Restart=always, this gives you clean auto-recovery on OOM.
- Avoid -XX:+UseParallelGC and other legacy GC choices unless you have a specific reason — G1 is better for Mirth's mixed workload.
For deeper performance work, see our Mirth Connect performance tuning guide.
9. Long-Term Prevention Checklist
A deployment that never OOMs has all of these in place:
- ✓ Right-sized heap for current traffic plus 50% headroom
- ✓ -Xms equals -Xmx (no dynamic heap growth in production)
- ✓ G1 GC with reasonable pause target (MaxGCPauseMillis=200)
- ✓ GC logging enabled and aggregated to your observability platform
- ✓ Heap dump on OOM enabled
- ✓ Monitoring on heap utilization — alert when sustained >75% for >15 minutes
- ✓ Monitoring on queue depth — alert when any destination queue >1000 (adjust per volume)
- ✓ Channel retry limits configured so stuck destinations don't accumulate infinitely
- ✓ Max message size configured on source connectors
- ✓ Binary content offloaded to object storage where possible
- ✓ Transformer code reviewed for globalMap abuse and unbounded caches
- ✓ Monthly review of channel statistics for outlier patterns
- ✓ Quarterly load testing against expected peak-plus-50%
- ✓ Incident playbook for OOM recovery documented and drilled
- ✓ Runbook for heap dump analysis — every on-call engineer should know the procedure
For the full production hardening checklist, see Mirth Connect security HIPAA checklist (covers security and reliability controls) and common Mirth Connect issues and fixes for the broader catalog of problems and resolutions.
10. When to Escalate to Expert Support
Diagnose OOMs yourself when you have time and the problem recurs on a known schedule. Escalate when:
- Production is down right now and you cannot get Mirth back to stable running within 30 minutes.
- The OOM recurs after restart in under an hour — something is consuming heap unusually fast and you need another pair of eyes before traffic fully recovers.
- Heap dump analysis is needed and your team doesn't have experience with Eclipse MAT.
- The problem traces to a specific channel but your team isn't confident modifying production transformers under pressure.
- Clinical workflows depend on the failing feed and every minute of downtime has clinical consequences.
Our Mirth Connect helpdesk has an under-15-minute response SLA for emergencies and includes engineers with 12+ years of HL7 and JVM debugging experience. Emergency engagements are covered under NDA by default, so you can call for help without procurement overhead. For ongoing operations, our broader services team covers preventive Mirth tuning, load testing, and channel refactoring to stop OOMs before they happen — alongside our HL7 integration services across the USA.
11. Frequently Asked Questions
What causes java.lang.OutOfMemoryError: Java heap space in Mirth Connect?
Four common causes, in rough order of frequency: (1) under-sized JVM heap for current traffic, (2) a single large message (often base64-embedded PDF or image) exceeding working memory, (3) a transformer memory leak holding references it shouldn't, (4) a queue backlog accumulating memory when a downstream destination is slow or down.
How much heap does Mirth Connect need?
For production, 4 GB is a reasonable minimum for small hospital deployments, 8 GB for mid-size, 16 GB or more for large hospital and high-throughput lab workloads. Always set -Xms equal to -Xmx in production to avoid heap growth pauses.
How do I increase Mirth Connect's heap size?
Edit mcserver.vmoptions in the Mirth install directory. Change -Xms and -Xmx to the desired size (for example -Xms8g -Xmx8g for 8 GB). Restart the Mirth service. Verify with jcmd <PID> VM.flags.
How do I tell if I have a memory leak vs an under-sized heap?
Enable GC logging. Over a 24-hour period: steadily climbing heap usage that the garbage collector cannot recover is a leak. Consistent high usage with occasional OOMs during peaks is an under-sized heap. Sharp spikes are typically large individual messages.
What is the best GC for Mirth Connect?
G1 GC (-XX:+UseG1GC) is the right default for most Mirth Connect workloads in 2026, and is the default in Java 11+. For very large heaps (>32 GB), ZGC or Shenandoah may give better pause-time characteristics. Avoid ParallelGC unless you have a specific throughput-oriented reason.
Can I recover from an OOM without restarting?
Usually no. Once the JVM throws OutOfMemoryError, the process is in an unpredictable state — some threads may have died, data structures may be inconsistent, and message processing may have partially failed. Set -XX:+ExitOnOutOfMemoryError and let systemd (or your service manager) restart the process cleanly.
What if my transformer puts data in globalMap?
Don't. globalMap persists across all messages and channels, and anything you put there lives until Mirth restarts. Use channelMap for single-message scope, or globalChannelMap for channel-scoped data that persists across messages — not globalMap.
How do I analyze a Mirth Connect heap dump?
Open the .hprof file in Eclipse Memory Analyzer Tool (MAT). Run the Leak Suspects report first — it's often accurate. If not, use the Dominator Tree to find the largest retained objects, and the Histogram view for class-level analysis. Look for large strings, large byte arrays, or classes from specific Mirth channels.
Why does my Mirth OOM happen at night?
If you run scheduled file-based channels (nightly reconciliation, overnight ELR batches, enrollment exports), they often process large files whose message size dwarfs daytime HL7 traffic. Check your file-reader channels for batch size and memory behavior.
Related Reading
- Mirth Connect: The Complete Guide
- Common Mirth Connect Issues & Fixes
- Mirth Connect Performance Tuning
- Mirth Connect Groovy vs JavaScript Transformers
- Mirth Connect Installation Guide
- Mirth Connect Security & HIPAA Checklist
- Mirth Connect MLLP Connection Refused
- Mirth Connect Channel Not Starting
- HL7 Integration: The Complete Guide