Connection refused, Connection timed out, and Connection reset by peer are the three variants of the most common networking failure in production Mirth Connect environments. They all look identical on the surface — an HL7 interface stops moving messages — but the causes are different and the fixes are different. Getting the diagnosis right saves hours.
MLLP (Minimal Lower Layer Protocol) is the transport that moves 85% of clinical HL7 traffic in US hospitals. It is simple, reliable, and unforgiving. A firewall change, a TLS cipher mismatch, a listener misconfiguration, or even a DNS tweak in the wrong place will break MLLP flow silently — downstream systems stop getting messages, and the sender has no idea anything is wrong until an ACK timeout fires.
This guide is the complete diagnostic procedure we use in production to identify and resolve MLLP connection failures, written by engineers who have debugged this pattern more than 150 times. If production is down and you need help now, our Mirth Connect helpdesk responds in under 15 minutes with engineers who have done this work for more than 100 US healthcare organizations.
1. MLLP in 60 Seconds
Before debugging MLLP, it helps to know exactly what is on the wire. MLLP is barely a protocol at all — it is a framing convention layered on top of a long-lived TCP connection. Every HL7 message is wrapped in two control characters and sent over TCP:
<VT> HL7 message content <FS><CR>
Where:
- <VT> is the hex byte 0x0B (vertical tab — the start-of-block marker).
- <FS> is hex 0x1C (file separator — the end-of-block marker).
- <CR> is hex 0x0D (carriage return — the terminator).
The sender opens a TCP socket to the receiver, pushes a wrapped message, and waits for a wrapped ACK coming back. When the sender sees the <FS><CR> sequence, it knows a complete message (or ACK) has arrived.
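The framing convention above is small enough to sketch directly. The following Python snippet is illustrative only — the `frame`/`unframe` names are ours, not a Mirth or HL7 library API:

```python
# Minimal sketch of MLLP framing, assuming the standard control bytes
# described above. frame/unframe are illustrative names, not a real API.
VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

def frame(hl7_message: bytes) -> bytes:
    """Wrap an HL7 message in the MLLP start/end-of-block markers."""
    return VT + hl7_message + FS + CR

def unframe(data: bytes) -> bytes:
    """Extract the HL7 payload from a complete MLLP frame."""
    if not (data.startswith(VT) and data.endswith(FS + CR)):
        raise ValueError("not a complete MLLP frame")
    return data[1:-2]  # strip <VT> at the front, <FS><CR> at the end
```

The same two functions describe both directions: the message going out and the ACK coming back are framed identically.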
MLLP can run:
- Plaintext on trusted internal networks (common historically; increasingly discouraged for HIPAA).
- MLLP over TLS (often called MLLPS) — the modern default.
- MLLP through VPN or ExpressRoute — for cross-organization links.
A "connection refused" in any of these contexts means the TCP layer failed before MLLP framing was ever attempted. That is a useful starting point: you are almost never debugging HL7 when you see connection refused. You are debugging networking, TLS, or service state. For broader context on MLLP and HL7 transport, see MLLP protocol explained in our HL7 integration guide.
2. Emergency Recovery Procedure
If an HL7 feed is broken right now, do these in order.
Step 1 — Confirm the direction of failure (30 seconds)
Is Mirth the listener (inbound, receiving from an EHR) or the sender (outbound, pushing to a downstream system)?
- Mirth as listener — the sender (EHR, LIS, etc.) is seeing "connection refused" reaching Mirth.
- Mirth as sender — Mirth's own logs show "connection refused" to a destination.
The direction determines everything else in the procedure.
Step 2 — Is the Mirth service even running? (30 seconds)
# Linux (systemd)
sudo systemctl status mirth-connect
# Or the standalone init script
sudo /opt/mirthconnect/mcservice status
If Mirth is down, start it:
sudo systemctl start mirth-connect
Step 3 — Is the listener port actually open on Mirth's host? (1 minute)
If Mirth is the listener on port 6661 (for example):
# Check port is bound
sudo ss -tlnp | grep 6661
# Or if ss isn't available
sudo netstat -tlnp | grep 6661
You should see LISTEN on 0.0.0.0:6661 or the bound address. If nothing is listening, the channel or listener is down. Open the Mirth Administrator, find the channel, and start it. Check Dashboard for a red status indicator.
Step 4 — Network test from the sender's perspective (1 minute)
From the sending host (EHR, LIS, whatever):
# Basic TCP reachability
nc -vz mirth-host 6661
# Or with telnet on older systems
telnet mirth-host 6661
- "Connected" → TCP is fine; the problem is higher up the stack (TLS, framing, authentication).
- "Connection refused" → something on the network path is rejecting the TCP handshake.
- "Connection timed out" → the packets aren't reaching the destination at all; likely firewall silently dropping traffic.
Step 5 — Quick firewall sanity check (2 minutes)
On the Mirth host:
# Check iptables / nftables
sudo iptables -L -n | grep 6661
sudo nft list ruleset | grep 6661
# Check firewalld
sudo firewall-cmd --list-all
If the rule isn't there and it used to be, someone changed it. That is often the whole fix — re-add the rule, test, done.
Step 6 — Check the Mirth log for the specific exception (1 minute)
tail -n 200 /opt/mirthconnect/logs/mirth.log
Look for stack traces containing MllpConnector, SocketException, SSLHandshakeException, or Connection refused. The exception class tells you which root cause category to focus on — sections 5–9 cover them in detail.
Step 7 — If still down after 10 minutes, escalate
An MLLP connection that doesn't recover from the first six steps usually has a less obvious cause — certificate expiration, a misrouted packet path after an infrastructure change, or a deeper Mirth configuration drift. If clinical workflow depends on this feed, escalate now rather than continuing to debug alone.
3. Three Error Variants and What They Each Mean
"Connection refused" is the question people search for, but three related errors have different meanings. Identify which one you actually have.
3.1 Connection refused
On the wire: The TCP SYN packet reached the destination host, but the host replied with RST — meaning no service is listening on that port.
Likely causes: Mirth service down. Channel/listener stopped. Listener configured on the wrong interface (e.g., bound to 127.0.0.1 only while sender is remote). Port conflict with another process. Container restart in progress.
3.2 Connection timed out
On the wire: The TCP SYN packet was sent but no response of any kind came back within the timeout.
Likely causes: Firewall silently dropping packets. DNS resolving to the wrong host. Routing table change. VPN tunnel down. Destination host unreachable.
3.3 Connection reset by peer
On the wire: The TCP handshake completed, but the connection was then abruptly terminated by the other side.
Likely causes: TLS handshake failure (the receiver accepted TCP then hung up when TLS negotiation failed). Load balancer draining. Max connections exceeded on the receiver. Application-level error causing the receiver to close the connection.
Writing down which of the three you are seeing — based on the exact error message in the sender's logs or in tcpdump — narrows the problem to a specific layer and saves meaningful time.
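The three variants map directly onto the exceptions a TCP client raises, which makes the classification easy to automate. A hedged sketch in Python — `probe` is an illustrative name, and the host/port values are placeholders for your own interface inventory:

```python
# Sketch of a TCP probe that distinguishes the three error variants.
# The mapping from exception to diagnosis mirrors sections 3.1-3.3.
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"   # TCP is fine; look at TLS / MLLP framing next
    except ConnectionRefusedError:
        return "refused"         # host reachable, RST sent: nothing listening
    except ConnectionResetError:
        return "reset"           # accepted then killed: suspect TLS or LB
    except socket.timeout:
        return "timed out"       # no response at all: suspect firewall/routing
```

Run it from the sending host, not from the Mirth host — the answer can differ, and the sender's perspective is the one that matters.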
4. Layer-by-Layer Diagnosis
Once you know you have a connectivity failure, work up the stack in this order:
Layer 3 / IP — Is the destination IP reachable?
ping mirth-host
traceroute mirth-host
If ping/traceroute fails, the issue is network routing or host-down. Skip everything else until this works.
Layer 4 / TCP — Is the port reachable?
nc -vz mirth-host 6661
If this times out, firewall. If it says refused, service state. If it connects, go up the stack.
Layer 6 / TLS (if using MLLPS) — Does the handshake complete?
# Probe the TLS endpoint
openssl s_client -connect mirth-host:6661 -showcerts
Look for Verify return code: and certificate chain errors.
Layer 7 / MLLP framing — Can you actually send a test message and get an ACK back?
A quick manual test using nc with a printf-built frame:
printf '\x0bMSH|^~\\&|TEST|TEST|TEST|TEST|20260417000000||ADT^A01|12345|P|2.5\rEVN|A01|20260417000000\x1c\r' | nc mirth-host 6661
If the message is accepted and an ACK comes back (wrapped in <VT>...<FS><CR>), the entire path is healthy.
For routine MLLP health checks in production, we recommend scripting layers 3, 4, and 7 into your monitoring system.
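A layer-7 check can be scripted in a few lines. The sketch below assumes the standard framing bytes from section 1; `mllp_ping` is an illustrative helper for a monitoring script, not part of Mirth or any library:

```python
# Sketch of a layer-7 MLLP health check: send one framed message,
# read until the end-of-block marker, return the unwrapped ACK payload.
import socket

VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

def mllp_ping(host: str, port: int, hl7: bytes, timeout: float = 10.0) -> bytes:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(VT + hl7 + FS + CR)
        buf = b""
        while not buf.endswith(FS + CR):      # ACK ends with <FS><CR>
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("peer closed before a complete ACK")
            buf += chunk
        return buf[buf.index(VT) + 1 : -2]    # strip <VT>...<FS><CR>
```

Schedule this against every interface with a synthetic test message and alert when the ACK stops coming back or contains MSA|AE / MSA|AR.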
5. Root Cause 1 — Firewall Rules
By a significant margin, the most common cause of "connection refused" and "connection timed out" for MLLP in production.
Symptoms
- Change started suddenly after a security update, cloud migration, network project, or routine change-control window.
- TCP probes from the sender time out or are refused.
- The Mirth host itself shows the listener is up and bound correctly.
- You can telnet to the port from the Mirth host (localhost) but not from the sender.
Diagnosis
Run from the sending host:
nc -vz mirth-host 6661
If this fails, verify whether traffic is being dropped silently (timeout) or actively refused (RST). Timeouts almost always mean a firewall or security group is silently blocking.
On the Mirth host, check OS-level firewalls and any edge firewalls in the network path:
# OS firewall — Linux
sudo iptables -L -n | grep 6661
sudo firewall-cmd --list-all | grep 6661
# Cloud security groups (AWS example)
aws ec2 describe-security-groups --group-ids sg-xxxxxxxxx
# Windows
netsh advfirewall firewall show rule name=all | findstr 6661
Fix
Add or restore the needed rule. Example for Linux firewalld:
sudo firewall-cmd --permanent --add-port=6661/tcp
sudo firewall-cmd --reload
For AWS security groups, add an inbound rule allowing the sender's CIDR to the port.
Verify the fix from the sender's perspective, not from the Mirth host — a rule that opens localhost won't help a remote sender.
Prevention
- Document every MLLP port in a central firewall inventory. Include source/destination CIDR, port, protocol, and business owner.
- Include MLLP ports in change-control review — so a network or security change that would break them gets caught before deployment.
- Monitor port reachability from the actual sending host, not just from Mirth's host.
6. Root Cause 2 — Listener Not Started / Port Conflict
A Mirth channel's MLLP listener failed to start, or another process is holding the port.
Symptoms
- Mirth service is up but the specific channel is stopped or errored.
- ss or netstat shows nothing listening on the expected port, or shows a different process.
- Mirth log contains java.net.BindException: Address already in use around the channel's start time.
Diagnosis
# What is listening (if anything)?
sudo ss -tlnp | grep 6661
# Any other process bound to that port?
sudo lsof -i :6661
Check the Mirth Administrator Dashboard — is the channel marked "Stopped" or in error? Look at channel logs in the Administrator for a specific error message.
Fix
If a conflicting process is holding the port:
Identify it (lsof -i :6661) and either kill it or move Mirth's listener to a different port. Two processes cannot share a TCP listen port on the same interface.
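The underlying failure is easy to reproduce: the second bind on an occupied port fails with EADDRINUSE, which is exactly what the JVM surfaces as BindException. A minimal Python sketch (`port_is_free` is an illustrative name for a pre-deployment check, not a Mirth feature):

```python
# Sketch: test whether a listener port is already held by another process.
# Binding an occupied port raises OSError with errno EADDRINUSE -- the
# same condition Mirth reports as java.net.BindException.
import errno, socket

def port_is_free(host: str, port: int) -> bool:
    try:
        with socket.socket() as s:
            s.bind((host, port))
            return True
    except OSError as e:
        if e.errno == errno.EADDRINUSE:
            return False
        raise
```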
If the channel won't start:
- Verify the port number in the channel configuration.
- Ensure the bind address matches the interface you want to expose (for remote access, bind to 0.0.0.0 or a specific external IP, not 127.0.0.1).
- Check the Mirth user has permission to bind the port (privileged ports <1024 require root/admin or capabilities).
- Look for a TLS misconfiguration that prevents startup (see Root Cause 3).
If the channel was working and suddenly isn't: Check whether anyone edited the channel recently. Sometimes a seemingly unrelated edit (a transformer change) causes the channel to redeploy and fail to bind if configuration drift has happened.
For broader issues around channels not starting at all, see our companion troubleshooting article on Mirth Connect channel not starting.
7. Root Cause 3 — TLS / Certificate Mismatch
MLLP over TLS (MLLPS) is the dominant mode in 2026, and TLS mismatches are the second most common cause of MLLP failures we see in production.
Symptoms
- Plain-text MLLP works but TLS-enabled MLLP fails.
- Sender sees Connection reset by peer rather than refused.
- Mirth log contains javax.net.ssl.SSLHandshakeException, SSLException, or PKIX path building failed.
- Failure started suddenly on a specific date — usually the date of a certificate renewal or expiration.
Diagnosis
Probe the TLS endpoint from the sender:
openssl s_client -connect mirth-host:6661 -showcerts
The output shows:
- The complete certificate chain being presented.
- Protocol version negotiated (or failure reason).
- Certificate expiration date.
- Signature algorithm.
Look for:
- Verify return code: 0 (ok) → TLS is working from this probe's perspective.
- Verify return code: 10 (certificate has expired) → the certificate is past its notAfter date.
- Verify return code: 19 (self signed certificate in certificate chain) → missing intermediate CA in the sender's trust store.
- Verify return code: 20 (unable to get local issuer certificate) → CA trust problem.
Check Mirth's own log:
grep -i 'ssl\|tls\|handshake' /opt/mirthconnect/logs/mirth.log | tail -n 50
Fix
Certificate expired:
Renew or replace. If using a keystore, import the new cert into Mirth's keystore and restart the channel:
keytool -importcert -alias mllp-cert -file new-cert.pem -keystore /opt/mirthconnect/conf/keystore.jks
Update the channel's TLS configuration to reference the new keystore or alias. Verify with another openssl s_client probe.
Cipher suite mismatch:
If the sender and receiver don't share any common cipher suite, the handshake fails. Check both sides' allowed protocols and ciphers:
- Mirth uses the JVM's default protocols and ciphers unless overridden in the channel config.
- Modern defaults (TLS 1.2+, modern ciphers) may reject older senders — and older Mirth versions may not support the newest TLS versions without JVM tuning.
The fix is usually to bring both sides to TLS 1.2 or later with a shared modern cipher suite, and update whichever side is behind.
Client certificate authentication misconfigured:
Some MLLPS setups use mutual TLS (mTLS), where the sender must present a certificate that Mirth verifies. Ensure:
- Mirth's truststore contains the CA that issued the sender's client certificate.
- The sender actually sends a client certificate (it's a common misconfiguration to enable mTLS in Mirth but forget to configure the sender).
JVM trust store missing the CA:
Import the issuing CA into Mirth's truststore:
keytool -importcert -alias sender-ca -file sender-ca.pem -keystore /opt/mirthconnect/conf/truststore.jks
Restart the channel. Test again.
Prevention
- Monitor certificate expiration — every MLLPS listener and sender cert should be in a calendar with alerts 90 days before expiry, not the day of.
- Document your TLS posture — minimum protocol version, allowed ciphers, client-cert requirements, trust-chain structure — per interface.
- Test certificate renewals in non-production before the expiration date arrives. Many teams discover only at renewal time that their automation doesn't cleanly update the keystore.
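Expiry alerting reduces to simple date arithmetic on the certificate's notAfter field. A sketch, assuming the timestamp format that openssl s_client and Python's ssl module emit; the helper names and alert thresholds are illustrative:

```python
# Sketch of certificate-expiry alerting on a notAfter timestamp such as
# "Apr 17 00:00:00 2026 GMT". Thresholds below are illustrative.
import ssl, time

ALERT_DAYS = (90, 60, 30, 7, 1)

def days_until_expiry(not_after: str, now=None) -> int:
    """Whole days remaining before the certificate's notAfter timestamp."""
    expiry = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return int((expiry - now) // 86400)

def should_alert(not_after: str, now=None) -> bool:
    """True once the certificate is inside the widest alert window."""
    return days_until_expiry(not_after, now) <= max(ALERT_DAYS)
```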
For the broader TLS hardening guide, see our companion Mirth Connect SSL/TLS hardening and security HIPAA checklist.
8. Root Cause 4 — Host or Network Routing Changes
Less common than firewalls but frequent enough to check early: something about the network path changed.
Symptoms
- TCP probe from sender times out.
- Mirth host is up, port is listening, local telnet works.
- traceroute from sender reveals an unexpected path or drops at a specific hop.
- Change coincides with a cloud migration, IP reassignment, DNS change, or VPN/IPsec tunnel event.
Diagnosis
From the sender:
# Does DNS resolve to the right IP?
dig mirth-host +short
# Expected: the IP you deployed Mirth on
# Does the route go where you expect?
traceroute mirth-host
# Is there a VPN/tunnel that should be up?
sudo ip route
sudo wg show  # If using WireGuard
Check for mismatches between what you expect and what you see.
Fix
- Wrong DNS answer — typically a DNS cache or a DNS record that wasn't updated after a migration. Fix the record and wait for propagation / flush the sender's resolver cache.
- Routing dropping at a hop — engage the network team; this is above the Mirth layer.
- VPN/tunnel down — check the tunnel endpoint, reauthenticate if needed, verify keys rotated cleanly.
- Host migration — ensure the new IP has been communicated to all senders and their configurations updated. Old IP-based allowlists at the sender side are a common trap.
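The DNS check in particular is worth automating against your documented inventory. A minimal sketch — `resolves_to` is an illustrative name, and the expected IP is an assumption standing in for your own records:

```python
# Sketch of a DNS sanity check: does the hostname senders use still
# resolve to the IP documented in the interface inventory?
import socket

def resolves_to(hostname: str, expected_ip: str) -> bool:
    """True if any A/AAAA answer for hostname matches the documented IP."""
    answers = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    return expected_ip in answers
```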
Prevention
- Prefer DNS hostnames over IPs in sender configurations so IP changes require only a DNS update, not a sender change.
- Test network path end-to-end in change windows — before the old path is retired, confirm the new path works for every active interface.
9. Root Cause 5 — MLLP Framing and Encoding
This is the least common category but worth knowing — when the TCP connection succeeds and TLS is clean, but messages still fail silently, you are no longer debugging "connection refused" in the networking sense. You are debugging the MLLP protocol itself.
Symptoms
- TCP connects, TLS completes, but messages never seem to arrive or ACKs never come back.
- Receiver logs show partial/malformed messages.
- Connection appears to hang or get reset mid-message.
Common causes
Missing or wrong framing bytes. Some sender implementations use <FS> without <CR>, or use <CR> only. MLLP is specific — it must be <VT>message<FS><CR>.
Character encoding mismatch. Sender sends UTF-8; receiver assumes Latin-1. Messages look okay until a special character causes truncation or corruption.
Batch vs single message confusion. Some senders bundle multiple HL7 messages; receivers expect one at a time. The framing gets interpreted inconsistently.
MLLP keepalive missing. Long-lived connections without activity may be dropped by stateful firewalls or load balancers. Both sides should either keep the connection busy or support reconnection.
Fix
Capture the actual bytes on the wire with tcpdump:
sudo tcpdump -i any -w mllp-capture.pcap 'port 6661'
# Play back in Wireshark — inspect the actual bytes sent vs received
Compare what is on the wire with what both ends expect. The fix is usually to adjust one side's configuration (framing bytes, character set, batch handling) to match the other.
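When comparing a capture against expectations, a strict validator for the three framing bytes helps pin down exactly which byte is wrong. A sketch, assuming the frame layout from section 1 (`framing_errors` is an illustrative name):

```python
# Sketch: check a captured byte sequence against strict MLLP framing
# (<VT> 0x0B ... <FS> 0x1C <CR> 0x0D) and report what is missing.
VT, FS, CR = 0x0B, 0x1C, 0x0D

def framing_errors(data: bytes) -> list:
    """Return a list of framing problems; empty means well-formed."""
    errors = []
    if not data or data[0] != VT:
        errors.append("missing <VT> (0x0B) start-of-block byte")
    if len(data) < 3 or data[-2] != FS:
        errors.append("missing <FS> (0x1C) end-of-block byte")
    if len(data) < 3 or data[-1] != CR:
        errors.append("missing trailing <CR> (0x0D)")
    return errors
```

Run it over each frame extracted from the pcap; a sender that uses <CR> only, or <FS> without <CR>, shows up immediately.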
For the broader HL7 transport discussion, see MLLP protocol explained and common HL7 integration errors.
10. Long-Term Prevention Checklist
A production environment that rarely sees MLLP incidents has the following in place:
- ✓ Documented inventory of every MLLP interface — ports, hostnames, direction, TLS posture, certificate ownership
- ✓ Certificate expiration alerting — 90, 60, 30, 7, 1 day warnings before every cert expires
- ✓ Automated port-reachability monitoring from the actual sender's network perspective, not just Mirth's host
- ✓ End-to-end MLLP heartbeat tests — scheduled synthetic messages flowing through each interface
- ✓ Channel state alerting — any listener that stops unexpectedly fires a page
- ✓ Firewall rule change review — MLLP ports flagged in change-control reviews
- ✓ Documented TLS posture per interface — minimum protocol, allowed ciphers, mTLS requirements
- ✓ Runbook for every common failure mode, rehearsed at least quarterly
- ✓ Fallback plan for critical interfaces — a known-good route to resume traffic when primary is broken
- ✓ On-call engineer training covering MLLP basics and this diagnostic ladder
For the broader operational maturity program, see our Mirth Connect issues and fixes catalog and the performance tuning guide.
11. When to Escalate
Handle MLLP incidents in-house when you have time and a repeatable playbook. Escalate when:
- A clinical workflow is affected and every minute of downtime has patient-care consequences.
- The failure mode doesn't match any of the five root causes above — indicating a deeper configuration, network, or infrastructure issue.
- Certificates expired in production unexpectedly and your team doesn't have a rehearsed replacement procedure.
- mTLS or complex network architectures are involved and your integration team is comfortable with application-layer debugging but not with TLS / PKI troubleshooting.
- The incident is the second or third recurrence and the team is firefighting rather than fixing root causes.
Our Mirth Connect helpdesk has an under-15-minute emergency SLA and includes engineers with deep HL7, MLLP, and TLS debugging experience. Our broader services team covers preventive hardening, automation, and infrastructure reviews to stop these incidents before they happen — as part of our HL7 integration services across the USA.
12. Frequently Asked Questions
What does "connection refused" mean in Mirth Connect MLLP?
The TCP SYN packet from the sender reached the destination host, but the host replied with RST because no process is listening on that port. The most common causes are: the Mirth service is down, the specific channel/listener is stopped, the listener is bound to the wrong interface, or another process is holding the port.
What is the difference between "connection refused" and "connection timed out"?
Connection refused means the destination host actively rejected the TCP handshake — no service listening. Connection timed out means no response came back at all — typically a firewall silently dropping packets or a routing problem.
What port does MLLP use?
MLLP has no standardized port — it can use any TCP port. Common choices in production are 6661, 6662 (MLLPS), 9999, and various application-specific ports agreed between the sender and receiver. Whatever port you use, document it in your interface inventory.
How do I test an MLLP listener manually?
From the sender's host, send a properly framed HL7 message via nc or telnet and check for an ACK. The framing must be <VT>message<FS><CR> — hex 0x0B, content, 0x1C, 0x0D. If the response arrives wrapped in those framing bytes, the full path is working.
Why does my MLLPS connection work one day and fail the next?
Most commonly, a certificate expired. Next most commonly, a firewall rule changed in a change-control window overnight. Third most commonly, a JVM or TLS library update changed the default cipher suite on one side without the other being updated.
Can I use plaintext MLLP in production?
Technically yes, but for HIPAA-covered environments transmitting PHI, you should use MLLPS (MLLP over TLS 1.2+) or run the MLLP session inside a VPN or encrypted tunnel. Plaintext MLLP over an untrusted network is very difficult to defend under the HIPAA Security Rule's transmission security requirements.
How do I debug MLLP framing issues?
Capture the actual bytes on the wire with tcpdump or Wireshark. Compare what is sent vs what is expected. The three framing bytes must be exactly <VT> (0x0B), <FS> (0x1C), <CR> (0x0D), in that order.
Why does my Mirth listener say "Address already in use"?
Another process is already bound to the port. Find it with lsof -i :PORT. Either kill the other process or change Mirth's listener to a different port. Two processes cannot share a TCP listen port on the same bind address.
What happens if my MLLP listener goes down?
Upstream senders will start seeing connection refused (or timeout) errors. How they react depends on their implementation — well-behaved senders queue messages locally and retry with backoff; poorly-behaved senders may lose messages silently. This is why monitoring listener health and downstream queue depth is essential.
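The retry behavior of a well-behaved sender usually follows capped exponential backoff. A sketch of the schedule such a sender might use — the base, cap, and attempt count are illustrative, not a Mirth setting:

```python
# Sketch: capped exponential backoff delays for reconnect attempts
# after a listener goes down. Values are illustrative.
def backoff_schedule(base: float = 1.0, cap: float = 60.0, attempts: int = 8):
    """Delays (seconds) before each reconnect attempt, doubling up to a cap."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]
```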
Related Reading
- Mirth Connect: The Complete Guide
- HL7 Integration: The Complete Guide
- MLLP Protocol Explained
- Common HL7 Integration Errors
- Mirth Connect SSL/TLS Hardening
- Mirth Connect Security & HIPAA Checklist
- Mirth Connect Java Heap Space Error
- Mirth Connect Channel Not Starting
- Mirth Connect Performance Tuning
- Common Mirth Connect Issues & Fixes
- Mirth Connect Installation Guide