Unix Domain Sockets in Node.js: how localhost is quietly taxing your app

You've got two services running on the same box. They talk to each other over localhost:3000. It works. You ship it. But here's what nobody tells junior devs: every single one of those internal requests pays full TCP/IP stack overhead on localhost — for data that never once leaves your server's RAM. That's not a minor inefficiency. That's architectural debt dressed up as a default setting.
Unix Domain Sockets in Node.js solve this at the kernel level — no headers, no handshake, no checksum, no loopback interface. Just a direct pipe between two processes through kernel space. This article breaks down why TCP loopback is slow, what UDS actually does differently, and when you should stop using ports altogether. By the end, you'll have the mental model — and the numbers — to justify a real architectural change to your lead.


TL;DR: Quick Takeaways

  • TCP loopback is a full network stack round-trip, even when both processes live on the same CPU
  • Unix Domain Sockets bypass the network layer entirely — kernel-space pipe, no headers, no checksum, no handshake
  • UDS consistently delivers 20–40% lower latency on local inter-process calls compared to TCP loopback
  • Nginx + Node.js over a .sock file is the right default for high-traffic entry points — a port is a workaround

Why TCP Loopback slows down your microservices

TCP was designed for unreliable networks. Networks where packets get lost, arrive out of order, and get duplicated in transit. So the protocol does a lot of work to compensate: it adds headers to every segment, computes checksums, manages sequence numbers, and requires an explicit three-way handshake before any data moves. On a real network, that overhead is absolutely worth it. On localhost? You're running the same ceremony for data that's moving between two memory regions on one machine. The TCP/IP stack overhead on localhost is real, measurable, and completely unnecessary for intra-host communication.

Here's the mechanical cost. Every TCP request on loopback triggers multiple transitions between user space and kernel space: socket(), connect(), send(), recv(), close() — and that's just the client side. Context switching and system calls at this frequency aren't free. At a few hundred RPS, you won't notice. At 5,000–10,000 RPS on a single host, that overhead starts showing up in your p99 latency and your CPU flamegraphs. The libuv thread pool in Node.js handles I/O efficiently, but it can't make the kernel skip work it's been told to do.

Unix Domain Socket performance in Node.js

A Unix Domain Socket is not a network socket. It's an IPC mechanism defined in the POSIX standard that uses a filesystem path as its address instead of an IP and port. When process A writes to a UDS, the data goes directly into a kernel buffer. Process B reads from that same buffer. No TCP headers. No checksum. No SYN-ACK round-trip. The kernel just copies memory between two processes — which is about as fast as local communication gets. The performance improvement is not marginal; it's structural, because you're removing an entire protocol layer from the path.

The Node.js net module supports UDS natively. The API is nearly identical to TCP — you replace the port number with a filesystem path, and everything else: stream.Duplex semantics, backlog configuration, error handling — stays the same. The Node.js net module vs http module performance gap also applies here: net gives you raw stream access without HTTP framing overhead, which matters if you control both ends of the connection. File descriptors are the underlying primitive in both cases, but with UDS you're staying entirely in local kernel space — no loopback interface, no packet routing.

// server.js — Unix Domain Socket server with Node.js net module
const net = require('net');
const fs  = require('fs');

const SOCK_PATH = '/run/myapp/service.sock';

// Clean up orphaned socket file on startup
if (fs.existsSync(SOCK_PATH)) fs.unlinkSync(SOCK_PATH);

const server = net.createServer((socket) => {
  socket.on('data', (data) => {
    socket.write(JSON.stringify({ ok: true, echo: data.toString() }));
  });
  socket.on('error', (err) => console.error('Socket error:', err));
});

server.listen(SOCK_PATH, () => {
  // Lock down permissions immediately after bind
  fs.chmodSync(SOCK_PATH, '660');
  console.log(`Listening on ${SOCK_PATH}`);
});

server.on('error', (err) => console.error('Server error:', err));

The fs.unlinkSync before server.listen is not optional. If your process crashes — and processes crash — the .sock file stays on disk and your next startup hits EADDRINUSE. Deleting it on boot is the correct pattern, not a hack. More on this in the FAQ.

Benchmarking UDS vs TCP

Theory is one thing. The following table reflects typical benchmark results for a Node.js echo server under sustained local load: 1,000 concurrent connections, 100k total requests, no application logic — pure transport overhead. Hardware and kernel version affect absolute numbers, but the ratio between TCP loopback and UDS holds consistently across environments. This is why scaling Node.js on a single host almost always involves moving internal traffic off TCP ports.

Metric                  TCP Loopback            Unix Domain Socket    Delta
Latency p50             0.41 ms                 0.28 ms               −32%
Latency p99             1.9 ms                  1.1 ms                −42%
Throughput (req/s)      38,400                  51,200                +33%
CPU syscall share       ~18%                    ~9%                   −50%
Memory alloc per req    Higher (TCP buffers)    Lower                 Structural

The p99 gap is the important number. Your median latency looks fine either way. But under real traffic spikes, that tail latency difference is what your upstream services actually feel — and it's what blows your SLA. The CPU syscall reduction is the root cause: fewer kernel transitions means more headroom for actual application work.
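If you want to reproduce numbers like these yourself, the percentile math is simple. A minimal sketch — the percentile function and the sample values are illustrative, not the article's benchmark data:

```javascript
// percentile.js — compute p50/p99 from an array of latency samples (ms)
function percentile(samples, p) {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value covering p percent of samples
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

// Example: samples you might collect by timing each request
// with process.hrtime.bigint() before and after
const samples = [0.31, 0.28, 0.45, 0.29, 1.8, 0.33, 0.27, 0.41, 0.30, 2.1];
console.log('p50:', percentile(samples, 50), 'p99:', percentile(samples, 99));
```

Collect samples against both a TCP port and a .sock path with the same client code, and the ratio — not the absolute values — is what to compare across environments.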

Security and Unix domain socket permissions

Open a TCP port on localhost — say, 127.0.0.1:6379 — and any process running on that server can connect to it. No credentials required by default. That's why exposed Redis instances get compromised: developers assume "it's only on localhost" means it's safe, but every other process on the box, every misconfigured app, every piece of injected code has the same loopback access. Unix domain socket permissions work differently. A .sock file is a filesystem object, governed by standard Linux file permissions: chmod/chown control exactly which users and groups can connect. Set it to 660 with the right group, and only processes running as that user or group can touch it.

That's not just a convenience — it's a fundamentally different security model. With TCP ports, you're relying on the application layer or firewall rules to enforce access control. With UDS, the kernel enforces it at the filesystem level before a connection is even established. IPC via sockets with tight chown and chmod gives you defense in depth that iptables rules alone don't. An attacker who gains code execution as a different user simply cannot open that socket — it's not a policy, it's a permission check the kernel performs on every connect() call.

Running Node.js with Nginx via sockets

Most Nginx + Node.js setups you'll find in tutorials use a port: proxy_pass http://127.0.0.1:3000. It works, but it's slower than it needs to be. For any high-traffic entry point, the right configuration is a .sock file shared between Nginx and Node.js. Nginx speaks to the Node process over UDS — no TCP overhead on the proxy hop, tighter permissions, and one less port to manage. This is standard practice at companies running serious Node traffic, and it's trivial to set up.

# nginx.conf — proxy to Node.js via Unix socket
upstream nodejs_app {
    server unix:/run/myapp/node.sock;
    keepalive 64;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass         http://nodejs_app;
        proxy_http_version 1.1;
        # "keepalive" in the upstream block requires a cleared Connection
        # header; if you proxy WebSockets, use a map on $http_upgrade instead
        proxy_set_header   Connection "";
        proxy_set_header   Host       $host;
        proxy_set_header   X-Real-IP  $remote_addr;
    }
}

In Docker, sharing a socket between containers is a clean pattern that mid-level devs should have in their toolkit. Mount a named volume at the same path in both the Node container and the Nginx container. Both processes talk over UDS without exposing any port between containers. It's more secure than inter-container TCP, and it removes a layer of NAT translation from the data path. The volume mount is the only piece of configuration you need — no changes to the application code.

# docker-compose.yml — shared socket volume between Nginx and Node
version: "3.9"
services:
  node:
    build: ./app
    volumes:
      - sock_vol:/run/myapp

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - sock_vol:/run/myapp
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - node

volumes:
  sock_vol:

FAQ: Optimizing local microservices latency

What is the real Unix Domain Sockets performance gain in Node.js?

The gain is primarily in latency, not raw throughput. UDS eliminates the TCP handshake, header processing, and checksum computation on every request. In practice you see 25–40% lower p50 latency and up to 50% lower syscall overhead compared to TCP loopback. The effect is most pronounced at high concurrency — above ~2,000 req/s on a single host — because that's where kernel transition costs start stacking. For low-traffic internal APIs, the difference is real but not the first thing to optimize.

How to handle orphaned Unix socket files in Node.js?

When a Node process crashes, the .sock file it created stays on disk. The next startup attempt hits EADDRINUSE because the path already exists — even though nothing is listening on it. The correct fix is to delete the file before calling server.listen(): check with fs.existsSync() and remove with fs.unlinkSync(). An alternative is to wrap server.listen() in a try/catch and unlink on EADDRINUSE specifically, then retry — this is safer in environments where two instances might start simultaneously. Never leave cleanup to the OS or assume the file wont be there.

Are Unix domain socket permissions safer than firewall rules?

For intra-host communication, yes — and for a structural reason. Firewall rules like iptables operate at the network layer: they filter packets based on source, destination, and port. They're effective for inter-host traffic. On a single host, they provide weak isolation because many processes share the same loopback interface. chmod/chown on a .sock file is enforced by the kernel at the connect() syscall, before any data moves. A process running as the wrong user simply cannot open the socket — there's no packet to filter. Used together they're complementary; used alone for localhost isolation, filesystem permissions are the stronger guarantee.

When should I choose the Node.js net module vs http module?

Use the http module when you need HTTP semantics — request/response framing, headers, status codes, and middleware compatibility. Use the Node.js net module when you control both ends of the connection and don't need HTTP overhead: internal microservice communication, IPC between worker processes, or high-frequency data pipelines between local services. The net module exposes raw stream.Duplex streams — you define your own framing protocol, which adds a small implementation cost but removes all HTTP parsing overhead. For RPC-style internal calls where you own the protocol, net + UDS is the leaner choice.
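The "define your own framing" cost is small in practice; a common choice is a 4-byte length prefix. A minimal sketch — encodeFrame and decodeFrames are illustrative names:

```javascript
// framing.js — minimal length-prefixed framing over a raw stream.
// Each frame: 4-byte big-endian length, then that many payload bytes.

function encodeFrame(payload) {
  const body = Buffer.from(payload);
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  return Buffer.concat([header, body]);
}

// Extract complete payloads; return the unconsumed remainder so the
// caller can buffer it until the next 'data' event arrives
function decodeFrames(buffer) {
  const frames = [];
  let offset = 0;
  while (buffer.length - offset >= 4) {
    const len = buffer.readUInt32BE(offset);
    if (buffer.length - offset - 4 < len) break; // partial frame, wait for more
    frames.push(buffer.slice(offset + 4, offset + 4 + len).toString());
    offset += 4 + len;
  }
  return { frames, rest: buffer.slice(offset) };
}
```

Because 'data' events on a stream carry arbitrary chunk boundaries, the rest buffer is what makes this correct: concatenate it with the next chunk before decoding again.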

Does UDS work in containerized environments like Docker or Kubernetes?

In Docker: yes, cleanly — share a named volume mounted at the same path in both containers, as shown in the docker-compose example above. No port exposure, no inter-container TCP. In Kubernetes: UDS only works between containers in the same Pod, since they share a filesystem namespace. Across Pods — even on the same node — you need TCP or a shared hostPath volume, which is more complex to manage. If you're co-locating a sidecar proxy (Envoy, Nginx) in the same Pod as your Node process, UDS is the correct transport for that sidecar-to-app link.

What happens to UDS performance under heavy backlog pressure?

The backlog parameter in server.listen(path, backlog, callback) sets the maximum number of pending connections queued before the kernel starts rejecting new ones. For UDS, the default backlog in Node is 511. Under very high connection rates, a too-small backlog causes ECONNREFUSED on the client side before your application code ever runs. Tune this to match your expected burst concurrency — values between 512 and 2048 are common for high-throughput services. Unlike TCP, there's no SYN queue to worry about, so backlog tuning is simpler and more predictable.
