Why Your Node.js Code Runs in the Wrong Sequence
You write clean async code, run it, and the callbacks fire in an order that makes zero sense. Not a bug in your logic — a gap in your mental model of how Node.js actually schedules work. Understanding Node.js async execution order means understanding the queue system underneath, not just async/await syntax. The event loop has phases, each phase has a queue, and the priority rules are strict — once you see it, the “weird” behavior becomes obvious.
TL;DR: Quick Takeaways
- process.nextTick fires before any Promise microtask — both drain completely before the event loop moves to the next phase.
- setImmediate reliably runs before setTimeout(fn, 0) when called inside an I/O callback — outside I/O, the order is non-deterministic.
- Recursive process.nextTick calls starve I/O — the poll phase never gets reached, and your server stops handling requests.
- ES module execution wraps top-level code differently than CommonJS — the same async pattern can resolve in a different order depending on module type.
How Node.js Event Loop Phases Work
Node.js runs on a single thread but delegates I/O to libuv, a C library that manages an internal thread pool and OS-level async APIs. What developers call “the event loop” is actually libuv’s loop — a structured sequence of phases, each responsible for a specific type of deferred work. The V8 engine executes JavaScript synchronously; libuv decides what JavaScript runs next. These are two separate concerns, and confusing them is where most mental models break down.
```javascript
// Simplified event loop phase sequence (libuv)
// Each tick of the loop runs through these phases in order:
// 1. timers — executes setTimeout / setInterval callbacks
// 2. pending I/O — I/O errors from previous iteration
// 3. idle/prepare — internal libuv use only
// 4. poll — retrieve new I/O events; block here if queue empty
// 5. check — setImmediate callbacks run here
// 6. close — close event callbacks (socket.on('close', ...))
// Between EVERY phase transition: nextTick queue + microtask queue drain
```
The critical detail most articles skip: microtask queues don’t live inside any phase. They drain between phases — and also between individual callbacks within a phase in Node.js 11+. This is not a minor implementation quirk; it changes observable execution order in real applications.
Libuv Event Loop Phases Order in Node.js
The loop starts at the timers phase every iteration. If any setTimeout or setInterval callbacks are ready (their delay has elapsed), they execute here. Then pending I/O, then idle/prepare (you’ll never touch these), then poll. The poll phase is where libuv blocks waiting for I/O — if nothing is ready and no timers are pending, the loop sits here. Once I/O arrives or a timer threshold passes, it unblocks and continues to check and close phases. This cycle repeats until the event queue is empty and the process exits.
What Happens Between Event Loop Phases in Node.js
Between every phase transition — and in Node.js 11+ between every individual macrotask callback — the runtime drains two queues in strict order: first the nextTick queue (all of it), then the microtask queue (all of it). Neither queue gets “one callback and move on.” They run to empty. If draining one queue adds more items to the same queue, those get processed before anything else. This is the source of most async execution surprises, and it’s intentional design — not a bug.
Poll Phase in Node.js: How It Works
The poll phase has two jobs: process I/O callbacks that are ready right now, and calculate how long to block waiting for new I/O. If the setImmediate queue is non-empty, the poll phase won’t block — it processes ready callbacks and moves immediately to check. If there are pending timers, it blocks only until the earliest timer threshold. In a server with no incoming requests and no timers, this is where the process spends most of its time — parked at poll, waiting. Understanding poll explains why setImmediate consistently beats setTimeout(fn, 0) inside I/O callbacks: when an I/O callback fires, you’re already past timers, you’re in or past poll, so the check phase (setImmediate) comes next.
process.nextTick vs Promise.then Execution Order
Both process.nextTick and resolved Promise callbacks are microtasks — deferred work that runs before the event loop moves on. But they live in separate queues with a strict hierarchy: nextTick queue always drains before the Promise microtask queue. This is a Node.js-specific design choice; the browser has no equivalent of process.nextTick. In practice, this means any callback you schedule with process.nextTick will always execute before a resolved Promise.then, even if the promise was resolved earlier in the same synchronous block.
```javascript
Promise.resolve().then(() => console.log('1 — promise microtask'));
process.nextTick(() => console.log('2 — nextTick'));
console.log('3 — synchronous');

// Output:
// 3 — synchronous
// 2 — nextTick ← nextTick queue drains first
// 1 — promise microtask
```
The synchronous code runs first (V8 call stack). Then the loop prepares to advance — at that point it checks the nextTick queue before checking the microtask queue. The nextTick callback fires. Then the Promise callback fires. If the nextTick callback schedules another nextTick, that also runs before any Promise. You can build an infinite nextTick loop that permanently starves your Promise-based code — which is a real footgun in production.
Why process.nextTick Runs Before Promise.then
This is an explicit Node.js design decision documented since v0.9. The nextTick queue was the original “run after this tick” mechanism before Promises were part of the language. When Promise support was added, nextTick retained its higher priority to avoid breaking existing behavior. The Node.js docs themselves flag this as potentially surprising and warn that abusing process.nextTick can prevent the event loop from reaching the poll phase. It’s a power tool, not a default async primitive.
When Does a Promise Callback Execute in Node.js
A resolved Promise’s .then callback enters the microtask queue the moment the promise settles — but it only executes when the call stack is empty and the nextTick queue is also empty. In an async function, every await suspension point is effectively a microtask checkpoint: execution resumes as a microtask after the awaited value settles. This means two consecutive await calls create two separate microtask queue entries, and between them, any higher-priority nextTick callbacks can slip in.
setImmediate vs setTimeout Node.js Difference
These two look interchangeable for “run something soon” — and that’s the trap. setTimeout(fn, 0) schedules work in the timers phase. setImmediate(fn) schedules work in the check phase, which comes after poll. The execution order difference depends entirely on where in the loop you call them. Inside an I/O callback, setImmediate always wins — you’re past the timers phase, so the callback sits in check and fires at the end of the current iteration. Outside of I/O context (e.g., at the top level of your script), the order is non-deterministic, because it depends on OS timer resolution and process startup time.
```javascript
// Inside I/O callback — order is DETERMINISTIC
const fs = require('fs');
fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
});
// Always prints: immediate → timeout
```

```javascript
// Top-level — order is NON-DETERMINISTIC
setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));
// Could print either order — depends on OS timer resolution
```
The mini-analysis here matters for architecture: if you’re deferring work that must happen after an I/O operation completes (like responding to a file read), setImmediate is the correct tool. Using setTimeout(fn, 0) for the same purpose is technically incorrect — it produces the right result most of the time but fails under load when timer resolution drifts.
setImmediate Inside I/O Callback: Why It’s Always Faster
When your fs or network callback fires, the event loop is in the poll phase processing I/O. After your callback completes, the loop drains microtasks, then moves to the check phase — where setImmediate callbacks live. Timers don’t get re-checked until the next iteration starts back at the timers phase. So from inside I/O, check comes before timers in the current loop iteration. This is a structural guarantee, not a speed difference — it’s about phase ordering, not execution overhead.
Node.js 11 Execution Order Changes for setTimeout
Before Node.js 11, microtasks drained only between phases — not between individual callbacks within a phase. This meant that multiple setTimeout callbacks scheduled for the same timer tick would all fire before any microtask ran. Node.js 11 aligned with browser behavior: microtasks now drain after each individual macrotask callback. This is a breaking behavioral change if your code relied on batched microtask execution across same-tick timers. Code that mixed Promise resolution with multiple timers at the same delay value may behave differently on Node.js 10 vs 11+. Check your Node.js version assumptions in production.
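A sketch that makes the version difference observable: two timers at the same delay, each resolving a Promise.

```javascript
const order = [];

setTimeout(() => {
  order.push('timeout 1');
  Promise.resolve().then(() => order.push('promise 1'));
}, 0);

setTimeout(() => {
  order.push('timeout 2');
  Promise.resolve().then(() => order.push('promise 2'));
}, 0);

// Node.js 11+ drains microtasks after EACH timer callback:
//   timeout 1, promise 1, timeout 2, promise 2
// Node.js 10 drained them only after the whole timers phase:
//   timeout 1, timeout 2, promise 1, promise 2
```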
Macrotask vs Microtask Queue Node.js Architecture
The macrotask queue is a loose term covering callbacks scheduled by setTimeout, setInterval, setImmediate, and I/O handlers — work that gets queued in specific event loop phases. The microtask queue (plus nextTick queue) sits above all of this: it interrupts between macrotasks and between phases to drain completely before any macrotask runs. This two-level architecture is how Node.js balances responsiveness with throughput. Microtasks let you chain async logic without yielding to unrelated I/O. Macrotasks let I/O, timers, and external events interleave.
| Queue Type | Populated By | Drains When | Starvation Risk |
|---|---|---|---|
| nextTick queue | process.nextTick() | Before each phase transition, before microtasks | High — recursive calls block everything |
| Microtask queue | Promise.then(), queueMicrotask() | After nextTick queue empties, before next phase | Medium — infinite chains can delay I/O |
| Timers (macrotask) | setTimeout(), setInterval() | Timers phase — when delay threshold elapsed | Low — delayed by microtask overload |
| Check (macrotask) | setImmediate() | Check phase — after poll | Low — delayed by microtask overload |
Microtask Starvation in Node.js: What Causes It
Starvation happens when microtasks keep scheduling more microtasks, so the queue is never empty and never finishes draining. In practice this looks like: a recursive process.nextTick that schedules another nextTick, or a Promise chain where each resolution queues another Promise. The event loop phases never advance. The poll phase never gets reached. Incoming I/O sits ignored. In a server context, this means your HTTP handler fires once, enters a deep async chain that never yields to macrotasks, and your server effectively hangs — while appearing to be running.
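A bounded version of the starvation pattern, safe to run: a recursive nextTick chain keeps a ready 0 ms timer waiting until the whole chain finishes.

```javascript
let ticks = 0;
let ticksWhenTimerFired = -1;

// This timer is "ready" almost immediately, but the timers phase
// cannot run until the nextTick queue is completely empty.
setTimeout(() => {
  ticksWhenTimerFired = ticks;
}, 0);

function spin() {
  ticks += 1;
  // Remove the bound and this becomes a real, permanent hang.
  if (ticks < 100000) process.nextTick(spin);
}
process.nextTick(spin);

// The timer fires only after all 100000 ticks have run.
```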
queueMicrotask API in Node.js: When to Use It
queueMicrotask(fn) is the standard Web API equivalent of Promise.resolve().then(fn) — it queues a microtask without creating a Promise object. It’s available in Node.js 11+ and in all modern browsers, making it the correct choice when you want cross-environment microtask scheduling without the Promise overhead. Unlike process.nextTick, it goes into the standard microtask queue (lower priority than nextTick). Use it when you need deferred execution that doesn’t require I/O, but you want predictable cross-platform behavior rather than Node.js-specific nextTick semantics.
Node.js Async Execution Order: CommonJS vs ES Modules
The same async code can produce different execution order depending on whether it runs in a CommonJS or ES module context — and most developers don’t expect this. The root cause is how the module systems initialize: CommonJS runs synchronously, while ES modules evaluate top-level await expressions and wrap their execution model around the microtask queue. A module that loads fine in CommonJS and exhibits correct async order may silently reorder operations when ported to ESM, especially if it uses top-level async patterns.
```javascript
// commonjs-entry.cjs
const mod = require('./async-module');
console.log('after require');
// "after require" prints BEFORE any async work in the module
```

```javascript
// esm-entry.mjs
import './async-module.mjs';
console.log('after import');
// If async-module.mjs has top-level await,
// "after import" may print AFTER the awaited expression resolves
// because ESM loading itself becomes async and microtask-scheduled
```
The practical implication: if you’re migrating a Node.js codebase from CommonJS to ESM and you have initialization code that depends on execution order relative to module loading, test explicitly. Don’t assume the behavior is equivalent — it isn’t. The ESM module graph is resolved asynchronously, and any top-level await in a dependency suspends the entire dependent graph at a microtask boundary.
Why ESM Wraps Code in the Microtask Queue
ES modules support top-level await, which means the module loader has to treat any module as potentially async. The spec requires that module evaluation be asynchronous-capable, so the evaluation itself is scheduled as a microtask-like operation within the module graph resolution. When a module has no top-level await, this overhead is invisible. When it does, the suspension propagates upward through all importing modules — each parent has to wait for the child’s async evaluation before it can complete its own. This is architecturally correct behavior, but it means ESM startup sequencing is fundamentally different from CommonJS.
Race Condition Caused by Wrong Queue in Node.js: Production Example
A real production bug pattern: a service initializes a connection pool inside a module, using a Promise-based connect call. The module exports a query function. Another module imports it and immediately calls query at the top level — before the connection Promise has resolved. In CommonJS this occasionally works because require is synchronous and connection setup sometimes completes within the same tick. In ESM, the timing is different, and the race becomes consistent. The fix is never assuming a module’s async initialization is complete at import time — always expose a ready signal (a Promise, an event, or an explicit init function) and await it before use.
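A single-file sketch of that ready-signal pattern (the names createPool, ready, and query are illustrative, not a real driver API): the connect step is simulated with a timer, and every query awaits the ready Promise before touching the pool.

```javascript
function createPool() {
  let markReady;
  const ready = new Promise((resolve) => { markReady = resolve; });

  // Simulated async connect; a real pool would resolve `ready`
  // from its connection callback instead.
  setTimeout(() => markReady(), 10);

  return {
    ready,
    async query(sql) {
      await ready; // never runs against an unconnected pool
      return `result of ${sql}`;
    },
  };
}

const pool = createPool();

// Safe even at import time: the await inside query() defers the
// work until the connection has actually completed.
pool.query('SELECT 1').then((result) => console.log(result));
```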
Node.js Task Scheduling Priority: Production Implications
In production Node.js services, getting task scheduling priority wrong is a latency bug that’s hard to diagnose. It doesn’t crash — it just adds tail latency, causes request pileups under load, or produces subtly incorrect state in shared data structures. The async callback scheduling order in Node.js is deterministic once you know the rules; the problem is that most developers operate on a simplified mental model and only discover the gap when something breaks in production under real concurrency.
```javascript
// Correct priority order — memorize this
// (higher in list = runs first)
// 1. Synchronous code (V8 call stack)
// 2. process.nextTick callbacks (nextTick queue, full drain)
// 3. Promise.then / queueMicrotask (microtask queue, full drain)
// 4. setImmediate (check phase — same loop iteration, after poll)
// 5. setTimeout / setInterval (timers phase — next iteration or current)
// 6. I/O callbacks (poll phase)
// 7. close callbacks (close phase)
// Steps 2–3 repeat between every macrotask (Node.js 11+)
```
Burn this order into your mental model. Every async debugging session in Node.js starts with “where in this queue hierarchy does this callback land?” — and works backward from there.
How Node.js Decides Which Callback Runs Next
After each synchronous block completes, the runtime checks: is the nextTick queue non-empty? If yes, drain it entirely. Then check the microtask queue — drain it entirely. Then check which event loop phase is active and execute the next callback from that phase’s queue. After that single macrotask callback completes, repeat: nextTick queue, microtask queue, next macrotask. This is the core scheduling algorithm. The “runs next” question always has the same answer: synchronous first, nextTick second, microtasks third, then whatever macrotask phase is current.
Async Callback Scheduling in Node.js Internals
Internally, libuv maintains per-phase handle and request queues. Timer callbacks are stored in a min-heap sorted by expiration. I/O completions are reported via OS-specific APIs (epoll on Linux, kqueue on macOS, IOCP on Windows) and placed in the poll queue. The JavaScript layer sits on top: V8 executes the callbacks, and Node.js’s binding layer bridges libuv events to JavaScript function calls. The nextTick and microtask queues live entirely in the JavaScript layer — libuv doesn’t know they exist. This is why they drain between phases: Node.js checks them every time control returns from a libuv callback to the JS layer.
FAQ
Does process.nextTick run before or after Promise.then?
In the Node.js hierarchy, process.nextTick is basically a VIP pass. It's a common rookie mistake to think all microtasks are equal, but the nextTick queue's priority over microtasks is absolute. The runtime will drain every single "tick" callback before it even glances at your microtask queue (Promises).
If you don't actually need to jump the line, stick to queueMicrotask() or a standard Promise.resolve(). Using nextTick is like cutting in line at a club—it works, but if everyone does it, the whole system breaks. In Node.js, nextTick wins the gold every time; it's not a bug, it's just how the engine is wired.
Why does setImmediate run before setTimeout inside I/O?
This boils down to where you "dropped the hook." If you're calling these inside a file read or a network callback, you're already sitting in the poll phase. Once your callback finishes and the microtasks are cleared, the event loop moves to the very next stop: the check phase, which is the home of setImmediate.
The poll phase vs check phase logic means that setImmediate is the next logical step in the loop's current lap. Meanwhile, setTimeout lives in the timers phase, which was three stops ago. To hit that timer, the loop has to finish the entire lap and start over. However, if you run them in a plain script outside of any I/O, it's a total "non-deterministic" coin flip depending on how fast your CPU woke up that morning.
What happens if you call process.nextTick recursively?
Congrats, you've just invented a synchronous loop that htop won't even warn you about. Because process.nextTick is so aggressive, the event loop blocking is real. It won't hand control back to libuv until the queue is bone dry. If you keep adding to it recursively, that queue never empties.
Your server turns into a brick. It looks "alive" because the process is running, but the poll phase is never reached. Incoming HTTP requests will just pile up until they time out, and your DB calls will never return. It's the ultimate I/O starvation scenario—all because you didn't give the loop a chance to breathe.
Why does async/await order differ in ESM vs CommonJS?
Welcome to the era of ES module async loading. In the old-school CommonJS days, require() was a synchronous sledgehammer—nothing moved until that module was loaded. But ESM supports top-level await, which turns module initialization into an asynchronous mission.
When you import an ESM module, the whole startup sequence is wrapped in a microtask. This shifts the timings: things that fired “instantly” in CJS might now wait for the module graph to resolve. If your code relies on a hyper-specific init sequence, expect some nasty race conditions when you flip "type": "module" in your package.json.
Can microtask queue cause performance issues in production?
Absolutely. Microtask starvation under load is a silent killer in high-throughput services (think 10k+ RPS). If your request handler spawns massive Promise chains without yielding, the event loop phases stop advancing. Your timers will start drifting, and your p99 latency will spike through the roof.
The fix? You've got to "let go" sometimes. Throwing a strategic setImmediate yield in the middle of heavy logic gives the loop a chance to clear macrotasks and answer incoming traffic before diving back into your endless .then() callbacks.
What is the difference between microtask queue and callback queue?
Think of the callback queue (macrotasks) as the heavy lifting: timers, I/O, and closing sockets. These are managed by libuv and respect the phases of the loop. On the other hand, the microtask queue is the V8 “special forces” that cut in between everything else.
The macrotask vs microtask priority rule is simple: microtasks are cleared to zero after every single macrotask and between every phase transition. If you pile heavy work into a microtask thinking it's "background work," you're wrong—it's actually foreground work that stops the loop from doing its real job (I/O). Microtasks are selfish; macrotasks are team players.