5 Goroutine Mistakes That Will Get You Roasted in a Go Code Review

Go makes concurrency look stupidly easy. You slap a go keyword in front of a function call, and suddenly you feel like a distributed systems wizard. The problem? The runtime doesn't care about your feelings, and it will silently eat your goroutines, corrupt your maps, and deadlock your entire program while you stare at the terminal wondering what went wrong. These are the five goroutine mistakes that instantly reveal you haven't spent enough time getting burned by the scheduler.


TL;DR: Quick Takeaways

  • The Go runtime won't wait for your goroutines — main() exits and takes everyone with it
  • Loop variable closures still bite in codebases older than Go 1.22
  • Maps are not thread-safe and the race detector will catch what your tests missed
  • An unread channel and a never-closed channel are two different ways to deadlock yourself
  • You can't kill a goroutine from outside — cooperative cancellation via context is the only contract

Goroutine Not Executing Before Main Exits

This is mistake number one because it doesn't even crash — it just silently does nothing. You launch a goroutine, run the program, see zero output, and assume there's a bug in your logic. There isn't. A goroutine not executing before main exits is not a bug in Go; it's the runtime working exactly as documented. The fire-and-forget pattern only works if something is actually holding the process alive long enough for the goroutine to do its job.

package main

import "fmt"

func sendReport() {
    fmt.Println("report sent")
}

func main() {
    go sendReport()
    // main returns here, process exits, goroutine never runs
}

Run this. You'll get blank output every time. The goroutine is scheduled, but the OS process is already gone before the scheduler gets a chance to run it. Some developers patch this with time.Sleep(time.Second) at the bottom of main, which is, to put it politely, cargo cult programming. You're not fixing the problem, you're just betting the goroutine finishes within one second. On a loaded CI server at 3am, that bet loses.

The actual fix is sync.WaitGroup. Call wg.Add(1) before launching the goroutine, call wg.Done() inside it with a defer, and call wg.Wait() in main before returning. Now main blocks until every tracked goroutine signals completion. The runtime knows what it owes you, and you know what you owe the runtime.

package main

import (
    "fmt"
    "sync"
)

func sendReport(wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("report sent")
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go sendReport(&wg)
    wg.Wait()
}

Now the output is predictable every single run, regardless of machine load or scheduler mood. sync.WaitGroup is not advanced Go — it's the minimum viable concurrency hygiene. If you're using time.Sleep to synchronize goroutines in anything other than a toy script, that's the first thing a reviewer will flag.


Go Closure Inside For Loop Goroutine (The Pointer Trap)

This one has ended careers. Not literally, but it's the kind of bug that ships to production, sits quietly for six months, and then causes a data processing pipeline to apply the same operation to every item as if they were all the last element in the slice. The go closure inside for loop goroutine trap is rooted in how Go closures capture variables — by reference, not by value. The goroutine doesn't capture the value of i at launch time. It captures the variable i itself, the memory address. By the time the goroutines actually run, the loop has finished, and i is sitting at its final value.

package main

import (
    "fmt"
    "sync"
)

func main() {
    items := []string{"a", "b", "c"}
    var wg sync.WaitGroup
    for _, item := range items {
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println(item) // captures &item, not item's value
        }()
    }
    wg.Wait()
}

Expected output: a, b, c in some order. Actual output on Go < 1.22: probably c, c, c. All three goroutines share the same item variable, and they all read it after the loop sets it to its last value. The fix before Go 1.22 was to shadow the variable inside the loop body: item := item — yes, that line is valid Go and it creates a new variable scoped to that iteration.

Now, Go 1.22 changed loop variable semantics so each iteration gets its own copy, making this bug go away automatically. But here's why you still need to know this cold: interviewers ask it because it tests your understanding of closures and memory, not just Go syntax. More importantly, a huge chunk of production Go code is still running on 1.19, 1.20, or 1.21. You pull a dependency that hasn't been updated in two years, you fork a service from an older repo, you're on an enterprise team that doesn't upgrade runtimes quickly — and you're back in pointer trap territory. Understanding the old behavior is how you debug the old code you will absolutely encounter.

// Works correctly on ALL Go versions
for _, item := range items {
    item := item // new variable per iteration
    wg.Add(1)
    go func() {
        defer wg.Done()
        fmt.Println(item)
    }()
}

One extra line. Zero ambiguity. Works on 1.18 and 1.24 alike. Write it anyway even on newer Go versions — it signals to the reader that you know exactly what you're doing, not that you got lucky with the version.

Fatal Error Concurrent Map Writes Fix

Go maps are not thread-safe. This isn't a quirk or an oversight — it's a deliberate performance trade-off. Protecting every map read and write with a lock by default would slow down single-threaded map access for the vast majority of programs that never touch a map from multiple goroutines. So the runtime says: your problem. The moment you write to a map from two goroutines simultaneously, you get a hard crash: fatal error: concurrent map writes. Not a panic you can recover from. The process dies.

package main

import "sync"

func main() {
    m := make(map[int]int)
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            m[n] = n // concurrent writes = fatal crash
        }(i)
    }
    wg.Wait()
}

You won't always see the crash in development. Sometimes the goroutines happen to not overlap. The bug hides until production load or a specific timing window opens up. This is exactly what go test -race is for: run your tests with the -race flag, as in go test -race ./.... The race detector instruments memory accesses at runtime and reports data races with a stack trace showing exactly which goroutines conflicted and on which line. Use it. Make it part of CI. It catches concurrent map writes before they catch you.


For the actual fatal error concurrent map writes fix, you have two clean options. First is a sync.RWMutex wrapping your map — use RLock/RUnlock for reads, Lock/Unlock for writes. Second is sync.Map, which is built into the standard library and optimized for cases where keys are written once and read many times. Neither is complicated. Both are correct. Picking between them is a performance question, not a correctness question — and correctness always comes first.

All Goroutines Are Asleep Deadlock Golang

The runtime message is honest to a fault: all goroutines are asleep - deadlock! It means every goroutine in the program is blocked waiting for something that will never happen, and it usually comes from misunderstanding how channels block. An unbuffered channel is not a queue — it's a handshake. The sender blocks until a receiver is ready. If there's no receiver, the sender waits forever. If the sender is the only goroutine, the whole program is now frozen.

package main

func main() {
    ch := make(chan int) // unbuffered
    ch <- 42            // blocks forever, nobody is reading
    // runtime: all goroutines are asleep - deadlock!
}

The second variant of this bug is the goroutine leak from a channel that is never closed. You launch a goroutine that ranges over a channel waiting for work. You send all your work items. You forget to close the channel. The goroutine finishes processing all items and then sits there, forever, waiting for the next value that will never come.

package main

import "fmt"

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3
    // close(ch) -- forgot this

    for v := range ch { // prints 1, 2, 3, then blocks forever — deadlock
        fmt.Println(v)
    }
}

The fix is boring and obvious once you know it: close the channel when you're done sending. The range loop over a channel exits cleanly when the channel is closed and drained. The rule is simple — whoever sends, closes. Don't close from the receiver side, don't close from a third goroutine unless you've coordinated it explicitly. And if you're working with fan-out patterns sending from multiple goroutines, that's what sync.WaitGroup plus a dedicated closer goroutine is for.

How to Stop Goroutine When Context Is Done

You cannot kill a goroutine from outside. There is no goroutine ID, no Kill() method, no signal you can send to stop it. If you launch a goroutine with an infinite loop and no exit condition, that goroutine runs until the process dies. This is not a limitation you work around — it's the design. The solution is cooperative cancellation, and the standard mechanism is context.Context. Knowing how to stop a goroutine when its context is done is one of those things that separates a developer who writes concurrent code from one who maintains it.

// Bad: goroutine runs forever, ignores cancellation
go func() {
    for {
        doWork()
    }
}()

// Good: goroutine checks ctx.Done() on each iteration
go func() {
    for {
        select {
        case <-ctx.Done():
            return
        default:
            doWork()
        }
    }
}()

The select block checks ctx.Done() on each iteration. When the context is cancelled — whether by timeout, deadline, or an explicit cancel() call — the channel closes, the case fires, and the goroutine returns cleanly. No leaked goroutines sitting in memory consuming resources and holding onto database connections. Pass the context down the call chain. Let every long-running operation respect it. That's the contract.


Conclusion

The pattern across all five of these mistakes is the same: junior developers think about making code run. Senior developers think about making code stop — stop cleanly, stop on time, stop without corrupting shared state. Goroutines are not magic background threads that clean up after themselves. They're contracts between your code and the runtime, and the runtime is not forgiving when you ignore the fine print. Get the WaitGroup, close the channel, lock the map, respect the context. Everything else is just syntax.

Page author: Krun Dev GOJ

FAQ

Why is my goroutine not executing before main exits even though I can see it's being launched?

Because Go's main() function doesn't implicitly wait for any background goroutines. When main returns, the runtime shuts down the process immediately, regardless of what else is running. The goroutine gets scheduled, but the process exits before it ever gets CPU time. Use sync.WaitGroup to block main until your goroutines finish — that's the only reliable solution.

Does Go 1.22 completely fix the go closure inside for loop goroutine bug?

For new code compiled with Go 1.22 and above, yes — loop variables are now per-iteration, so closures in goroutines capture independent copies. But production codebases and dependencies targeting older Go versions still carry this bug. Interviewers ask about it because it tests your understanding of how closures capture references, which is a fundamental concept regardless of language version.

What's the actual fatal error concurrent map writes fix I should use in production?

Wrap your map with a sync.RWMutex — use read locks for reads and write locks for writes. Alternatively, use sync.Map from the standard library, which is optimized for read-heavy workloads with stable keys. Run go test -race ./... on your codebase regularly; the race detector catches concurrent map access before it reaches production.

How do I debug all goroutines are asleep deadlock golang in a large codebase?

The runtime error prints a full stack trace of every goroutine at the point of deadlock. Read it carefully — it shows exactly which goroutine is blocked and on what operation. Typical culprits are an unbuffered channel with no receiver, a channel that was never closed causing a range loop to hang, or a sync.Mutex locked twice in the same goroutine.

Is there any way to force-stop a goroutine from outside without cooperative cancellation?

No. Go has no goroutine handles, no kill signals, and no preemptive cancellation mechanism for user goroutines. The only way to stop a goroutine is to have it check for a stop condition itself — a context.Done() channel is the standard pattern. Design your goroutines to respect context from the start, not as an afterthought.

What counts as a goroutine leak and how do I detect one?

A goroutine leak is any goroutine that's still running or blocked after it has no more useful work to do — typically because a channel was never closed or a context was never cancelled. Over time, leaked goroutines accumulate, hold resources, and degrade performance. The goleak package from Uber is the standard tool for detecting goroutine leaks in tests; it checks that no unexpected goroutines are running after a test completes.
