
When Time Complexity Lies (And What Actually Matters)


Big‑O is useful, but it often hides the real factors that dominate performance in production. This article is a pragmatic guide to why asymptotic complexity can mislead you, and what to pay attention to instead.

1) Constant Factors Can Dominate

Two algorithms with the same Big‑O can be wildly different in practice.

  • O(n log n) with high constants can be slower than a tight O(n^2) for small or medium n.
  • Allocation patterns, branching, and vectorization can dwarf the asymptotic term.
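As a sketch of the first point (the functions, the array size n = 16, and the repeat count are illustrative, not a real benchmark): a pure-Python insertion sort has a tight, allocation-free inner loop, while a textbook merge sort allocates a new list at every merge step. On small inputs the quadratic sort can win.

```python
import random
import timeit

def insertion_sort(a):
    # O(n^2), but a tight inner loop with no allocation.
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # O(n log n), but allocates fresh lists at every level of recursion.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

small = [random.randrange(1000) for _ in range(16)]
t_ins = timeit.timeit(lambda: insertion_sort(small), number=10_000)
t_mrg = timeit.timeit(lambda: merge_sort(small), number=10_000)
print(f"insertion: {t_ins:.3f}s  merge: {t_mrg:.3f}s")
```

Where the crossover sits depends on the language and runtime; the point is that the asymptotically "worse" algorithm is a legitimate choice below it.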

2) The Input Range Is Everything

Big‑O assumes n → ∞. Your n may never get large.

Examples:

  • Word lengths are almost always under ~12–15 characters, so edit‑distance DP is fast despite being O(n*m).
  • Request fan‑out is often capped (e.g., ≤ 20 services), so O(k^2) isn’t scary.
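To make the word-length point concrete, here is the standard edit-distance DP. For two ~15-character words the table is roughly 15 × 15 = 225 cells, so the "quadratic" algorithm is effectively constant-time at this input range.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic O(n*m) Levenshtein DP, kept to two rows of memory.
    n, m = len(a), len(b)
    prev = list(range(m + 1))  # distances from the empty prefix of a
    for i in range(1, n + 1):
        curr = [i] + [0] * m
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[m]

print(edit_distance("kitten", "sitting"))  # 3
```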

3) Cache Effects Beat Asymptotics

An O(n^2) loop over a small contiguous array can beat an O(n log n) algorithm with cache misses and pointer chasing.

What matters:

  • Memory layout (contiguous vs scattered)
  • Access patterns (sequential vs random)
  • Branch predictability
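A sketch of the access-pattern point: both traversals below do identical Big-O work and compute the same answer, but the column-major one jumps between rows on every step. In a compiled language (or NumPy) that strided pattern misses cache far more often; in pure Python the effect is muted by interpreter overhead, so treat this as an illustration of the pattern, not a benchmark.

```python
N = 512
grid = [[(i * N + j) % 7 for j in range(N)] for i in range(N)]

def sum_row_major(g):
    # Sequential: each inner list is walked front to back.
    total = 0
    for row in g:
        for x in row:
            total += x
    return total

def sum_col_major(g):
    # Strided: hops to a different row on every iteration --
    # same Big-O, much worse locality.
    total = 0
    for j in range(len(g[0])):
        for i in range(len(g)):
            total += g[i][j]
    return total

assert sum_row_major(grid) == sum_col_major(grid)  # same answer, different cost
```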

4) Allocation and GC Are “Hidden Big‑O”

You can “lose” Big‑O wins if you allocate aggressively:

  • An allocation that looks O(1) can cost amortized O(n) once the GC work it triggers is counted.
  • Avoiding allocations often yields larger speedups than improving asymptotic complexity.
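A small illustration of the allocation point: both functions below run the same O(n) loop over the same parts, but the naive one builds a fresh string on each step, which is quadratic work in general (CPython sometimes optimizes in-place concatenation away, but you should not rely on it), while join allocates the result once.

```python
parts = ["x"] * 10_000

def concat_naive(parts):
    # May allocate and copy the whole string so far on each step:
    # O(n^2) total work hiding behind an O(n) loop.
    s = ""
    for p in parts:
        s = s + p
    return s

def concat_join(parts):
    # Same loop shape, one allocation for the final string,
    # far less GC pressure.
    return "".join(parts)

assert concat_naive(parts) == concat_join(parts)
```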

5) Latency Is Often a Max, Not a Sum

In distributed systems, latency is often bounded by the slowest component.

  • 99th percentile dominates perceived performance.
  • Parallelism changes your effective complexity from sum → max.
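The sum-to-max shift is easy to see with made-up numbers (the latencies below are illustrative): issue the same five calls sequentially and you pay the sum; issue them concurrently and you pay only the slowest one.

```python
latencies_ms = [40, 55, 42, 180, 47]  # one slow dependency

sequential = sum(latencies_ms)  # calls issued one after another
parallel = max(latencies_ms)    # calls issued concurrently; wait for the slowest

print(sequential, parallel)  # 364 180
```

Note that even in the parallel case, the one slow dependency sets the floor, which is exactly why tails matter.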

Why p99/p99.99 dominates

Users remember the slowest experiences, not the median. Tail latency matters because:

  1. Fan‑out amplifies tails: if a request touches multiple services or shards, the slowest response dominates.
  2. Queueing effects: small bursts can cause long tails even if averages look fine.
  3. SLO math: to meet 99.9% availability/latency targets, you must control p99/p99.99, not p50.

Rule of thumb: if your request fans out to N independent calls, the tail gets worse with N because you take the max. The practical result is that p99 becomes product‑defining, while p50 often hides the pain.
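The rule of thumb above falls out of basic probability. If each of N independent calls exceeds its own p99 only 1% of the time, the chance that at least one of them does grows quickly with N:

```python
def p_any_slow(p_slow_one: float, fanout: int) -> float:
    # Probability that at least one of `fanout` independent calls is slow:
    # 1 - P(all fast) = 1 - (1 - p)^N
    return 1 - (1 - p_slow_one) ** fanout

# A call that is slow only 1% of the time (i.e., past its p99)...
for n in (1, 10, 20, 100):
    print(n, round(p_any_slow(0.01, n), 3))
# 1  -> 0.01
# 20 -> ~0.18: nearly one request in five hits some call's tail
```

At a fan-out of 20, roughly 18% of requests experience at least one call's p99, which is why a fine-looking p50 can coexist with a painful product experience.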

6) The Real Complexity Is the Product of Constraints

Performance is usually a product of:

```
work_per_item * item_count * cache_penalty * allocation_penalty * concurrency_overhead
```

Big‑O only captures item_count.
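As a toy cost model (all penalty values below are invented for illustration), the product view shows how an "O(n log n)" algorithm with poor locality and heavy allocation can lose to an "O(n^2)" one that stays in cache at a modest n:

```python
import math

def estimated_cost(work_per_item, item_count, cache_penalty=1.0,
                   allocation_penalty=1.0, concurrency_overhead=1.0):
    # Big-O sees only item_count (and work_per_item's growth);
    # the penalty factors are where real systems win or lose.
    return (work_per_item * item_count * cache_penalty
            * allocation_penalty * concurrency_overhead)

n = 64
fancy = estimated_cost(work_per_item=math.log2(n), item_count=n,
                       cache_penalty=4.0, allocation_penalty=3.0)  # 4608.0
naive = estimated_cost(work_per_item=n, item_count=n)              # 4096.0
assert naive < fancy  # the "worse" Big-O wins at this n
```

As n grows the asymptotic term eventually dominates again; the model just makes the crossover explicit instead of assuming it away.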

7) Measure Where It Hurts

The most reliable performance improvement is: measure → identify bottleneck → fix.

If you can only do one thing, do this:

  1. Find your p95/p99 hotspot.
  2. Reduce work or improve locality there.
  3. Re‑measure.
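Step 1 only needs a handful of lines. Here is a minimal nearest-rank percentile over latency samples (the sample values are made up); note how one outlier is invisible at p50 but defines p99:

```python
import math

def percentile(samples, p):
    # Nearest-rank percentile: the value at rank ceil(p/100 * n).
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

latencies = [12, 14, 13, 15, 11, 12, 14, 13, 250, 12]  # one outlier
print(percentile(latencies, 50), percentile(latencies, 99))  # 13 250
```

For production use you would feed this a much larger window of samples (or use a streaming sketch), but the shape of the workflow is the same: pull the tail, find what it touches, fix that.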

Summary

Big‑O is a good first lens. Real performance is dominated by:

  • input ranges
  • constants
  • memory behavior
  • concurrency

The correct question is often not “what’s the Big‑O?” but “what’s the size of my real n, and what dominates the inner loop?”