Load Average Interpreter

What those three numbers from uptime actually mean for your server.

To find your core count, run nproc on Linux, or sysctl -n hw.ncpu on macOS.

What load average actually measures

Load average is not CPU percentage. It's the average number of processes that are either running on a CPU or waiting for one (plus, on Linux, processes waiting for disk I/O). The three numbers represent the average over 1, 5, and 15 minutes.

On a 4-core system, a load average of 4.0 means every core is busy but nothing is waiting. A load of 8.0 means every core is busy and 4 processes are queued. A load of 1.0 means 75% of your capacity is idle.

The number that matters is load relative to your core count. A load of 16.0 is fine on a 32-core machine. A load of 2.0 might be a problem on a single-core VPS.
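A minimal sketch of that comparison, assuming Linux (it reads /proc/loadavg and uses nproc; on macOS you would substitute sysctl -n vm.loadavg and sysctl -n hw.ncpu):

  # Compare the 1-minute load to the number of logical CPUs.
  load=$(cut -d ' ' -f1 /proc/loadavg)
  cores=$(nproc)
  awk -v l="$load" -v c="$cores" 'BEGIN { printf "load per core: %.2f\n", l / c }'

Anything above 1.0 per core means processes have been queuing, at least over the last minute.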

How to find your CPU core count

OS           Command
Linux        nproc
Linux (alt)  grep -c ^processor /proc/cpuinfo
macOS        sysctl -n hw.ncpu
FreeBSD      sysctl -n hw.ncpu

These return logical CPUs (including hyper-threads). That's the right number to use for load average comparison.

Physical cores vs vCPUs: a machine with 2 physical cores and hyper-threading reports 4 vCPUs, but that's not the same as 4 physical cores. Hyper-threading shares execution units between threads and typically adds 15-30% throughput, not 100%. A load of 4.0 on a 2-core/4-thread machine is under more real pressure than the same load on a true 4-core machine. The scheduler treats each thread as a unit, so nproc is still the right denominator; just know that maxing out your vCPUs hits harder when half of them are hyper-threads.
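If you want to see how your logical CPUs break down into physical cores, lscpu makes the topology explicit on Linux (a sketch; lscpu ships with util-linux):

  # Logical CPUs vs the topology behind them.
  lscpu | grep -E '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'
  # Physical core count: unique (core, socket) pairs.
  lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l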

Load average vs CPU utilization

High load, low CPU

Processes are waiting for I/O, not CPU. Typical cause: slow disks, NFS mounts, or swapping. The CPUs are mostly idle but the system feels sluggish because everything is blocked on disk. Check iostat and vmstat for wait states.
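A sketch of what that check looks like (iostat and the rest of the sysstat tools usually need the sysstat package installed):

  vmstat 1 5      # 'b' column = processes blocked waiting on I/O, 'wa' = CPU time spent waiting
  iostat -x 1 3   # per-device view; high %util and await with idle CPUs points at the disk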

Low load, high CPU

A few processes are using a lot of CPU but the run queue is short. Single-threaded workloads on multi-core systems. The system isn't overloaded, it's just one core doing all the work.
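To confirm that pattern, look at per-core utilization rather than the machine-wide number (a sketch; mpstat is also part of sysstat, and pressing 1 in top toggles the same per-CPU view):

  mpstat -P ALL 1 3   # one core near 100% while the rest sit idle = a single-threaded bottleneck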

High load, high CPU

The system is genuinely busy. This is fine if load is at or below core count. Above core count means processes are queuing. Check if it's sustained or a burst.

Linux-specific: unlike other Unix systems, Linux includes processes in uninterruptible sleep (D state) in load average. This means disk-bound workloads inflate the number. A load of 8.0 on a 4-core Linux box might just mean heavy I/O, not CPU saturation.
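You can see exactly which processes are in that state, which helps separate I/O-driven load from CPU-driven load (a sketch using standard ps output fields):

  # Processes in uninterruptible sleep: counted in Linux load average without using CPU.
  ps -eo state,pid,comm | awk '$1 == "D"'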

What to do when load is high

1. Check what's running. top sorted by CPU (press P) or htop. Look for runaway processes, stuck workers, or unexpected cron jobs piling up. (A combined sketch of these checks follows the list.)

2. Check I/O wait. iostat -x 1 or look at wa% in top. High I/O wait with low CPU means the disk is the bottleneck, not the processor.

3. Check memory and swap. free -h. If the system is swapping heavily, load will spike because disk I/O is orders of magnitude slower than RAM. Fix the memory problem first.

4. Look at the trend, not just the snapshot. A spike to 10x capacity for 30 seconds during a deploy is different from sustained 2x capacity for hours. The 15-minute average tells you if this is persistent.
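A quick triage sketch that strings the first three checks together, using the same commands named above (iostat again assumes sysstat is installed):

  uptime                   # load now vs the 5- and 15-minute trend
  top -b -n 1 | head -20   # batch-mode snapshot of the busiest processes
  iostat -x 1 3            # is the disk the bottleneck?
  free -h                  # is the box swapping?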

See it over time. fivenines.io tracks load average continuously and alerts you to trends before they become problems. A slow upward drift over days is easy to miss in a terminal, obvious on a graph.

Track load average trends, not just snapshots

Start monitoring with fivenines.io