Memory pressure describes how often Linux pauses work to reclaim memory, which typically shows up as latency spikes long before an out-of-memory event.

Reclaim happens when the kernel needs free pages for new allocations, dropping page cache first and swapping anonymous memory when necessary. Pressure Stall Information (PSI) exposes reclaim-induced stalls via /proc/pressure/memory, providing time-windowed averages that track real slowdowns more directly than a single “free RAM” snapshot.

Short bursts are normal during cache churn, builds, or restarts. Sustained non-zero PSI averages combined with swap-in activity often indicate RAM contention, an oversized workload, or an enforced memory limit in a cgroup.

Steps to check memory pressure with PSI and vmstat on Linux:

  1. Review overall memory availability plus swap status.
    $ free -h
                   total        used        free      shared  buff/cache   available
    Mem:            23Gi       1.2Gi        20Gi        13Mi       1.7Gi        21Gi
    Swap:          1.0Gi          0B       1.0Gi

    available estimates memory obtainable without swapping, including reclaimable page cache; consistently low available is a stronger warning sign than low free.
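    The same signal can be scripted from /proc/meminfo; a minimal sketch, where the 10% cutoff is an arbitrary example threshold, not a kernel default:

```shell
# Compute MemAvailable as a percentage of MemTotal and exit non-zero
# when it drops below 10% (an arbitrary example threshold).
awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2}
     END {printf "available: %.1f%%\n", 100*a/t;
          exit (a < t/10) ? 1 : 0}' /proc/meminfo
```

    The exit status makes the one-liner usable directly in a cron job or alert script.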

  2. Sample swap-in plus swap-out rates over a short interval.
    $ vmstat 1 5
    procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st gu
     3  0      0 21579248 330560 1442780    0    0     3   256  526    0  1  0 99  0  0  0
     0  0      0 21586696 330560 1442788    0    0     0     0 1281 1137  3  1 95  0  0  0
     0  0      0 21586696 330560 1442788    0    0     0     0  141  137  0  0 100  0  0  0
     0  0      0 21586696 330560 1442788    0    0     0     0  113   89  0  0 100  0  0  0
     0  0      0 21586696 330560 1442788    0    0     0     0  244  258  0  0 100  0  0  0

    The first report line is an average since boot; focus on the interval lines that follow.

    Sustained non-zero si indicates swap-in, which commonly correlates with reclaim stalls.
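    The interval samples can be reduced to one number; a sketch that sums the si column, skipping the two header lines and the since-boot report:

```shell
# Sum the swap-in column (si, field 7, KiB/s by default) over the
# interval samples; NR > 3 skips the two header lines and the
# since-boot first report.
vmstat 1 5 | awk 'NR > 3 {si += $7} END {print "total si (KiB):", si}'
```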

  3. Read memory Pressure Stall Information metrics.
    $ cat /proc/pressure/memory
    some avg10=0.00 avg60=0.00 avg300=0.00 total=7
    full avg10=0.00 avg60=0.00 avg300=0.00 total=6

    some tracks time when at least one task is stalled on reclaim; full tracks time when all runnable tasks are stalled.

    avg10, avg60, and avg300 represent the percentage of wall time spent stalled in each window; total is cumulative stall time in microseconds.
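    The fixed field layout makes the file easy to parse; a sketch pulling out the 10-second "some" average:

```shell
# Field 2 of the "some" line has the form avg10=N.NN; split on "="
# to isolate the numeric part.
awk '/^some/ {split($2, kv, "="); print "some avg10:", kv[2]}' /proc/pressure/memory
```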

  4. Watch PSI averages during an observed slowdown.
    $ watch -n 1 cat /proc/pressure/memory
    some avg10=0.00 avg60=0.00 avg300=0.00 total=7
    full avg10=0.00 avg60=0.00 avg300=0.00 total=6

    Non-zero full pressure indicates system-wide stalls; interactive latency typically becomes obvious at that point.
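    A one-shot threshold check turns this into an alert; a sketch in which the 5% cutoff is an arbitrary example, not a recommended value:

```shell
# Extract the "full" avg10 value and report when it exceeds 5%
# (arbitrary example threshold); exits non-zero when below it.
full=$(awk '/^full/ {split($2, kv, "="); print kv[2]}' /proc/pressure/memory)
awk -v f="$full" 'BEGIN {exit (f > 5) ? 0 : 1}' \
  && echo "system-wide memory stalls: full avg10=$full%"
```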

  5. Locate cgroup v2 memory pressure files when troubleshooting containers or systemd services.
    $ find /sys/fs/cgroup/system.slice -maxdepth 3 -name memory.pressure | head
    /sys/fs/cgroup/system.slice/cron.service/memory.pressure
    /sys/fs/cgroup/system.slice/ssh.socket/memory.pressure
    /sys/fs/cgroup/system.slice/memory.pressure
    /sys/fs/cgroup/system.slice/system-modprobe.slice/memory.pressure
    /sys/fs/cgroup/system.slice/systemd-journald.service/memory.pressure
    /sys/fs/cgroup/system.slice/ssh.service/memory.pressure
    /sys/fs/cgroup/system.slice/rsyslog.service/memory.pressure
    /sys/fs/cgroup/system.slice/dbus.service/memory.pressure
    /sys/fs/cgroup/system.slice/systemd-timesyncd.service/memory.pressure
    /sys/fs/cgroup/system.slice/systemd-logind.service/memory.pressure

    Read any listed memory.pressure file the same way as /proc/pressure/memory for a per-cgroup view.
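    To scan every unit at once, assuming the cgroup v2 layout shown above:

```shell
# Print each unit's "some" averages so pressured cgroups stand out;
# the path prefix is stripped for readability.
for f in /sys/fs/cgroup/system.slice/*/memory.pressure; do
  printf '%-55s %s\n' "${f#/sys/fs/cgroup/system.slice/}" \
    "$(awk '/^some/ {print $2, $3}' "$f")"
done
```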

  6. Confirm pressure returns near zero during idle periods; the averages decay back toward zero, while the total counters are cumulative and only ever increase.
    $ cat /proc/pressure/memory
    some avg10=0.00 avg60=0.00 avg300=0.00 total=7
    full avg10=0.00 avg60=0.00 avg300=0.00 total=6