
mm/mglru: only clear kswapd_failures if reclaimable
commit b130ba4 upstream.

lru_gen_shrink_node() unconditionally clears kswapd_failures, which can
prevent kswapd from sleeping and cause 100% kswapd cpu usage even when
kswapd repeatedly fails to make progress in reclaim.

Only clear kswapd_failures in lru_gen_shrink_node() if reclaim makes some
progress, similar to shrink_node().

I happened to run into this problem in one of my tests recently.  It
requires a combination of several conditions: the allocator needs to
allocate just the right amount of pages so that it can wake up kswapd
without itself being OOM-killed; there is no memory for kswapd to
reclaim (my test disables swap and drops the page cache first); and no
other process frees enough memory at the same time.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: e4dde56 ("mm: multi-gen LRU: per-node lru_gen_folio lists")
Signed-off-by: Wei Xu <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Brian Geffon <[email protected]>
Cc: Jan Alexander Steffens <[email protected]>
Cc: Suleiman Souhlal <[email protected]>
Cc: Yu Zhao <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
weixugc authored and gregkh committed Oct 22, 2024
1 parent 5456008 commit bdccc3f
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions mm/vmscan.c
@@ -4940,8 +4940,8 @@ static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *
 
 	blk_finish_plug(&plug);
 done:
-	/* kswapd should never fail */
-	pgdat->kswapd_failures = 0;
+	if (sc->nr_reclaimed > reclaimed)
+		pgdat->kswapd_failures = 0;
 }
 
 /******************************************************************************
