  1. May 18, 2019
    • Mel Gorman's avatar
      mm/compaction.c: correct zone boundary handling when isolating pages from a pageblock · 60fce36a
      Mel Gorman authored
      syzbot reported the following error from a tree with a head commit of
      baf76f0c ("slip: make slhc_free() silently accept an error pointer")
      
        BUG: unable to handle kernel paging request at ffffea0003348000
        #PF error: [normal kernel read fault]
        PGD 12c3f9067 P4D 12c3f9067 PUD 12c3f8067 PMD 0
        Oops: 0000 [#1] PREEMPT SMP KASAN
        CPU: 1 PID: 28916 Comm: syz-executor.2 Not tainted 5.1.0-rc6+ #89
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
        RIP: 0010:constant_test_bit arch/x86/include/asm/bitops.h:314 [inline]
        RIP: 0010:PageCompound include/linux/page-flags.h:186 [inline]
        RIP: 0010:isolate_freepages_block+0x1c0/0xd40 mm/compaction.c:579
        Code: 01 d8 ff 4d 85 ed 0f 84 ef 07 00 00 e8 29 00 d8 ff 4c 89 e0 83 85 38 ff
        ff ff 01 48 c1 e8 03 42 80 3c 38 00 0f 85 31 0a 00 00 <4d> 8b 2c 24 31 ff 49
        c1 ed 10 41 83 e5 01 44 89 ee e8 3a 01 d8 ff
        RSP: 0018:ffff88802b31eab8 EFLAGS: 00010246
        RAX: 1ffffd4000669000 RBX: 00000000000cd200 RCX: ffffc9000a235000
        RDX: 000000000001ca5e RSI: ffffffff81988cc7 RDI: 0000000000000001
        RBP: ffff88802b31ebd8 R08: ffff88805af700c0 R09: 0000000000000000
        R10: 0000000000000000 R11: 0000000000000000 R12: ffffea0003348000
        R13: 0000000000000000 R14: ffff88802b31f030 R15: dffffc0000000000
        FS:  00007f61648dc700(0000) GS:ffff8880ae900000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: ffffea0003348000 CR3: 0000000037c64000 CR4: 00000000001426e0
        Call Trace:
         fast_isolate_around mm/compaction.c:1243 [inline]
         fast_isolate_freepages mm/compaction.c:1418 [inline]
         isolate_freepages mm/compaction.c:1438 [inline]
         compaction_alloc+0x1aee/0x22e0 mm/compaction.c:1550
      
      There is no reproducer and it is difficult to hit -- 1 crash every few
      days.  The issue is very similar to the fix in commit 6b0868c8
      ("mm/compaction.c: correct zone boundary handling when resetting pageblock
      skip hints").  When isolating free pages around a target pageblock, the
      boundary handling is off by one and can stray into the next pageblock.
Triggering the syzbot error requires that the end of the pageblock is
section- or zone-aligned, and that the next section is unpopulated.
      
A more subtle consequence of the bug is that pageblocks were being
improperly used as migration targets, which potentially hurts long-term
fragmentation avoidance one page at a time.
      
A debugging patch revealed that it's definitely possible to stray outside
of a pageblock, which is not intended.  While syzbot cannot be used to
      verify this patch, it was confirmed that the debugging warning no longer
      triggers with this patch applied.  It has also been confirmed that the THP
      allocation stress tests are not degraded by this patch.
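
For illustration, here is a minimal standalone sketch of the boundary
rule the fix enforces; the helper names and the pageblock order are
assumptions for demonstration, not the actual kernel diff:

    #include <stdio.h>

    #define PAGEBLOCK_ORDER    9    /* assumed: x86-64 default, 512 pages */
    #define PAGEBLOCK_NR_PAGES (1UL << PAGEBLOCK_ORDER)

    /* Clamp the isolation window around 'pfn' so it never crosses the
     * pageblock boundary or the zone end; straying one page past either
     * can touch an unpopulated section, as in the oops above. */
    static void clamp_isolation_window(unsigned long pfn,
                                       unsigned long zone_start,
                                       unsigned long zone_end,
                                       unsigned long *start,
                                       unsigned long *end)
    {
        *start = pfn & ~(PAGEBLOCK_NR_PAGES - 1);
        *end = *start + PAGEBLOCK_NR_PAGES;
        if (*start < zone_start)
            *start = zone_start;
        if (*end > zone_end)
            *end = zone_end;
    }

    int main(void)
    {
        unsigned long s, e;

        /* zone ends exactly on a pageblock boundary at pfn 1024 */
        clamp_isolation_window(1000, 0, 1024, &s, &e);
        printf("isolate [%lu, %lu)\n", s, e);  /* [512, 1024), not beyond */
        return 0;
    }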
      
Link: http://lkml.kernel.org/r/20190510182124.GI18914@techsingularity.net
Fixes: e332f741 ("mm, compaction: be selective about what pageblocks to clear skip hints")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reported-by: <syzbot+d84c80f9fe26a0f7a734@syzkaller.appspotmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org> # v5.1+
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60fce36a
  2. May 15, 2019
  3. May 14, 2019
  4. Apr 04, 2019
    • Qian Cai's avatar
      mm/compaction.c: abort search if isolation fails · 5b56d996
      Qian Cai authored
Running LTP oom01 in a tight loop, or memory stress testing that puts
the system in a low-memory situation, can trigger random memory
corruption such as the page flag corruption below.  In
fast_isolate_freepages(), if isolation fails, next_search_order() does
not abort the search immediately, which can lead to improper accesses.
      
      UBSAN: Undefined behaviour in ./include/linux/mm.h:1195:50
      index 7 is out of range for type 'zone [5]'
      Call Trace:
       dump_stack+0x62/0x9a
       ubsan_epilogue+0xd/0x7f
       __ubsan_handle_out_of_bounds+0x14d/0x192
       __isolate_free_page+0x52c/0x600
       compaction_alloc+0x886/0x25f0
       unmap_and_move+0x37/0x1e70
       migrate_pages+0x2ca/0xb20
       compact_zone+0x19cb/0x3620
       kcompactd_do_work+0x2df/0x680
       kcompactd+0x1d8/0x6c0
       kthread+0x32c/0x3f0
       ret_from_fork+0x35/0x40
      ------------[ cut here ]------------
      kernel BUG at mm/page_alloc.c:3124!
      invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
      RIP: 0010:__isolate_free_page+0x464/0x600
      RSP: 0000:ffff888b9e1af848 EFLAGS: 00010007
      RAX: 0000000030000000 RBX: ffff888c39fcf0f8 RCX: 0000000000000000
      RDX: 1ffff111873f9e25 RSI: 0000000000000004 RDI: ffffed1173c35ef6
      RBP: ffff888b9e1af898 R08: fffffbfff4fc2461 R09: fffffbfff4fc2460
      R10: fffffbfff4fc2460 R11: ffffffffa7e12303 R12: 0000000000000008
      R13: dffffc0000000000 R14: 0000000000000000 R15: 0000000000000007
      FS:  0000000000000000(0000) GS:ffff888ba8e80000(0000)
      knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00007fc7abc00000 CR3: 0000000752416004 CR4: 00000000001606a0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       compaction_alloc+0x886/0x25f0
       unmap_and_move+0x37/0x1e70
       migrate_pages+0x2ca/0xb20
       compact_zone+0x19cb/0x3620
       kcompactd_do_work+0x2df/0x680
       kcompactd+0x1d8/0x6c0
       kthread+0x32c/0x3f0
       ret_from_fork+0x35/0x40
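
A minimal standalone model of the control-flow fix described above (the
failing order and names are invented for demonstration, not the literal
kernel diff):

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_ORDERS 5

    /* Pretend isolation of an order-3 candidate fails, e.g. due to
     * racing allocations in a low-memory situation. */
    static bool isolate_candidate(int order)
    {
        return order != 3;
    }

    int main(void)
    {
        for (int order = NR_ORDERS - 1; order >= 0; order--) {
            if (!isolate_candidate(order)) {
                /* The fix: abort the whole search here.  Continuing
                 * with stale search state is what allowed the
                 * out-of-range zone index in the UBSAN report above. */
                printf("order %d: isolation failed, aborting\n", order);
                break;
            }
            printf("order %d: isolated\n", order);
        }
        return 0;
    }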
      
Link: http://lkml.kernel.org/r/20190320192648.52499-1-cai@lca.pw
Fixes: dbe2d4e4 ("mm, compaction: round-robin the order while searching the free lists for a target")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      5b56d996
    • Mel Gorman's avatar
      mm/compaction.c: correct zone boundary handling when resetting pageblock skip hints · 6b0868c8
      Mel Gorman authored
Mikhail Gavrilov reported the following bug being triggered in a Fedora
      kernel based on 5.1-rc1 but it is relevant to a vanilla kernel.
      
       kernel: page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
       kernel: ------------[ cut here ]------------
       kernel: kernel BUG at include/linux/mm.h:1021!
       kernel: invalid opcode: 0000 [#1] SMP NOPTI
       kernel: CPU: 6 PID: 116 Comm: kswapd0 Tainted: G         C        5.1.0-0.rc1.git1.3.fc31.x86_64 #1
       kernel: Hardware name: System manufacturer System Product Name/ROG STRIX X470-I GAMING, BIOS 1201 12/07/2018
       kernel: RIP: 0010:__reset_isolation_pfn+0x244/0x2b0
       kernel: Code: fe 06 e8 0f 8e fc ff 44 0f b6 4c 24 04 48 85 c0 0f 85 dc fe ff ff e9 68 fe ff ff 48 c7 c6 58 b7 2e 8c 4c 89 ff e8 0c 75 00 00 <0f> 0b 48 c7 c6 58 b7 2e 8c e8 fe 74 00 00 0f 0b 48 89 fa 41 b8 01
       kernel: RSP: 0018:ffff9e2d03f0fde8 EFLAGS: 00010246
       kernel: RAX: 0000000000000034 RBX: 000000000081f380 RCX: ffff8cffbddd6c20
       kernel: RDX: 0000000000000000 RSI: 0000000000000006 RDI: ffff8cffbddd6c20
       kernel: RBP: 0000000000000001 R08: 0000009898b94613 R09: 0000000000000000
       kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000100000
       kernel: R13: 0000000000100000 R14: 0000000000000001 R15: ffffca7de07ce000
       kernel: FS:  0000000000000000(0000) GS:ffff8cffbdc00000(0000) knlGS:0000000000000000
       kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       kernel: CR2: 00007fc1670e9000 CR3: 00000007f5276000 CR4: 00000000003406e0
       kernel: Call Trace:
       kernel:  __reset_isolation_suitable+0x62/0x120
       kernel:  reset_isolation_suitable+0x3b/0x40
       kernel:  kswapd+0x147/0x540
       kernel:  ? finish_wait+0x90/0x90
       kernel:  kthread+0x108/0x140
       kernel:  ? balance_pgdat+0x560/0x560
       kernel:  ? kthread_park+0x90/0x90
       kernel:  ret_from_fork+0x27/0x50
      
      He bisected it down to e332f741 ("mm, compaction: be selective about
      what pageblocks to clear skip hints").  The problem is that the patch in
      question was sloppy with respect to the handling of zone boundaries.  In
      some instances, it was possible for PFNs outside of a zone to be examined
      and if those were not properly initialised or poisoned then it would
      trigger the VM_BUG_ON.  This patch corrects the zone boundary issues when
      resetting pageblock skip hints and Mikhail reported that the bug did not
      trigger after 30 hours of testing.
      
Link: http://lkml.kernel.org/r/20190327085424.GL3189@techsingularity.net
Fixes: e332f741 ("mm, compaction: be selective about what pageblocks to clear skip hints")
Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      6b0868c8
  5. Mar 06, 2019
    • Andrey Ryabinin's avatar
      mm/compaction: pass pgdat to too_many_isolated() instead of zone · 5f438eee
      Andrey Ryabinin authored
too_many_isolated() in mm/compaction.c looks only at node state, so it
makes more sense to change its argument to pgdat instead of zone.
      
Link: http://lkml.kernel.org/r/20190228083329.31892-3-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f438eee
    • Andrey Ryabinin's avatar
      mm: remove zone_lru_lock() function, access ->lru_lock directly · f4b7e272
      Andrey Ryabinin authored
We have a common pattern to access lru_lock from a page pointer:
      	zone_lru_lock(page_zone(page))
      
      Which is silly, because it unfolds to this:
      	&NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]->zone_pgdat->lru_lock
      while we can simply do
      	&NODE_DATA(page_to_nid(page))->lru_lock
      
Remove the zone_lru_lock() function, since it only complicates things.
Use the 'page_pgdat(page)->lru_lock' pattern instead.
      
      [aryabinin@virtuozzo.com: a slightly better version of __split_huge_page()]
        Link: http://lkml.kernel.org/r/20190301121651.7741-1-aryabinin@virtuozzo.com
Link: http://lkml.kernel.org/r/20190228083329.31892-2-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f4b7e272
    • Mel Gorman's avatar
      mm, compaction: capture a page under direct compaction · 5e1f0f09
      Mel Gorman authored
      Compaction is inherently race-prone as a suitable page freed during
      compaction can be allocated by any parallel task.  This patch uses a
      capture_control structure to isolate a page immediately when it is freed
      by a direct compactor in the slow path of the page allocator.  The
      intent is to avoid redundant scanning.
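
As a rough standalone sketch of the mechanism (the structure is
simplified and the checks are illustrative, not the kernel's actual
implementation):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct page { unsigned long pfn; };

    /* Published by a direct compactor; the freeing side fills it in. */
    struct capture_control {
        int order;              /* order the compactor is trying to build */
        struct page *page;      /* captured page, if any */
    };

    /* Called from the free path: hand a suitable page straight to the
     * compactor instead of merging it back onto the free lists, so the
     * compactor does not have to rescan for it. */
    static bool try_capture(struct capture_control *capc,
                            struct page *page, int order)
    {
        if (!capc || capc->page || order < capc->order)
            return false;
        capc->page = page;
        return true;
    }

    int main(void)
    {
        struct capture_control capc = { .order = 4, .page = NULL };
        struct page freed = { .pfn = 4096 };

        if (try_capture(&capc, &freed, 4))
            printf("captured pfn %lu for the compactor\n", capc.page->pfn);
        return 0;
    }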
      
                                           5.0.0-rc1              5.0.0-rc1
                                     selective-v3r17          capture-v3r19
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2582.11 (   0.00%)     2563.68 (   0.71%)
      Amean     fault-both-5      4500.26 (   0.00%)     4233.52 (   5.93%)
      Amean     fault-both-7      5819.53 (   0.00%)     6333.65 (  -8.83%)
      Amean     fault-both-12     9321.18 (   0.00%)     9759.38 (  -4.70%)
      Amean     fault-both-18     9782.76 (   0.00%)    10338.76 (  -5.68%)
      Amean     fault-both-24    15272.81 (   0.00%)    13379.55 *  12.40%*
      Amean     fault-both-30    15121.34 (   0.00%)    16158.25 (  -6.86%)
      Amean     fault-both-32    18466.67 (   0.00%)    18971.21 (  -2.73%)
      
      Latency is only moderately affected but the devil is in the details.  A
      closer examination indicates that base page fault latency is reduced but
latency of huge pages is increased as it takes greater care to succeed.
      Part of the "problem" is that allocation success rates are close to 100%
      even when under pressure and compaction gets harder
      
                                      5.0.0-rc1              5.0.0-rc1
                                selective-v3r17          capture-v3r19
      Percentage huge-3        96.70 (   0.00%)       98.23 (   1.58%)
      Percentage huge-5        96.99 (   0.00%)       95.30 (  -1.75%)
      Percentage huge-7        94.19 (   0.00%)       97.24 (   3.24%)
      Percentage huge-12       94.95 (   0.00%)       97.35 (   2.53%)
      Percentage huge-18       96.74 (   0.00%)       97.30 (   0.58%)
      Percentage huge-24       97.07 (   0.00%)       97.55 (   0.50%)
      Percentage huge-30       95.69 (   0.00%)       98.50 (   2.95%)
      Percentage huge-32       96.70 (   0.00%)       99.27 (   2.65%)
      
      And scan rates are reduced as expected by 6% for the migration scanner
      and 29% for the free scanner indicating that there is less redundant
      work.
      
      Compaction migrate scanned    20815362    19573286
      Compaction free scanned       16352612    11510663
      
      [mgorman@techsingularity.net: remove redundant check]
        Link: http://lkml.kernel.org/r/20190201143853.GH9565@techsingularity.net
Link: http://lkml.kernel.org/r/20190118175136.31341-23-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5e1f0f09
    • Mel Gorman's avatar
      mm, compaction: be selective about what pageblocks to clear skip hints · e332f741
      Mel Gorman authored
      Pageblock hints are cleared when compaction restarts or kswapd makes
      enough progress that it can sleep but it's over-eager in that the bit is
      cleared for migration sources with no LRU pages and migration targets
      with no free pages.  As pageblock skip hint flushes are relatively rare
      and out-of-band with respect to kswapd, this patch makes a few more
      expensive checks to see if it's appropriate to even clear the bit.
      Every pageblock that is not cleared will avoid 512 pages being scanned
      unnecessarily on x86-64.
      
      The impact is variable with different workloads showing small
      differences in latency, success rates and scan rates.  This is expected
      as clearing the hints is not that common but doing a small amount of
      work out-of-band to avoid a large amount of work in-band later is
      generally a good thing.
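
A simplified standalone sketch of the selectivity; the per-pageblock
counters here are stand-ins for the more expensive checks the patch
actually performs:

    #include <stdbool.h>
    #include <stdio.h>

    struct pageblock_info {
        unsigned int nr_lru_pages;      /* candidate migration sources */
        unsigned int nr_free_pages;     /* candidate migration targets */
    };

    /* Only clear the skip hint when the pageblock could actually be
     * useful; every hint left in place saves scanning 512 pages on
     * x86-64. */
    static bool should_clear_skip(const struct pageblock_info *pb,
                                  bool source_scan)
    {
        if (source_scan)
            return pb->nr_lru_pages > 0;
        return pb->nr_free_pages > 0;
    }

    int main(void)
    {
        struct pageblock_info pb = { .nr_lru_pages = 0, .nr_free_pages = 37 };

        printf("clear as source? %d\n", should_clear_skip(&pb, true));  /* 0 */
        printf("clear as target? %d\n", should_clear_skip(&pb, false)); /* 1 */
        return 0;
    }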
      
Link: http://lkml.kernel.org/r/20190118175136.31341-22-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
[cai@lca.pw: no stuck in __reset_isolation_pfn()]
  Link: http://lkml.kernel.org/r/20190206034732.75687-1-cai@lca.pw
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e332f741
    • Mel Gorman's avatar
      mm, compaction: sample pageblocks for free pages · 4fca9730
      Mel Gorman authored
      Once fast searching finishes, there is a possibility that the linear
      scanner is scanning full blocks found by the fast scanner earlier.  This
      patch uses an adaptive stride to sample pageblocks for free pages.  The
      more consecutive full pageblocks encountered, the larger the stride
      until a pageblock with free pages is found.  The scanners might meet
      slightly sooner but it is an acceptable risk given that the search of
      the free lists may still encounter the pages and adjust the cached PFN
      of the free scanner accordingly.
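
The sampling can be modelled in a few lines of standalone C; the
constants and the free-page predicate are illustrative assumptions:

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGEBLOCK_NR_PAGES 512UL
    #define MAX_STRIDE         64UL

    /* Pretend only every 100th pageblock still has free pages. */
    static bool block_has_free_pages(unsigned long pfn)
    {
        return (pfn / PAGEBLOCK_NR_PAGES) % 100 == 0;
    }

    int main(void)
    {
        unsigned long stride = 1, scanned = 0;

        for (unsigned long pfn = 0; pfn < 1UL << 20;
             pfn += PAGEBLOCK_NR_PAGES * stride) {
            scanned++;
            if (block_has_free_pages(pfn))
                stride = 1;         /* scan densely near free pages */
            else if (stride < MAX_STRIDE)
                stride <<= 1;       /* sample full regions sparsely */
        }
        printf("pageblocks visited: %lu of %lu\n",
               scanned, (1UL << 20) / PAGEBLOCK_NR_PAGES);
        return 0;
    }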
      
                                           5.0.0-rc1              5.0.0-rc1
                                    roundrobin-v3r17       samplefree-v3r17
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2752.37 (   0.00%)     2729.95 (   0.81%)
      Amean     fault-both-5      4341.69 (   0.00%)     4397.80 (  -1.29%)
      Amean     fault-both-7      6308.75 (   0.00%)     6097.61 (   3.35%)
      Amean     fault-both-12    10241.81 (   0.00%)     9407.15 (   8.15%)
      Amean     fault-both-18    13736.09 (   0.00%)    10857.63 *  20.96%*
      Amean     fault-both-24    16853.95 (   0.00%)    13323.24 *  20.95%*
      Amean     fault-both-30    15862.61 (   0.00%)    17345.44 (  -9.35%)
      Amean     fault-both-32    18450.85 (   0.00%)    16892.00 (   8.45%)
      
The latency is mildly improved, offsetting some overhead from earlier
      patches that are prerequisites for the rest of the series.  However, a
      major impact is on the free scan rate with an 82% reduction.
      
                                      5.0.0-rc1      5.0.0-rc1
                               roundrobin-v3r17 samplefree-v3r17
      Compaction migrate scanned    21607271            20116887
      Compaction free scanned       95336406            16668703
      
      It's also the first time in the series where the number of pages scanned
      by the migration scanner is greater than the free scanner due to the
      increased search efficiency.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-21-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4fca9730
    • Mel Gorman's avatar
      mm, compaction: round-robin the order while searching the free lists for a target · dbe2d4e4
      Mel Gorman authored
      As compaction proceeds and creates high-order blocks, the free list
      search gets less efficient as the larger blocks are used as compaction
      targets.  Eventually, the larger blocks will be behind the migration
      scanner for partially migrated pageblocks and the search fails.  This
      patch round-robins what orders are searched so that larger blocks can be
      ignored and find smaller blocks that can be used as migration targets.
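
A minimal standalone model of the rotation; the kernel's actual
next_search_order() is more involved, so treat this purely as a sketch
with assumed names and bounds:

    #include <stdio.h>

    #define MAX_ORDER 11    /* assumed: orders 0..10 */

    /* Rotate the order at which each free-list search begins so that
     * searches do not always start (and stall) at the largest blocks. */
    static int next_search_start(int request_order)
    {
        static int start = MAX_ORDER - 1;

        start = (start > request_order) ? start - 1 : MAX_ORDER - 1;
        return start;
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            printf("search %d starts at order %d\n",
                   i, next_search_start(4));
        return 0;
    }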
      
      The overall impact was small on 1-socket but it avoids corner cases
      where the migration/free scanners meet prematurely or situations where
      many of the pageblocks encountered by the free scanner are almost full
instead of being properly packed.  Previous testing had indicated that
without this patch there were occasional large spikes in the free
scanner.
      
      [dan.carpenter@oracle.com: fix static checker warning]
Link: http://lkml.kernel.org/r/20190118175136.31341-20-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbe2d4e4
    • Mel Gorman's avatar
      mm, compaction: reduce premature advancement of the migration target scanner · d097a6f6
      Mel Gorman authored
      The fast isolation of free pages allows the cached PFN of the free
      scanner to advance faster than necessary depending on the contents of
      the free list.  The key is that fast_isolate_freepages() can update
      zone->compact_cached_free_pfn via isolate_freepages_block().  When the
      fast search fails, the linear scan can start from a point that has
      skipped valid migration targets, particularly pageblocks with just
      low-order free pages.  This can cause the migration source/target
      scanners to meet prematurely causing a reset.
      
      This patch starts by avoiding an update of the pageblock skip
      information and cached PFN from isolate_freepages_block() and puts the
      responsibility of updating that information in the callers.  The fast
      scanner will update the cached PFN if and only if it finds a block that
      is higher than the existing cached PFN and sets the skip if the
      pageblock is full or nearly full.  The linear scanner will update
      skipped information and the cached PFN only when a block is completely
      scanned.  The total impact is that the free scanner advances more slowly
      as it is primarily driven by the linear scanner instead of the fast
      search.
      
                                           5.0.0-rc1              5.0.0-rc1
                                     noresched-v3r17         slowfree-v3r17
      Amean     fault-both-3      2965.68 (   0.00%)     3036.75 (  -2.40%)
      Amean     fault-both-5      3995.90 (   0.00%)     4522.24 * -13.17%*
      Amean     fault-both-7      5842.12 (   0.00%)     6365.35 (  -8.96%)
      Amean     fault-both-12     9550.87 (   0.00%)    10340.93 (  -8.27%)
      Amean     fault-both-18    13304.72 (   0.00%)    14732.46 ( -10.73%)
      Amean     fault-both-24    14618.59 (   0.00%)    16288.96 ( -11.43%)
      Amean     fault-both-30    16650.96 (   0.00%)    16346.21 (   1.83%)
      Amean     fault-both-32    17145.15 (   0.00%)    19317.49 ( -12.67%)
      
      The impact to latency is higher than the last version but it appears to
      be due to a slight increase in the free scan rates which is a potential
      side-effect of the patch.  However, this is necessary for later patches
      that are more careful about how pageblocks are treated as earlier
      iterations of those patches hit corner cases where the restarts were
      punishing and very visible.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-19-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d097a6f6
    • Mel Gorman's avatar
      mm, compaction: do not consider a need to reschedule as contention · cf66f070
      Mel Gorman authored
Scanning on large machines can take a considerable length of time and
eventually needs to be rescheduled.  This is treated as an abort event
but that's not appropriate as the attempt is likely to be retried after
making numerous checks and taking another cycle through the page
allocator.  This patch checks the need to reschedule and yields if
necessary, but continues the scanning.
      
      The main benefit is reduced scanning when compaction is taking a long
      time or the machine is over-saturated.  It also avoids an unnecessary
      exit of compaction that ends up being retried by the page allocator in
      the outer loop.
      
                                           5.0.0-rc1              5.0.0-rc1
                                    synccached-v3r16        noresched-v3r17
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      2958.27 (   0.00%)     2965.68 (  -0.25%)
      Amean     fault-both-5      4091.90 (   0.00%)     3995.90 (   2.35%)
      Amean     fault-both-7      5803.05 (   0.00%)     5842.12 (  -0.67%)
      Amean     fault-both-12     9481.06 (   0.00%)     9550.87 (  -0.74%)
      Amean     fault-both-18    14141.51 (   0.00%)    13304.72 (   5.92%)
      Amean     fault-both-24    16438.00 (   0.00%)    14618.59 (  11.07%)
      Amean     fault-both-30    17531.72 (   0.00%)    16650.96 (   5.02%)
      Amean     fault-both-32    17101.96 (   0.00%)    17145.15 (  -0.25%)
      
Link: http://lkml.kernel.org/r/20190118175136.31341-18-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf66f070
    • Mel Gorman's avatar
      mm, compaction: rework compact_should_abort as compact_check_resched · cb810ad2
      Mel Gorman authored
      With incremental changes, compact_should_abort no longer makes any
      documented sense.  Rename to compact_check_resched and update the
      associated comments.  There is no benefit other than reducing redundant
code and making the intent slightly clearer.  It could potentially be
merged with earlier patches but that would just make the review slightly
harder.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-17-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb810ad2
    • Mel Gorman's avatar
      mm, compaction: keep cached migration PFNs synced for unusable pageblocks · 8854c55f
      Mel Gorman authored
      Migrate has separate cached PFNs for ASYNC and SYNC* migration on the
      basis that some migrations will fail in ASYNC mode.  However, if the
      cached PFNs match at the start of scanning and pageblocks are skipped
      due to having no isolation candidates, then the sync state does not
      matter.  This patch keeps matching cached PFNs in sync until a pageblock
      with isolation candidates is found.
      
The actual benefit is marginal given that the sync scanner following the
async scanner will often skip a number of pageblocks anyway, but that
skipping is useless work.  Any benefit depends heavily on whether the
scanners restarted recently.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-16-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8854c55f
    • Mel Gorman's avatar
      mm, compaction: check early for huge pages encountered by the migration scanner · 9bebefd5
      Mel Gorman authored
      When scanning for sources or targets, PageCompound is checked for huge
      pages as they can be skipped quickly but it happens relatively late
      after a lot of setup and checking.  This patch short-cuts the check to
      make it earlier.  It might still change when the lock is acquired but
      this has less overhead overall.  The free scanner advances but the
      migration scanner does not.  Typically the free scanner encounters more
      movable blocks that change state over the lifetime of the system and
      also tends to scan more aggressively as it's actively filling its
      portion of the physical address space with data.  This could change in
      the future but for the moment, this worked better in practice and
      incurred fewer scan restarts.
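
As a standalone illustration of why the early short-cut is cheap
(simplified; 'struct page' here is a stand-in, not the kernel's):

    #include <stdbool.h>
    #include <stdio.h>

    struct page {
        bool compound;          /* head of a huge page? */
        unsigned int order;     /* its order, if so */
    };

    /* Check for a huge page before any locking or per-page setup and
     * jump past the whole compound range in one step. */
    static unsigned long next_pfn(const struct page *page, unsigned long pfn)
    {
        if (page->compound)
            return pfn + (1UL << page->order);
        return pfn + 1;
    }

    int main(void)
    {
        struct page thp = { .compound = true, .order = 9 };

        printf("skip from pfn 4096 to %lu\n", next_pfn(&thp, 4096)); /* 4608 */
        return 0;
    }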
      
      The impact on latency and allocation success rates is marginal but the
      free scan rates are reduced by 15% and system CPU usage is reduced by
      3.3%.  The 2-socket results are not materially different.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-15-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9bebefd5
    • Mel Gorman's avatar
      mm, compaction: finish pageblock scanning on contention · cb2dcaf0
      Mel Gorman authored
Async migration aborts on spinlock contention but contention can be high
when there are multiple compaction attempts and kswapd is active.  The
consequence is that the migration scanners move forward uselessly while
still contending on locks for longer, leaving suitable migration
sources behind.
      
This patch will acquire the lock but track when contention occurs.  When
it does, the current pageblock is finished, as compaction may succeed for
that block, and only then does the scan abort.  This will have a variable
impact on latency as in some cases useless scanning is avoided (reduces
latency) but a lock will be contended (increases latency) or a single
contended pageblock is scanned that would otherwise have been skipped
(increases latency).
      
                                           5.0.0-rc1              5.0.0-rc1
                                      norescan-v3r16    finishcontend-v3r16
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3002.07 (   0.00%)     3153.17 (  -5.03%)
      Amean     fault-both-5      4684.47 (   0.00%)     4280.52 (   8.62%)
      Amean     fault-both-7      6815.54 (   0.00%)     5811.50 *  14.73%*
      Amean     fault-both-12    10864.02 (   0.00%)     9276.85 (  14.61%)
      Amean     fault-both-18    12247.52 (   0.00%)    11032.67 (   9.92%)
      Amean     fault-both-24    15683.99 (   0.00%)    14285.70 (   8.92%)
      Amean     fault-both-30    18620.02 (   0.00%)    16293.76 *  12.49%*
      Amean     fault-both-32    19250.28 (   0.00%)    16721.02 *  13.14%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                 norescan-v3r16    finishcontend-v3r16
      Percentage huge-1         0.00 (   0.00%)        0.00 (   0.00%)
      Percentage huge-3        95.00 (   0.00%)       96.82 (   1.92%)
      Percentage huge-5        94.22 (   0.00%)       95.40 (   1.26%)
      Percentage huge-7        92.35 (   0.00%)       95.92 (   3.86%)
      Percentage huge-12       91.90 (   0.00%)       96.73 (   5.25%)
      Percentage huge-18       89.58 (   0.00%)       96.77 (   8.03%)
      Percentage huge-24       90.03 (   0.00%)       96.05 (   6.69%)
      Percentage huge-30       89.14 (   0.00%)       96.81 (   8.60%)
      Percentage huge-32       90.58 (   0.00%)       97.41 (   7.54%)
      
      There is a variable impact that is mostly good on latency while allocation
      success rates are slightly higher.  System CPU usage is reduced by about
      10% but scan rate impact is mixed
      
      Compaction migrate scanned    27997659.00    20148867
      Compaction free scanned      120782791.00   118324914
      
Migration scan rates are reduced by 28%, which is expected as a pageblock
is now used by the async scanner instead of skipped.  The impact on the free
      scanner is known to be variable.  Overall the primary justification for
      this patch is that completing scanning of a pageblock is very important
      for later patches.
      
      [yuehaibing@huawei.com: fix unused variable warning]
Link: http://lkml.kernel.org/r/20190118175136.31341-14-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: YueHaibing <yuehaibing@huawei.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb2dcaf0
    • Mel Gorman's avatar
      mm, compaction: avoid rescanning the same pageblock multiple times · 804d3121
      Mel Gorman authored
      Pageblocks are marked for skip when no pages are isolated after a scan.
      However, it's possible to hit corner cases where the migration scanner
      gets stuck near the boundary between the source and target scanner.  Due
      to pages being migrated in blocks of COMPACT_CLUSTER_MAX, pages that are
      migrated can be reallocated before the pageblock is complete.  The
      pageblock is not necessarily skipped so it can be rescanned multiple
      times.  Similarly, a pageblock with some dirty/writeback pages may fail
      to migrate and be rescanned until writeback completes which is wasteful.
      
      This patch tracks if a pageblock is being rescanned.  If so, then the
      entire pageblock will be migrated as one operation.  This narrows the
      race window during which pages can be reallocated during migration.
      Secondly, if there are pages that cannot be isolated then the pageblock
      will still be fully scanned and marked for skipping.  On the second
      rescan, the pageblock skip is set and the migration scanner makes
      progress.
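
A standalone sketch of the rescan tracking (the names and batch
constants are assumptions for illustration):

    #include <stdbool.h>
    #include <stdio.h>

    #define COMPACT_CLUSTER_MAX 32UL
    #define PAGEBLOCK_NR_PAGES  512UL

    struct compact_state {
        bool rescan;    /* set when this pageblock was tried before */
    };

    /* On a rescan, migrate the whole pageblock as one operation to
     * narrow the window in which migrated pages can be reallocated
     * behind the scanner. */
    static unsigned long migrate_batch(const struct compact_state *cs)
    {
        return cs->rescan ? PAGEBLOCK_NR_PAGES : COMPACT_CLUSTER_MAX;
    }

    int main(void)
    {
        struct compact_state first = { .rescan = false };
        struct compact_state again = { .rescan = true };

        printf("first pass batch: %lu pages\n", migrate_batch(&first));
        printf("rescan batch:     %lu pages\n", migrate_batch(&again));
        return 0;
    }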
      
                                           5.0.0-rc1              5.0.0-rc1
                                      findfree-v3r16         norescan-v3r16
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3200.68 (   0.00%)     3002.07 (   6.21%)
      Amean     fault-both-5      4847.75 (   0.00%)     4684.47 (   3.37%)
      Amean     fault-both-7      6658.92 (   0.00%)     6815.54 (  -2.35%)
      Amean     fault-both-12    11077.62 (   0.00%)    10864.02 (   1.93%)
      Amean     fault-both-18    12403.97 (   0.00%)    12247.52 (   1.26%)
      Amean     fault-both-24    15607.10 (   0.00%)    15683.99 (  -0.49%)
      Amean     fault-both-30    18752.27 (   0.00%)    18620.02 (   0.71%)
      Amean     fault-both-32    21207.54 (   0.00%)    19250.28 *   9.23%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                 findfree-v3r16         norescan-v3r16
      Percentage huge-3        96.86 (   0.00%)       95.00 (  -1.91%)
      Percentage huge-5        93.72 (   0.00%)       94.22 (   0.53%)
      Percentage huge-7        94.31 (   0.00%)       92.35 (  -2.08%)
      Percentage huge-12       92.66 (   0.00%)       91.90 (  -0.82%)
      Percentage huge-18       91.51 (   0.00%)       89.58 (  -2.11%)
      Percentage huge-24       90.50 (   0.00%)       90.03 (  -0.52%)
      Percentage huge-30       91.57 (   0.00%)       89.14 (  -2.65%)
      Percentage huge-32       91.00 (   0.00%)       90.58 (  -0.46%)
      
Negligible difference, but this was likely a case where the specific
corner case was not hit.  A previous run of the same patch based on an
earlier iteration of the series showed large differences where migration
rates could be halved when the corner case was hit.

The specific corner case where migration scan rates go through the roof,
a dirty/writeback pageblock located at the boundary of the
migration/free scanners, did not happen in this case.  When it does
happen, the scan rates multiply by massive margins.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-13-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      804d3121
    • Mel Gorman's avatar
      mm, compaction: use free lists to quickly locate a migration target · 5a811889
      Mel Gorman authored
      Similar to the migration scanner, this patch uses the free lists to
      quickly locate a migration target.  The search is different in that
      lower orders will be searched for a suitable high PFN if necessary but
      the search is still bound.  This is justified on the grounds that the
      free scanner typically scans linearly much more than the migration
      scanner.
      
If a free page is found, it is isolated and compaction continues if
enough pages were isolated.  For SYNC* scanning, the full pageblock is
scanned for any remaining free pages so that it can be marked for
skipping in the near future.
      
      1-socket thpfioscale
                                           5.0.0-rc1              5.0.0-rc1
                                       isolmig-v3r15         findfree-v3r16
      Amean     fault-both-3      3024.41 (   0.00%)     3200.68 (  -5.83%)
      Amean     fault-both-5      4749.30 (   0.00%)     4847.75 (  -2.07%)
      Amean     fault-both-7      6454.95 (   0.00%)     6658.92 (  -3.16%)
      Amean     fault-both-12    10324.83 (   0.00%)    11077.62 (  -7.29%)
      Amean     fault-both-18    12896.82 (   0.00%)    12403.97 (   3.82%)
      Amean     fault-both-24    13470.60 (   0.00%)    15607.10 * -15.86%*
      Amean     fault-both-30    17143.99 (   0.00%)    18752.27 (  -9.38%)
      Amean     fault-both-32    17743.91 (   0.00%)    21207.54 * -19.52%*
      
      The impact on latency is variable but the search is optimistic and
      sensitive to the exact system state.  Success rates are similar but the
      major impact is to the rate of scanning
      
                                      5.0.0-rc1      5.0.0-rc1
                                  isolmig-v3r15 findfree-v3r16
      Compaction migrate scanned    25646769          29507205
      Compaction free scanned      201558184         100359571
      
      The free scan rates are reduced by 50%.  The 2-socket reductions for the
      free scanner are more dramatic which is a likely reflection that the
      machine has more memory.
      
      [dan.carpenter@oracle.com: fix static checker warning]
      [vbabka@suse.cz: correct number of pages scanned for lower orders]
Link: http://lkml.kernel.org/r/20190118175136.31341-12-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5a811889
    • Mel Gorman's avatar
      mm, compaction: keep migration source private to a single compaction instance · e380bebe
      Mel Gorman authored
Due to either a fast search of the free list or a linear scan, it is
possible for multiple compaction instances to pick the same pageblock
for migration.  This is lucky for one scanner and means increased
scanning for all the others.  It also allows a race between requests
over which one first allocates the resulting free block.
      
      This patch tests and updates the pageblock skip for the migration
      scanner carefully.  When isolating a block, it will check and skip if
      the block is already in use.  Once the zone lock is acquired, it will be
      rechecked so that only one scanner can set the pageblock skip for
      exclusive use.  Any scanner contending will continue with a linear scan.
      The skip bit is still set if no pages can be isolated in a range.  While
      this may result in redundant scanning, it avoids unnecessarily acquiring
      the zone lock when there are no suitable migration sources.
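
The check/recheck can be modelled standalone with a mutex standing in
for the zone lock; this is a sketch of the idea, not the kernel code:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;

    struct pageblock { bool skip; };

    /* Returns true if this scanner claimed the block for exclusive use;
     * a contending scanner sees skip already set and falls back to the
     * linear scan instead of fighting over the same block. */
    static bool claim_pageblock(struct pageblock *pb)
    {
        bool claimed = false;

        if (pb->skip)                   /* cheap unlocked test first */
            return false;
        pthread_mutex_lock(&zone_lock);
        if (!pb->skip) {                /* recheck under the lock */
            pb->skip = true;
            claimed = true;
        }
        pthread_mutex_unlock(&zone_lock);
        return claimed;
    }

    int main(void)
    {
        struct pageblock pb = { .skip = false };

        printf("first scanner claims:  %d\n", claim_pageblock(&pb)); /* 1 */
        printf("second scanner claims: %d\n", claim_pageblock(&pb)); /* 0 */
        return 0;
    }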
      
1-socket thpscale
                                     5.0.0-rc1              5.0.0-rc1
                                 findmig-v3r15          isolmig-v3r15
      Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
      Amean     fault-both-3      3390.40 (   0.00%)     3024.41 (  10.80%)
      Amean     fault-both-5      5082.28 (   0.00%)     4749.30 (   6.55%)
      Amean     fault-both-7      7012.51 (   0.00%)     6454.95 (   7.95%)
      Amean     fault-both-12    11346.63 (   0.00%)    10324.83 (   9.01%)
      Amean     fault-both-18    15324.19 (   0.00%)    12896.82 *  15.84%*
      Amean     fault-both-24    16088.50 (   0.00%)    13470.60 *  16.27%*
      Amean     fault-both-30    18723.42 (   0.00%)    17143.99 (   8.44%)
      Amean     fault-both-32    18612.01 (   0.00%)    17743.91 (   4.66%)
      
                                      5.0.0-rc1              5.0.0-rc1
                                  findmig-v3r15          isolmig-v3r15
      Percentage huge-3        89.83 (   0.00%)       92.96 (   3.48%)
      Percentage huge-5        91.96 (   0.00%)       93.26 (   1.41%)
      Percentage huge-7        92.85 (   0.00%)       93.63 (   0.84%)
      Percentage huge-12       92.74 (   0.00%)       92.80 (   0.07%)
      Percentage huge-18       91.71 (   0.00%)       91.62 (  -0.10%)
      Percentage huge-24       92.13 (   0.00%)       91.50 (  -0.69%)
      Percentage huge-30       93.79 (   0.00%)       92.73 (  -1.13%)
      Percentage huge-32       91.27 (   0.00%)       91.94 (   0.74%)
      
This shows a reasonable reduction in latency, as multiple compaction
scanners do not operate on the same blocks, with a similar allocation
success rate.
      
      Compaction migrate scanned    41093126    25646769
      
      Migration scan rates are reduced by 38%.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-11-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e380bebe
    • Mel Gorman's avatar
      mm, compaction: use free lists to quickly locate a migration source · 70b44595
      Mel Gorman authored
The migration scanner is a linear scan of a zone with a potentially large
      search space.  Furthermore, many pageblocks are unusable such as those
      filled with reserved pages or partially filled with pages that cannot
      migrate.  These still get scanned in the common case of allocating a THP
      and the cost accumulates.
      
      The patch uses a partial search of the free lists to locate a migration
      source candidate that is marked as MOVABLE when allocating a THP.  It
      prefers picking a block with a larger number of free pages already on
      the basis that there are fewer pages to migrate to free the entire
      block.  The lowest PFN found during searches is tracked as the basis of
      the start for the linear search after the first search of the free list
      fails.  After the search, the free list is shuffled so that the next
      search will not encounter the same page.  If the search fails then the
      subsequent searches will be shorter and the linear scanner is used.
      
If this search fails, or if the request is for a small or
unmovable/reclaimable allocation, then the linear scanner is still used.
It is somewhat pointless to use the list search in those cases.  Small
free pages must be used for the search and there is no guarantee that
contiguous movable pages are located within that block.
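
A condensed standalone model of the search policy described above; the
list layout and names are illustrative assumptions, and the post-search
shuffle of the list is omitted:

    #include <stdbool.h>
    #include <stdio.h>

    struct free_area_entry {
        unsigned long pfn;
        bool movable;           /* pageblock is MIGRATE_MOVABLE */
    };

    /* Bounded walk of a free list: pick the first MOVABLE candidate and
     * remember the lowest PFN seen as the restart point for the linear
     * scanner if the fast search comes up empty. */
    static long find_source(const struct free_area_entry *list, int n,
                            int limit, unsigned long *low_pfn)
    {
        for (int i = 0; i < n && i < limit; i++) {
            if (list[i].pfn < *low_pfn)
                *low_pfn = list[i].pfn;
            if (list[i].movable)
                return i;       /* fast search succeeded */
        }
        return -1;              /* fall back to the linear scan */
    }

    int main(void)
    {
        struct free_area_entry list[] = {
            { 9000, false }, { 3000, false }, { 7000, true },
        };
        unsigned long low_pfn = ~0UL;
        long idx = find_source(list, 3, 8, &low_pfn);

        printf("candidate index %ld, linear restart pfn %lu\n", idx, low_pfn);
        return 0;
    }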
      
                                           5.0.0-rc1              5.0.0-rc1
                                       noboost-v3r10          findmig-v3r15
      Amean     fault-both-3      3771.41 (   0.00%)     3390.40 (  10.10%)
      Amean     fault-both-5      5409.05 (   0.00%)     5082.28 (   6.04%)
      Amean     fault-both-7      7040.74 (   0.00%)     7012.51 (   0.40%)
      Amean     fault-both-12    11887.35 (   0.00%)    11346.63 (   4.55%)
      Amean     fault-both-18    16718.19 (   0.00%)    15324.19 (   8.34%)
      Amean     fault-both-24    21157.19 (   0.00%)    16088.50 *  23.96%*
      Amean     fault-both-30    21175.92 (   0.00%)    18723.42 *  11.58%*
      Amean     fault-both-32    21339.03 (   0.00%)    18612.01 *  12.78%*
      
                                      5.0.0-rc1              5.0.0-rc1
                                  noboost-v3r10          findmig-v3r15
      Percentage huge-3        86.50 (   0.00%)       89.83 (   3.85%)
      Percentage huge-5        92.52 (   0.00%)       91.96 (  -0.61%)
      Percentage huge-7        92.44 (   0.00%)       92.85 (   0.44%)
      Percentage huge-12       92.98 (   0.00%)       92.74 (  -0.25%)
      Percentage huge-18       91.70 (   0.00%)       91.71 (   0.02%)
      Percentage huge-24       91.59 (   0.00%)       92.13 (   0.60%)
      Percentage huge-30       90.14 (   0.00%)       93.79 (   4.04%)
      Percentage huge-32       90.03 (   0.00%)       91.27 (   1.37%)
      
      This shows an improvement in allocation latencies with similar
      allocation success rates.  While not presented, there was a 31%
reduction in migration scanning and an 8% reduction in system CPU usage.
      A 2-socket machine showed similar benefits.
      
      [mgorman@techsingularity.net: several fixes]
        Link: http://lkml.kernel.org/r/20190204120111.GL9565@techsingularity.net
      [vbabka@suse.cz: migrate block that was found-fast, some optimisations]
Link: http://lkml.kernel.org/r/20190118175136.31341-10-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <Vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      70b44595
    • Mel Gorman's avatar
      mm, compaction: always finish scanning of a full pageblock · efe771c7
      Mel Gorman authored
      When compaction is finishing, it uses a flag to ensure the pageblock is
      complete but it makes sense to always complete migration of a pageblock.
      Minimally, skip information is based on a pageblock and partially
      scanned pageblocks may incur more scanning in the future.  The pageblock
      skip handling also becomes more strict later in the series and the hint
      is more useful if a complete pageblock was always scanned.
      
This potentially impacts latency as more scanning is done but it's not a
      consistent win or loss as the scanning is not always a high percentage
      of the pageblock and sometimes it is offset by future reductions in
      scanning.  Hence, the results are not presented this time due to a
      misleading mix of gains/losses without any clear pattern.  However, full
      scanning of the pageblock is important for later patches.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-8-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      efe771c7
    • Mel Gorman's avatar
      mm, compaction: rename map_pages to split_map_pages · 4469ab98
      Mel Gorman authored
From the function name, it is non-obvious that high-order free pages are
split into order-0 pages.  Fix it.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-6-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4469ab98
    • Mel Gorman's avatar
      mm, compaction: remove unnecessary zone parameter in some instances · 40cacbcb
      Mel Gorman authored
      A zone parameter is passed into a number of top-level compaction
      functions despite the fact that it's already in compact_control.  This
      is harmless but it did need an audit to check if zone actually ever
      changes meaningfully.  This patches removes the parameter in a number of
      top-level functions.  The change could be much deeper but this was
      enough to briefly clarify the flow.
      
      No functional change.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-5-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      40cacbcb
    • Mel Gorman's avatar
      mm, compaction: remove last_migrated_pfn from compact_control · 566e54e1
      Mel Gorman authored
It is dubious whether the last_migrated_pfn field really helps but,
either way, the information from it can be inferred without increasing
the size of compact_control, so remove the field.
      
Link: http://lkml.kernel.org/r/20190118175136.31341-4-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      566e54e1
    • Matthew Wilcox's avatar
      mm: remove sysctl_extfrag_handler() · 6b7e5cad
      Matthew Wilcox authored
      sysctl_extfrag_handler() neglects to propagate the return value from
      proc_dointvec_minmax() to its caller.  It's a wrapper that doesn't need
      to exist, so just use proc_dointvec_minmax() directly.
      
Link: http://lkml.kernel.org/r/20190104032557.3056-1-willy@infradead.org
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reported-by: Aditya Pakki <pakki001@umn.edu>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6b7e5cad
  6. Dec 28, 2018
  7. Oct 26, 2018
    • Johannes Weiner's avatar
      psi: pressure stall information for CPU, memory, and IO · eb414681
      Johannes Weiner authored
When systems are overcommitted and resources become contended, it's hard
to tell exactly what impact this has on workload productivity, or how close
      the system is to lockups and OOM kills.  In particular, when machines work
      multiple jobs concurrently, the impact of overcommit in terms of latency
      and throughput on the individual job can be enormous.
      
      In order to maximize hardware utilization without sacrificing individual
      job health or risking complete machine lockups, this patch implements a
      way to quantify resource pressure in the system.
      
      A kernel built with CONFIG_PSI=y creates files in /proc/pressure/ that
      expose the percentage of time the system is stalled on CPU, memory, or IO,
      respectively.  Stall states are aggregate versions of the per-task delay
      accounting delays:
      
             cpu: some tasks are runnable but not executing on a CPU
             memory: tasks are reclaiming, or waiting for swapin or thrashing cache
             io: tasks are waiting for io completions
      
      These percentages of walltime can be thought of as pressure percentages,
      and they give a general sense of system health and productivity loss
      incurred by resource overcommit.  They can also indicate when the system
      is approaching lockup scenarios and OOMs.
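      
      As a userspace illustration, a minimal reader for one of these files
      (the avg10/avg60/avg300 line shape in the comment is indicative; check
      the running kernel for the exact fields):
      
        #include <stdio.h>
      
        int main(void)
        {
                char line[256];
                FILE *f = fopen("/proc/pressure/memory", "r");
      
                if (!f) {
                        perror("fopen");  /* kernel without CONFIG_PSI? */
                        return 1;
                }
                /* Expected shape (illustrative):
                 *   some avg10=0.00 avg60=0.00 avg300=0.00 total=0 */
                while (fgets(line, sizeof(line), f))
                        fputs(line, stdout);
                fclose(f);
                return 0;
        }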
      
      To do this, psi keeps track of the task states associated with each CPU
      and samples the time they spend in stall states.  Every 2 seconds, the
      samples are averaged across CPUs - weighted by the CPUs' non-idle time to
      eliminate artifacts from unused CPUs - and translated into percentages of
      walltime.  A running average of those percentages is maintained over 10s,
      1m, and 5m periods (similar to the load average).
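      
      The running averages can be sketched as ordinary exponentially decaying
      averages in the loadavg style; the constants below are illustrative,
      not psi's fixed-point internals:
      
        #include <math.h>
        #include <stdio.h>
      
        /* Fold one 2-second stall sample into 10s/1m/5m averages.
         * 'stalled_pct' is stall time as a percentage of walltime for
         * the sample window, already weighted by non-idle CPU time. */
        static void update_averages(double avg[3], double stalled_pct)
        {
                static const double periods[3] = { 10.0, 60.0, 300.0 };
                const double dt = 2.0;  /* sampling interval */
                int i;
      
                for (i = 0; i < 3; i++) {
                        double decay = exp(-dt / periods[i]);
      
                        avg[i] = avg[i] * decay + stalled_pct * (1.0 - decay);
                }
        }
      
        int main(void)
        {
                double avg[3] = { 0.0, 0.0, 0.0 };
      
                update_averages(avg, 25.0);  /* 25% of the window stalled */
                printf("avg10=%.2f avg60=%.2f avg300=%.2f\n",
                       avg[0], avg[1], avg[2]);
                return 0;  /* build with -lm */
        }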
      
      [hannes@cmpxchg.org: doc fixlet, per Randy]
        Link: http://lkml.kernel.org/r/20180828205625.GA14030@cmpxchg.org
      [hannes@cmpxchg.org: code optimization]
        Link: http://lkml.kernel.org/r/20180907175015.GA8479@cmpxchg.org
      [hannes@cmpxchg.org: rename psi_clock() to psi_update_work(), per Peter]
        Link: http://lkml.kernel.org/r/20180907145404.GB11088@cmpxchg.org
      [hannes@cmpxchg.org: fix build]
        Link: http://lkml.kernel.org/r/20180913014222.GA2370@cmpxchg.org
      Link: http://lkml.kernel.org/r/20180828172258.3185-9-hannes@cmpxchg.org
      
      
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eb414681
  8. Jun 14, 2018
  9. May 24, 2018
    • Revert "mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE" · d883c6cf
      Joonsoo Kim authored
      
      This reverts the following commits that change CMA design in MM.
      
       3d2054ad ("ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y")
      
       1d47a3ec ("mm/cma: remove ALLOC_CMA")
      
       bad8c6c0 ("mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE")
      
      Ville reported the following error on i386.
      
        Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
        microcode: microcode updated early to revision 0x4, date = 2013-06-28
        Initializing CPU#0
        Initializing HighMem for node 0 (000377fe:00118000)
        Initializing Movable for node 0 (00000001:00118000)
        BUG: Bad page state in process swapper  pfn:377fe
        page:f53effc0 count:0 mapcount:-127 mapping:00000000 index:0x0
        flags: 0x80000000()
        raw: 80000000 00000000 00000000 ffffff80 00000000 00000100 00000200 00000001
        page dumped because: nonzero mapcount
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper Not tainted 4.17.0-rc5-elk+ #145
        Hardware name: Dell Inc. Latitude E5410/03VXMC, BIOS A15 07/11/2013
        Call Trace:
         dump_stack+0x60/0x96
         bad_page+0x9a/0x100
         free_pages_check_bad+0x3f/0x60
         free_pcppages_bulk+0x29d/0x5b0
         free_unref_page_commit+0x84/0xb0
         free_unref_page+0x3e/0x70
         __free_pages+0x1d/0x20
         free_highmem_page+0x19/0x40
         add_highpages_with_active_regions+0xab/0xeb
         set_highmem_pages_init+0x66/0x73
         mem_init+0x1b/0x1d7
         start_kernel+0x17a/0x363
         i386_start_kernel+0x95/0x99
         startup_32_smp+0x164/0x168
      
      The reason for this error is that the span of ZONE_MOVABLE is extended
      to the whole node span in preparation for future CMA initialization,
      and normal memory is wrongly freed here as a result.  I submitted a fix
      and it seemed to work, but another problem then surfaced.
      
      It is too late in the cycle to fix that second problem, so I am
      reverting the series.
      
      Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d883c6cf
  10. Apr 11, 2018
  11. Apr 06, 2018
    • mm, compaction: drain pcps for zone when kcompactd fails · bc3106b2
      David Rientjes authored
      It's possible for free pages to become stranded on per-cpu pagesets
      (pcps) that, if drained, could be merged with buddy pages on the zone's
      free area to form large order pages, including up to MAX_ORDER.
      
      Consider a verbose example using the tools/vm/page-types tool at the
      beginning of a ZONE_NORMAL ('B' indicates a buddy page and 'S' indicates
      a slab page).  Pages on pcps do not have any page flags set.
      
        109954  1       _______S________________________________________________________
        109955  2       __________B_____________________________________________________
        109957  1       ________________________________________________________________
        109958  1       __________B_____________________________________________________
        109959  7       ________________________________________________________________
        109960  1       __________B_____________________________________________________
        109961  9       ________________________________________________________________
        10996a  1       __________B_____________________________________________________
        10996b  3       ________________________________________________________________
        10996e  1       __________B_____________________________________________________
        10996f  1       ________________________________________________________________
        ...
        109f8c  1       __________B_____________________________________________________
        109f8d  2       ________________________________________________________________
        109f8f  2       __________B_____________________________________________________
        109f91  f       ________________________________________________________________
        109fa0  1       __________B_____________________________________________________
        109fa1  7       ________________________________________________________________
        109fa8  1       __________B_____________________________________________________
        109fa9  1       ________________________________________________________________
        109faa  1       __________B_____________________________________________________
        109fab  1       _______S________________________________________________________
      
      The compaction migration scanner is attempting to defragment this memory
      since it is at the beginning of the zone.  It has done so quite well,
      all movable pages have been migrated.  From pfn [0x109955, 0x109fab),
      there are only buddy pages and pages without flags set.
      
      These pages may be stranded on pcps that could otherwise allow this
      memory to be coalesced if freed back to the zone free area.  It is
      possible that some of these pages may not be on pcps and that something
      has called alloc_pages() and used the memory directly, but we rely on
      the absence of __GFP_MOVABLE in these cases to allocate from
      MIGRATE_UNMOVABLE pageblocks to try to keep these MIGRATE_MOVABLE
      pageblocks as free as possible.
      
      These buddy and pcp pages, spanning 1,621 pages, could be coalesced and
      allow for three transparent hugepages to be dynamically allocated.
      Running the numbers for all such spans on the system, it was found that
      there were over 400 such spans of only buddy pages and pages without
      flags set at the time this /proc/kpageflags sample was collected.
      Without this support, there were _no_ order-9 or order-10 pages free.
      
      When kcompactd fails to defragment memory such that a cc.order page can
      be allocated, drain all pcps for the zone back to the buddy allocator so
      this stranding cannot occur.  Compaction for that order will
      subsequently be deferred, which acts as a ratelimit on this drain.
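      
      A condensed sketch of where the drain slots in (not the verbatim kernel
      code: drain_all_pages() and defer_compaction() are the real interfaces,
      but the surrounding control flow is simplified):
      
        static void kcompactd_do_work_sketch(struct zone *zone,
                                             struct compact_control *cc)
        {
                enum compact_result status = compact_zone(zone, cc);
      
                if (status == COMPACT_COMPLETE) {
                        /* Compaction finished without producing a
                         * cc->order page: pcp pages may be stranding
                         * mergeable buddies, so flush them back. */
                        drain_all_pages(zone);
                        /* Deferring compaction for this order doubles
                         * as a ratelimit on the drain itself. */
                        defer_compaction(zone, cc->order);
                }
        }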
      
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1803010340100.88270@chino.kir.corp.google.com
      
      
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bc3106b2
  12. Feb 01, 2018
  13. Nov 18, 2017