txg_sync: page allocation failure. order:0, mode:0x20 #325

Closed
tommycheng opened this issue Jul 15, 2011 · 3 comments

@tommycheng

This happened overnight while one of my virtual machines was running under high load. For about 6 hours, 2 or more GB of data at a time was being copied back and forth via NFS between the host and the VM. The destination filesystem is ZFS.

[49212.455709] txg_sync: page allocation failure. order:0, mode:0x20
[49212.455714] Pid: 4772, comm: txg_sync Tainted: P 2.6.38-10-server #46-Ubuntu
[49212.455716] Call Trace:
[49212.455723] [] ? __alloc_pages_nodemask+0x5f4/0x830
[49212.455733] [] ? kv_alloc.clone.10+0x6e/0xa0 [spl]
[49212.455736] [] ? __vmalloc_node_range+0xa5/0xb0
[49212.455739] [] ? alloc_pages_current+0xa5/0x110
[49212.455742] [] ? new_slab+0x282/0x290
[49212.455744] [] ? __slab_alloc+0x262/0x390
[49212.455748] [] ? kmem_alloc_debug+0xeb/0x130 [spl]
[49212.455750] [] ? __kmalloc+0x112/0x140
[49212.455777] [] ? arc_write_done+0x0/0x260 [zfs]
[49212.455781] [] ? kmem_alloc_debug+0xeb/0x130 [spl]
[49212.455786] [] ? task_alloc+0x10c/0x160 [spl]
[49212.455807] [] ? zio_execute+0x0/0x150 [zfs]
[49212.455812] [] ? __taskq_dispatch+0xac/0x1c0 [spl]
[49212.455828] [] ? zio_taskq_dispatch+0x9a/0xf0 [zfs]
[49212.455844] [] ? zio_nowait+0x165/0x180 [zfs]
[49212.455856] [] ? dbuf_sync_leaf.clone.6+0x119/0x340 [zfs]
[49212.455868] [] ? dbuf_sync_list+0x98/0xa0 [zfs]
[49212.455880] [] ? dbuf_sync_indirect+0xaa/0x100 [zfs]
[49212.455892] [] ? dbuf_sync_list+0x78/0xa0 [zfs]
[49212.455904] [] ? dbuf_sync_indirect+0xaa/0x100 [zfs]
[49212.455916] [] ? dbuf_sync_list+0x78/0xa0 [zfs]
[49212.455931] [] ? dnode_sync+0x517/0x9a0 [zfs]
[49212.455943] [] ? dbuf_sync_list+0x78/0xa0 [zfs]
[49212.455956] [] ? dmu_objset_sync_dnodes+0xbb/0xe0 [zfs]
[49212.455969] [] ? dmu_objset_sync+0x191/0x340 [zfs]
[49212.455982] [] ? dmu_objset_write_ready+0x0/0x50 [zfs]
[49212.455995] [] ? dmu_objset_write_done+0x0/0x70 [zfs]
[49212.456009] [] ? dsl_dataset_sync+0x57/0x70 [zfs]
[49212.456025] [] ? dsl_pool_sync+0xbc/0x4a0 [zfs]
[49212.456041] [] ? zio_destroy+0xae/0xf0 [zfs]
[49212.456057] [] ? spa_sync+0x3ab/0x9b0 [zfs]
[49212.456061] [] ? autoremove_wake_function+0x16/0x40
[49212.456063] [] ? __wake_up+0x53/0x70
[49212.456080] [] ? txg_sync_thread+0x241/0x3c0 [zfs]
[49212.456096] [] ? txg_sync_thread+0x0/0x3c0 [zfs]
[49212.456101] [] ? thread_generic_wrapper+0x78/0x90 [spl]
[49212.456105] [] ? thread_generic_wrapper+0x0/0x90 [spl]
[49212.456107] [] ? kthread+0x96/0xa0
[49212.456110] [] ? kernel_thread_helper+0x4/0x10
[49212.456112] [] ? kthread+0x0/0xa0
[49212.456114] [] ? kernel_thread_helper+0x0/0x10
[49212.456115] Mem-Info:
[49212.456116] Node 0 DMA per-cpu:
[49212.456118] CPU 0: hi: 0, btch: 1 usd: 0
[49212.456119] CPU 1: hi: 0, btch: 1 usd: 0
[49212.456121] CPU 2: hi: 0, btch: 1 usd: 0
[49212.456122] CPU 3: hi: 0, btch: 1 usd: 0
[49212.456123] CPU 4: hi: 0, btch: 1 usd: 0
[49212.456124] CPU 5: hi: 0, btch: 1 usd: 0
[49212.456125] Node 0 DMA32 per-cpu:
[49212.456127] CPU 0: hi: 186, btch: 31 usd: 0
[49212.456128] CPU 1: hi: 186, btch: 31 usd: 0
[49212.456129] CPU 2: hi: 186, btch: 31 usd: 0
[49212.456130] CPU 3: hi: 186, btch: 31 usd: 46
[49212.456132] CPU 4: hi: 186, btch: 31 usd: 35
[49212.456133] CPU 5: hi: 186, btch: 31 usd: 0
[49212.456134] Node 0 Normal per-cpu:
[49212.456135] CPU 0: hi: 186, btch: 31 usd: 0
[49212.456136] CPU 1: hi: 186, btch: 31 usd: 0
[49212.456137] CPU 2: hi: 186, btch: 31 usd: 0
[49212.456138] CPU 3: hi: 186, btch: 31 usd: 106
[49212.456140] CPU 4: hi: 186, btch: 31 usd: 0
[49212.456141] CPU 5: hi: 186, btch: 31 usd: 0
[49212.456144] active_anon:637888 inactive_anon:74312 isolated_anon:0
[49212.456144] active_file:280789 inactive_file:646180 isolated_file:0
[49212.456145] unevictable:0 dirty:79014 writeback:7957 unstable:0
[49212.456146] free:16605 slab_reclaimable:14250 slab_unreclaimable:35419
[49212.456146] mapped:4698 shmem:224 pagetables:3813 bounce:0
[49212.456148] Node 0 DMA free:15884kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15660kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[49212.456154] lowmem_reserve[]: 0 2998 16128 16128
[49212.456156] Node 0 DMA32 free:48272kB min:3016kB low:3768kB high:4524kB active_anon:263380kB inactive_anon:67272kB active_file:78204kB inactive_file:762256kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3070112kB mlocked:0kB dirty:87488kB writeback:0kB mapped:64kB shmem:0kB slab_reclaimable:8008kB slab_unreclaimable:11612kB kernel_stack:128kB pagetables:616kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:97 all_unreclaimable? no
[49212.456163] lowmem_reserve[]: 0 0 13130 13130
[49212.456165] Node 0 Normal free:2264kB min:13224kB low:16528kB high:19836kB active_anon:2288172kB inactive_anon:229976kB active_file:1044952kB inactive_file:1822464kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:228568kB writeback:31828kB mapped:18728kB shmem:896kB slab_reclaimable:48992kB slab_unreclaimable:130064kB kernel_stack:4912kB pagetables:14636kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:142 all_unreclaimable? no
[49212.456171] lowmem_reserve[]: 0 0 0 0
[49212.456173] Node 0 DMA: 1*4kB 1*8kB 0*16kB 0*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15884kB
[49212.456178] Node 0 DMA32: 150*4kB 3276*8kB 738*16kB 177*32kB 43*64kB 4*128kB 3*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 48312kB
[49212.456183] Node 0 Normal: 486*4kB 46*8kB 2*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2344kB
[49212.456188] 927178 total pagecache pages
[49212.456190] 0 pages in swap cache
[49212.456207] Swap cache stats: add 0, delete 0, find 0/0
[49212.456208] Free swap = 7812092kB
[49212.456209] Total swap = 7812092kB
[49212.459660] swapper: page allocation failure. order:0, mode:0x4020
[49212.459664] Pid: 0, comm: swapper Tainted: P 2.6.38-10-server #46-Ubuntu
[49212.459665] Call Trace:
[49212.459667] [] ? __alloc_pages_nodemask+0x5f4/0x830
[49212.459675] [] ? __sg_alloc_table+0x81/0x150
[49212.459682] [] ? scsi_sg_alloc+0x0/0x50
[49212.459685] [] ? blk_rq_map_sg+0x97/0x390
[49212.459688] [] ? alloc_pages_current+0xa5/0x110
[49212.459690] [] ? new_slab+0x282/0x290
[49212.459692] [] ? __slab_alloc+0x262/0x390
[49212.459695] [] ? __netdev_alloc_skb+0x24/0x50
[49212.459697] [] ? __kmalloc_node_track_caller+0x8d/0x1a0
[49212.459698] [] ? __netdev_alloc_skb+0x24/0x50
[49212.459700] [] ? __alloc_skb+0x83/0x170
[49212.459701] [] ? __netdev_alloc_skb+0x24/0x50
[49212.459722] [] ? rtl8169_try_rx_copy.clone.54+0x90/0x110 [r8169]
[49212.459726] [] ? rtl8169_rx_interrupt.clone.56+0x160/0x370 [r8169]
[49212.459730] [] ? rtl8169_poll+0x40/0xa0 [r8169]
[49212.459733] [] ? net_rx_action+0x128/0x270
[49212.459735] [] ? __do_softirq+0xa8/0x1c0
[49212.459738] [] ? call_softirq+0x1c/0x30
[49212.459739] [] ? do_softirq+0x65/0xa0
[49212.459741] [] ? irq_exit+0x85/0x90
[49212.459743] [] ? do_IRQ+0x66/0xe0
[49212.459746] [] ? ret_from_intr+0x0/0x15
[49212.459747] [] ? native_safe_halt+0xb/0x10
[49212.459752] [] ? clockevents_notify+0x45/0x170
[49212.459754] [] ? default_idle+0x41/0xe0
[49212.459755] [] ? c1e_idle+0x9a/0x120
[49212.459758] [] ? cpu_idle+0xa6/0xf0
[49212.459760] [] ? rest_init+0x75/0x80
[49212.459763] [] ? start_kernel+0x3f5/0x400
[49212.459765] [] ? x86_64_start_reservations+0x132/0x136
[49212.459767] [] ? x86_64_start_kernel+0xd1/0xe0
[49212.459768] Mem-Info:
[49212.459769] Node 0 DMA per-cpu:
[49212.459771] CPU 0: hi: 0, btch: 1 usd: 0
[49212.459772] CPU 1: hi: 0, btch: 1 usd: 0
[49212.459773] CPU 2: hi: 0, btch: 1 usd: 0
[49212.459774] CPU 3: hi: 0, btch: 1 usd: 0
[49212.459775] CPU 4: hi: 0, btch: 1 usd: 0
[49212.459776] CPU 5: hi: 0, btch: 1 usd: 0
[49212.459777] Node 0 DMA32 per-cpu:
[49212.459778] CPU 0: hi: 186, btch: 31 usd: 0
[49212.459779] CPU 1: hi: 186, btch: 31 usd: 0
[49212.459781] CPU 2: hi: 186, btch: 31 usd: 0
[49212.459782] CPU 3: hi: 186, btch: 31 usd: 46
[49212.459783] CPU 4: hi: 186, btch: 31 usd: 157
[49212.459784] CPU 5: hi: 186, btch: 31 usd: 0
[49212.459785] Node 0 Normal per-cpu:
[49212.459786] CPU 0: hi: 186, btch: 31 usd: 0
[49212.459787] CPU 1: hi: 186, btch: 31 usd: 0
[49212.459788] CPU 2: hi: 186, btch: 31 usd: 0
[49212.459789] CPU 3: hi: 186, btch: 31 usd: 106
[49212.459790] CPU 4: hi: 186, btch: 31 usd: 159
[49212.459791] CPU 5: hi: 186, btch: 31 usd: 0
[49212.459794] active_anon:637888 inactive_anon:74312 isolated_anon:0
[49212.459795] active_file:280789 inactive_file:645198 isolated_file:0
[49212.459795] unevictable:0 dirty:79014 writeback:7957 unstable:0
[49212.459796] free:17301 slab_reclaimable:14250 slab_unreclaimable:35419
[49212.459796] mapped:4698 shmem:224 pagetables:3813 bounce:0
[49212.459798] Node 0 DMA free:15884kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15660kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[49212.459803] lowmem_reserve[]: 0 2998 16128 16128
[49212.459805] Node 0 DMA32 free:48768kB min:3016kB low:3768kB high:4524kB active_anon:263380kB inactive_anon:67272kB active_file:78204kB inactive_file:761200kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3070112kB mlocked:0kB dirty:87488kB writeback:0kB mapped:64kB shmem:0kB slab_reclaimable:8008kB slab_unreclaimable:11612kB kernel_stack:128kB pagetables:616kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[49212.459811] lowmem_reserve[]: 0 0 13130 13130
[49212.459813] Node 0 Normal free:4552kB min:13224kB low:16528kB high:19836kB active_anon:2288172kB inactive_anon:229976kB active_file:1044952kB inactive_file:1819592kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:228568kB writeback:31828kB mapped:18728kB shmem:896kB slab_reclaimable:48992kB slab_unreclaimable:130064kB kernel_stack:4912kB pagetables:14636kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:33 all_unreclaimable? no
[49212.459818] lowmem_reserve[]: 0 0 0 0
[49212.459820] Node 0 DMA: 1*4kB 1*8kB 0*16kB 0*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15884kB
[49212.459825] Node 0 DMA32: 177*4kB 3280*8kB 756*16kB 183*32kB 43*64kB 4*128kB 3*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 48932kB
[49212.459830] Node 0 Normal: 908*4kB 86*8kB 10*16kB 3*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 4576kB
[49212.459835] 926213 total pagecache pages
[49212.459836] 0 pages in swap cache
[49212.459837] Swap cache stats: add 0, delete 0, find 0/0
[49212.459837] Free swap = 7812092kB
[49212.459838] Total swap = 7812092kB
[49212.470056] 4194288 pages RAM
[49212.470056] 77418 pages reserved
[49212.470056] 906053 pages shared
[49212.470056] 3261881 pages non-shared
[49212.470056] SLUB: Unable to allocate memory on node -1 (gfp=0x20)
[49212.470056] cache: kmalloc-64, object size: 64, buffer size: 64, default order: 0, min order: 0
[49212.470056] node 0: slabs: 13613, objs: 871232, free: 3
[49212.552497] 4194288 pages RAM
[49212.552497] 77418 pages reserved
[49212.552497] 901474 pages shared
[49212.552497] 3263509 pages non-shared
[49212.552497] SLUB: Unable to allocate memory on node -1 (gfp=0x20)
[49212.552497] cache: kmalloc-1024, object size: 1024, buffer size: 1024, default order: 3, min order: 0
[49212.552497] node 0: slabs: 135, objs: 4180, free: 3

@behlendorf
Contributor

This warning wasn't fatal, correct? It simply indicates that we are pushing the Linux memory subsystem very hard and were unable to allocate a 4k page when it was needed. This sort of allocation will generally be retried, and unless it is happening constantly it shouldn't cause problems.
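
For context, mode:0x20 on this kernel is an atomic allocation that is not allowed to sleep or trigger reclaim, so it can fail when free pages run low and the kernel logs a warning like the one above. A minimal sketch of the general pattern, using a hypothetical caller rather than the actual SPL/ZFS code path:

```c
/*
 * Illustrative only -- not the actual SPL/ZFS code.  Shows why an
 * order-0 atomic allocation can fail under memory pressure and why
 * the failure is usually harmless: the caller can fall back or retry.
 */
#include <linux/slab.h>
#include <linux/gfp.h>

static void *alloc_small_buffer(size_t len)
{
	/*
	 * GFP_ATOMIC must not sleep, so when free pages are scarce it
	 * fails immediately and the kernel prints a "page allocation
	 * failure" warning.
	 */
	void *buf = kmalloc(len, GFP_ATOMIC);

	if (buf == NULL) {
		/*
		 * Fallback: GFP_KERNEL may block and reclaim memory, so
		 * it will normally succeed, but it is only safe from
		 * process context (not from interrupt or softirq context).
		 */
		buf = kmalloc(len, GFP_KERNEL);
	}

	return buf;
}
```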

@tommycheng
Author

Yes, it wasn't fatal; it was hours before I noticed it. I did a reboot afterwards anyway, just in case.

@behlendorf
Contributor

Closing; this isn't a bug, just a kernel warning that memory was very tight. We'll be making progress in other issues on improving ZFS integration with the VM subsystem to make this less likely.

kernelOfTruth pushed a commit to kernelOfTruth/zfs that referenced this issue Mar 1, 2015
When this code was written it appears to have been assumed that
every taskq would have a large number of threads.  In this case
it would make sense to attempt to evenly bind the threads over
all available CPUs.  However, it failed to consider that creating
taskqs with a small number of threads will cause the CPUs with
lower ids to become over-subscribed.

For this reason the kthread_bind() call is being removed and
we're leaving the kernel to schedule these threads as it sees fit.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes openzfs#325
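
The sketch below (illustrative only, not the actual SPL taskq code) shows the before/after of the change described in that commit: worker threads are created with kthread_create() and simply woken, instead of each being pinned to CPU (i % num_online_cpus()) with kthread_bind():

```c
/*
 * Sketch only -- not the actual SPL taskq implementation.
 */
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/err.h>

static int worker_fn(void *arg)
{
	/* Idle worker loop; a real taskq thread would process work items. */
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}

static void spawn_workers(int nthreads)
{
	int i;

	for (i = 0; i < nthreads; i++) {
		struct task_struct *t;

		t = kthread_create(worker_fn, NULL, "worker/%d", i);
		if (IS_ERR(t))
			continue;

		/*
		 * Old behaviour: pin thread i to CPU (i % NCPU).  With a
		 * small nthreads this over-subscribes the low-numbered
		 * CPUs, which is why the bind was removed.
		 *
		 * kthread_bind(t, i % num_online_cpus());
		 */

		/* New behaviour: let the scheduler place the thread. */
		wake_up_process(t);
	}
}
```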
sdimitro pushed a commit to sdimitro/zfs that referenced this issue May 23, 2022