Lines Matching +full:high +full:- +full:performance

13 ------------------------------------------------------------------------------
27 - admin_reserve_kbytes
28 - compact_memory
29 - compaction_proactiveness
30 - compact_unevictable_allowed
31 - dirty_background_bytes
32 - dirty_background_ratio
33 - dirty_bytes
34 - dirty_expire_centisecs
35 - dirty_ratio
36 - dirtytime_expire_seconds
37 - dirty_writeback_centisecs
38 - drop_caches
39 - enable_soft_offline
40 - extfrag_threshold
41 - highmem_is_dirtyable
42 - hugetlb_shm_group
43 - laptop_mode
44 - legacy_va_layout
45 - lowmem_reserve_ratio
46 - max_map_count
47 - mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
48 - memory_failure_early_kill
49 - memory_failure_recovery
50 - min_free_kbytes
51 - min_slab_ratio
52 - min_unmapped_ratio
53 - mmap_min_addr
54 - mmap_rnd_bits
55 - mmap_rnd_compat_bits
56 - nr_hugepages
57 - nr_hugepages_mempolicy
58 - nr_overcommit_hugepages
59 - nr_trim_pages (only if CONFIG_MMU=n)
60 - numa_zonelist_order
61 - oom_dump_tasks
62 - oom_kill_allocating_task
63 - overcommit_kbytes
64 - overcommit_memory
65 - overcommit_ratio
66 - page-cluster
67 - page_lock_unfairness
68 - panic_on_oom
69 - percpu_pagelist_high_fraction
70 - stat_interval
71 - stat_refresh
72 - numa_stat
73 - swappiness
74 - unprivileged_userfaultfd
75 - user_reserve_kbytes
76 - vfs_cache_pressure
77 - watermark_boost_factor
78 - watermark_scale_factor
79 - zone_reclaim_mode
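
All of the parameters listed above are exposed as files under /proc/sys/vm/ and
can be read or changed at run time, either directly or through sysctl(8). A
minimal sketch (vm.swappiness and the value 60 are illustrative only, not a
recommendation)::

    cat /proc/sys/vm/swappiness         # read the current value
    sysctl vm.swappiness                # same, via the sysctl utility
    sysctl -w vm.swappiness=60          # change it on the running system (root)
    echo 60 > /proc/sys/vm/swappiness   # equivalent direct write; not persistent
                                        # across reboots without sysctl.conf/sysctl.d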
127 Note that compaction has a non-trivial system-wide impact as pages
193 of a second. Data which has been dirty in-memory for longer than this
248 This is a non-destructive operation and will not free any dirty objects.
258 Use of this file can cause performance problems. Since it discards cached
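
For reference, the drop_caches interface is exercised by writing one of its
documented values; running sync first writes back dirty data so that more
objects become droppable. A sketch::

    sync                                  # write back dirty data first
    echo 1 > /proc/sys/vm/drop_caches     # free the page cache
    echo 2 > /proc/sys/vm/drop_caches     # free reclaimable slab objects (dentries, inodes)
    echo 3 > /proc/sys/vm/drop_caches     # free both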
273 Correctable memory errors are very common on servers. Soft-offline is the kernel's
276 For different types of page, soft-offline has different behaviors / costs.
278 - For a raw error page, soft-offline migrates the in-use page's content to
281 - For a page that is part of a transparent hugepage, soft-offline splits the
284 memory access performance.
286 - For a page that is part of a HugeTLB hugepage, soft-offline first migrates
292 physical memory) vs performance / capacity implications in transparent and
303 - Request to soft offline pages from RAS Correctable Errors Collector.
305 - On ARM, the request to soft offline pages from GHES driver.
307 - On PARISC, the request to soft offline pages from Page Deallocation Table.
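
Assuming the usual boolean semantics for this knob (1 lets the kernel honor
such requests, 0 makes it refuse them; verify the default on your kernel),
toggling it is a one-line write::

    cat /proc/sys/vm/enable_soft_offline        # typically 1: soft-offline enabled
    echo 0 > /proc/sys/vm/enable_soft_offline   # reject soft-offline requests, e.g. to
                                                # avoid losing hugepage capacity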
313 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
316 of memory, values towards 1000 imply failures are due to fragmentation and -1
328 This parameter controls whether the high memory is considered for dirty
337 storage more effectively. Note this also comes with a risk of premature
354 controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.
360 If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
398 in /proc/zoneinfo like the following. (This is an example from an x86-64 box.)
405 high 4
428 zone[i]->protection[j]
448 The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
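
To see what the configured ratios translate into on a given system, the
per-zone protection arrays can be read back from /proc/zoneinfo; the values in
the comments below are illustrative only::

    cat /proc/sys/vm/lowmem_reserve_ratio   # e.g. "256  256  32  0  0"
    grep protection /proc/zoneinfo          # e.g. "protection: (0, 2004, 7554, 7554)"
                                            # pages held back from lower-zone fallback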
456 may have. Memory map areas are used as a side-effect of calling
476 Enabling memory profiling introduces a small performance overhead for all
490 no other up-to-date copy of the data it will kill to prevent any data
531 become subtly broken, and prone to deadlock under high loads.
533 Setting this too high will OOM your machine instantly.
564 against all file-backed unmapped pages including swapcache pages and tmpfs
616 See Documentation/admin-guide/mm/hugetlbpage.rst
659 Change the size of the hugepage pool at run-time on a specific
662 See Documentation/admin-guide/mm/hugetlbpage.rst
671 See Documentation/admin-guide/mm/hugetlbpage.rst
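
A minimal sketch of resizing the default-sized hugepage pool through these
files and checking the result (128 and 64 are arbitrary example counts)::

    echo 128 > /proc/sys/vm/nr_hugepages             # request 128 persistent huge pages
    grep HugePages /proc/meminfo                     # HugePages_Total/_Free show the outcome
    echo 64 > /proc/sys/vm/nr_overcommit_hugepages   # permit up to 64 extra surplus pages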
679 This value adjusts the excess page trimming behaviour of power-of-2 aligned
688 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
702 In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows.
703 ZONE_NORMAL -> ZONE_DMA
710 (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
711 (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.
715 out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.
730 On 32-bit, the Normal zone needs to be preserved for allocations accessible
733 On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
743 Enables a system-wide task dump (excluding kernel threads) to be produced
744 when the kernel performs an OOM-killing and includes such information as
753 be forced to incur a performance penalty in OOM conditions when the
756 If this is set to non-zero, this information is shown whenever the
757 OOM killer actually kills a memory-hogging task.
765 This enables or disables killing the OOM-triggering task in
766 out-of-memory situations.
770 selects a rogue memory-hogging task that frees up a large amount of
773 If this is set to non-zero, the OOM killer simply kills the task that
774 triggered the out-of-memory condition. This avoids the expensive
810 programs that malloc() huge amounts of memory "just-in-case"
815 See Documentation/mm/overcommit-accounting.rst and
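
For quick reference, the three overcommit_memory modes are selected by writing
0, 1 or 2; mode 2 is normally paired with overcommit_ratio or
overcommit_kbytes to define the commit limit::

    echo 0 > /proc/sys/vm/overcommit_memory   # heuristic overcommit (default)
    echo 1 > /proc/sys/vm/overcommit_memory   # always overcommit, never refuse
    echo 2 > /proc/sys/vm/overcommit_memory   # strict accounting: commit limit is
                                              # swap + overcommit_ratio% of RAM
                                              # (or swap + overcommit_kbytes)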
827 page-cluster
830 page-cluster controls the number of pages up to which consecutive pages
834 but consecutive on swap space - that means they were swapped out together.
836 It is a logarithmic value - setting it to zero means "1 page", setting
842 swap-intensive.
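
As a worked example of the logarithmic scale: a value of 3 means 2^3 = 8
consecutive pages are read in at once, while 0 means a single page. Turning it
down is a common, though workload-dependent, choice for purely random-access
swap such as zram::

    cat /proc/sys/vm/page-cluster       # e.g. 3 -> 2^3 = 8 pages per swap read
    echo 0 > /proc/sys/vm/page-cluster  # read swapped pages one at a time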
860 This enables or disables the panic on out-of-memory feature.
866 If this is set to 1, the kernel panics when an out-of-memory condition occurs.
869 may be killed by the oom-killer. No panic occurs in this case.
874 above-mentioned. Even if OOM happens under a memory cgroup, the whole
890 per-cpu page lists. It is an upper boundary that is divided depending
893 on per-cpu page lists. This entry only changes the value of hot per-cpu
895 each zone between per-cpu lists.
897 The batch value of each per-cpu page list remains the same regardless of
898 the value of the high fraction so allocation latencies are unaffected.
900 The initial value is zero. The kernel uses this value to set the high pcp->high
916 Any read or write (by root only) flushes all the per-cpu vm statistics
920 As a side-effect, it also checks for negative totals (elsewhere reported
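
A minimal sketch of forcing such a flush (as root, per the note above) right
before sampling the counters::

    cat /proc/sys/vm/stat_refresh > /dev/null   # fold per-cpu deltas into the totals
    grep nr_free_pages /proc/vmstat             # then read the refreshed counters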
931 When page allocation performance becomes a bottleneck and you can tolerate
937 When page allocation performance is not a bottleneck and you want all
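
The two cases above correspond to a simple 0/1 write::

    echo 0 > /proc/sys/vm/numa_stat   # trade NUMA counter precision (and possibly
                                      # some tooling) for faster page allocation
    echo 1 > /proc/sys/vm/numa_stat   # keep full NUMA statistics for tooling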
949 cache and swap-backed pages equally; lower values signify more
954 experimentation and will also be workload-dependent.
958 For in-memory swap, like zram or zswap, as well as hybrid setups that
965 file-backed pages is less than the high watermark in a zone.
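
A sketch of adjusting it; 10 and 180 are illustrative values only, the right
number is workload-dependent as noted above, and values above 100 assume a
kernel that supports the extended 0-200 range::

    sysctl vm.swappiness            # current value
    sysctl -w vm.swappiness=10      # strongly prefer reclaiming page cache over swapping
    sysctl -w vm.swappiness=180     # favor swap, e.g. with fast in-memory swap like zram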
985 Documentation/admin-guide/mm/userfaultfd.rst.
1016 lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
1020 performance impact. Reclaim code needs to take various locks to find freeable
1029 It defines the percentage of the high watermark of a zone that will be
1032 increase the success rate of future high-order allocations such as SLUB
1037 15,000 means that up to 150% of the high watermark will be reclaimed in the
1041 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1056 A high rate of threads entering direct reclaim (allocstall) or kswapd
1086 and that accessing remote memory would cause a measurable performance
1093 throttle the process. This may decrease the performance of a single process
1095 anymore but it preserves the memory on other nodes so that the performance
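
For reference, zone_reclaim_mode is a bitmask (1 = zone reclaim on, 2 = also
write out dirty pages, 4 = also swap pages), so typical settings look like::

    echo 0 > /proc/sys/vm/zone_reclaim_mode   # off: fall back to other nodes (default)
    echo 1 > /proc/sys/vm/zone_reclaim_mode   # reclaim clean page cache locally first
    echo 3 > /proc/sys/vm/zone_reclaim_mode   # 1|2: additionally write out dirty pages
    echo 7 > /proc/sys/vm/zone_reclaim_mode   # 1|2|4: additionally allow swapping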