Lines Matching full:pages
88 admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
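A minimal sketch of that default, assuming 4 KiB pages (the names and values here are illustrative, not kernel symbols)::

    # admin_reserve_kbytes default: min(3% of free pages, 8MB), in KiB
    PAGE_SIZE_KIB = 4  # assumed page size

    def default_admin_reserve_kbytes(free_pages):
        three_percent_kib = free_pages * 3 // 100 * PAGE_SIZE_KIB
        return min(three_percent_kib, 8 * 1024)  # 8MB cap, expressed in KiB

    # e.g. 1,000,000 free pages: min(120,000 KiB, 8,192 KiB) -> 8192
    print(default_admin_reserve_kbytes(1_000_000))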
117 huge pages although processes will also directly compact memory as required.
127 Note that compaction has a non-trivial system-wide impact as pages
140 allowed to examine the unevictable lru (mlocked pages) for pages to compact.
143 compaction from moving pages that are unevictable. Default value is 1.
165 Contains, as a percentage of total available memory that contains free pages
166 and reclaimable pages, the number of pages at which the background kernel
183 Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
200 Contains, as a percentage of total available memory that contains free pages
201 and reclaimable pages, the number of pages at which a process which is
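A rough sketch of how such a ratio becomes an absolute page threshold ("total available memory" meaning free plus reclaimable pages; the function is illustrative)::

    # Hypothetical: translate dirty_ratio into a page count.
    def dirty_threshold_pages(free_pages, reclaimable_pages, dirty_ratio):
        dirtyable = free_pages + reclaimable_pages
        return dirtyable * dirty_ratio // 100

    # e.g. 20% of 500,000 dirtyable pages -> 100,000 pages
    print(dirty_threshold_pages(400_000, 100_000, 20))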
210 When a lazytime inode is constantly having its pages dirtied, the inode with
274 solution for memory pages having (excessive) corrected memory errors.
282 transparent hugepage into raw pages, then migrates only the raw error page.
289 pages without compensation, reducing the capacity of the HugeTLB pool by 1.
296 memory pages. When set to 1, the kernel attempts to soft offline the pages
298 the request to soft offline the pages. Its default value is 1.
301 following requests to soft offline pages will not be performed:
303 - Request to soft offline pages from RAS Correctable Errors Collector.
305 - On ARM, the request to soft offline pages from GHES driver.
307 - On PARISC, the request to soft offline pages from Page Deallocation Table.
397 pages for each zone from them. These are shown as an array of protection pages
399 Each zone has an array of protection pages like this::
402 pages free 1355
418 In this example, if normal pages (index=2) are requested from this DMA zone and
444 256 means 1/256. The number of protection pages becomes about 0.39% of the
445 total managed pages of higher zones on the node.
447 If you would like to protect more pages, smaller values are effective.
449 disables protection of the pages.
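A simplified sketch of the protection calculation and of the usability check from the example above (the managed-page count is back-computed for illustration)::

    # Each ratio entry reserves 1/ratio of the higher zones' managed pages.
    def protection_pages(higher_zones_managed_pages, ratio):
        return higher_zones_managed_pages // ratio  # 256 -> about 0.39%

    # A zone is skipped when pages_free < watermark + protection[index].
    def zone_usable(pages_free, watermark, protection):
        return pages_free >= watermark + protection

    print(protection_pages(513_024, 256))  # -> 2004 pages (~0.39%)
    print(zone_usable(1355, 4, 2004))      # False: 1355 < 4 + 2004 = 2008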
495 for a few types of pages, like kernel internally allocated data or
496 the swap cache, but works for the majority of user pages.
526 Each lowmem zone gets a number of reserved free pages based
541 A percentage of the total pages in each zone. On Zone reclaim
543 than this percentage of pages in a zone are reclaimable slab pages.
559 This is a percentage of the total pages in each zone. Zone reclaim will
560 only occur if more than this percentage of pages are in a state that
564 against all file-backed unmapped pages including swapcache pages and tmpfs
565 files. Otherwise, only unmapped pages backed by normal files but not tmpfs
576 accidentally operate based on the information in the first couple of pages
628 Once enabled, the vmemmap pages of HugeTLB pages subsequently allocated from
629 the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
630 per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
631 optimized. When those optimized HugeTLB pages are freed from the HugeTLB pool
632 to the buddy allocator, the vmemmap pages representing that range need to be
633 remapped again and the vmemmap pages discarded earlier need to be reallocated
634 again. If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
635 never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
636 'nr_overcommit_hugepages', those overcommitted HugeTLB pages are allocated 'on
639 of allocating or freeing HugeTLB pages between the HugeTLB pool and the buddy
641 pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB
642 pool to the buddy allocator since the allocation of vmemmap pages could be
645 Once disabled, the vmemmap pages of HugeTLB pages subsequently allocated from
647 time from the buddy allocator disappears, whereas already optimized HugeTLB pages
649 pages, you can set "nr_hugepages" to 0 first and then disable this. Note that
650 writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus
651 pages. So, those surplus pages are still optimized until they are no longer
652 in use. You would need to wait for those surplus pages to be released
653 before no optimized pages remain in the system.
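The "7 pages per 2MB / 4095 pages per 1GB" figures follow from simple arithmetic, assuming 4 KiB base pages and a 64-byte struct page, with the optimization keeping one vmemmap page and freeing the rest::

    BASE_PAGE = 4096   # assumed base page size
    STRUCT_PAGE = 64   # assumed sizeof(struct page)

    def vmemmap_pages_saved(hugepage_size):
        base_pages = hugepage_size // BASE_PAGE
        vmemmap_pages = base_pages * STRUCT_PAGE // BASE_PAGE
        return vmemmap_pages - 1  # one vmemmap page is kept

    print(vmemmap_pages_saved(2 * 1024**2))  # 7
    print(vmemmap_pages_saved(1 * 1024**3))  # 4095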
683 trims excess pages aggressively. Any value >= 1 acts as the watermark where
830 page-cluster controls the number of pages up to which consecutive pages
837 it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
840 The default value is three (eight pages at a time). There may be some
846 that consecutive pages readahead would have brought in.
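Restating the logarithm (illustrative only)::

    # page-cluster is a log2 value: 2**value consecutive pages per attempt.
    def swap_readahead_pages(page_cluster):
        return 1 << page_cluster  # 0 -> 1, 1 -> 2, 2 -> 4, 3 -> 8

    print(swap_readahead_pages(3))  # default: eight pages at a time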
889 This is the fraction of pages in each zone that can be stored to
892 that we do not allow more than 1/8th of pages in each zone to be stored
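A hedged sketch of the bound, assuming the division across online CPUs described in the full document (the function is illustrative, not the kernel's exact formula)::

    # Upper bound on a per-cpu page list, given the fraction sysctl.
    def pcp_high_bound(zone_managed_pages, fraction, online_cpus):
        return zone_managed_pages // fraction // online_cpus

    # With the minimum fraction of 8, at most 1/8th of the zone's pages
    # can sit on per-cpu page lists in total.
    print(pcp_high_bound(1_000_000, 8, 16))  # -> 7812 pages per list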
949 cache and swap-backed pages equally; lower values signify more
965 file-backed pages is less than the high watermark in a zone.
1030 reclaimed if pages of different mobility are being mixed within pageblocks.
1033 allocations, THP and hugetlbfs pages.
1041 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1058 that the number of free pages kswapd maintains for latency reasons is
1075 2 Zone reclaim writes dirty pages out
1076 4 Zone reclaim swaps pages
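These values form a bitmask and are ORed together; a small illustration (constant names are ours, mirroring the table)::

    RECLAIM_ZONE  = 1  # zone reclaim on
    RECLAIM_WRITE = 2  # zone reclaim writes dirty pages out
    RECLAIM_UNMAP = 4  # zone reclaim swaps pages

    # e.g. enable zone reclaim and allow writing out dirty pages:
    mode = RECLAIM_ZONE | RECLAIM_WRITE
    print(mode)  # -> 3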
1088 allocating off node pages.
1090 Allowing zone reclaim to write out pages stops processes that are
1091 writing large amounts of data from dirtying pages on other nodes. Zone
1092 reclaim will write out dirty pages if a zone fills up and so effectively