Transparent Hugepage Support
============================

Transparent Hugepage Support (THP) is an alternative means of backing
virtual memory with huge pages, one that supports the automatic promotion
and demotion of page sizes. In the examples below we presume that the
basic page size is 4K and the huge page size is 2M, although the actual
numbers may vary depending on the CPU architecture.

Huge pages have the downside of requiring larger clear-page and copy-page
operations in page faults, but in exchange there is a single page fault
for each 2M virtual region touched by userland (so the enter/exit kernel
frequency is reduced by a factor of 512). Under virtualization, the TLB
benefit grows to the larger size only if both KVM and the Linux guest are
using hugepages, though a significant speedup already happens if only one
of the two is using them.
Modern kernels support "multi-size THP" (mTHP), which introduces the
ability to allocate memory in blocks that are bigger than a base page
but smaller than traditional PMD-size (as described above), in
increments of a power-of-2 number of pages. mTHP can back anonymous
memory (for example 16K, 32K, 64K, etc). These THPs continue to be
PTE-mapped, but in many cases can still provide similar benefits to
those outlined above: page faults are significantly reduced (by a
factor of e.g. 4, 8, 16, etc), but latency spikes are much less
prominent because the size of each page isn't as huge as the PMD-sized
variant and there is less memory to clear in each page fault. Some
architectures can also employ TLB compression mechanisms to squeeze
more entries in when a set of PTEs are virtually and physically
contiguous.
The khugepaged kernel daemon scans memory in the background and
collapses sequences of basic pages into PMD-sized huge pages.
THP is worthwhile for long-lived page allocations even for hugepage-unaware
applications that deal with large amounts of memory. Embedded systems,
however, might consider hugepages undesirable: an application may mmap a
large region but only touch 1 byte of it, in which case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.
Global THP controls
-------------------
Transparent Hugepage Support for anonymous memory can be entirely
disabled (mostly for debugging purposes), enabled only inside
MADV_HUGEPAGE regions (to avoid the risk of consuming more memory
resources), or enabled system wide. This can be achieved
per-supported-THP-size with one of::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

where <size> is the hugepage size being addressed; the available sizes
depend on the hardware. For example::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

Alternatively it is possible to specify that a given hugepage size
will inherit the top-level "enabled" value::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled

For example::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

The top-level setting (for use with "inherit") can be set by issuing
one of the following commands::

    echo always >/sys/kernel/mm/transparent_hugepage/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/enabled
By default, PMD-sized hugepages have enabled="inherit" and all other
hugepage sizes have enabled="never". If enabling multiple hugepage
sizes, the kernel will select the most appropriate enabled size for a
given allocation; the "never" setting should be self-explanatory.
By default the kernel tries to use a huge, PMD-mappable zero page on
read page faults to anonymous mappings. It's possible to disable the
huge zero page by writing 0 or enable it back by writing 1::

    echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
    echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
Some userspace (such as a test program, or an optimized memory
allocation library) may want to know the size (in bytes) of a
PMD-mappable transparent hugepage::

    cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
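For example, on a system with 4K base pages and 2M PMD-sized THPs (an
assumption; both values are hardware-dependent), the size can be read
and converted into a count of base pages like this::

    # Print the PMD-mappable THP size in bytes (e.g. 2097152 for 2M).
    cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

    # Divide by the base page size to get base pages per PMD THP
    # (e.g. 2097152 / 4096 = 512).
    echo $(( $(cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size) / $(getconf PAGE_SIZE) ))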
All THPs at fault and collapse time will be added to _deferred_list,
and will therefore be split under memory pressure if they are considered
"underused". A THP is underused if the number of zero-filled pages in
the THP is above max_ptes_none (see below). It is possible to disable
this behaviour by writing 0 to shrink_underused, and to enable it by
writing 1::

    echo 0 >/sys/kernel/mm/transparent_hugepage/shrink_underused
    echo 1 >/sys/kernel/mm/transparent_hugepage/shrink_underused
khugepaged will be automatically started when PMD-sized THP is enabled
(either of the per-size anon control or the top-level control are set
to "always" or "madvise"), and automatically shut down when PMD-sized
THP is disabled (when both the per-size anon control and the
top-level control are "never").
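Whether the daemon is currently running can be checked with standard
process tools, for example::

    # Prints a PID if khugepaged is running, nothing otherwise.
    pgrep khugepaged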
Khugepaged controls
-------------------
khugepaged currently only searches for opportunities to collapse to
PMD-sized THP and no attempt is made to collapse to other THP
sizes. It usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged.
``max_ptes_none`` specifies how many extra small pages (that are not
already mapped) can be allocated when collapsing a group
of small pages into one large page::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

``max_ptes_swap`` specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

``max_ptes_shared`` specifies how many pages can be shared across multiple
processes. khugepaged might treat pages of THPs as shared if any page of
the THP is shared::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared
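These knobs are plain integer files and can be inspected and tuned with
cat/echo. The value 511 below is only an illustration for a 4K base
page (512 PTEs per PMD), not a recommendation::

    # A higher max_ptes_none makes collapsing more aggressive.
    cat /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
    echo 511 >/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none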
You can change the sysfs boot time default for the top-level "enabled"
control by passing ``transparent_hugepage=always``,
``transparent_hugepage=madvise`` or ``transparent_hugepage=never`` on the
kernel command line.

Alternatively, each supported anonymous THP size can be controlled by
passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
where ``<size>`` is the THP size (must be a power of 2 of PAGE_SIZE and
a supported anonymous THP size) and ``<state>`` is one of ``always``,
``madvise``, ``never`` or ``inherit``. For example::

    thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never
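To make such a setting persistent, one would typically add it to the
bootloader configuration. On a Debian-style, GRUB-based system (an
assumption about your setup) this might look like::

    # /etc/default/grub (hypothetical example line)
    GRUB_CMDLINE_LINUX="transparent_hugepage=madvise thp_anon=64K:always"

    # then regenerate the GRUB configuration:
    update-grub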
The tmpfs ``huge=`` mount option accepts, among others, the following
values:

always
    Attempt to allocate huge pages every time we need a new page;

within_size
    Only allocate huge page if it will be fully within i_size. Also
    respect fadvise()/madvise() hints;

``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
huge=never will not attempt to break up huge pages at all, just stop more
from being allocated.

In addition, the sysfs knob
``/sys/kernel/mm/transparent_hugepage/shmem_enabled`` accepts the extra
value:

force
    Force the huge option on for all - very useful for testing;
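A minimal example of mounting a tmpfs instance with huge page support
(the mountpoint is hypothetical)::

    mkdir -p /mnt/hugetmp
    mount -t tmpfs -o huge=within_size tmpfs /mnt/hugetmp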
Shmem can also use "multi-size THP" (mTHP) by adding a new sysfs knob to
control mTHP allocation:
'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled',
whose value for each mTHP size can be one of the following:

always
    Attempt to allocate <size> huge pages every time we need a new page;

inherit
    Inherit the top-level "shmem_enabled" value. By default, PMD-sized
    hugepages have enabled="inherit" and all other hugepage sizes have
    enabled="never";

never
    Do not allocate <size> huge pages;

within_size
    Only allocate <size> huge page if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

advise
    Only allocate <size> huge pages if requested with fadvise()/madvise();

These per-size shmem controls work analogously to the
transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
options described above.
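For example, to let 64K shmem folios be allocated whenever they fit
within i_size (64K is an assumption; check which hugepages-<size>kB
directories exist on your system)::

    echo within_size >/sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled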
The number of PMD-sized anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using PMD-sized anonymous transparent huge
pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
fields for each mapping. (Note that AnonHugePages only applies to traditional
PMD-sized THP for historical reasons and should have been called
AnonPmdMapped.)
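A quick way to total that field for one process is a small awk pipeline
(the PID 1234 below is a placeholder)::

    # Sum AnonHugePages over all mappings of a process, in kB.
    awk '/^AnonHugePages:/ {sum += $2} END {print sum " kB"}' /proc/1234/smaps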
There are a number of counters in ``/proc/vmstat`` that may be used to
monitor how successfully the system is providing huge pages for use:

thp_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

thp_collapse_alloc
    is incremented by khugepaged when it has found
    a range of pages to collapse into one huge page and has
    successfully allocated a new huge page to store the data.

thp_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using small pages.

thp_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using small pages even though the
    allocation was successful.

thp_collapse_alloc_failed
    is incremented if khugepaged found a range
    of pages that should be collapsed into one huge page but failed
    the allocation.

thp_file_alloc
    is incremented every time a shmem huge page is successfully
    allocated.

thp_file_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages.

thp_file_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

thp_file_mapped
    is incremented every time a file or shmem huge page is mapped into
    user address space.

thp_split_page
    is incremented every time a huge page is split into base
    pages. This can happen for a variety of reasons but a common
    reason is that a huge page is old and is being reclaimed.
    This action implies splitting all PMD the page mapped with.

thp_split_page_failed
    is incremented if kernel fails to split huge
    page. This can happen if the page was pinned by somebody.

thp_deferred_split_page
    is incremented when a huge page is put onto split
    queue. This happens when a huge page is partially unmapped and
    splitting it would free up some memory. Pages on split queue are
    going to be split under memory pressure.

thp_underused_split_page
    is incremented when a huge page on the split queue was split
    because it was underused.

thp_split_pmd
    is incremented every time a PMD split into table of PTEs.
    This can happen, for instance, when application calls mprotect() or
    munmap() on part of huge page. It doesn't split huge page, only
    page table entry.

thp_zero_page_alloc
    is incremented every time a huge zero page used for thp is
    successfully allocated. Note, it doesn't count every map of
    the huge zero page, only its allocation.

thp_zero_page_alloc_failed
    is incremented if kernel fails to allocate
    huge zero page and falls back to using small pages.

thp_swpout
    is incremented every time a huge page is swapout in one
    piece without splitting.

thp_swpout_fallback
    is incremented if a huge page has to be split before swapout.
    Usually because failed to allocate some continuous swap space
    for the huge page.
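All of these counters can be inspected at once with a simple filter::

    grep ^thp_ /proc/vmstat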
In ``/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats``, there are
also individual counters for each huge page size, which can be utilized to
monitor the system's effectiveness in providing huge pages for usage.
anon_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

anon_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using huge pages with
    lower orders or small pages.

anon_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using huge pages with lower orders or
    small pages even though the allocation was successful.

swpout
    is incremented every time a huge page is swapped out in one
    piece without splitting.

swpout_fallback
    is incremented if a huge page has to be split before swapout.
    Usually because failed to allocate some continuous swap space
    for the huge page.

shmem_alloc
    is incremented every time a shmem huge page is successfully
    allocated.

shmem_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages.

shmem_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

split
    is incremented every time a huge page is successfully split into
    smaller orders. This can happen for a variety of reasons but a
    common reason is that a huge page is old and is being reclaimed.

split_failed
    is incremented if kernel fails to split huge
    page. This can happen if the page was pinned by somebody.

split_deferred
    is incremented when a huge page is put onto split queue.
    This happens when a huge page is partially unmapped and splitting
    it would free up some memory.
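For example, all per-size counters for 64K THPs (assuming that size is
supported on the system) can be dumped in one go::

    grep . /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/*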
As the system ages, allocating huge pages may be expensive as the system
uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
monitor this overhead.

compact_stall
    is incremented every time a process stalls to run
    memory compaction so that a huge page is free for use.

compact_fail
    is incremented if the system tries to compact memory
    but failed.

compact_success
    is incremented if the system compacted memory and
    freed a huge page for use.
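The compaction overhead can be sampled over time, for example before
and after a workload run::

    grep -E '^compact_(stall|fail|success)' /proc/vmstat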