Lines matching "pages"
34 main memory will have over 32 million 4k pages in a single node. When a large
35 fraction of these pages are not evictable for any reason [see below], vmscan
37 of pages that are evictable. This can result in a situation where all CPUs are
41 The unevictable list addresses the following classes of unevictable pages:
51 The infrastructure may also be able to handle other conditions that make pages
104 lru_list enum element). The memory controller tracks the movement of pages to
108 not attempt to reclaim pages on the unevictable list. This has a couple of
111 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
112 reclaim process can be more efficient, dealing only with pages that have a
115 (2) On the other hand, if too many of the pages charged to the control group
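The per-control-group accounting sketched in the lines above is visible from userspace through the cgroup v2 memory.stat file, which reports an "unevictable" counter in bytes. A minimal reader, assuming cgroup v2 is mounted at /sys/fs/cgroup and using a hypothetical cgroup named "mygroup"::

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* Hypothetical cgroup path; substitute the group you want to inspect. */
      const char *path = "/sys/fs/cgroup/mygroup/memory.stat";
      char line[256];
      FILE *f = fopen(path, "r");

      if (!f) {
          perror("fopen");
          return 1;
      }
      while (fgets(line, sizeof(line), f)) {
          if (strncmp(line, "unevictable ", 12) == 0)
              fputs(line, stdout);   /* bytes of unevictable memory charged */
      }
      fclose(f);
      return 0;
  }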
126 For facilities such as ramfs none of the pages attached to the address space
127 may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
150 Note that SHM_LOCK is not required to page in the locked pages if they're
151 swapped out; the application must touch the pages manually if it wants to
159 Detecting Unevictable Pages
170 any special effort to push any pages in the SHM_LOCK'd area to the unevictable
175 the pages in the region and "rescue" them from the unevictable list if no other
177 the pages are also "rescued" from the unevictable list in the process of
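The SHM_LOCK behaviour quoted above (locking marks the segment unevictable but does not fault its pages in, and unlocking rescues them again) can be exercised with a short System V shared memory program. A minimal sketch; the 4 MiB size is arbitrary, and locking may require CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK::

  #include <stdio.h>
  #include <string.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  #define SEG_SIZE (4UL << 20)    /* 4 MiB segment, an arbitrary size */

  int main(void)
  {
      int id = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | 0600);
      if (id < 0) { perror("shmget"); return 1; }

      /* Mark the whole segment unevictable; no pages are faulted in yet. */
      if (shmctl(id, SHM_LOCK, NULL) != 0)
          perror("shmctl(SHM_LOCK)");

      char *p = shmat(id, NULL, 0);
      if (p == (void *)-1) { perror("shmat"); return 1; }

      /* SHM_LOCK does not populate the segment: touch the pages ourselves. */
      memset(p, 0, SEG_SIZE);

      /* Unlocking rescues any pages that ended up on the unevictable list. */
      if (shmctl(id, SHM_UNLOCK, NULL) != 0)
          perror("shmctl(SHM_UNLOCK)");

      shmdt(p);
      shmctl(id, IPC_RMID, NULL);
      return 0;
  }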
213 MLOCKED Pages
224 The "Unevictable mlocked Pages" infrastructure is based on work originally
225 posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
227 to achieve the same objective: hiding mlocked pages from vmscan.
232 of the pages on an LRU list, and thus mlocked pages were not migratable as
236 Nick resolved this by putting mlocked pages back on the LRU list before
246 put to work, without preventing the migration of mlocked pages. This is why
247 the "Unevictable LRU list" cannot be a linked list of pages now; but there was
254 mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
255 pages. When such a page has been "noticed" by the memory management subsystem,
260 the LRU. Such pages can be "noticed" by memory management in several places:
275 mlocked pages become unlocked and rescued from the unevictable list when:
300 off a subset of the VMA if the range does not cover the entire VMA. Any pages
305 __mm_populate() to fault in the remaining pages via get_user_pages() and to
306 mark those pages as mlocked as they are faulted.
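From userspace, the mlock()/munlock() path described above is easy to drive, and the kernel reports the resulting VM_LOCKED memory per task as the VmLck line of /proc/<pid>/status. A minimal sketch; the 4 MiB size is arbitrary and may need to fit within RLIMIT_MEMLOCK for unprivileged users::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #define LEN (4UL << 20)   /* 4 MiB, an arbitrary size */

  /* Print the VmLck line of /proc/self/status (locked memory of this task). */
  static void show_vmlck(const char *tag)
  {
      char line[128];
      FILE *f = fopen("/proc/self/status", "r");

      if (!f)
          return;
      while (fgets(line, sizeof(line), f))
          if (strncmp(line, "VmLck:", 6) == 0)
              printf("%s: %s", tag, line);
      fclose(f);
  }

  int main(void)
  {
      char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      show_vmlck("before mlock");

      /* Marks the VMA VM_LOCKED and populates it (subject to RLIMIT_MEMLOCK). */
      if (mlock(p, LEN) != 0)
          perror("mlock");
      show_vmlck("after mlock");

      /* Clears VM_LOCKED; the pages become candidates for reclaim again. */
      munlock(p, LEN);
      show_vmlck("after munlock");

      munmap(p, LEN);
      return 0;
  }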
309 get_user_pages() will be unable to fault in the pages. That's okay. If pages
342 1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely. The pages behind
344 mlocked. In any case, most of the pages have no struct page in which to so
349 neither need nor want to mlock() these pages. But __mm_populate() includes
350 hugetlbfs ranges, allocating the huge pages and populating the PTEs.
352 3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
353 such as the VDSO page, relay channel pages, etc. These pages are inherently
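The special-cased VMAs in the list above matter mostly for mlockall(), which attempts to mark every VMA in the address space VM_LOCKED, including the VDSO and any device mappings. A minimal sketch; without CAP_IPC_LOCK the call is limited by RLIMIT_MEMLOCK::

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
      /*
       * Mark every current and future VMA VM_LOCKED. VMAs of the types
       * listed above (VM_IO/VM_PFNMAP, hugetlb, VM_DONTEXPAND) are handled
       * specially rather than being mlocked page by page.
       */
      if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
          perror("mlockall");
          return 1;
      }

      /* ... latency-sensitive work would run here ... */

      munlockall();
      return 0;
  }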
376 specified range. All pages in the VMA are then munlocked by munlock_folio() via
394 Migrating MLOCKED Pages
401 of mlocked pages and other unevictable pages. PG_mlocked is cleared from the
414 before mlocking any pages already present, if one of those pages were migrated
419 To complete page migration, we place the old and new pages back onto the LRU
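That mlocked pages stay migratable, as described above, can be seen with the move_pages(2) system call (declared in <numaif.h>, link with -lnuma). This sketch assumes a machine with at least two NUMA nodes and a sufficient RLIMIT_MEMLOCK; node 1 is just an example destination::

  #define _GNU_SOURCE
  #include <numaif.h>           /* move_pages(); link with -lnuma */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      long page = sysconf(_SC_PAGESIZE);
      char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      memset(p, 0xaa, page);          /* fault the page in */
      if (mlock(p, page) != 0)        /* make it an mlocked, unevictable page */
          perror("mlock");

      /* Request migration of the mlocked page to NUMA node 1. */
      void *pages[1] = { p };
      int nodes[1]   = { 1 };
      int status[1];

      long rc = move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE);
      if (rc < 0)
          perror("move_pages");
      else
          printf("page status after migration: %d (node, or negative errno)\n",
                 status[0]);

      munlock(p, page);
      munmap(p, page);
      return 0;
  }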
424 Compacting MLOCKED Pages
428 is to let unevictable pages be moved. /proc/sys/vm/compact_unevictable_allowed
431 flow as described in Migrating MLOCKED Pages will apply.
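The knob named above lives in procfs and can be read (or, as root, written) like any other sysctl. A minimal reader::

  #include <stdio.h>

  /*
   * Show whether compaction is allowed to move unevictable (e.g. mlocked)
   * pages: "1" allows it, "0" forbids it. Writing the file requires root.
   */
  int main(void)
  {
      const char *knob = "/proc/sys/vm/compact_unevictable_allowed";
      char buf[16];
      FILE *f = fopen(knob, "r");

      if (!f) {
          perror("fopen");
          return 1;
      }
      if (fgets(buf, sizeof(buf), f))
          printf("compact_unevictable_allowed = %s", buf);
      fclose(f);
      return 0;
  }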
434 MLOCKING Transparent Huge Pages
447 We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
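The situation described above, an mlock range that covers only part of a transparent huge page, can be produced from userspace by locking a sub-range of a THP-backed mapping. A minimal sketch; whether the region is actually backed by huge pages depends on the system's THP configuration, and the sizes are arbitrary::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #define LEN  (4UL << 20)     /* 4 MiB region, may be backed by 2 MiB THPs */
  #define LOCK (1UL << 20)     /* lock only 1 MiB in the middle of it */

  int main(void)
  {
      char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      /* Hint that this range should use transparent huge pages if possible. */
      if (madvise(p, LEN, MADV_HUGEPAGE) != 0)
          perror("madvise(MADV_HUGEPAGE)");
      memset(p, 0, LEN);                 /* fault the range in */

      /*
       * Lock a range smaller than the mapping; any huge page straddling the
       * VM_LOCKED boundary is the case discussed above. The locked sub-VMA
       * shows up in /proc/self/smaps with "lo" in its VmFlags line.
       */
      if (mlock(p + LEN / 2, LOCK) != 0)
          perror("mlock");

      munlock(p + LEN / 2, LOCK);
      munmap(p, LEN);
      return 0;
  }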
466 The mmapped area will still have properties of the locked area - pages will not
472 changes, the kernel simply called make_pages_present() to allocate pages
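The mmap(MAP_LOCKED) call that this part of the text covers creates the VMA already marked VM_LOCKED, so its pages are mlocked as they are faulted in. A minimal sketch; note that, unlike mlock(), mmap() does not report a failure to populate the locked range::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  #define LEN (1UL << 20)   /* 1 MiB, an arbitrary size */

  int main(void)
  {
      /*
       * MAP_LOCKED marks the new VMA VM_LOCKED up front; pages are mlocked
       * as they are faulted in, but major faults may still occur later.
       */
      char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
      if (p == MAP_FAILED) {
          perror("mmap(MAP_LOCKED)");   /* e.g. when over RLIMIT_MEMLOCK */
          return 1;
      }

      p[0] = 1;                         /* faulted pages are mlocked */

      munmap(p, LEN);                   /* unmapping munlocks them again */
      return 0;
  }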
485 munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
486 Before the unevictable/mlock changes, mlocking did not mark the pages in any
506 Truncating MLOCKED Pages
509 File truncation or hole punching forcibly unmaps the deleted pages from
510 userspace; truncation even unmaps and deletes any private anonymous pages
511 which had been Copied-On-Write from the file pages now being truncated.
513 Mlocked pages can be munlocked and deleted in this way: like with munmap(),
519 munlocking by clearing VM_LOCKED from a VMA, before munlocking all the pages
520 present, if one of those pages were unmapped by truncation or hole punch before
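The forcible unmapping of mlocked pages described above can be observed with fallocate()'s hole-punching mode on a shared file mapping that has been mlocked. A minimal sketch; the temporary file path and sizes are arbitrary, and the filesystem must support hole punching (ext4, XFS and tmpfs do)::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define LEN (1UL << 20)   /* 1 MiB file, an arbitrary size */

  int main(void)
  {
      char path[] = "/tmp/holepunch-XXXXXX";
      int fd = mkstemp(path);
      if (fd < 0) { perror("mkstemp"); return 1; }
      unlink(path);

      if (ftruncate(fd, LEN) != 0) { perror("ftruncate"); return 1; }

      char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }

      memset(p, 0xff, LEN);          /* dirty and fault in the file pages */
      if (mlock(p, LEN) != 0)        /* mark them mlocked / unevictable */
          perror("mlock");

      /*
       * Punch a hole over the first half of the file. The kernel forcibly
       * unmaps the deleted pages, mlocked or not; the next access to that
       * range faults in fresh zero-filled pages.
       */
      if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                    0, LEN / 2) != 0)
          perror("fallocate(FALLOC_FL_PUNCH_HOLE)");

      printf("p[0] after hole punch: %d\n", p[0]);   /* reads back 0 */

      munmap(p, LEN);
      close(fd);
      return 0;
  }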
532 vmscan's shrink_active_list() culls any obviously unevictable pages -
533 i.e. !page_evictable(page) pages - diverting those to the unevictable list.
534 However, shrink_active_list() only sees unevictable pages that made it onto the
535 active/inactive LRU lists. Note that these pages do not have PG_unevictable
539 Some examples of these unevictable pages on the LRU lists are:
541 (1) ramfs pages that have been placed on the LRU lists when first allocated.
543 (2) SHM_LOCK'd shared memory pages. shmctl(SHM_LOCK) does not attempt to
544 allocate or fault in the pages in the shared memory region. This happens
548 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
552 unevictable pages found on the inactive lists to the appropriate memory cgroup
557 check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
558 to correct them. Such pages are culled to the unevictable list when released
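Because this culling is lazy, its effect is most easily observed through the system-wide counters in /proc/meminfo: "Unevictable" reports the total size of the unevictable LRU lists and "Mlocked" the amount of memory the kernel has marked mlocked. A minimal reader::

  #include <stdio.h>
  #include <string.h>

  /* Print the Unevictable and Mlocked lines of /proc/meminfo. */
  int main(void)
  {
      char line[128];
      FILE *f = fopen("/proc/meminfo", "r");

      if (!f) {
          perror("fopen");
          return 1;
      }
      while (fgets(line, sizeof(line), f))
          if (strncmp(line, "Unevictable:", 12) == 0 ||
              strncmp(line, "Mlocked:", 8) == 0)
              fputs(line, stdout);
      fclose(f);
      return 0;
  }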