Lines Matching +full:t +full:- +full:head

12 - "graceful fallback": mm components which don't have transparent hugepage
17 - if a hugepage allocation fails because of memory fragmentation,
22 - if some task quits and more hugepages become available (either
27 - it doesn't require memory reservation and in turn it uses hugepages
38 head or tail pages as usual (exactly as they would do on
41 is complete, so they won't ever notice that the page is huge. But
43 page (like for checking page->mapping or other bits that are relevant
44 for the head page and not the tail page), it should be updated to jump
45 to check head page instead. Taking a reference on any head/tail page would
49 these aren't new constraints to the GUP API, and they match the
67 that you can't handle natively in your code, you can split it by
75 diff --git a/mm/mremap.c b/mm/mremap.c
76 --- a/mm/mremap.c
78 @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
99 page table lock (pmd_lock()) and re-run pmd_trans_huge(). Taking the
113 - get_page()/put_page() and GUP operate on the folio->_refcount.
115 - ->_refcount in tail pages is always zero: get_page_unless_zero() never
118 - map/unmap of a PMD entry for the whole THP increment/decrement
119 folio->_entire_mapcount, increment/decrement folio->_large_mapcount
120 and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED
121 when _entire_mapcount goes from -1 to 0 or 0 to -1.
123 - map/unmap of individual pages with PTE entry increment/decrement
124 page->_mapcount, increment/decrement folio->_large_mapcount and also
125 increment/decrement folio->_nr_pages_mapped when page->_mapcount goes
126 from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE.
128 split_huge_page internally has to distribute the refcounts in the head
131 entries, but we don't have enough information on how to distribute any
134 the sum of the mapcounts of all sub-pages plus one (the split_huge_page caller must
135 have a reference to the head page).
137 split_huge_page uses migration entries to stabilize page->_refcount and
138 page->_mapcount of anonymous pages. File pages just get unmapped.
143 All tail pages have zero ->_refcount until atomic_add(). This prevents the
145 atomic_add() we don't care about the ->_refcount value. We already know how
146 many references should be uncharged from the head page.
148 For the head page, get_page_unless_zero() will succeed and we don't mind. It's
149 clear where references should go after split: they will stay on the head page.
151 Note that split_huge_pmd() doesn't have any limitations on refcounting: