Lines Matching +full:pre +full:- +full:determined

1 /* SPDX-License-Identifier: GPL-2.0-only */
5 * Copyright (C) 1999-2003 Russell King
69 * - 4KB : 1
70 * - 16KB : 2
71 * - 64KB : 3
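
The three values above are the translation-granule (TG) encodings used by the TTL and range-TLBI hint fields. As a hedged sketch, the compile-time mapping looks like the following (modelled on this header's get_trans_granule() helper; PAGE_SIZE and the SZ_* constants come from the usual kernel headers):

/*
 * Sketch: map the configured page size to its TG encoding, following
 * the 4KB : 1, 16KB : 2, 64KB : 3 table above.
 */
static inline unsigned long sketch_trans_granule(void)
{
	switch (PAGE_SIZE) {
	case SZ_4K:
		return 1;
	case SZ_16K:
		return 2;
	case SZ_64K:
		return 3;
	default:
		return 0;	/* unknown granule: no hint */
	}
}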
92 * Level-based TLBI operations.
94 * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
95 * the level at which the invalidation must take place. If the level is
96 * wrong, no invalidation may take place. In the case where the level
97 * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
98 * a non-hinted invalidation. Any provided level outside the hint range
99 * will also cause fall-back to non-hinted invalidation.
101 * For Stage-2 invalidation, use the level values provided to that effect
102 * in asm/stage2_pgtable.h.
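
For the Stage-1 by-VA operations, the TTL hint occupies bits [47:44] of the TLBI operand, with TTL[3:2] holding the granule encoding above and TTL[1:0] the level. A minimal sketch of folding a level into an operand, reusing the granule helper sketched earlier (the field position is an assumption based on the by-VA operand format; levels outside 0..3, including TLBI_TTL_UNKNOWN, degrade to a non-hinted TTL of zero):

/*
 * Sketch: encode a level hint into a by-VA TLBI operand. An
 * out-of-range level leaves TTL at zero, i.e. non-hinted.
 */
static inline unsigned long sketch_encode_ttl(unsigned long arg, int level)
{
	unsigned long ttl = 0;

	if (level >= 0 && level <= 3)
		ttl = (sketch_trans_granule() << 2) | level;

	return (arg & ~(0xfUL << 44)) | (ttl << 44);	/* TTL: bits [47:44] */
}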
131 * +----------+------+-------+-------+-------+----------------------+
132 * |   ASID   |  TG  | SCALE |  NUM  |  TTL  |        BADDR         |
133 * +-----------------+-------+-------+-------+----------------------+
134 * |63      48|47  46|45   44|43   39|38   37|36                   0|
135 * +-----------------+-------+-------+-------+----------------------+
136 * The address range is determined by the formula: [BADDR, BADDR + (NUM + 1) *
137 * 2^(5*SCALE + 1) * PAGESIZE)
139 * Note that the first argument, baddr, is pre-shifted; if LPA2 is in use, BADDR
140 * holds addr[52:16], otherwise BADDR holds the page number.
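
Reading the diagram and the formula together: SCALE = 0 with NUM = 31 spans (31 + 1) * 2^1 = 64 pages, and SCALE = 3 with NUM = 31 spans 32 * 2^16 = 2097152 pages, the largest single operation. A hedged sketch of assembling such an operand, with field positions taken from the diagram above (the header's own helper for this is the __TLBI_VADDR_RANGE macro):

/*
 * Sketch: build a range-TLBI operand from the fields in the diagram.
 * 'baddr' is already pre-shifted as the note above requires.
 */
static inline unsigned long sketch_range_operand(unsigned long baddr,
						 unsigned long asid,
						 unsigned long scale,
						 unsigned long num,
						 unsigned long ttl)
{
	unsigned long arg = baddr;		/* BADDR: bits [36:0]  */

	arg |= ttl << 37;			/* TTL:   bits [38:37] */
	arg |= num << 39;			/* NUM:   bits [43:39] */
	arg |= scale << 44;			/* SCALE: bits [45:44] */
	arg |= sketch_trans_granule() << 46;	/* TG:    bits [47:46] */
	arg |= asid << 48;			/* ASID:  bits [63:48] */

	return arg;
}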
171 * Generate 'num' values from -1 to 31 with -1 rejected by the
172 * __flush_tlb_range() loop below.
181 (__pages >> (5 * (scale) + 1)) - 1; \
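
Worked through: for pages = 512 at scale = 1 the expression gives (512 >> 6) - 1 = 7, and the matching range operation covers (7 + 1) * 2^6 = 512 pages exactly; anything below 2^(5*scale + 1) pages yields -1 and is rejected. A standalone demonstration (note the real macro first clamps 'pages' with min() so num never exceeds 31, NUM being a 5-bit field):

#include <stdio.h>

/* Sketch: the 'num' computation above, runnable in userspace. */
static int sketch_range_num(unsigned long pages, int scale)
{
	return (int)(pages >> (5 * scale + 1)) - 1;
}

int main(void)
{
	printf("%d\n", sketch_range_num(512, 1));	/* 7: 8 * 2^6 = 512 pages */
	printf("%d\n", sketch_range_num(16, 1));	/* -1: retried at scale 0 */
	return 0;
}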
188 * This header file implements the low-level TLB invalidation routines
189 * (sometimes referred to as "flushing" in the kernel) for arm64.
193 * DSB ISHST // Ensure prior page-table updates have completed
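
The template continues with the TLBI itself, a DSB ISH and an ISB. A hedged sketch of the full sequence for one page (assuming the by-VA operand format, VA[55:12] in bits [43:0] and ASID in bits [63:48]; 'vae1is' is the by-VA, EL1, inner-shareable operation):

/* Sketch: the invalidation template written out for a single page. */
static inline void sketch_flush_one_page(unsigned long vaddr, unsigned long asid)
{
	unsigned long arg = ((vaddr >> 12) & ((1UL << 44) - 1)) | (asid << 48);

	asm volatile(
	"	dsb	ishst\n"	/* prior page-table updates complete   */
	"	tlbi	vae1is, %0\n"	/* invalidate this VA on all CPUs      */
	"	dsb	ish\n"		/* wait for the invalidation to finish */
	"	isb"			/* resynchronise the local CPU         */
	: : "r" (arg) : "memory");
}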
201 * as documented in Documentation/core-api/cachetlb.rst:
211 * Invalidate the virtual-address range '[start, end)' on all
212 * CPUs for the user address space corresponding to 'vma->mm'.
213 * Note that this operation also invalidates any walk-cache
214 * entries associated with translations for the specified address range.
224 * Invalidate a single user mapping for address 'addr' in the
225 * address space corresponding to 'vma->mm'. Note that this
226 * operation only invalidates a single, last-level page-table
227 * entry and therefore does not affect any walk-caches.
237 * Invalidate a single kernel mapping for address 'addr' on all
238 * CPUs, ensuring that any walk-cache entries associated with the
239 * translation are also invalidated.
242 * Invalidate the virtual-address range '[start, end)' on all
243 * CPUs for the user address space corresponding to 'vma->mm'.
244 * The invalidation operations are issued at a granularity
245 * determined by 'stride' and only affect any walk-cache entries
246 * if 'last_level' is false. tlb_level is the level at which the
247 * invalidation must take place. If the level is wrong, no
248 * invalidation may take place. In the case where the level
249 * cannot be easily determined, the value TLBI_TTL_UNKNOWN will
250 * perform a non-hinted invalidation.
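
As a usage sketch of the contract documented above (rewrite_ptes() is a hypothetical helper standing in for the actual page-table update; locking is elided):

/*
 * Sketch: the caller-side pattern. Once PTEs in [start, end) have
 * changed, the stale translations must leave every CPU's TLB.
 */
static void sketch_remap_region(struct vm_area_struct *vma,
				unsigned long start, unsigned long end)
{
	rewrite_ptes(vma->vm_mm, start, end);	/* hypothetical PTE rewrite */

	/*
	 * Table entries changed as well, so walk-caches must be dropped:
	 * use the range flush, not the leaf-only flush_tlb_page().
	 */
	flush_tlb_range(vma, start, end);
}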
282 mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); in flush_tlb_mm()
301 return __flush_tlb_page_nosync(vma->vm_mm, uaddr); in flush_tlb_page_nosync()
358 * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
359 * necessarily a performance improvement.
364 * __flush_tlb_range_op - Perform TLBI operation upon a range
382 * using the non-range operations. This step is skipped if LPA2 is not in
383 * use.
392 * 3. If there is 1 page remaining, flush it through non-range operations. Range
393 *    operations can only span an even number of pages.
413 pages -= stride >> PAGE_SHIFT; \
425 pages -= __TLBI_RANGE_PAGES(num, scale); \
427 scale--; \
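
The fragments above belong to a loop that consumes the page count from the largest scale downward; starting at scale = 3 keeps each consumed span 64KB-aligned, which step 1 of the algorithm relies on under LPA2. A hedged standalone sketch of its shape (the TLBI issue itself is reduced to a comment; 'pages' is assumed already capped at the range-op maximum):

/* Sketch: the scale loop the fragments above come from. */
static void sketch_range_loop(unsigned long pages)
{
	int scale = 3;

	while (pages > 1) {
		int num = (int)(pages >> (5 * scale + 1)) - 1;

		if (num >= 0) {
			/* one range TLBI spanning (num+1) << (5*scale+1) pages */
			pages -= (unsigned long)(num + 1) << (5 * scale + 1);
		}
		scale--;
	}
	/* 0 or 1 page left; a last page is flushed with a non-range op. */
}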
443 pages = (end - start) >> PAGE_SHIFT; in __flush_tlb_range_nosync()
447 * (MAX_DVM_OPS - 1) pages; in __flush_tlb_range_nosync()
452 (end - start) >= (MAX_DVM_OPS * stride)) || in __flush_tlb_range_nosync()
454 flush_tlb_mm(vma->vm_mm); in __flush_tlb_range_nosync()
459 asid = ASID(vma->vm_mm); in __flush_tlb_range_nosync()
468 mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end); in __flush_tlb_range_nosync()
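
The condition visible above chooses between range/per-page invalidation and dropping the whole ASID via flush_tlb_mm(). A hedged sketch of just that predicate (both limits are assumptions here: MAX_DVM_OPS is the per-flush budget of DVM messages, and the range-op maximum corresponds to NUM = 31 at SCALE = 3):

#define SKETCH_MAX_DVM_OPS		512UL		/* assumed budget  */
#define SKETCH_MAX_TLBI_RANGE_PAGES	(32UL << 16)	/* NUM=31, SCALE=3 */

/* Sketch: the fall-back predicate from the fragments above. */
static bool sketch_flush_whole_mm(bool have_range_ops, unsigned long start,
				  unsigned long end, unsigned long stride)
{
	unsigned long pages = (end - start) >> PAGE_SHIFT;

	return (!have_range_ops &&
		(end - start) >= (SKETCH_MAX_DVM_OPS * stride)) ||
	       pages > SKETCH_MAX_TLBI_RANGE_PAGES;
}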
485 * We cannot use leaf-only invalidation here, since we may be invalidating in flush_tlb_range()
486 * table entries as part of collapsing hugepages or moving page tables.
497 if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) { in flush_tlb_kernel_range()
506 for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) in flush_tlb_kernel_range()
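
The step size in the last line falls out of the operand format: the TLBI operand carries VA[55:12], i.e. the address already shifted right by 12, so one PAGE_SIZE step becomes PAGE_SIZE >> 12 = 1 << (PAGE_SHIFT - 12). A hedged sketch of the whole routine as the fragments suggest it (the real code converts addresses with a __TLBI_VADDR helper, and 'vaale1is' invalidates last-level kernel entries for all ASIDs):

/* Sketch: flush_tlb_kernel_range() reconstructed from the fragments. */
static void sketch_flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	unsigned long addr;

	if ((end - start) > (SKETCH_MAX_DVM_OPS * PAGE_SIZE)) {
		flush_tlb_all();	/* too many DVM messages: flush it all */
		return;
	}

	start >>= 12;			/* to operand form, VA[55:12] */
	end >>= 12;

	dsb(ishst);			/* page-table updates visible first */
	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
		__tlbi(vaale1is, addr);
	dsb(ish);			/* wait for completion */
	isb();
}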