/linux-6.12.1/drivers/iommu/ |
D | io-pgfault.c |
      3   * Handle device page faults
     46  list_for_each_entry_safe(iopf, next, &group->faults, list) {   in __iopf_free_group()
     99  INIT_LIST_HEAD(&group->faults);   in iopf_group_alloc()
    101  list_add(&group->last_fault.list, &group->faults);   in iopf_group_alloc()
    103  /* See if we have partial faults for this group */   in iopf_group_alloc()
    108  list_move(&iopf->list, &group->faults);   in iopf_group_alloc()
    110  list_add(&group->pending_node, &iopf_param->faults);   in iopf_group_alloc()
    113  group->fault_count = list_count_nodes(&group->faults);   in iopf_group_alloc()
    134   * managed PASID table. Therefore page faults for   in find_fault_handler()
    181   * them before reporting faults. A PASID Stop Marker (LRW = 0b100) doesn't
    [all …]
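The matches in __iopf_free_group() use the kernel's safe list-iteration idiom: list_for_each_entry_safe() caches the next node before the body runs, so the current entry may be unlinked and freed mid-walk. A minimal sketch of that idiom follows; struct my_fault and free_fault_list() are simplified stand-ins for illustration, not the driver's actual definitions.

    /*
     * Sketch of the teardown idiom seen in __iopf_free_group():
     * "next" is fetched before the body runs, so kfree() of the
     * current entry is safe.
     */
    #include <linux/list.h>
    #include <linux/slab.h>

    struct my_fault {
        struct list_head list;
    };

    static void free_fault_list(struct list_head *faults)
    {
        struct my_fault *f, *next;

        list_for_each_entry_safe(f, next, faults, list) {
            list_del(&f->list);
            kfree(f);
        }
    }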
|
/linux-6.12.1/Documentation/userspace-api/media/v4l/ |
D | ext-ctrls-flash.rst |
     63  presence of some faults. See V4L2_CID_FLASH_FAULT.
    106  control may not be possible in presence of some faults. See
    129  some faults. See V4L2_CID_FLASH_FAULT.
    137  Faults related to the flash. The faults tell about specific problems
    138  in the flash chip itself or the LEDs attached to it. Faults may
    141  if the fault affects the flash LED. Exactly which faults have such
    142  an effect is chip dependent. Reading the faults resets the control
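These matches document V4L2_CID_FLASH_FAULT, a read-to-clear bitmask control. A hedged userspace sketch of reading it via VIDIOC_G_EXT_CTRLS follows; the device node path is a placeholder, and which fault bits a chip can set is hardware dependent.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/v4l-subdev0", O_RDWR);  /* placeholder node */
        struct v4l2_ext_control ctrl;
        struct v4l2_ext_controls ctrls;

        if (fd < 0)
            return 1;

        memset(&ctrl, 0, sizeof(ctrl));
        memset(&ctrls, 0, sizeof(ctrls));
        ctrl.id = V4L2_CID_FLASH_FAULT;
        ctrls.ctrl_class = V4L2_CTRL_CLASS_FLASH;
        ctrls.count = 1;
        ctrls.controls = &ctrl;

        /* reading the control also resets it, per the text above */
        if (ioctl(fd, VIDIOC_G_EXT_CTRLS, &ctrls) == 0)
            printf("flash faults: 0x%x\n", ctrl.value);
        return 0;
    }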
|
/linux-6.12.1/Documentation/admin-guide/mm/ |
D | userfaultfd.rst |
     10  memory page faults, something otherwise only the kernel code could do.
     19  regions of virtual memory with it. Then, any page faults which occur within the
     26  1) ``read/POLLIN`` protocol to notify a userland thread of the faults
     58  handle kernel page faults have been a useful tool for exploiting the kernel).
     63  - Any user can always create a userfaultfd which traps userspace page faults
     67  - In order to also trap kernel page faults for the address space, either the
     80  to /dev/userfaultfd can always create userfaultfds that trap kernel page faults;
    102  other than page faults are supported. These events are described in more
    127  bitmask will specify to the kernel which kind of faults to track for
    132  hugetlbfs), or all types of intercepted faults.
    [all …]
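A minimal sketch of the read/POLLIN protocol these matches describe: create the userfaultfd, register a region, let a second thread fault on it, then resolve the fault with UFFDIO_ZEROPAGE. Error handling is trimmed for brevity; build with -lpthread.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    static int uffd;
    static char *area;
    static long page;

    static void *toucher(void *arg)
    {
        return (void *)(long)area[0];   /* triggers the page fault */
    }

    int main(void)
    {
        struct uffdio_api api = { .api = UFFD_API };
        struct uffdio_register reg;
        struct uffdio_zeropage zp;
        struct uffd_msg msg;
        pthread_t t;

        page = sysconf(_SC_PAGESIZE);
        uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
        ioctl(uffd, UFFDIO_API, &api);

        area = mmap(NULL, page, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        reg.range.start = (unsigned long)area;
        reg.range.len = page;
        reg.mode = UFFDIO_REGISTER_MODE_MISSING;
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        pthread_create(&t, NULL, toucher, NULL);
        read(uffd, &msg, sizeof(msg));          /* blocks until the fault */
        if (msg.event == UFFD_EVENT_PAGEFAULT) {
            zp.range.start = msg.arg.pagefault.address & ~(page - 1);
            zp.range.len = page;
            zp.mode = 0;
            ioctl(uffd, UFFDIO_ZEROPAGE, &zp);  /* wakes the toucher */
        }
        pthread_join(t, NULL);
        printf("fault at %p resolved\n", (void *)msg.arg.pagefault.address);
        return 0;
    }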
|
/linux-6.12.1/arch/powerpc/platforms/powernv/ |
D | vas-fault.c |
     24   * 8MB FIFO can be used if expects more faults for each VAS
     57   * It can raise a single interrupt for multiple faults. Expects OS to
     58   * process all valid faults and return credit for each fault on user
     78   * VAS can interrupt with multiple page faults. So process all   in vas_fault_thread_fn()
     92   * fifo_in_progress is set. Means these new faults will be   in vas_fault_thread_fn()
    153   * NX sees faults only with user space windows.   in vas_fault_thread_fn()
    176   * NX can generate an interrupt for multiple faults. So the   in vas_fault_handler()
    178   * entry. In case if NX sees continuous faults, it is possible   in vas_fault_handler()
    197   * FIFO upon page faults.
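The split between vas_fault_handler() and vas_fault_thread_fn() is the classic threaded-IRQ pattern: the hard handler stays cheap, and the IRQ thread drains every valid FIFO entry because one interrupt may cover many faults. A generic hedged sketch of that pattern follows; all demo_* names are illustrative, not the actual VAS code.

    #include <linux/interrupt.h>
    #include <linux/types.h>

    struct demo_dev;
    bool demo_fifo_pop(struct demo_dev *dev);
    void demo_handle_one_fault(struct demo_dev *dev);

    static irqreturn_t demo_fault_handler(int irq, void *data)
    {
        /* cheap hard-IRQ work only; defer FIFO draining to the thread */
        return IRQ_WAKE_THREAD;
    }

    static irqreturn_t demo_fault_thread_fn(int irq, void *data)
    {
        struct demo_dev *dev = data;

        /* one interrupt may cover many faults: drain until empty */
        while (demo_fifo_pop(dev))
            demo_handle_one_fault(dev);

        return IRQ_HANDLED;
    }

    /* Registration, e.g. from probe():
     * request_threaded_irq(irq, demo_fault_handler, demo_fault_thread_fn,
     *                      IRQF_ONESHOT, "demo-fault", dev);
     */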
|
/linux-6.12.1/Documentation/mm/ |
D | page_tables.rst |
    157  MMU, TLB, and Page Faults
    181  When these conditions happen, the MMU triggers page faults, which are types of
    185  There are common and expected causes of page faults. These are triggered by
    187  "Copy-on-Write". Page faults may also happen when frames have been swapped out
    212  Additionally, page faults may be also caused by code bugs or by maliciously
    223  Linux kernel handles these page faults, creates tables and tables' entries,
    272  Linux to handle page faults in a way that is tailored to the specific
    276  To conclude this high altitude view of how Linux handles page faults, let's
    277  add that the page faults handler can be disabled and enabled respectively with
    281  disable traps into the page faults handler, mostly to prevent deadlocks.
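The truncated match at line 277 refers to the pagefault_disable()/pagefault_enable() pair from <linux/uaccess.h>: inside the region, a user access that would fault fails fast instead of trapping into the page fault handler (and possibly sleeping). A small hedged kernel-side sketch, with peek_user_word() as an invented illustrative helper:

    #include <linux/errno.h>
    #include <linux/uaccess.h>

    static int peek_user_word(const void __user *uaddr, unsigned long *out)
    {
        unsigned long val;
        unsigned long left;

        pagefault_disable();
        left = __copy_from_user_inatomic(&val, uaddr, sizeof(val));
        pagefault_enable();

        if (left)
            return -EFAULT; /* would have faulted; caller may retry */
        *out = val;
        return 0;
    }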
|
/linux-6.12.1/Documentation/gpu/rfc/ |
D | i915_vm_bind.rst |
     96  newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
     98  The older execbuf mode and the newer VM_BIND mode without page faults manages
     99  residency of backing storage using dma_fence. The VM_BIND mode with page faults
    108  In future, when GPU page faults are supported, we can potentially use a
    124  When GPU page faults are supported, the execbuf path do not take any of these
    180  Where GPU page faults are not available, kernel driver upon buffer invalidation
    210  GPU page faults
    212  GPU page faults when supported (in future), will only be supported in the
    214  binding will require using dma-fence to ensure residency, the GPU page faults
    240  faults enabled.
|
/linux-6.12.1/Documentation/admin-guide/cgroup-v1/ |
D | hugetlb.rst |
     25  …rsvd.max_usage_in_bytes # show max "hugepagesize" hugetlb reservations and no-reserve faults
     26  …svd.usage_in_bytes # show current reservations and no-reserve faults for "hugepagesize"…
     28  …tlb.<hugepagesize>.limit_in_bytes # set/show limit of "hugepagesize" hugetlb faults
    116  For shared HugeTLB memory, both HugeTLB reservation and page faults are charged
    127  When a HugeTLB cgroup goes offline with some reservations or faults still
    138  complex compared to the tracking of HugeTLB faults, so it is significantly
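A hedged sketch of driving the limit_in_bytes file these matches list: cap a cgroup's 2MB hugetlb fault usage by writing the limit. The mount point and cgroup name are assumptions about the local setup, and "2MB" depends on which huge page sizes the machine supports.

    #include <stdio.h>

    int main(void)
    {
        /* assumed v1 hierarchy mounted at /sys/fs/cgroup/hugetlb */
        const char *path =
            "/sys/fs/cgroup/hugetlb/mygroup/hugetlb.2MB.limit_in_bytes";
        FILE *f = fopen(path, "w");

        if (!f)
            return 1;
        fprintf(f, "%llu\n", 64ULL << 20);  /* 64 MiB = 32 x 2MB pages */
        fclose(f);
        return 0;
    }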
|
/linux-6.12.1/Documentation/driver-api/ |
D | dma-buf.rst |
    308  Recoverable Hardware Page Faults Implications
    311  Modern hardware supports recoverable page faults, which has a lot of
    317  means any workload using recoverable page faults cannot use DMA fences for
    324  faults. Specifically this means implicit synchronization will not be possible.
    325  The exception is when page faults are only used as migration hints and never to
    327  faults on GPUs are limited to pure compute workloads.
    331  job with a DMA fence and a compute workload using recoverable page faults are
    362  to guarantee all pending GPU page faults are flushed.
    365  allocating memory to repair hardware page faults, either through separate
    369  robust to limit the impact of handling hardware page faults to the specific
    [all …]
|
/linux-6.12.1/drivers/gpu/drm/xe/ |
D | xe_vm_doc.h |
    214   * idle to ensure no faults. This done by waiting on all of VM's dma-resv slots.
    311   * A VM in fault mode can be enabled on devices that support page faults. If
    312   * page faults are enabled, using dma fences can potentially induce a deadlock:
    331   * Page faults are received in the G2H worker under the CT lock which is in the
    332   * path of dma fences (no memory allocations are allowed, faults require memory
    333   * allocations) thus we cannot process faults under the CT lock. Another issue
    334   * is faults issue TLB invalidations which require G2H credits and we cannot
    339   * To work around the above issue with processing faults in the G2H worker, we
    340   * sink faults to a buffer which is large enough to sink all possible faults on
    341   * the GT (1 per hardware engine) and kick a worker to process the faults. Since
    [all …]
|
D | xe_gt_types.h |
    241   * @usm.pf_queue: Page fault queue used to sync faults so faults can
    243   * it can sync all possible faults (1 per physical engine).
    244   * Multiple queues exists for page faults from different VMs are
    260   * moved by worker which processes faults (consumer).
    270  /** @usm.pf_queue.worker: to process page faults */
|
/linux-6.12.1/tools/testing/selftests/powerpc/mm/ |
D | stress_code_patching.sh |
     20  echo "Testing for spurious faults when mapping kernel memory..."
     44  echo "FAILED: Mapping kernel memory causes spurious faults" 1>&2
     47  echo "OK: Mapping kernel memory does not cause spurious faults"
|
D | pkey_exec_prot.c |
     52  /* Check if too many faults have occurred for a single test case */   in segv_handler()
     54  sigsafe_err("got too many faults for the same address\n");   in segv_handler()
    228   * This should generate two faults. First, a pkey fault   in test()
    256   * This should generate pkey faults based on IAMR bits which   in test()
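The first two matches guard against a broken test faulting forever at one address. A hedged, simplified sketch of that guard follows: a SA_SIGINFO SEGV handler that counts repeated faults at the same si_addr and bails out past a threshold. Names and the threshold are illustrative, not the selftest's actual code.

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_FAULTS_PER_ADDR 4

    static void *last_addr;
    static int fault_count;

    static void segv_handler(int sig, siginfo_t *info, void *ctx)
    {
        if (info->si_addr == last_addr &&
            ++fault_count > MAX_FAULTS_PER_ADDR) {
            static const char msg[] =
                "got too many faults for the same address\n";
            /* write()/_exit() are async-signal-safe, unlike fprintf() */
            write(STDERR_FILENO, msg, sizeof(msg) - 1);
            _exit(1);
        }
        if (info->si_addr != last_addr) {
            last_addr = info->si_addr;
            fault_count = 1;
        }
        /* ...fix up protection here so the faulting access can retry... */
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);
        /* ...run test cases that intentionally fault... */
        return 0;
    }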
|
/linux-6.12.1/include/uapi/linux/ |
D | virtio_balloon.h |
     66  #define VIRTIO_BALLOON_S_MAJFLT 2 /* Number of major faults */
     67  #define VIRTIO_BALLOON_S_MINFLT 3 /* Number of minor faults */
     85  VIRTIO_BALLOON_S_NAMES_prefix "major-faults", \
     86  VIRTIO_BALLOON_S_NAMES_prefix "minor-faults", \
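A hedged sketch of how a stats producer might tag fault counts with these uapi definitions via struct virtio_balloon_stat (defined in the same header). The counter values are placeholders, and the little-endian conversion a real driver performs on tag/val is elided here.

    #include <stdio.h>
    #include <linux/virtio_balloon.h>

    int main(void)
    {
        struct virtio_balloon_stat stats[2];

        stats[0].tag = VIRTIO_BALLOON_S_MAJFLT;
        stats[0].val = 12;      /* placeholder major-fault count */
        stats[1].tag = VIRTIO_BALLOON_S_MINFLT;
        stats[1].val = 3456;    /* placeholder minor-fault count */

        printf("major-faults=%llu minor-faults=%llu\n",
               (unsigned long long)stats[0].val,
               (unsigned long long)stats[1].val);
        return 0;
    }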
|
/linux-6.12.1/drivers/hwmon/ |
D | ltc4260.c |
     98  if (fault) /* Clear reported faults in chip register */   in ltc4260_bool_show()
    110   * UV/OV faults are associated with the input voltage, and the POWER BAD and
    111   * FET SHORT faults are associated with the output voltage.
    156  /* Clear faults */   in ltc4260_probe()
|
D | ltc4222.c |
    112  if (fault) /* Clear reported faults in chip register */   in ltc4222_bool_show()
    126   * UV/OV faults are associated with the input voltage, and power bad and fet
    127   * faults are associated with the output voltage.
    192  /* Clear faults */   in ltc4222_probe()
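This entry and ltc4260.c above show the same idiom: read the fault register, report the requested bit, then write the register back with that bit cleared so a stale fault is not reported twice. A generic hedged regmap sketch follows; the register address and names are illustrative, not the LTC chips' actual map.

    #include <linux/regmap.h>

    #define DEMO_REG_FAULT 0x04    /* illustrative register address */

    static int demo_fault_show(struct regmap *rm, unsigned int mask, bool *set)
    {
        unsigned int fault;
        int ret;

        ret = regmap_read(rm, DEMO_REG_FAULT, &fault);
        if (ret)
            return ret;

        *set = fault & mask;
        if (*set)   /* clear the reported fault in the chip register */
            ret = regmap_write(rm, DEMO_REG_FAULT, fault & ~mask);

        return ret;
    }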
|
/linux-6.12.1/lib/ |
D | test_hmm_uapi.h |
     21   * @faults: (out) number of device page faults seen
     28  __u64 faults;   member
|
/linux-6.12.1/Documentation/ABI/testing/ |
D | sysfs-class-led-flash |
     54  Space separated list of flash faults that may have occurred.
     55  Flash faults are re-read after strobing the flash. Possible
     56  flash faults:
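A hedged sketch of consuming this ABI: read the space-separated fault list from the flash_faults attribute. The LED device name below is a placeholder for whatever flash LED the system exposes.

    #include <stdio.h>

    int main(void)
    {
        char buf[256];
        FILE *f = fopen("/sys/class/leds/white:flash/flash_faults", "r");

        if (!f)
            return 1;
        if (fgets(buf, sizeof(buf), f))
            printf("flash faults: %s", buf);
        fclose(f);
        return 0;
    }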
|
/linux-6.12.1/Documentation/virt/kvm/devices/ |
D | s390_flic.rst |
     18  - enable/disable for the guest transparent async page faults
     58  Enables async page faults for the guest. So in case of a major page fault
     62  Disables async page faults for the guest and waits until already pending
     63  async page faults are done. This is necessary to trigger a completion interrupt
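A hedged sketch (s390 only) of toggling guest async page faults through the FLIC device attributes this document names; flic_fd is assumed to come from KVM_CREATE_DEVICE with KVM_DEV_TYPE_FLIC.

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int flic_apf_enable(int flic_fd)
    {
        struct kvm_device_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.group = KVM_DEV_FLIC_APF_ENABLE;
        return ioctl(flic_fd, KVM_SET_DEVICE_ATTR, &attr);
    }

    static int flic_apf_disable_wait(int flic_fd)
    {
        struct kvm_device_attr attr;

        memset(&attr, 0, sizeof(attr));
        /* blocks until already pending async page faults are done */
        attr.group = KVM_DEV_FLIC_APF_DISABLE_WAIT;
        return ioctl(flic_fd, KVM_SET_DEVICE_ATTR, &attr);
    }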
|
/linux-6.12.1/arch/x86/mm/ |
D | fault.c |
    418   * Note we only handle faults in kernel here.
    563   * ones are faults accessing the GDT, or LDT. Perhaps   in show_fault_oops()
    620   * kernel addresses are always protection faults.   in sanitize_error_code()
    693  /* Only not-present faults should be handled by KFENCE. */   in page_fault_oops()
    896   * 3. T1 : faults...   in bad_area_access_error()
    971   * Spurious faults may only occur if the TLB contains an entry with
    973   * and reserved bit (R = 1) faults are never spurious.
    995   * spurious faults.   in spurious_kernel_fault()
   1000   * faults.   in spurious_kernel_fault()
   1080   * faults just to hit a X86_PF_PK as soon as we fill in a   in access_error()
    [all …]
|
/linux-6.12.1/tools/perf/pmu-events/arch/x86/amdzen2/ |
D | floating-point.json |
    119  "BriefDescription": "Floating Point Dispatch Faults. YMM spill fault.",
    125  "BriefDescription": "Floating Point Dispatch Faults. YMM fill fault.",
    131  "BriefDescription": "Floating Point Dispatch Faults. XMM fill fault.",
    137  "BriefDescription": "Floating Point Dispatch Faults. x87 fill fault.",
|
/linux-6.12.1/tools/perf/pmu-events/arch/x86/amdzen3/ |
D | floating-point.json |
    118  "BriefDescription": "Floating Point Dispatch Faults. YMM spill fault.",
    124  "BriefDescription": "Floating Point Dispatch Faults. YMM fill fault.",
    130  "BriefDescription": "Floating Point Dispatch Faults. XMM fill fault.",
    136  "BriefDescription": "Floating Point Dispatch Faults. x87 fill fault.",
|
/linux-6.12.1/Documentation/arch/arm64/ |
D | memory-tagging-extension.rst |
     58  Tag Check Faults
     75  thread, asynchronously following one or multiple tag check faults,
     87  - ``PR_MTE_TCF_NONE`` - *Ignore* tag check faults
     92  If no modes are specified, tag check faults are ignored. If a single
    172  - No tag checking modes are selected (tag check faults ignored)
    321   * tag check faults (based on per-CPU preference) and allow all
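A hedged sketch (arm64 only) of the prctl() interface these matches describe: select synchronous tag check faults for the current thread. The constants are from <linux/prctl.h>; kernels or CPUs without MTE reject the call, and the 0xfffe mask permitting all tags except 0 is just one common choice.

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
        /* tagged addressing on, SYNC tag check faults, tags 1-15 allowed */
        unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                             (0xfffeUL << PR_MTE_TAG_SHIFT);

        if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0))
            perror("PR_SET_TAGGED_ADDR_CTRL");
        else
            puts("sync tag check faults enabled");
        return 0;
    }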
|
/linux-6.12.1/arch/x86/include/asm/ |
D | kfence.h |
     50   * We need to avoid IPIs, as we may get KFENCE allocations or faults   in kfence_protect_page()
     53   * lazy fault handling takes care of faults after the page is PRESENT.   in kfence_protect_page()
|
/linux-6.12.1/arch/powerpc/lib/ |
D | vmx-helper.c |
     20   * We need to disable page faults as they can call schedule and   in enter_vmx_usercopy()
     21   * thus make us lose the VMX context. So on page faults, we just   in enter_vmx_usercopy()
|
/linux-6.12.1/Documentation/i2c/ |
D | fault-codes.rst |
     11  Not all fault reports imply errors; "page faults" should be a familiar
     13  faults. There may be fancier recovery schemes that are appropriate in
     86  about probe faults other than ENXIO and ENODEV.)
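A hedged sketch of acting on the fault codes this document defines: a probe-style helper that treats ENXIO/ENODEV as "no device at that address" rather than an error, and retries once on EAGAIN (arbitration lost). The retry-once policy is just one example of the "fancier recovery schemes" the text mentions; demo_probe_xfer() is an invented name.

    #include <linux/errno.h>
    #include <linux/i2c.h>

    static int demo_probe_xfer(struct i2c_adapter *adap,
                               struct i2c_msg *msgs, int num)
    {
        int ret = i2c_transfer(adap, msgs, num);

        if (ret == -ENXIO || ret == -ENODEV)
            return ret;     /* nothing at that address: not an error */
        if (ret == -EAGAIN) /* lost bus arbitration: retry once */
            ret = i2c_transfer(adap, msgs, num);

        return ret;
    }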
|