
Searched refs:faults (Results 1 – 25 of 119) sorted by relevance


/linux-6.12.1/drivers/iommu/
io-pgfault.c
46 list_for_each_entry_safe(iopf, next, &group->faults, list) { in __iopf_free_group()
99 INIT_LIST_HEAD(&group->faults); in iopf_group_alloc()
101 list_add(&group->last_fault.list, &group->faults); in iopf_group_alloc()
108 list_move(&iopf->list, &group->faults); in iopf_group_alloc()
110 list_add(&group->pending_node, &iopf_param->faults); in iopf_group_alloc()
113 group->fault_count = list_count_nodes(&group->faults); in iopf_group_alloc()
410 INIT_LIST_HEAD(&fault_param->faults); in iopf_queue_add_device()
471 list_for_each_entry_safe(group, temp, &fault_param->faults, pending_node) { in iopf_queue_remove_device()
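The io-pgfault.c hits above are a textbook use of the kernel's intrusive list API: each fault group owns a faults list head, entries are linked in with list_add()/list_move(), and teardown walks the list with list_for_each_entry_safe() so entries can be freed mid-iteration. Below is a minimal userspace sketch of that safe-iteration pattern; the helpers are hand-rolled stand-ins for <linux/list.h>, and struct fault is a hypothetical stand-in for struct iopf_fault.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal circular doubly linked list, modeled on <linux/list.h>. */
    struct list_head { struct list_head *next, *prev; };

    static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

    static void list_add(struct list_head *n, struct list_head *h)
    {
        n->next = h->next;
        n->prev = h;
        h->next->prev = n;
        h->next = n;
    }

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct fault {                      /* hypothetical stand-in for struct iopf_fault */
        int id;
        struct list_head list;
    };

    int main(void)
    {
        struct list_head faults;        /* plays the role of group->faults */
        struct list_head *pos, *next;

        INIT_LIST_HEAD(&faults);
        for (int i = 0; i < 3; i++) {
            struct fault *f = malloc(sizeof(*f));
            f->id = i;
            list_add(&f->list, &faults);
        }

        /*
         * Safe iteration: cache 'next' before freeing the current entry,
         * which is exactly what list_for_each_entry_safe() does for
         * __iopf_free_group() in the hits above.
         */
        for (pos = faults.next; pos != &faults; pos = next) {
            struct fault *f = container_of(pos, struct fault, list);

            next = pos->next;
            printf("freeing fault %d\n", f->id);
            free(f);
        }
        return 0;
    }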
/linux-6.12.1/Documentation/userspace-api/media/v4l/
ext-ctrls-flash.rst
63 presence of some faults. See V4L2_CID_FLASH_FAULT.
106 control may not be possible in presence of some faults. See
129 some faults. See V4L2_CID_FLASH_FAULT.
137 Faults related to the flash. The faults tell about specific problems
141 if the fault affects the flash LED. Exactly which faults have such
142 an effect is chip dependent. Reading the faults resets the control
/linux-6.12.1/Documentation/admin-guide/cgroup-v1/
hugetlb.rst
25 …rsvd.max_usage_in_bytes # show max "hugepagesize" hugetlb reservations and no-reserve faults
26 …svd.usage_in_bytes # show current reservations and no-reserve faults for "hugepagesize"…
28 …tlb.<hugepagesize>.limit_in_bytes # set/show limit of "hugepagesize" hugetlb faults
116 For shared HugeTLB memory, both HugeTLB reservation and page faults are charged
127 When a HugeTLB cgroup goes offline with some reservations or faults still
138 complex compared to the tracking of HugeTLB faults, so it is significantly
/linux-6.12.1/tools/perf/util/
parse-events.l
399 page-faults|faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_PAGE_FAULTS); }
400 minor-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_PAGE_FAULTS_MIN); }
401 major-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_PAGE_FAULTS_MAJ); }
404 alignment-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_ALIGNMENT_FAULTS); }
405 emulation-faults { return sym(yyscanner, PERF_TYPE_SOFTWARE, PERF_COUNT_SW_EMULATION_FAULTS); }
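These lexer rules are how `perf stat -e faults` (or page-faults, minor-faults, ...) resolves to PERF_TYPE_SOFTWARE counters. The same events are reachable directly via perf_event_open(2); a minimal sketch counting this process's page faults (Linux-only, most error handling omitted):

    #include <linux/perf_event.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct perf_event_attr attr;
        long long count = 0;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_SOFTWARE;           /* what "page-faults|faults" maps to */
        attr.config = PERF_COUNT_SW_PAGE_FAULTS;
        attr.disabled = 1;

        fd = syscall(SYS_perf_event_open, &attr, 0 /* this task */,
                     -1 /* any CPU */, -1 /* no group */, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        /* Touch fresh pages to generate some minor faults. */
        volatile char *p = malloc(1 << 20);
        for (size_t i = 0; i < (1 << 20); i += 4096)
            p[i] = 1;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        read(fd, &count, sizeof(count));
        printf("page faults: %lld\n", count);
        close(fd);
        return 0;
    }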
/linux-6.12.1/Documentation/gpu/rfc/
i915_vm_bind.rst
96 newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
98 The older execbuf mode and the newer VM_BIND mode without page faults manages
99 residency of backing storage using dma_fence. The VM_BIND mode with page faults
108 In future, when GPU page faults are supported, we can potentially use a
124 When GPU page faults are supported, the execbuf path do not take any of these
180 Where GPU page faults are not available, kernel driver upon buffer invalidation
210 GPU page faults
212 GPU page faults when supported (in future), will only be supported in the
214 binding will require using dma-fence to ensure residency, the GPU page faults
240 faults enabled.
/linux-6.12.1/Documentation/admin-guide/mm/
userfaultfd.rst
10 memory page faults, something otherwise only the kernel code could do.
19 regions of virtual memory with it. Then, any page faults which occur within the
26 1) ``read/POLLIN`` protocol to notify a userland thread of the faults
58 handle kernel page faults have been a useful tool for exploiting the kernel).
63 - Any user can always create a userfaultfd which traps userspace page faults
67 - In order to also trap kernel page faults for the address space, either the
80 to /dev/userfaultfd can always create userfaultfds that trap kernel page faults;
102 other than page faults are supported. These events are described in more
127 bitmask will specify to the kernel which kind of faults to track for
132 hugetlbfs), or all types of intercepted faults.
[all …]
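These userfaultfd.rst hits outline the read/POLLIN protocol: create the fd, register a memory range, then have a handler thread poll(), read one struct uffd_msg per fault, and resolve each missing page with an ioctl. A minimal sketch of that loop using UFFDIO_COPY (unprivileged use may need sysctl vm.unprivileged_userfaultfd=1 or /dev/userfaultfd, as the hits above note; most error handling omitted):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    /* Handler thread: one read() per fault, one UFFDIO_COPY to resolve it. */
    static void *handler(void *arg)
    {
        int uffd = *(int *)arg;
        char *page = malloc(page_size);
        struct uffd_msg msg;

        memset(page, 'A', page_size);
        for (;;) {
            struct pollfd pfd = { .fd = uffd, .events = POLLIN };

            poll(&pfd, 1, -1);
            if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
                continue;
            if (msg.event != UFFD_EVENT_PAGEFAULT)
                continue;

            struct uffdio_copy copy = {
                .src = (unsigned long)page,
                .dst = msg.arg.pagefault.address & ~(page_size - 1),
                .len = page_size,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);
        }
        return NULL;
    }

    int main(void)
    {
        page_size = sysconf(_SC_PAGESIZE);

        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        if (uffd < 0) { perror("userfaultfd"); return 1; }

        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);

        /* Register a range so its missing-page faults are delivered to us. */
        char *area = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)area, .len = page_size },
            .mode = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        pthread_t t;
        pthread_create(&t, NULL, handler, &uffd);

        /* This access blocks until the handler thread fills the page in. */
        printf("first byte: %c\n", area[0]);
        return 0;
    }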
/linux-6.12.1/Documentation/driver-api/
dma-buf.rst
311 Modern hardware supports recoverable page faults, which has a lot of
317 means any workload using recoverable page faults cannot use DMA fences for
324 faults. Specifically this means implicit synchronization will not be possible.
325 The exception is when page faults are only used as migration hints and never to
327 faults on GPUs are limited to pure compute workloads.
331 job with a DMA fence and a compute workload using recoverable page faults are
362 to guarantee all pending GPU page faults are flushed.
365 allocating memory to repair hardware page faults, either through separate
369 robust to limit the impact of handling hardware page faults to the specific
374 in the kernel even for resolving hardware page faults, e.g. by using copy
[all …]
/linux-6.12.1/drivers/gpu/drm/msm/
msm_submitqueue.c
240 size_t size = min_t(size_t, args->len, sizeof(queue->faults)); in msm_submitqueue_query_faults()
245 args->len = sizeof(queue->faults); in msm_submitqueue_query_faults()
252 ret = copy_to_user(u64_to_user_ptr(args->data), &queue->faults, size); in msm_submitqueue_query_faults()
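msm_submitqueue_query_faults() illustrates a common driver query ABI: clamp the copy length to min_t(size_t, caller's len, object size), copy_to_user() that much, and always write the full object size back into args->len so userspace can detect truncation or probe the required buffer size. A userspace analogue of that convention (struct and function names here are hypothetical):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct query_args {
        void *data;     /* user buffer, may be NULL for a size probe */
        size_t len;     /* in: buffer size; out: full object size */
    };

    static int query_faults(const uint32_t *faults, struct query_args *args)
    {
        size_t size = args->len < sizeof(*faults) ? args->len : sizeof(*faults);

        args->len = sizeof(*faults);       /* always report the full size */
        if (!args->data)
            return 0;                      /* caller only wanted the size */
        memcpy(args->data, faults, size);  /* the driver uses copy_to_user() */
        return 0;
    }

    int main(void)
    {
        uint32_t faults = 3, out = 0;
        struct query_args args = { .data = &out, .len = sizeof(out) };

        query_faults(&faults, &args);
        printf("faults=%u full_size=%zu\n", out, args.len);
        return 0;
    }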
/linux-6.12.1/Documentation/ABI/testing/
sysfs-class-led-flash
54 Space separated list of flash faults that may have occurred.
55 Flash faults are re-read after strobing the flash. Possible
56 flash faults:
sysfs-bus-iio-thermocouple
16 Open-circuit fault. The detection of open-circuit faults,
/linux-6.12.1/tools/testing/selftests/mm/
hmm-tests.c
43 uint64_t faults; member
200 buffer->faults = cmd.faults; in hmm_dmirror_cmd()
340 ASSERT_EQ(buffer->faults, 1); in TEST_F()
450 ASSERT_EQ(buffer->faults, 1); in TEST_F()
494 ASSERT_EQ(buffer->faults, 1); in TEST_F()
516 ASSERT_EQ(buffer->faults, 1); in TEST_F()
593 ASSERT_EQ(buffer->faults, 1); in TEST_F()
671 ASSERT_EQ(buffer->faults, 1); in TEST_F()
727 ASSERT_EQ(buffer->faults, 1); in TEST_F()
831 ASSERT_EQ(buffer->faults, 1); in TEST_F()
[all …]
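The repeated ASSERT_EQ(buffer->faults, 1) checks come from hmm_dmirror_cmd(), which drives the test_hmm mirror device with an ioctl and copies the resulting fault count out of the command struct (the same `faults` member visible in test_hmm_uapi.h below). A rough sketch of that call pattern; it assumes the test_hmm module is loaded and /dev/hmm_dmirror0 exists, and it borrows struct hmm_dmirror_cmd and HMM_DMIRROR_READ from the selftest's header, so treat the details as illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #include "test_hmm_uapi.h"  /* from lib/: struct hmm_dmirror_cmd, HMM_DMIRROR_READ */

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);
        struct hmm_dmirror_cmd cmd = { 0 };
        char *src = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *mirror = malloc(psize);
        int fd = open("/dev/hmm_dmirror0", O_RDWR);

        if (fd < 0) { perror("open"); return 1; }
        src[0] = 'x';                        /* populate the source page */

        cmd.addr = (unsigned long)src;       /* region the device mirrors */
        cmd.ptr = (unsigned long)mirror;     /* where mirrored data is written */
        cmd.npages = 1;
        if (ioctl(fd, HMM_DMIRROR_READ, &cmd) < 0) { perror("ioctl"); return 1; }

        /* The device's first access to the page should fault exactly once. */
        printf("faults=%llu mirrored byte=%c\n",
               (unsigned long long)cmd.faults, mirror[0]);
        return 0;
    }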
/linux-6.12.1/Documentation/mm/
page_tables.rst
181 When these conditions happen, the MMU triggers page faults, which are types of
185 There are common and expected causes of page faults. These are triggered by
187 "Copy-on-Write". Page faults may also happen when frames have been swapped out
212 Additionally, page faults may be also caused by code bugs or by maliciously
223 Linux kernel handles these page faults, creates tables and tables' entries,
272 Linux to handle page faults in a way that is tailored to the specific
276 To conclude this high altitude view of how Linux handles page faults, let's
277 add that the page faults handler can be disabled and enabled respectively with
281 disable traps into the page faults handler, mostly to prevent deadlocks.
/linux-6.12.1/Documentation/virt/kvm/devices/
s390_flic.rst
18 - enable/disable for the guest transparent async page faults
58 Enables async page faults for the guest. So in case of a major page fault
62 Disables async page faults for the guest and waits until already pending
63 async page faults are done. This is necessary to trigger a completion interrupt
/linux-6.12.1/lib/
test_hmm_uapi.h
28 __u64 faults; member
/linux-6.12.1/Documentation/scheduler/
sched-debug.rst
14 high then the rate the kernel samples for NUMA hinting faults may be
35 Higher scan rates incur higher system overhead as page faults must be
/linux-6.12.1/kernel/sched/
fair.c
1469 unsigned long faults[]; member
1636 return ng->faults[task_faults_idx(NUMA_MEM, nid, 0)] + in group_faults()
1637 ng->faults[task_faults_idx(NUMA_MEM, nid, 1)]; in group_faults()
1642 return group->faults[task_faults_idx(NUMA_CPU, nid, 0)] + in group_faults_cpu()
1643 group->faults[task_faults_idx(NUMA_CPU, nid, 1)]; in group_faults_cpu()
1648 unsigned long faults = 0; in group_faults_priv() local
1652 faults += ng->faults[task_faults_idx(NUMA_MEM, node, 1)]; in group_faults_priv()
1655 return faults; in group_faults_priv()
1660 unsigned long faults = 0; in group_faults_shared() local
1664 faults += ng->faults[task_faults_idx(NUMA_MEM, node, 0)]; in group_faults_shared()
[all …]
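In fair.c, `faults[]` is a flexible array at the tail of struct numa_group, and helpers such as group_faults() and group_faults_priv() sum pairs of slots out of it. The layout is a single flat array indexed by (statistic class, node id, slot 0/1). A simplified sketch of that indexing, with the node count fixed at 2 for illustration (the kernel derives nr_node_ids at boot):

    #include <stdio.h>

    enum numa_faults_stats { NUMA_MEM, NUMA_CPU, NUMA_NR_STATS };

    #define NR_NODE_IDS              2  /* assumption: a two-node machine */
    #define NR_NUMA_HINT_FAULT_TYPES 2  /* slots 0 and 1, summed by group_faults() */

    static int task_faults_idx(enum numa_faults_stats s, int nid, int priv)
    {
        return NR_NUMA_HINT_FAULT_TYPES * (s * NR_NODE_IDS + nid) + priv;
    }

    /* Mirrors group_faults(): both slots of memory faults for one node. */
    static unsigned long group_faults(const unsigned long *faults, int nid)
    {
        return faults[task_faults_idx(NUMA_MEM, nid, 0)] +
               faults[task_faults_idx(NUMA_MEM, nid, 1)];
    }

    int main(void)
    {
        unsigned long faults[NUMA_NR_STATS * NR_NODE_IDS *
                             NR_NUMA_HINT_FAULT_TYPES] = { 0 };

        faults[task_faults_idx(NUMA_MEM, 1, 0)] = 10;
        faults[task_faults_idx(NUMA_MEM, 1, 1)] = 5;
        printf("node 1 memory faults: %lu\n", group_faults(faults, 1));
        return 0;
    }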
/linux-6.12.1/tools/perf/tests/shell/
stat+std_output.sh
14 event_name=(cpu-clock task-clock context-switches cpu-migrations page-faults stalled-cycles-fronten…
record_bpf_filter.sh
123 -e page-faults --filter 'ip < 0xffffffff00000000' \
/linux-6.12.1/Documentation/i2c/
fault-codes.rst
11 Not all fault reports imply errors; "page faults" should be a familiar
13 faults. There may be fancier recovery schemes that are appropriate in
86 about probe faults other than ENXIO and ENODEV.)
/linux-6.12.1/Documentation/arch/arm64/
memory-tagging-extension.rst
75 thread, asynchronously following one or multiple tag check faults,
87 - ``PR_MTE_TCF_NONE``  - *Ignore* tag check faults
92 If no modes are specified, tag check faults are ignored. If a single
172 - No tag checking modes are selected (tag check faults ignored)
321 * tag check faults (based on per-CPU preference) and allow all
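memory-tagging-extension.rst makes the tag-check-fault mode a per-thread choice set via prctl(PR_SET_TAGGED_ADDR_CTRL). A sketch selecting synchronous reporting (arm64 with MTE only; recent <linux/prctl.h> provides the constants, with fallback values given here in case the libc headers predate them):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_TAGGED_ADDR_CTRL
    #define PR_SET_TAGGED_ADDR_CTRL 55
    #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
    #endif
    #ifndef PR_MTE_TCF_SYNC
    #define PR_MTE_TCF_SYNC         (1UL << 1)
    #define PR_MTE_TAG_SHIFT        3
    #endif

    int main(void)
    {
        /* Synchronous tag check faults; allow tags 1-15 in the tag mask. */
        if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                  (0xfffe << PR_MTE_TAG_SHIFT),
                  0, 0, 0)) {
            perror("prctl(PR_SET_TAGGED_ADDR_CTRL)");
            return 1;
        }
        puts("MTE synchronous tag checking enabled");
        return 0;
    }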
/linux-6.12.1/drivers/gpu/drm/amd/amdkfd/
Kconfig
24 preemptions and one based on page faults. To enable page fault
/linux-6.12.1/Documentation/devicetree/bindings/iommu/
ti,omap-iommu.txt
22 back a bus error response on MMU faults.
/linux-6.12.1/Documentation/fault-injection/
fault-injection.rst
222 In order to inject faults while debugfs is not available (early boot time),
242 Note that this file enables all types of faults (slab, futex, etc).
247 This feature is intended for systematic testing of faults in a single
511 Systematic faults using fail-nth
514 The following code systematically faults 0-th, 1-st, 2-nd and so on
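The fail-nth lines describe per-task systematic fault injection: write N to /proc/self/task/<tid>/fail-nth and the N-th fault site reached by that task fails; reading the file back afterwards yields 0 if the fault fired and a positive value if the call completed before reaching site N. A condensed version of the example loop in fault-injection.rst, probing socketpair(2):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64];
        int fds[2];

        snprintf(buf, sizeof(buf), "/proc/self/task/%ld/fail-nth",
                 (long)syscall(SYS_gettid));
        int fail_nth = open(buf, O_RDWR);
        if (fail_nth < 0) { perror("open fail-nth"); return 1; }

        for (int n = 1;; n++) {
            snprintf(buf, sizeof(buf), "%d", n);
            write(fail_nth, buf, strlen(buf));   /* arm the n-th fault site */

            int res = socketpair(AF_LOCAL, SOCK_STREAM, 0, fds);
            int err = errno;

            ssize_t r = pread(fail_nth, buf, sizeof(buf) - 1, 0);
            buf[r > 0 ? r : 0] = '\0';
            if (res == 0) { close(fds[0]); close(fds[1]); }

            printf("site %d: %s (res=%d errno=%d)\n", n,
                   atoi(buf) ? "not reached, done" : "fault injected", res, err);
            if (atoi(buf))                       /* ran out of fault sites */
                break;
        }
        return 0;
    }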
/linux-6.12.1/drivers/ras/
Kconfig
14 faults.
/linux-6.12.1/Documentation/hwmon/
max31827.rst
130 The Fault Queue bits select how many consecutive temperature faults must occur
131 before overtemperature or undertemperature faults are indicated in the
