Lines matching "faults"
214 * idle to ensure no faults. This is done by waiting on all of the VM's dma-resv slots.
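A minimal sketch of that idle wait, assuming the VM's reservation object is reachable as a plain struct dma_resv and using the generic dma_resv_wait_timeout() helper; the helper name vm_wait_idle() is illustrative, not the driver's, and the actual call site in the driver may differ.

#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Hedged sketch: wait for every fence in the VM's dma-resv before relying
 * on the VM being idle (and therefore fault-free).
 */
static int vm_wait_idle(struct dma_resv *resv)
{
        long ret;

        /* DMA_RESV_USAGE_BOOKKEEP covers all usage classes, i.e. all slots. */
        ret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_BOOKKEEP, true,
                                    MAX_SCHEDULE_TIMEOUT);

        return ret < 0 ? ret : 0;
}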
311 * A VM in fault mode can be enabled on devices that support page faults. If
312 * page faults are enabled, using dma fences can potentially induce a deadlock:
331 * Page faults are received in the G2H worker under the CT lock, which is in the
332 * path of dma fences (no memory allocations are allowed, but faults require memory
333 * allocations), so we cannot process faults under the CT lock. Another issue
334 * is that faults issue TLB invalidations, which require G2H credits, and we cannot
339 * To work around the above issue with processing faults in the G2H worker, we
340 * sink faults to a buffer that is large enough to hold all possible faults on
341 * the GT (1 per hardware engine) and kick a worker to process the faults. Since
342 * the page fault G2H messages are already received in a worker, kicking another worker
350 * faults from different VMs can be processed in parallel.
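A hedged sketch of the sink-and-kick pattern described above. The names (struct fault_buf, fault_entry, fault_buf_sink(), fault_buf_work()) and the fixed capacity are illustrative, not the driver's; the capacity stands in for "one slot per hardware engine". Faults are copied into a pre-sized ring under a spinlock in the G2H path (no allocations), and a work item drains the ring once the CT lock is no longer held, where allocations and VM locks are allowed.

#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define FAULT_BUF_SLOTS 64      /* illustrative: >= 1 per hardware engine */

struct fault_entry {
        u64 addr;
        u32 asid;
};

struct fault_buf {
        spinlock_t lock;
        struct fault_entry slots[FAULT_BUF_SLOTS];
        unsigned int head, tail;
        struct work_struct work;        /* init with INIT_WORK(&fb->work, fault_buf_work) */
};

/* G2H path: copy the fault (no allocations), then kick the worker.
 * The buffer is sized so it cannot overflow, so no full check is needed.
 */
static void fault_buf_sink(struct fault_buf *fb, const struct fault_entry *fe)
{
        unsigned long flags;

        spin_lock_irqsave(&fb->lock, flags);
        fb->slots[fb->head++ % FAULT_BUF_SLOTS] = *fe;
        spin_unlock_irqrestore(&fb->lock, flags);

        queue_work(system_unbound_wq, &fb->work);
}

/* Work item: drain the buffer outside the CT lock; resolving a fault may
 * allocate memory and take VM locks here.
 */
static void fault_buf_work(struct work_struct *w)
{
        struct fault_buf *fb = container_of(w, struct fault_buf, work);
        unsigned long flags;

        spin_lock_irqsave(&fb->lock, flags);
        while (fb->tail != fb->head) {
                struct fault_entry fe = fb->slots[fb->tail++ % FAULT_BUF_SLOTS];

                spin_unlock_irqrestore(&fb->lock, flags);
                /* resolve the fault for fe.addr / fe.asid here */
                spin_lock_irqsave(&fb->lock, flags);
        }
        spin_unlock_irqrestore(&fb->lock, flags);
}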
383 * G2H. Unlike page faults, there is no upper bound, so if the buffer is full we
385 * safe to drop these unlike page faults.
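For access counters the sink step can simply drop on overflow, since they are a best-effort optimization hint rather than something that must be serviced. A hedged sketch, reusing the illustrative fault_buf type from the previous snippet:

/* Illustrative: best-effort sink that drops the message when the buffer is full. */
static bool acc_counter_sink(struct fault_buf *fb, const struct fault_entry *fe)
{
        unsigned long flags;
        bool queued = false;

        spin_lock_irqsave(&fb->lock, flags);
        if (fb->head - fb->tail < FAULT_BUF_SLOTS) {
                fb->slots[fb->head++ % FAULT_BUF_SLOTS] = *fe;
                queued = true;
        }
        spin_unlock_irqrestore(&fb->lock, flags);

        if (queued)
                queue_work(system_unbound_wq, &fb->work);

        return queued;  /* false: dropped, which is safe for access counters */
}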
543 * Update page faults to handle BOs at page-level granularity (e.g. part of a BO
550 * signal page fault complete. Our handling of short circuiting for atomic faults
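A hedged sketch of what page-level granularity bookkeeping could look like: a per-BO bitmap marking which pages currently reside in VRAM, so a fault only needs to migrate or map the pages it actually touched. This is entirely illustrative and not the driver's data structure.

#include <linux/bitops.h>
#include <linux/types.h>

struct bo_placement {
        unsigned long *vram_pages;      /* bit set => page resident in VRAM */
        unsigned long npages;
};

/* Illustrative: is the faulting page already placed in VRAM? */
static bool bo_page_in_vram(const struct bo_placement *p, unsigned long pgoff)
{
        return pgoff < p->npages && test_bit(pgoff, p->vram_pages);
}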