Lines Matching refs:swiotlb
4 DMA and swiotlb
7 swiotlb is a memory buffer allocator used by the Linux kernel DMA layer. It is
10 the DMA layer calls swiotlb to allocate a temporary memory buffer that conforms
16 Device drivers don't interact directly with swiotlb. Instead, drivers inform
31 swiotlb was originally created to handle DMA for devices with addressing
61 The primary swiotlb APIs are swiotlb_tbl_map_single() and
77 swiotlb also provides "sync" APIs that correspond to the dma_sync_*() APIs that
79 device. The swiotlb "sync" APIs cause a CPU copy of the data between the
80 original buffer and the bounce buffer. Like the dma_sync_*() APIs, the swiotlb
86 The swiotlb map/unmap/sync APIs must operate without blocking, as they are
88 block. Hence the default memory pool for swiotlb allocations must be
89 pre-allocated at boot time (but see Dynamic swiotlb below). Because swiotlb
93 The need to pre-allocate the default swiotlb pool creates a boot-time tradeoff.
102 on the I/O patterns of the workload in the VM. The dynamic swiotlb feature
103 described below can help, but has limitations. Better management of the swiotlb
106 A single allocation from swiotlb is limited to IO_TLB_SIZE * IO_TLB_SEGSIZE
108 are such that the device might use swiotlb, the maximum size of a DMA segment
112 are too large for swiotlb, and get a "swiotlb full" error.
115 so that some number of low-order bits are set, or it may be zero. swiotlb
123 swiotlb, max_sectors_kb might be 512 KiB or larger. If a device might use
124 swiotlb, max_sectors_kb will be 256 KiB. When min_align_mask is non-zero,
134 pre-padding or post-padding space is not initialized by swiotlb code. The
141 Memory used for swiotlb bounce buffers is allocated from overall system memory
144 "swiotlb=" kernel boot line parameter. The default size may also be adjusted
151 pool memory must be decrypted before swiotlb is used.
165 for a single global spin lock when swiotlb is heavily used, such as in a CoCo
169 number of areas can also be set via the "swiotlb=" kernel boot parameter.
174 trying an allocation, so contention may occur if swiotlb is relatively busy
193 Dynamic swiotlb
195 When CONFIG_SWIOTLB_DYNAMIC is enabled, swiotlb can do on-demand expansion of
199 into a swiotlb pool. Creating an additional pool must be done asynchronously
200 because the memory allocation may block, and as noted above, swiotlb requests
202 buffer request creates a "transient pool" to avoid returning a "swiotlb full"
224 New pools added via dynamic swiotlb are linked together in a linear list.
225 swiotlb code frequently must search for the pool containing a particular
226 swiotlb physical address, so that search is linear and not performant with a
230 Overall, dynamic swiotlb works best for small configurations with relatively
231 few CPUs. It allows the default swiotlb pool to be smaller so that memory is
237 swiotlb is managed with four primary data structures: io_tlb_mem, io_tlb_pool,
238 io_tlb_area, and io_tlb_slot. io_tlb_mem describes a swiotlb memory allocator,
240 linked to it. Limited statistics on swiotlb usage are kept per memory allocator
242 /sys/kernel/debug/swiotlb when CONFIG_DEBUG_FS is set.
252 calling processor ID. Areas exist solely to allow parallel access to swiotlb
262 APIs and the corresponding swiotlb APIs use the bounce buffer address as the
268 swiotlb data structures must save the original memory buffer address so that it
275 address of the start of the bounce buffer isn't known to swiotlb code. But
276 swiotlb code must be able to calculate the corresponding original memory buffer
309 The swiotlb machinery is also used for "restricted pools", which are pools of
310 memory separate from the default swiotlb pool, and that are dedicated for DMA
315 on its own io_tlb_mem data structure that is independent of the main swiotlb