Lines Matching full:bounce
13 memory buffer. This approach is generically called "bounce buffering", and the
14 temporary memory buffer is called a "bounce buffer".
20 if bounce buffering is necessary. If so, the DMA layer manages the allocation,
21 freeing, and sync'ing of bounce buffers. Since the DMA attributes are per
22 device, some devices in a system may use bounce buffering while others do not.
24 Because the CPU copies data between the bounce buffer and the original target
25 memory buffer, doing bounce buffering is slower than doing DMA directly to the
33 only provide 32-bit DMA addresses. By allocating bounce buffer memory below
41 to force all DMA I/O to use bounce buffers, and the bounce buffer memory is set
42 up as unencrypted. The host does DMA I/O to/from the bounce buffer memory, and
45 the unencrypted and the encrypted memory. This use of bounce buffers allows
49 Other edge case scenarios arise for bounce buffers. For example, when IOMMU
55 the unrelated kernel data. This problem is solved by bounce buffering the DMA
56 operation and ensuring that unused portions of the bounce buffers do not
62 swiotlb_tbl_unmap_single(). The "map" API allocates a bounce buffer of a
67 multiple memory buffer segments, a separate bounce buffer must be allocated for
69 CPU copy) to initialize the bounce buffer to match the contents of the original
73 updated the bounce buffer memory and DMA_ATTR_SKIP_CPU_SYNC is not set, the
74 unmap does a "sync" operation to cause a CPU copy of the data from the bounce
75 buffer back to the original buffer. Then the bounce buffer memory is freed.
80 original buffer and the bounce buffer. Like the dma_sync_*() APIs, the swiotlb
81 "sync" APIs support doing a partial sync, where only a subset of the bounce
94 The pool should be large enough to ensure that bounce buffer requests can
98 tradeoff is particularly acute in CoCo VMs that use bounce buffers for all DMA
117 bounce buffer match the same bits in the address of the original buffer. When
119 of the bounce buffer that slightly reduces the maximum size of an allocation.
128 parameter specifies that the allocation of bounce buffer space must start at a
130 bounce buffer might start at a larger address if min_align_mask is non-zero.
132 the bounce buffer. Similarly, the end of the bounce buffer is rounded up to an
136 devices. It is set to the granule size - 1 so that the bounce buffer is
141 Memory used for swiotlb bounce buffers is allocated from overall system memory
155 what might be called a "slot set". When a bounce buffer is allocated, it
157 bounce buffers. Furthermore, a bounce buffer must be allocated from a single
158 slot set, which leads to the maximum bounce buffer size being IO_TLB_SIZE *
159 IO_TLB_SEGSIZE. Multiple smaller bounce buffers may co-exist in a single slot
171 When allocating a bounce buffer, if the area associated with the calling CPU
186 Because a bounce buffer allocation can't cross a slot set boundary, eliminating
187 those initial slots effectively reduces the max size of a bounce buffer.
196 the amount of memory available for allocation as bounce buffers. If a bounce
201 are not allowed to block. Once the background task is kicked off, the bounce
203 error. A transient pool has the size of the bounce buffer request, and is
204 deleted when the bounce buffer is freed. Memory for this transient pool comes
257 index computed from the bounce buffer address relative to the starting memory
262 APIs and the corresponding swiotlb APIs use the bounce buffer address as the
263 identifier for a bounce buffer. This address is returned by
273 the argument to swiotlb_sync_*() is not the address of the start of the bounce
274 buffer but an address somewhere in the middle of the bounce buffer, and the
275 address of the start of the bounce buffer isn't known to swiotlb code. But
279 occupied by the bounce buffer. An adjusted "alloc_size" of the bounce buffer is
291 available slots to use for a new bounce buffer. They are updated when allocating
292 a new bounce buffer and when freeing a bounce buffer. At pool creation time, the
298 swiotlb_tbl_map_single() allocates bounce buffer space to meet alloc_align_mask
300 when swiotlb_tbl_unmap_single() is called with the bounce buffer address, the
305 to the bounce buffer.