Lines Matching +full:parallel +full:- +full:out
11 pre-fetch makes the cache overhead relatively significant. If the DMA
12 preparations for the next request are done in parallel with the current
15 The intention of non-blocking (asynchronous) MMC requests is to minimize the
19 dma_unmap_sg are processing. Using non-blocking MMC requests makes it
20 possible to prepare the caches for the next job in parallel with an active
26 The mmc_blk_issue_rw_rq() in the MMC block driver is made non-blocking.
35 in parallel with the transfer, performance won't be affected.
40 https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
48 truly non-blocking. If there is an ongoing async request, it waits
56 There are two optional members in the mmc_host_ops -- pre_req() and
57 post_req() -- that the host driver may implement in order to move work
66 The first request in a series of requests can't be prepared in parallel
77 if (is_first_req && req->size > threshold)
90 dma_issue_pending(req->dma_desc);
94 * The second issue_pending should be called before MMC runs out
95 * of the first chunk. If the MMC runs out of the first data chunk
98 dma_issue_pending(req->dma_desc);