Searched refs:batching (Results 1 – 16 of 16) sorted by relevance
/linux-6.12.1/block/

  kyber-iosched.c
    185   unsigned int batching;    member
    505   khd->batching = 0;    in kyber_init_hctx()
    776   khd->batching++;    in kyber_dispatch_cur_domain()
    789   khd->batching++;    in kyber_dispatch_cur_domain()
    816   if (khd->batching < kyber_batch_size[khd->cur_domain]) {    in kyber_dispatch_request()
    831   khd->batching = 0;    in kyber_dispatch_request()
    983   seq_printf(m, "%u\n", khd->batching);    in kyber_batching_show()

  mq-deadline.c
    91    unsigned int batching;  /* number of sequential requests made */    member
    342   if (rq && dd->batching < dd->fifo_batch) {    in __dd_dispatch_request()
    406   dd->batching = 0;    in __dd_dispatch_request()
    415   dd->batching++;    in __dd_dispatch_request()
    924   seq_printf(m, "%u\n", dd->batching);    in deadline_batching_show()
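
The kyber-iosched.c and mq-deadline.c hits above all touch the same dispatch-batching pattern: a per-queue batching counter is zeroed when a new batch starts, incremented for every dispatched request, and compared against a batch-size limit (kyber_batch_size[], dd->fifo_batch) to decide whether to keep extending the current sequential run or to pick a fresh starting point. Below is a minimal standalone C sketch of that counter pattern only; struct sched_queue, pick_sequential and pick_any are hypothetical names, not the kernel's code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical request/queue types; the kernel's are far richer. */
    struct request {
        int sector;
        bool done;
    };

    struct sched_queue {
        unsigned int batching;     /* requests dispatched in the current batch */
        unsigned int batch_limit;  /* plays the role of fifo_batch / kyber_batch_size[] */
        int next_sector;           /* sector following the last dispatched request */
    };

    /* The request that continues the current sequential batch, if any. */
    static struct request *pick_sequential(struct sched_queue *q,
                                           struct request *pool, int n)
    {
        for (int i = 0; i < n; i++)
            if (!pool[i].done && pool[i].sector == q->next_sector)
                return &pool[i];
        return NULL;
    }

    /* The request chosen when a new batch has to be started. */
    static struct request *pick_any(struct request *pool, int n)
    {
        for (int i = 0; i < n; i++)
            if (!pool[i].done)
                return &pool[i];
        return NULL;
    }

    /*
     * While the batch counter is below the limit, keep extending the current
     * sequential run; otherwise reset the counter and pick a fresh starting
     * point.  This mirrors the shape visible at mq-deadline.c lines 342, 406
     * and 415 above.
     */
    static struct request *dispatch(struct sched_queue *q,
                                    struct request *pool, int n)
    {
        struct request *rq = NULL;

        if (q->batching < q->batch_limit)
            rq = pick_sequential(q, pool, n);
        if (!rq) {
            q->batching = 0;               /* start a new batch */
            rq = pick_any(pool, n);
        }
        if (rq) {
            rq->done = true;
            q->batching++;
            q->next_sector = rq->sector + 1;
        }
        return rq;
    }

    int main(void)
    {
        struct request pool[] = { {10, false}, {11, false}, {12, false}, {50, false} };
        struct sched_queue q = { .batching = 0, .batch_limit = 2, .next_sector = 10 };
        struct request *rq;

        while ((rq = dispatch(&q, pool, 4)))
            printf("dispatched sector %d (batching=%u)\n", rq->sector, q.batching);
        return 0;
    }

In the real schedulers the batch limit is the throughput/latency knob: larger batches favor sequential throughput, smaller ones bound how long other requests can be starved.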

/linux-6.12.1/kernel/rcu/

  Kconfig
    342   thus defeating the 32-callback batching used to amortize the
    346   jiffy, and overrides the 32-callback batching if this limit

/linux-6.12.1/Documentation/trace/

  events-kmem.rst
    66    callers should be batching their activities.

/linux-6.12.1/tools/memory-model/Documentation/

  simple.txt
    64    single-threaded grace-period processing is use of batching, where all
    67    it more efficient. Nor is RCU unique: Similar batching optimizations

/linux-6.12.1/Documentation/RCU/Design/Expedited-Grace-Periods/

  Expedited-Grace-Periods.rst
    258   This batching is controlled by a sequence counter named
    497   batching, so that a single grace-period operation can serve numerous
    520   permits much higher degrees of batching, and thus much lower per-request
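
The Expedited-Grace-Periods.rst hits describe batching driven by a sequence counter: requesters snapshot the counter, and any operation that completes at or after that snapshot satisfies them, so many concurrent requests can ride one expensive operation. The toy C sketch below illustrates only that idea; it uses a single state bit rather than the kernel's rcu_seq_*() encoding, and none of the names are the kernel's.

    #include <stdio.h>

    /*
     * Toy sequence counter: even values mean "idle", odd values mean "an
     * operation is in progress".  Loosely modelled on, but much simpler
     * than, the kernel's sequence-counter helpers.
     */
    static unsigned long seq;

    /* Counter value that must be reached before a request issued now is satisfied. */
    static unsigned long seq_snap(void)
    {
        /*
         * If an operation is already running it may have started too early
         * to cover this caller, so wait for the end of the next full one.
         */
        return (seq + 3) & ~1UL;
    }

    static int seq_done(unsigned long snap)
    {
        return (long)(seq - snap) >= 0;
    }

    static void do_operation(void)
    {
        seq++;      /* odd: operation running */
        /* ...the expensive shared work (e.g. a grace period) happens here... */
        seq++;      /* even: operation complete */
    }

    int main(void)
    {
        /* Two requests arrive while the counter is idle... */
        unsigned long snap_a = seq_snap();
        unsigned long snap_b = seq_snap();

        /* ...and a single shared operation satisfies both of them. */
        do_operation();

        printf("request A done? %d\n", seq_done(snap_a));
        printf("request B done? %d\n", seq_done(snap_b));
        return 0;
    }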

/linux-6.12.1/Documentation/networking/

  kcm.rst
    245   Message batching

  napi.rst
    185   In most scenarios batching happens due to IRQ coalescing which is done

/linux-6.12.1/Documentation/scheduler/

  sched-design-CFS.rst
    104   "server" (i.e., good batching) workloads. It defaults to a setting suitable

/linux-6.12.1/Documentation/filesystems/iomap/

  operations.rst
    352   for post-writeback updates by batching them.
    355   iomap ioends contain a ``list_head`` to enable batching.
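
The operations.rst hits point at the common kernel idiom behind this kind of batching: the object embeds a list node, so completed work items can be chained onto a batch list without extra allocation and processed in one pass. The small userspace C sketch below shows that idiom only; it uses a simplified singly-linked node rather than the kernel's struct list_head, and the ioend fields are invented for illustration.

    #include <stddef.h>
    #include <stdio.h>

    /* Minimal stand-in for an intrusive list node. */
    struct list_node {
        struct list_node *next;
    };

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct ioend {                  /* toy analogue of a writeback completion */
        long offset;
        long size;
        struct list_node node;      /* hook used only for batching */
    };

    static void batch_add(struct list_node **batch, struct ioend *io)
    {
        io->node.next = *batch;     /* push onto the pending batch */
        *batch = &io->node;
    }

    /* One traversal handles every queued completion instead of one at a time. */
    static void batch_process(struct list_node *batch)
    {
        for (struct list_node *n = batch; n; n = n->next) {
            struct ioend *io = container_of(n, struct ioend, node);
            printf("post-writeback update: offset=%ld size=%ld\n",
                   io->offset, io->size);
        }
    }

    int main(void)
    {
        struct ioend a = { 0, 4096 }, b = { 4096, 8192 }, c = { 12288, 4096 };
        struct list_node *batch = NULL;

        batch_add(&batch, &a);
        batch_add(&batch, &b);
        batch_add(&batch, &c);
        batch_process(batch);       /* processed most-recently-added first */
        return 0;
    }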

/linux-6.12.1/Documentation/RCU/

  RTFP.txt
    36    this paper helped inspire the update-side batching used in the later
    2382  RCU updates, RCU grace-period batching, update overhead,
    2663  RCU updates, RCU grace-period batching, update overhead,

  whatisRCU.rst
    391   implementations of the RCU infrastructure make heavy use of batching in

/linux-6.12.1/Documentation/RCU/Design/Requirements/

  Requirements.rst
    1217  synchronize_rcu() are required to use batching optimizations so that
    1560  requirement is another factor driving batching of grace periods, but it
    1577  complete more quickly, but at the cost of restricting RCU's batching
    2280  must *decrease* the per-operation overhead, witness the batching

/linux-6.12.1/Documentation/admin-guide/

  kernel-parameters.txt
    5649  callback batching for call_rcu_tasks().
    5651  of zero will disable batching. Batching is
    5656  Trace asynchronous callback batching for
    5659  disable batching. Batching is always disabled

/linux-6.12.1/drivers/scsi/aic7xxx/

  aic79xx.seq
    365   * our batching and round-robin selection scheme

/linux-6.12.1/Documentation/filesystems/xfs/

  xfs-delayed-logging-design.rst
    357   in memory - batching them, if you like - to minimise the impact of the log IO on