Lines matching full:throughput

28 * to distribute the device throughput among processes as desired,
29 * without any distortion due to throughput fluctuations, or to device
34 * guarantees that each queue receives a fraction of the throughput
37 * processes issuing sequential requests (to boost the throughput),
76 * preserving both a low latency and a high throughput on NCQ-capable,
81 * the maximum-possible throughput at all times, then do switch off
190 * writes to steal I/O throughput from reads.
240 * because it is characterized by limited throughput and apparently
320 * a) unjustly steal throughput from applications that may actually need
323 * in loss of device throughput with most flash-based storage, and may
349 * throughput-friendly I/O operations. This is even more true if BFQ
823 * must receive the same share of the throughput (symmetric scenario),
825 * throughput lower than or equal to the share that every other active
828 * throughput even if I/O dispatching is not plugged when bfqq remains
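
The fragments from source lines 823-828 revolve around the symmetric scenario: when every active queue must receive the same share of the throughput, plugging dispatch on queue expiration is unnecessary. Below is a minimal standalone sketch of that symmetry test; toy_queue and toy_symmetric_scenario are invented names, not the scheduler's actual API.

	#include <stdbool.h>

	/* Illustrative stand-ins for per-queue scheduler state (invented). */
	struct toy_queue {
		int weight;   /* relative share this queue is entitled to */
		bool active;
	};

	/*
	 * The scenario is symmetric when all active queues have the same
	 * weight, i.e. each must receive the same share of the throughput.
	 * Per the fragments above, guarantees then hold even if dispatching
	 * is not plugged when a queue empties.
	 */
	static bool toy_symmetric_scenario(const struct toy_queue *queues, int nr)
	{
		int ref_weight = -1;
		int i;

		for (i = 0; i < nr; i++) {
			if (!queues[i].active)
				continue;
			if (ref_weight < 0)
				ref_weight = queues[i].weight;
			else if (queues[i].weight != ref_weight)
				return false; /* differentiated weights */
		}
		return true;
	}
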
1290 * throughput: the quicker the requests of the activated queues are
1296 * weight-raising these new queues just lowers throughput in most
1320 * idling depending on which choice boosts the throughput more. The
1574 * I/O, which may in turn cause loss of throughput. Finally, there may
1691 * budget. Do not care about throughput consequences, in bfq_update_bfqq_wr_on_rq_arrival()
1934 * guarantees or throughput. As for guarantees, we care in bfq_bfqq_handle_idle_busy_switch()
1964 * As for throughput, we ask bfq_better_to_idle() whether we in bfq_bfqq_handle_idle_busy_switch()
1967 * boost throughput or to preserve service guarantees. Then in bfq_bfqq_handle_idle_busy_switch()
1969 * would certainly lower throughput. We may end up in this in bfq_bfqq_handle_idle_busy_switch()
2021 * throughput, as explained in detail in the comments in in bfq_reset_inject_limit()
2089 * A remarkable throughput boost can be reached by unconditionally
2092 * plugged for bfqq. In addition to boosting throughput, this
2116 * The sooner a waker queue is detected, the sooner throughput can be
2148 * doesn't hurt throughput that much. The condition below makes sure in bfq_check_waker()
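
Source lines 2089-2148 describe waker detection: while bfqq's dispatch is plugged, the I/O of the queue that wakes bfqq up can be injected unconditionally, boosting throughput without delaying bfqq. A hedged sketch of that injection choice follows; the toy_* names stand in for the real per-queue state.

	#include <stddef.h>
	#include <stdbool.h>

	struct toy_request {
		long sector;
	};

	/* Invented per-queue state: a queue, its detected waker, plug status. */
	struct toy_queue {
		struct toy_queue *waker;     /* queue whose completions wake this one */
		struct toy_request *next_rq; /* next pending request, if any */
		bool plugged;                /* dispatch plugged, idling for more I/O */
	};

	/*
	 * While bfqq is plugged it has nothing to dispatch, and the waker's
	 * completions are what will generate bfqq's next I/O; injecting the
	 * waker's requests therefore boosts throughput without delaying bfqq.
	 */
	static struct toy_request *toy_pick_injected_request(struct toy_queue *bfqq)
	{
		if (bfqq->plugged && bfqq->waker && bfqq->waker->next_rq)
			return bfqq->waker->next_rq;
		return NULL; /* nothing that is safe to inject */
	}
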
2734 * the best possible order for throughput. in bfq_find_close_cooperator()
2803 * are likely to increase the throughput. in bfq_setup_merge()
2957 * throughput, it must have many requests enqueued at the same in bfq_setup_cooperator()
2963 * the throughput reached by the device is likely to be the in bfq_setup_cooperator()
2967 * terms of throughput. Merging tends to make many workloads in bfq_setup_cooperator()
2976 * for BFQ to let the device reach a high throughput. in bfq_setup_cooperator()
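
Lines 2734-2976 concern queue merging in bfq_setup_cooperator(): merging cooperating queues pays off on devices that reach a high throughput only with many close requests enqueued together. A simplified sketch of that trade-off, under the assumption, hinted at in the fragments, that fast NCQ drives lose little by keeping the queues separate; all names are illustrative.

	#include <stdbool.h>

	/* Invented device descriptor; not the kernel's request_queue flags. */
	struct toy_device {
		bool rotational; /* seek time dominates: request order matters */
		bool ncq;        /* drive reorders many queued requests itself */
	};

	/*
	 * Merging two cooperating queues pays off when the device reaches a
	 * high throughput only if many close requests are enqueued together,
	 * typically a rotational drive without internal queueing. On a fast
	 * NCQ SSD little throughput is lost by serving the queues separately,
	 * while merging can degrade service guarantees.
	 */
	static bool toy_worth_merging(const struct toy_device *dev,
				      bool queues_issue_close_requests)
	{
		return queues_issue_close_requests &&
		       dev->rotational && !dev->ncq;
	}
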
3285 * budget. This prevents seeky processes from lowering the throughput.
3389 * its reserved share of the throughput (in particular, it is in bfq_arm_slice_timer()
3412 * this maximises throughput with sequential workloads.
3421 * Update parameters related to throughput and responsiveness, as a
3679 * throughput concerns, but to preserve the throughput share of
3691 * determine also the actual throughput distribution among
3693 * concern about per-process throughput distribution, and
3696 * scheduler is likely to coincide with the desired throughput
3699 * (i-a) each of these processes must get the same throughput as
3703 * throughput than any of the other processes;
3712 * same throughput. This is exactly the desired throughput
3719 * that bfqq receives its assigned fraction of the device throughput
3722 * The problem is that idling may significantly reduce throughput with
3726 * throughput, it is important to check conditions (i-a), (i-b) and
3742 * share of the throughput even after being dispatched. In this
3747 * guaranteed its fair share of the throughput (basically because
3775 * risk of getting less throughput than its fair share.
3779 * throughput. This mechanism and its benefits are explained
3816 * part) without minimally sacrificing throughput. And, if
3818 * this device is probably a high throughput.
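
The long run of fragments from source lines 3679-3818 makes one argument: idling is often not done for throughput at all, but to preserve each process's assigned fraction of the throughput, and since idling may itself reduce throughput, conditions (i-a), (i-b) and the symmetry of the scenario are checked first. A compact sketch of the resulting guarantee check, using an invented toy_sched structure:

	#include <stdbool.h>

	/* Invented scheduler-wide state. */
	struct toy_sched {
		bool symmetric; /* all active queues entitled to equal shares */
	};

	/*
	 * Idling costs throughput on devices that reorder internally, so it
	 * is worth paying only when needed for guarantees: in a symmetric
	 * scenario every queue gets its fair share anyway, while an
	 * asymmetric one (differentiated weights, weight-raised queues, ...)
	 * requires plugging dispatch to protect the expiring queue's share.
	 */
	static bool toy_idling_needed_for_guarantees(const struct toy_sched *sd)
	{
		return !sd->symmetric;
	}
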
3992 * for throughput. in __bfq_bfqq_recalc_budget()
4016 * the throughput, as discussed in the in __bfq_bfqq_recalc_budget()
4031 * the chance to boost the throughput if this in __bfq_bfqq_recalc_budget()
4045 * candidate to boost the disk throughput. in __bfq_bfqq_recalc_budget()
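
Lines 3992-4045, from __bfq_bfqq_recalc_budget(), sketch the budget feedback loop: a process that exhausts its budget is likely sequential and a good candidate to boost the disk throughput, so it earns a larger budget, while idle or seeky processes get a smaller one. A deliberately simplified version of such a policy follows; the enum labels and the doubling/halving factors are assumptions, not the scheduler's exact rules.

	/* Why a queue stopped being served (invented labels). */
	enum toy_expire_reason {
		TOY_TOO_IDLE,         /* went idle before using its budget */
		TOY_BUDGET_TIMEOUT,   /* budget lasted too long: likely seeky */
		TOY_BUDGET_EXHAUSTED, /* used it all: greedy, likely sequential */
	};

	/*
	 * A process that exhausts its budget is probably issuing sequential
	 * I/O and is a good candidate to boost the disk throughput, so its
	 * budget grows; idle or seeky processes get a smaller budget so they
	 * have fewer chances to lower the throughput for everyone.
	 */
	static int toy_recalc_budget(int budget, enum toy_expire_reason reason,
				     int max_budget)
	{
		switch (reason) {
		case TOY_BUDGET_EXHAUSTED:
			budget *= 2; /* reward greedy sequential I/O */
			break;
		case TOY_TOO_IDLE:
		case TOY_BUDGET_TIMEOUT:
			budget /= 2; /* rein in idle or seeky queues */
			break;
		}
		return budget < max_budget ? budget : max_budget;
	}
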
4127 * their chances to lower the throughput. More details in the comments
4238 * throughput with the I/O of the application (e.g., because the I/O
4329 * tends to lower the throughput). In addition, this time-charging
4475 * only to be kicked off for preserving a high throughput.
4508 * boosts the throughput. in idling_boosts_thr_without_issues()
4511 * idling is virtually always beneficial for the throughput if: in idling_boosts_thr_without_issues()
4521 * throughput even with sequential I/O; rather it would lower in idling_boosts_thr_without_issues()
4522 * the throughput in proportion to how fast the device in idling_boosts_thr_without_issues()
4545 * of the device throughput proportional to their high in idling_boosts_thr_without_issues()
4573 * device idling plays a critical role for both throughput boosting
4578 * beneficial for throughput or, even if detrimental for throughput,
4580 * latency, desired throughput distribution, ...). In particular, on
4583 * device boost the throughput without causing any service-guarantee
4624 * either boosts the throughput (without issues), or is in bfq_better_to_idle()
4638 * why performing device idling is the best choice to boost the throughput
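
Lines 4573-4638 summarize the overall idling decision in bfq_better_to_idle(): idle if doing so either boosts the throughput with no side effects, or is required to preserve the desired throughput distribution even at a throughput cost. A toy version combining both tests; the device and queue fields are invented, and the guarantee check mirrors the sketch shown earlier.

	#include <stdbool.h>

	/* Invented state; see the earlier sketches for the same idea. */
	struct toy_device {
		bool rotational;
		bool ncq;
	};

	struct toy_queue {
		bool sequential; /* queue's I/O pattern looks sequential */
	};

	struct toy_sched {
		bool symmetric;
	};

	/*
	 * Idle if doing so either boosts the throughput without issues
	 * (keeping a sequential queue in place on a drive where contiguous
	 * service is what produces throughput), or is needed to preserve
	 * the desired throughput distribution, even at some throughput cost.
	 */
	static bool toy_better_to_idle(const struct toy_device *dev,
				       const struct toy_sched *sd,
				       const struct toy_queue *bfqq)
	{
		bool boosts_thr = bfqq->sequential && dev->rotational && !dev->ncq;
		bool needed_for_guarantees = !sd->symmetric;

		return boosts_thr || needed_for_guarantees;
	}
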
4726 * drive reach a very high throughput, even if in bfq_choose_bfqq_for_injection()
4875 * provide a reasonable throughput. in bfq_select_queue()
4891 * throughput and is possible. in bfq_select_queue()
4932 * throughput. The best action to take is therefore to in bfq_select_queue()
4952 * bfqq delivers more throughput when served without in bfq_select_queue()
4955 * count more than overall throughput, and may be in bfq_select_queue()
4976 * reasons. First, throughput may be low because the in bfq_select_queue()
5228 * throughput. in __bfq_dispatch_request()
5710 * Many throughput-sensitive workloads are made of several parallel
5719 * throughput, and not detrimental for service guarantees. The
5726 * throughput of the flows and task-wide I/O latency. In particular,
5747 * with ten random readers on /dev/nullb shows a throughput boost of
5749 * the total per-request processing time, the above throughput boost
5776 * underutilized, and throughput may decrease. in bfq_do_or_sched_stable_merge()
5780 * throughput-beneficial if not merged. Currently this is in bfq_do_or_sched_stable_merge()
5782 * such a drive, not merging bfqq is better for throughput if in bfq_do_or_sched_stable_merge()
5804 * throughput benefits compared with in bfq_do_or_sched_stable_merge()
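
Lines 5710-5804 describe stable merging: throughput-sensitive workloads made of several parallel, similarly-behaving flows (the dbench-style case measured on /dev/nullb above) gain throughput when their short-lived queues are merged pre-emptively, except on fast NCQ SSDs, where separate queues keep the drive's internal queue full. A hedged sketch of that decision with invented names:

	#include <stdbool.h>

	/* Invented device descriptor. */
	struct toy_device {
		bool fast_ncq_ssd; /* keeping queues separate keeps it busy */
	};

	/*
	 * Parallel, similarly-behaving flows (e.g. dbench's many short-lived
	 * processes) gain throughput when their queues are merged into one
	 * stable queue early on; on a fast NCQ SSD the drive's internal
	 * queue would be underutilized by the merge, so there it is skipped.
	 */
	static bool toy_do_stable_merge(const struct toy_device *dev,
					bool resembles_existing_flow)
	{
		return resembles_existing_flow && !dev->fast_ncq_ssd;
	}
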
6014 * and in a severe loss of total throughput. in bfq_update_has_short_ttime()
6040 * performed at all times, and throughput gets boosted. in bfq_update_has_short_ttime()
6059 * to boost throughput more effectively, by injecting the I/O in bfq_update_has_short_ttime()
6096 * - we are idling to boost throughput, and in bfq_rq_enqueued()
6413 * control troubles than throughput benefits. Then reset in bfq_completed_request()
6492 * and the throughput is not affected. In contrast, if BFQ is not
6503 * To counter this loss of throughput, BFQ implements a "request
6507 * both boost throughput and not break bfqq's bandwidth and latency
6550 * set to 1, to start boosting throughput, and to prepare the
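
Finally, lines 6492-6550 outline the request-injection mechanism: the injection limit starts at 1 to begin boosting throughput, then grows or shrinks depending on whether injected requests leave bfqq's measured service time unaffected. A self-contained sketch of such a feedback update; the field names and the exact back-off rule are assumptions.

	/* Invented injection-control state. */
	struct toy_inject_state {
		unsigned int limit;             /* requests injectable while waiting */
		unsigned long long baseline_ns; /* bfqq service time, no injection */
	};

	/*
	 * The limit starts at 1 to begin boosting throughput, then adapts:
	 * if injected requests leave bfqq's measured service time unaffected,
	 * the limit grows; if the service time inflates, injection is eating
	 * into bfqq's bandwidth and latency guarantees and the limit backs off.
	 */
	static void toy_update_inject_limit(struct toy_inject_state *st,
					    unsigned long long last_serv_time_ns)
	{
		if (st->limit == 0) {
			st->limit = 1; /* start boosting throughput */
			return;
		}
		if (last_serv_time_ns <= st->baseline_ns)
			st->limit++;   /* injection is free: allow more */
		else if (st->limit > 1)
			st->limit--;   /* guarantees at risk: back off */
	}
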