worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs.

* Use per-CPU unified worker pools shared by all wq to provide
  flexible level of concurrency on demand without wasting a lot of
  resource.

worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Each per-CPU BH worker pool contains only one pseudo worker which represents
the BH execution context.

things like CPU locality, concurrency limits, priority and more. To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either normal or highpri worker-pool that
is associated to the CPU the issuer is running on.
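
As a hedged illustration (``my_wq`` and ``my_work`` are hypothetical
identifiers, not from this document), ``queue_work()`` targets the pool of
the issuing CPU while ``queue_work_on()`` overrides the placement::

  /* Default placement: the worker-pool of the CPU we are running on. */
  queue_work(my_wq, &my_work);

  /* Explicit override: queue on CPU 2's worker-pool instead. */
  queue_work_on(2, my_wq, &my_work);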
Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of the currently runnable workers. Generally, work items are
not expected to hog a CPU and consume many cycles. That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items.

wq's that have a rescue-worker reserved for execution under memory
pressure. Else it is possible that the worker-pool deadlocks waiting
for execution contexts to free up.
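
As a minimal sketch (the workqueue name here is made up), such a workqueue
reserves its rescuer by passing ``WQ_MEM_RECLAIM`` to ``alloc_workqueue()``::

  static struct workqueue_struct *reclaim_wq;

  static int reclaim_init(void)
  {
          /*
           * WQ_MEM_RECLAIM reserves a rescuer thread so that queued work
           * items can still make forward progress when new workers cannot
           * be created under memory pressure.
           */
          reclaim_wq = alloc_workqueue("reclaim_wq", WQ_MEM_RECLAIM, 0);
          return reclaim_wq ? 0 : -ENOMEM;
  }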
``alloc_workqueue()`` allocates a wq. The original ``create_*workqueue()``
functions are deprecated and scheduled for
removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``.
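
A hedged end-to-end sketch of the API (every identifier below is
hypothetical)::

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;

  static void my_work_fn(struct work_struct *work)
  {
          /* runs in process context on a worker of my_wq's backing pool */
  }
  static DECLARE_WORK(my_work, my_work_fn);

  static int my_subsystem_init(void)
  {
          /* @name, @flags, @max_active; 0 selects the default limit */
          my_wq = alloc_workqueue("my_wq", 0, 0);
          if (!my_wq)
                  return -ENOMEM;

          queue_work(my_wq, &my_work);
          return 0;
  }

On teardown, ``destroy_workqueue()`` flushes the workqueue and releases its
resources.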
``flags``
---------
``WQ_BH``
  BH workqueues are always per-CPU and all BH work items are executed in the
  queueing CPU's softirq context in the queueing order.

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU. This makes the wq behave as a simple execution
  context provider without concurrency management. The unbound
  worker-pools try to start execution of work items as soon as
  possible. Unbound wq sacrifices locality but is useful for
  the following cases.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu. Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the concurrency
  level. In other words, runnable CPU intensive work items will not
  prevent other work items in the same
  worker-pool from starting execution. This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the concurrency
  level, start of their executions is still regulated by the concurrency
  management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.
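
A hedged sketch of how these flags combine at allocation time (the
workqueue names are made up; error handling omitted)::

  struct workqueue_struct *crypt_wq, *scan_wq, *bh_wq;

  /* per-CPU, elevated nice level, exempt from concurrency management */
  crypt_wq = alloc_workqueue("crypt", WQ_HIGHPRI | WQ_CPU_INTENSIVE, 0);

  /* unbound: plain execution contexts without concurrency management */
  scan_wq = alloc_workqueue("scan", WQ_UNBOUND, 0);

  /* BH: work items run in the queueing CPU's softirq context */
  bh_wq = alloc_workqueue("bh", WQ_BH, 0);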
``max_active``
--------------
``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq. For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.

may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

* Unless there is a specific need, using 0 for @max_active is
  recommended (see the sketch below).
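
For example (a hedged sketch; ``throttled_wq`` is a hypothetical name), a
workqueue that must never run more than 16 work items concurrently per CPU
would pass 16 explicitly::

  /* cap at 16 concurrently executing work items per CPU */
  throttled_wq = alloc_workqueue("throttled", 0, 16);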
level of locality in wq operations and work item execution.

An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, if a workqueue is using the default affinity
scope of "cache", its workers are grouped along the last level cache
boundaries.

``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by a
  worker on the same CPU. This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``cache``
  CPUs are grouped according to cache boundaries. Which specific cache
  boundary is used is determined by the arch code. L3 is used in a lot of
  cases. This is the default affinity scope.

The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``.
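
A hedged sketch of the latter, based on the in-kernel attrs interface
(whether these helpers are callable from your context may vary; ``wq`` is
assumed to be an existing unbound workqueue)::

  static int set_cpu_scope(struct workqueue_struct *wq)
  {
          struct workqueue_attrs *attrs;
          int ret;

          attrs = alloc_workqueue_attrs();
          if (!attrs)
                  return -ENOMEM;

          /* switch to the "cpu" scope, keeping work next to the issuer */
          attrs->affn_scope = WQ_AFFN_CPU;
          ret = apply_workqueue_attrs(wq, attrs);
          free_workqueue_attrs(attrs);
          return ret;
  }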
item starts execution, workqueue makes a best-effort attempt to ensure
locality while still being able to utilize other CPUs if necessary and
available.

kernel, there exists a pronounced trade-off between locality and utilization
necessitating explicit configurations when workqueues are heavily used.

Higher locality leads to higher efficiency where more work is performed for
the same number of consumed CPU cycles. However, higher locality may also
cause lower overall system utilization if the work items are not spread
enough across the affinity scopes by the issuers. The following performance
testing with dm-crypt clearly illustrates this trade-off.

The tests are run on a CPU with 12-cores/24-threads split across four L3
caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency.
``/dev/dm-0`` is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and
opened with ``cryptsetup`` with default settings.

Scenario 1: Enough issuers and work spread across the machine
-------------------------------------------------------------
The command used::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512
There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
makes ``fio`` generate and read back the content each time which makes
execution locality matter between the issuer and ``kcryptd``. The following
are the measured bandwidths and CPU utilizations for different affinity
scope settings on ``kcryptd``.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
With enough issuers spread across the system, there is no downside to
"cache", strict or otherwise. All three configurations saturate the whole
machine but the cache-affine ones outperform by 0.6% thanks to improved
locality.

Scenario 2: Fewer issuers, enough work for saturation
-----------------------------------------------------
The command used::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
The only difference from the previous scenario is ``--numjobs=8``. There are
a third of the issuers but is still enough total work to saturate the
system.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
Scenario 3: Even fewer issuers, not enough work to saturate
-----------------------------------------------------------
The command used::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
Again, the only difference is ``--numjobs=4``. With the number of issuers
reduced to four, there now isn't enough work to saturate the whole system
and the bandwidth becomes dependent on completion latencies.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
Now, the trade-off between locality and utilization is clearer. "cache" shows
a 2% bandwidth loss compared to "system" and "cache (strict)" a whopping 20%.

Conclusion and Recommendations
------------------------------
While the loss of work-conservation in certain scenarios hurts, it is a lot
better than "cache (strict)" and maximizing workqueue utilization is
unlikely to be the common case anyway.

* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the
  latter and an unbound workqueue provides a lot more flexibility.

* The loss of work-conservation in non-strict affinity scopes is likely
  work-conservation in most cases. As such, it is possible that future
Output of ``tools/workqueue/wq_dump.py`` (abridged)::

  pod_node [0]=-1

  pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
  pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
  pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
  pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
  pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c

  Workqueue CPU -> pool
Output of ``tools/workqueue/wq_monitor.py events`` taken at two points in
time (abridged)::

  $ tools/workqueue/wq_monitor.py events

                              total  infl  CPUtime  CPUhog CMW/RPR  mayday rescued
  events                      18545     0      6.1       0       5       -       -
  events_highpri                  8     0      0.0       0       0       -       -
  events_long                     3     0      0.0       0       0       -       -
  events_unbound              38306     0      0.1       -       7       -       -
  events_freezable                0     0      0.0       0       0       -       -
  events_power_efficient      29598     0      0.2       0       0       -       -
  events_freezable_pwr_ef        10     0      0.0       0       0       -       -
  sock_diag_events                0     0      0.0       0       0       -       -

                              total  infl  CPUtime  CPUhog CMW/RPR  mayday rescued
  events                      18548     0      6.1       0       5       -       -
  events_highpri                  8     0      0.0       0       0       -       -
  events_long                     3     0      0.0       0       0       -       -
  events_unbound              38322     0      0.1       -       7       -       -
  events_freezable                0     0      0.0       0       0       -       -
  events_power_efficient      29603     0      0.2       0       0       -       -
  events_freezable_pwr_ef        10     0      0.0       0       0       -       -
  sock_diag_events                0     0      0.0       0       0       -       -
Non-reentrance Conditions
=========================
Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

1. The work function hasn't been changed.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed to
be executed by at most one worker system-wide at any given time.
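
A hedged sketch of what this guarantee means in practice (all identifiers
here are hypothetical)::

  static void my_work_fn(struct work_struct *work);
  static DECLARE_WORK(my_work, my_work_fn);

  /*
   * May be called concurrently from any number of CPUs. Because the work
   * function is never changed, the item is only ever queued to system_wq,
   * and it is never reinitiated, my_work_fn() executes on at most one
   * worker system-wide at any given time.
   */
  void on_event(void)
  {
          queue_work(system_wq, &my_work);
  }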
.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c