
thread system-wide.  A single MT wq needed to keep around the same
* Use per-CPU unified worker pools shared by all wq to provide
worker-pools.
The cmwq design differentiates between the user-facing workqueues that
which manages worker-pools and processes the queued work items.
There are two worker-pools, one for normal work items and the other
worker-pools to serve work items queued on unbound workqueues - the
Each per-CPU BH worker pool contains only one pseudo worker which represents
When a work item is queued to a workqueue, the target worker-pool is
and appended on the shared worklist of the worker-pool. For example,
be queued on the worklist of either normal or highpri worker-pool that
Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
workers on the CPU, the worker-pool doesn't start execution of a new
wq's that have a rescue-worker reserved for execution under memory
pressure. Else it is possible that the worker-pool deadlocks waiting
removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
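As a hypothetical illustration of the ``alloc_workqueue()`` interface
described above: the sketch below creates a workqueue, queues one work item
on it, and tears it down. It is a kernel-side fragment, not standalone
runnable code, and the ``mydrv`` names and the ``WQ_UNBOUND`` choice are
invented for this example.

.. code-block:: c

   #include <linux/workqueue.h>

   static struct workqueue_struct *mydrv_wq;
   static struct work_struct mydrv_work;

   static void mydrv_work_fn(struct work_struct *work)
   {
           /* Runs in process context on a worker thread. */
   }

   static int mydrv_setup(void)
   {
           /* The three arguments: @name, @flags and @max_active
            * (0 selects the default max_active). */
           mydrv_wq = alloc_workqueue("mydrv", WQ_UNBOUND, 0);
           if (!mydrv_wq)
                   return -ENOMEM;

           INIT_WORK(&mydrv_work, mydrv_work_fn);
           queue_work(mydrv_wq, &mydrv_work);
           return 0;
   }

   static void mydrv_teardown(void)
   {
           /* Drains pending work items and frees the workqueue. */
           destroy_workqueue(mydrv_wq);
   }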
---------
workqueues are always per-CPU and all BH work items are executed in the
worker-pools which host workers that are not bound to any
worker-pools try to start execution of work items as soon as
worker-pool of the target cpu. Highpri worker-pools are
Note that normal and highpri worker-pools don't interact with
worker-pool from starting execution. This is useful for bound
non-CPU-intensive work items can delay execution of CPU
--------------
at the same time per CPU. This is always a per-CPU attribute, even for
Affinity Scopes
===============
An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, if a workqueue is using the default affinity
Workqueue currently supports the following affinity scopes.
worker on the same CPU. This makes unbound workqueues behave as per-cpu
cases. This is the default affinity scope.
The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's affinity
If ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope
Read to see the current affinity scope. Write to change.
0 by default indicating that affinity scopes are not strict. When a work
item starts execution, workqueue makes a best-effort attempt to ensure
that the worker is inside its affinity scope, which is called
scope. This may be useful when crossing affinity scopes has other
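Assuming a workqueue created with ``WQ_SYSFS`` (the ``mydrv`` name below is
illustrative, not from the text above), the two interface files can be
inspected and changed from the shell. This is a configuration sketch, not
output captured from a live system:

.. code-block:: shell

   # Show the current affinity scope; when set to "default", the read
   # also shows the effective scope in parentheses, e.g. "default (cache)".
   cat /sys/devices/virtual/workqueue/mydrv/affinity_scope

   # Switch this workqueue to the "numa" scope.
   echo numa > /sys/devices/virtual/workqueue/mydrv/affinity_scope

   # Make the scope strict so workers always stay inside it.
   echo 1 > /sys/devices/virtual/workqueue/mydrv/affinity_strict

   # "default" follows the workqueue.default_affinity_scope module parameter.
   echo default > /sys/devices/virtual/workqueue/mydrv/affinity_scope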
Affinity Scopes and Performance
===============================
kernel, there exists a pronounced trade-off between locality and utilization
enough across the affinity scopes by the issuers. The following performance
testing with dm-crypt clearly illustrates this trade-off.

The tests are run on a CPU with 12-cores/24-threads split across four L3
``/dev/dm-0`` is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and
-------------------------------------------------------------

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512
There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
are the read bandwidths and CPU utilizations depending on different affinity
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
machine but the cache-affine ones outperform by 0.6% thanks to improved
-----------------------------------------------------

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
The only difference from the previous scenario is ``--numjobs=8``. There are
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
-----------------------------------------------------------

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
Again, the only difference is ``--numjobs=4``. With the number of issuers
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
------------------------------

In the above experiments, the efficiency advantage of the "cache" affinity

While the loss of work-conservation in certain scenarios hurts, it is a lot
affinity scope for unbound pools.
* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the

* Affinity scopes were introduced in Linux v6.5. To emulate the previous
  behavior, use strict "numa" affinity scope.

* The loss of work-conservation in non-strict affinity scopes is likely
  work-conservation in most cases. As such, it is possible that future
Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
  Affinity Scopes
  pod_node [0]=-1
  pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
  pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
  pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
  pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
  pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c

  Workqueue CPU -> pool
  events                      18545     0      6.1       0       5       -       -
  events_highpri                  8     0      0.0       0       0       -       -
  events_long                     3     0      0.0       0       0       -       -
  events_unbound              38306     0      0.1       -       7       -       -
  events_freezable                0     0      0.0       0       0       -       -
  events_power_efficient      29598     0      0.2       0       0       -       -
  events_freezable_pwr_ef        10     0      0.0       0       0       -       -
  sock_diag_events                0     0      0.0       0       0       -       -

  events                      18548     0      6.1       0       5       -       -
  events_highpri                  8     0      0.0       0       0       -       -
  events_long                     3     0      0.0       0       0       -       -
  events_unbound              38322     0      0.1       -       7       -       -
  events_freezable                0     0      0.0       0       0       -       -
  events_power_efficient      29603     0      0.2       0       0       -       -
  events_freezable_pwr_ef        10     0      0.0       0       0       -       -
  sock_diag_events                0     0      0.0       0       0       -       -
Non-reentrance Conditions
=========================

Workqueue guarantees that a work item is non-reentrant if the following
executed by at most one worker system-wide at any given time.
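One practical consequence of the guarantee above: because a given work item
is executed by at most one worker system-wide, the handler never races with
itself, so state touched only by that handler needs no self-locking. The
kernel-side sketch below (not standalone-runnable; the ``mydrv`` names are
invented for this example) relies on exactly that:

.. code-block:: c

   #include <linux/workqueue.h>

   struct mydrv_stats {
           struct work_struct work;
           u64 processed;          /* only ever touched by the work handler */
   };

   static void mydrv_stats_fn(struct work_struct *work)
   {
           struct mydrv_stats *s = container_of(work, struct mydrv_stats, work);

           /*
            * Non-reentrance guarantee: this function is never run by two
            * workers for the same &s->work at the same time, so a plain
            * increment of s->processed is safe without a lock.
            */
           s->processed++;
   }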
.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c