Lines Matching full:workqueues
22 * pools for workqueues which are not bound to any specific CPU - the
337 struct list_head list; /* PR: list of all workqueues */
374 * the workqueues list without grabbing wq_pool_mutex.
375 * This is used to dump all workqueues from sysrq.
386 * Each pod type describes how CPUs should be grouped for unbound workqueues.
440 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
446 static LIST_HEAD(workqueues); /* PR: list of all workqueues */
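The hits above for the global workqueues list, wq_pool_mutex, and the sysrq dump path illustrate the locking rule behind the "PR:" annotations: writers hold wq_pool_mutex, while readers may traverse the list under RCU instead. A minimal sketch of that read-side pattern, modeled on show_all_workqueues(); it assumes the context of kernel/workqueue.c, where the static workqueues list is defined, and the helper name is illustrative:

static void dump_wq_names(void)
{
        struct workqueue_struct *wq;

        /* RCU read-side lock suffices for traversal; wq_pool_mutex not needed */
        rcu_read_lock();
        list_for_each_entry_rcu(wq, &workqueues, list)
                pr_info("workqueue %s\n", wq->name);
        rcu_read_unlock();
}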
1294 * workqueues as appropriate. To avoid flooding the console, each violating work
1546 * - %NULL for per-cpu workqueues as they don't need to use shared nr_active.
1806 * This function should only be called for ordered workqueues where only the
1928 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock.
1939 * workqueues. in pwq_dec_nr_active()
1982 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock
2455 * This current implementation is specific to unbound workqueues. in queue_work_node()
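The queue_work_node() hit above notes that the current implementation is specific to unbound workqueues. A minimal sketch of a caller, with illustrative names that are not taken from the source:

#include <linux/errno.h>
#include <linux/workqueue.h>

static void my_node_work_fn(struct work_struct *work)
{
        /* executed by a worker that prefers the requested NUMA node */
}

static DECLARE_WORK(my_node_work, my_node_work_fn);
static struct workqueue_struct *my_wq;

static int my_init(void)
{
        /* queue_work_node() expects an unbound workqueue */
        my_wq = alloc_workqueue("my_unbound_wq", WQ_UNBOUND, 0);
        if (!my_wq)
                return -ENOMEM;

        /* prefer a worker on NUMA node 0; the kernel may fall back elsewhere */
        queue_work_node(0, my_wq, &my_node_work);
        return 0;
}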
3225 * workqueues), so hiding them isn't a problem. in process_one_work()
3331 * exception is work items which belong to workqueues with a rescuer which
3424 * workqueues which have works queued on the pool and let them process
3762 * BH and threaded workqueues need separate lockdep keys to avoid in insert_wq_barrier()
4155 * For single threaded workqueues the deadlock happens when the work in start_flush_work()
4157 * workqueues the deadlock happens when the rescuer stalls, blocking in start_flush_work()
5284 * For initialized ordered workqueues, there should only be one pwq in apply_wqattrs_prepare()
5333 /* only unbound workqueues can change attributes */ in apply_workqueue_attrs_locked()
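The apply_wqattrs_prepare()/apply_workqueue_attrs_locked() hits above concern ordered and unbound workqueues: an ordered workqueue keeps a single pwq so items run one at a time, and only unbound workqueues can have their attributes changed. A minimal sketch of creating an ordered workqueue, with an illustrative name:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_ordered_wq;

static int my_setup(void)
{
        /*
         * alloc_ordered_workqueue() returns an unbound workqueue that
         * executes at most one work item at a time, in queueing order.
         */
        my_ordered_wq = alloc_ordered_workqueue("my_ordered", 0);
        if (!my_ordered_wq)
                return -ENOMEM;
        return 0;
}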
5390 * may execute on any CPU. This is similar to how per-cpu workqueues behave on
5524 * Workqueues which may be used during memory reclaim should have a rescuer
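The alloc_workqueue() documentation hit above says workqueues that may be used during memory reclaim should have a rescuer. A hedged sketch of how a driver on an I/O path might request one; the name and the WQ_HIGHPRI addition are illustrative assumptions:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_io_wq;

static int my_probe(void)
{
        /*
         * WQ_MEM_RECLAIM attaches a rescuer thread so queued items can
         * still make forward progress when worker creation is blocked
         * by memory pressure.
         */
        my_io_wq = alloc_workqueue("my_io_wq", WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
        if (!my_io_wq)
                return -ENOMEM;
        return 0;
}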
5673 * BH workqueues always share a single execution context per CPU in __alloc_workqueue()
5703 * wq_pool_mutex protects the workqueues list, allocations of PWQs, in __alloc_workqueue()
5715 list_add_tail_rcu(&wq->list, &workqueues); in __alloc_workqueue()
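The __alloc_workqueue() hits above note that BH workqueues share a single execution context per CPU. Assuming a kernel with BH workqueue support (the WQ_BH flag), a minimal sketch with illustrative names:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_bh_wq;

static void my_bh_work_fn(struct work_struct *work)
{
        /* runs in softirq (BH) context on the queueing CPU */
}

static DECLARE_WORK(my_bh_work, my_bh_work_fn);

static int my_init(void)
{
        /* max_active is meaningless for BH workqueues; pass 0 */
        my_bh_wq = alloc_workqueue("my_bh_wq", WQ_BH, 0);
        if (!my_bh_wq)
                return -ENOMEM;

        queue_work(my_bh_wq, &my_bh_work);
        return 0;
}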
5907 /* max_active doesn't mean anything for BH workqueues */ in workqueue_set_max_active()
5910 /* disallow meddling with max_active for ordered workqueues */ in workqueue_set_max_active()
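The workqueue_set_max_active() hits above note that max_active is meaningless for BH workqueues and may not be changed on ordered ones. A minimal sketch of adjusting concurrency on a plain unbound workqueue, with illustrative names and values:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_wq;

static int my_tune(void)
{
        my_wq = alloc_workqueue("my_wq", WQ_UNBOUND, 16);
        if (!my_wq)
                return -ENOMEM;

        /* later, allow up to 32 in-flight work items */
        workqueue_set_max_active(my_wq, 32);
        return 0;
}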
5933 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
5944 /* min_active is only meaningful for non-ordered unbound workqueues */ in workqueue_set_min_active()
5997 * With the exception of ordered workqueues, all workqueues have per-cpu
6372 * Called from a sysrq handler and prints out all busy workqueues and pools.
6382 pr_info("Showing busy workqueues and worker pools:\n"); in show_all_workqueues()
6384 list_for_each_entry_rcu(wq, &workqueues, list) in show_all_workqueues()
6396 * Called from try_to_freeze_tasks() and prints out all freezable workqueues
6405 pr_info("Showing freezable workqueues that are still busy:\n"); in show_freezable_workqueues()
6407 list_for_each_entry_rcu(wq, &workqueues, list) { in show_freezable_workqueues()
6638 /* update pod affinity of unbound workqueues */ in workqueue_online_cpu()
6639 list_for_each_entry(wq, &workqueues, list) { in workqueue_online_cpu()
6669 /* update pod affinity of unbound workqueues */ in workqueue_offline_cpu()
6674 list_for_each_entry(wq, &workqueues, list) { in workqueue_offline_cpu()
6762 * freeze_workqueues_begin - begin freezing workqueues
6764 * Start freezing workqueues. After this function returns, all freezable
6765 * workqueues will queue new works to their inactive_works list instead of
6780 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_begin()
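The freeze_workqueues_begin() hits above describe how freezable workqueues stop executing and park newly queued work while the system is freezing. Whether a workqueue participates is chosen at allocation time; a minimal sketch, with illustrative names:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_fs_wq;

static int my_mount_setup(void)
{
        /*
         * WQ_FREEZABLE: the PM freezer stops execution across
         * suspend/hibernate; work queued while frozen waits until
         * thaw_workqueues() runs.
         */
        my_fs_wq = alloc_workqueue("my_fs_wq", WQ_FREEZABLE | WQ_UNBOUND, 0);
        if (!my_fs_wq)
                return -ENOMEM;
        return 0;
}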
6790 * freeze_workqueues_busy - are freezable workqueues still busy?
6799 * %true if some freezable workqueues are still busy. %false if freezing
6812 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_busy()
6836 * thaw_workqueues - thaw workqueues
6838 * Thaw workqueues. Normal queueing is restored and all collected
6856 list_for_each_entry(wq, &workqueues, list) { in thaw_workqueues()
6876 list_for_each_entry(wq, &workqueues, list) { in workqueue_apply_unbound_cpumask()
6967 list_for_each_entry(wq, &workqueues, list) { in wq_affn_dfl_set()
6992 * Workqueues with WQ_SYSFS flag set are visible to userland via
6993 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
6999 * Unbound workqueues have the following extra attributes.
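The sysfs hits above describe workqueues with WQ_SYSFS appearing under /sys/bus/workqueue/devices/WQ_NAME, with unbound workqueues exposing extra attributes there. A minimal sketch of opting in, with an illustrative name:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_tunable_wq;

static int my_init(void)
{
        /*
         * WQ_SYSFS | WQ_UNBOUND: exposes per-workqueue knobs such as
         * max_active plus the unbound-only attributes (e.g. nice,
         * cpumask) under /sys/bus/workqueue/devices/my_tunable_wq/.
         */
        my_tunable_wq = alloc_workqueue("my_tunable_wq",
                                        WQ_SYSFS | WQ_UNBOUND, 0);
        if (!my_tunable_wq)
                return -ENOMEM;
        return 0;
}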
7237 * The low-level workqueues cpumask is a global cpumask that limits
7238 * the affinity of all unbound workqueues. This function checks the @cpumask
7239 * and applies it to all unbound workqueues and updates all of their pwqs.
7360 * ordered workqueues. in workqueue_sysfs_register()
7701 * up. It sets up all the data structures and system workqueues and allows early
7702 * boot code to create workqueues and queue/cancel work items. Actual work item
7855 * and invoked as soon as kthreads can be created and scheduled. Workqueues have
7872 * up. Also, create a rescuer for workqueues that requested it. in workqueue_init()
7881 list_for_each_entry(wq, &workqueues, list) { in workqueue_init()
7976 * workqueue_init_topology - initialize CPU pods for unbound workqueues
7997 * Workqueues allocated earlier would have all CPUs sharing the default in workqueue_init_topology()
8001 list_for_each_entry(wq, &workqueues, list) { in workqueue_init_topology()
8016 pr_warn("WARNING: Flushing system-wide workqueues will be prohibited in near future.\n"); in __warn_flushing_systemwide_wq()