Lines Matching full:workqueue

3  * kernel/workqueue.c - generic async execution with shared worker pool
25 * Please read Documentation/core-api/workqueue.rst for details.
35 #include <linux/workqueue.h>
235 * tools/workqueue/wq_monitor.py.
251 * The per-pool workqueue. While queued, bits below WORK_PWQ_SHIFT
258 struct workqueue_struct *wq; /* I: the owning workqueue */
301 * Structure used to wait for workqueue flush.
312 * Unlike in a per-cpu workqueue where max_active limits its concurrency level
313 * on each CPU, in an unbound workqueue, max_active applies to the whole system.
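The distinction described above (per-CPU max_active versus system-wide max_active) can be sketched as a plain userspace C model. The struct and function names here (`percpu_wq`, `unbound_wq`, `*_try_start`) are hypothetical illustrations of the bookkeeping, not kernel APIs:

```c
#include <stdbool.h>

#define NR_CPUS 4

/* Per-cpu workqueue model: each CPU has its own active counter,
 * so max_active bounds concurrency on each CPU independently. */
struct percpu_wq {
	int max_active;
	int nr_active[NR_CPUS];
};

/* Unbound workqueue model: a single shared counter, so max_active
 * bounds concurrency across the whole system. */
struct unbound_wq {
	int max_active;
	int nr_active;
};

static bool percpu_try_start(struct percpu_wq *wq, int cpu)
{
	if (wq->nr_active[cpu] >= wq->max_active)
		return false;	/* item stays queued (inactive) on this CPU */
	wq->nr_active[cpu]++;
	return true;
}

static bool unbound_try_start(struct unbound_wq *wq)
{
	if (wq->nr_active >= wq->max_active)
		return false;	/* system-wide limit reached */
	wq->nr_active++;
	return true;
}
```

With `max_active = 1`, the per-CPU model still admits one running item on each CPU, while the unbound model admits one item in total, mirroring the comment above.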
332 * The externally visible workqueue. It relays the issued work items to
370 char name[WQ_NAME_LEN]; /* I: workqueue name */
531 #include <trace/events/workqueue.h>
587 * for_each_pwq - iterate through all pool_workqueues of the specified workqueue
589 * @wq: the target workqueue
739 * unbound_effective_cpumask - effective cpumask of an unbound workqueue
740 * @wq: workqueue of interest
1091 * before the original execution finishes, workqueue will identify the
1290 * should be using an unbound workqueue instead.
1339 …printk_deferred(KERN_WARNING "workqueue: %ps hogged CPU for >%luus %llu times, consider switching … in wq_cpu_intensive_report()
1523 * As this function doesn't involve any workqueue-related locking, it
1541 * @wq: workqueue of interest
1566 * @wq: workqueue to update
1720 /* BH or per-cpu workqueue, pwq->nr_active is sufficient */ in pwq_tryinc_nr_active()
1729 * Unbound workqueue uses per-node shared nr_active $nna. If @pwq is in pwq_tryinc_nr_active()
1944 * For a percpu workqueue, it's simple. Just need to kick the first in pwq_dec_nr_active()
1953 * If @pwq is for an unbound workqueue, it's more complicated because in pwq_dec_nr_active()
1979 * decrement nr_in_flight of its pwq and handle workqueue flushing.
2193 * same workqueue.
2220 pr_warn_once("workqueue: round-robin CPU selection forced, expect performance impact\n"); in wq_select_unbound_cpu()
2252 * For a draining wq, only works from the same workqueue are in __queue_work()
2277 * For ordered workqueue, work items must be queued on the newest pwq in __queue_work()
2316 WARN_ONCE(true, "workqueue: per-cpu pwq for %s on cpu%d has 0 refcnt", in __queue_work()
2369 * @wq: workqueue to use
2431 * @wq: workqueue to use
2459 * If this is used with a per-cpu workqueue then the logic in in queue_work_node()
2532 * @wq: workqueue to use
2564 * @wq: workqueue to use
2607 * @wq: workqueue to use
2757 * create_worker - create a new workqueue worker
2776 pr_err_once("workqueue: Failed to allocate a worker ID: %pe\n", in create_worker()
2783 pr_err_once("workqueue: Failed to allocate a worker\n"); in create_worker()
2797 pr_err("workqueue: Interrupted when creating a worker thread \"%s\"\n", in create_worker()
2800 pr_err_once("workqueue: Failed to create a worker thread: %pe", in create_worker()
3243 pr_err("BUG: workqueue leaked atomic, lock or RCU: %s[%d]\n" in process_one_work()
3330 * work items regardless of their specific target workqueue. The only
3341 /* tell the scheduler that this is a workqueue worker */ in worker_thread()
3414 * Workqueue rescuer thread function. There's one rescuer for each
3415 * workqueue which has WQ_MEM_RECLAIM set.
3448 * By the time the rescuer is requested to stop, the workqueue in rescuer_thread()
3476 * Slurp in all works issued via this workqueue and in rescuer_thread()
3582 * TODO: Convert all tasklet users to workqueue and use softirq directly.
3681 * @target_wq: workqueue being flushed
3682 * @target_work: work item being flushed (NULL for workqueue flushes)
3686 * reclaiming memory or running on a workqueue which doesn't have
3702 "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps", in check_flush_dependency()
3706 "workqueue: WQ_MEM_RECLAIM %s:%ps is flushing !WQ_MEM_RECLAIM %s:%ps", in check_flush_dependency()
3801 * flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing
3802 * @wq: workqueue being flushed
3806 * Prepare pwqs for workqueue flushing.
3905 * @wq: workqueue to flush
4061 * drain_workqueue - drain a workqueue
4062 * @wq: workqueue to drain
4064 * Wait until the workqueue becomes empty. While draining is in progress,
4102 pr_warn("workqueue %s: %s() isn't complete after %u tries\n", in drain_workqueue()
4153 * single-threaded or rescuer equipped workqueue. in start_flush_work()
4188 * was queued on a BH workqueue, we also know that it was running in the in __flush_work()
4291 WARN_ONCE(true, "workqueue: work disable count overflowed\n"); in work_offqd_disable()
4299 WARN_ONCE(true, "workqueue: work disable count underflowed\n"); in work_offqd_enable()
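The two warnings above guard a per-work disable count: disable operations stack and must be balanced by matching enables, with the count held in a narrow bit field. A rough userspace model of that bookkeeping, assuming a 16-bit field (all names here are illustrative, not kernel APIs):

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed 16-bit disable-count field, as in off-queue data bit-packing. */
#define DISABLE_COUNT_MAX 0xffffu

struct offq_work {
	unsigned int disable_count;
};

/* Refuse to wrap the counter; the kernel warns once in each case. */
static bool work_disable(struct offq_work *w)
{
	if (w->disable_count == DISABLE_COUNT_MAX) {
		fprintf(stderr, "work disable count overflowed\n");
		return false;
	}
	w->disable_count++;
	return true;
}

static bool work_enable(struct offq_work *w)
{
	if (w->disable_count == 0) {
		fprintf(stderr, "work disable count underflowed\n");
		return false;
	}
	w->disable_count--;
	return true;
}
```

An underflow indicates an enable without a matching disable; an overflow indicates runaway nesting. Both are caller bugs, which is why the kernel reports them with `WARN_ONCE` rather than silently saturating.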
4359 * even if the work re-queues itself or migrates to another workqueue. On return
4367 * workqueue. Can also be called from non-hardirq atomic contexts including BH
4368 * if @work was last queued on a BH workqueue.
4441 * workqueue. Can also be called from non-hardirq atomic contexts including BH
4442 * if @work was last queued on a BH workqueue.
4522 * system workqueue and blocks until all CPUs have completed.
4641 * Some attrs fields are workqueue-only. Clear them for worker_pool's. See the
5055 * For ordered workqueue with a plugged dfl_pwq, restart it now. in pwq_release_workfn()
5162 * @attrs: the wq_attrs of the default pwq of the target workqueue
5165 * Calculate the cpumask a workqueue with @attrs should use on @pod.
5208 struct workqueue_struct *wq; /* target workqueue */
5349 * apply_workqueue_attrs - apply new workqueue_attrs to an unbound workqueue
5350 * @wq: the target workqueue
5353 * Apply @attrs to an unbound workqueue @wq. Unless disabled, this function maps
5377 * @wq: the target workqueue
5387 * Note that when the last allowed CPU of a pod goes offline for a workqueue
5389 * executing the work items for the workqueue will lose their CPU affinity and
5391 * CPU_DOWN. If a workqueue user wants strict affinity, it's the user's
5422 pr_warn("workqueue: allocation failed while updating CPU pod affinity of \"%s\"\n", in unbound_wq_update_pwq()
5492 "ordering guarantee broken for workqueue %s\n", wq->name); in alloc_and_link_pwqs()
5517 pr_warn("workqueue: max_active %d requested for %s is out of range, clamping between %d and %d\n", in wq_clamp_max_active()
5540 pr_err("workqueue: Failed to allocate a rescuer for wq \"%s\"\n", in init_rescuer()
5551 pr_err("workqueue: Failed to create a rescuer kthread for wq \"%s\": %pe", in init_rescuer()
5569 * @wq: target workqueue
5668 pr_warn_once("workqueue: name exceeds WQ_NAME_LEN. Truncating to: %s\n", in __alloc_workqueue()
5808 * destroy_workqueue - safely terminate a workqueue
5809 * @wq: target workqueue
5811 * Safely destroy a workqueue. All work currently pending will be done first.
5824 /* mark that workqueue destruction is in progress */ in destroy_workqueue()
5895 * workqueue_set_max_active - adjust max_active of a workqueue
5896 * @wq: target workqueue
5929 * workqueue_set_min_active - adjust min_active of an unbound workqueue
5930 * @wq: target unbound workqueue
5933 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
5934 * unbound workqueue is not guaranteed to be able to process max_active
5935 * interdependent work items. Instead, an unbound workqueue is guaranteed to be
5958 * Determine if %current task is a workqueue worker and what it's working on.
5961 * Return: work struct if %current task is a workqueue worker, %NULL otherwise.
5972 * current_is_workqueue_rescuer - is %current workqueue rescuer?
5974 * Determine whether %current is a workqueue rescuer. Can be used from
5977 * Return: %true if %current is a workqueue rescuer. %false otherwise.
5987 * workqueue_congested - test whether a workqueue is congested
5989 * @wq: target workqueue
5991 * Test whether @wq's cpu workqueue for @cpu is congested. There is
5998 * pool_workqueues, each with its own congested state. A workqueue being
5999 * congested on one CPU doesn't mean that the workqueue is congested on any
6089 * name of the workqueue being serviced and worker description set with
6115 * Carefully copy the associated workqueue's workfn, name and desc. in print_worker_info()
6277 * show_one_workqueue - dump state of specified workqueue
6278 * @wq: workqueue whose state will be printed
6292 if (idle) /* Nothing to print for idle workqueue */ in show_one_workqueue()
6295 pr_info("workqueue %s: flags=0x%x\n", wq->name, wq->flags); in show_one_workqueue()
6370 * show_all_workqueues - dump workqueue state
6394 * show_freezable_workqueues - dump freezable workqueue state
6924 * by any subsequent write to workqueue/cpumask sysfs file. in workqueue_unbound_exclude_cpumask()
6993 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
6996 * per_cpu RO bool : whether the workqueue is per-cpu or unbound
7229 .name = "workqueue",
7339 * workqueue_sysfs_register - make a workqueue visible in sysfs
7340 * @wq: the workqueue to register
7342 * Expose @wq in sysfs under /sys/bus/workqueue/devices.
7346 * Workqueue user should use this function directly iff it wants to apply
7347 * workqueue_attrs before making the workqueue visible in sysfs; otherwise,
7407 * @wq: the workqueue to unregister
7426 * Workqueue watchdog.
7430 * indefinitely. Workqueue stalls can be very difficult to debug as the
7431 * usual warning mechanisms don't trigger and internal workqueue state is
7434 * Workqueue watchdog monitors all worker pools periodically and dumps
7439 * "workqueue.watchdog_thresh" which can be updated at runtime through the
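The watchdog logic sketched above can be modeled in a few lines: each pool records a "touched" timestamp whenever it makes forward progress, and the watchdog flags a lockup when the elapsed time since that touch exceeds the threshold. The names below are hypothetical; the kernel tracks this per-pool in jiffies:

```c
#include <stdbool.h>

/* Toy per-pool state: last time the pool made forward progress, in ms. */
struct pool_state {
	unsigned long touched;
};

/* Lockup check: elapsed time since last progress exceeds the threshold.
 * Unsigned subtraction keeps this correct across timestamp wraparound. */
static bool pool_lockup(const struct pool_state *p,
			unsigned long now_ms,
			unsigned long thresh_ms)
{
	return now_ms - p->touched > thresh_ms;
}
```

With the default 30-second threshold, a pool untouched for longer than 30000 ms would be reported as locked up and its state dumped, as in the `"BUG: workqueue lockup"` message matched below.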
7569 pr_emerg("BUG: workqueue lockup - pool"); in wq_watchdog_timer_fn()
7672 pr_warn("workqueue: Restricting unbound_cpumask (%*pb) with %s (%*pb) leaves no CPU, ignoring\n", in restrict_unbound_cpumask()
7697 * workqueue_init_early - early init for workqueue subsystem
7699 * This is the first step of three-staged workqueue subsystem initialization and
7726 restrict_unbound_cpumask("workqueue.unbound_cpus", &wq_cmdline_cpumask); in workqueue_init_early()
7736 * If nohz_full is enabled, set power efficient workqueue as unbound. in workqueue_init_early()
7737 * This allows workqueue items to be moved to HK CPUs. in workqueue_init_early()
7852 * workqueue_init - bring workqueue subsystem fully online
7854 * This is the second step of three-staged workqueue subsystem initialization
7883 "workqueue: failed to create early rescuer for %s", in workqueue_init()
7978 * This is the third step of three-staged workqueue subsystem initialization and
7998 * worker pool. Explicitly call unbound_wq_update_pwq() on all workqueue in workqueue_init_topology()
8025 pr_warn("workqueue.unbound_cpus: incorrect CPU range, using default\n"); in workqueue_unbound_cpus_setup()
8030 __setup("workqueue.unbound_cpus=", workqueue_unbound_cpus_setup);