Searched refs:scheduler (Results 1 – 25 of 206) sorted by relevance

/linux-6.12.1/net/netfilter/ipvs/
ip_vs_sched.c
41 struct ip_vs_scheduler *scheduler) in ip_vs_bind_scheduler() argument
45 if (scheduler->init_service) { in ip_vs_bind_scheduler()
46 ret = scheduler->init_service(svc); in ip_vs_bind_scheduler()
52 rcu_assign_pointer(svc->scheduler, scheduler); in ip_vs_bind_scheduler()
65 cur_sched = rcu_dereference_protected(svc->scheduler, 1); in ip_vs_unbind_scheduler()
133 void ip_vs_scheduler_put(struct ip_vs_scheduler *scheduler) in ip_vs_scheduler_put() argument
135 if (scheduler) in ip_vs_scheduler_put()
136 module_put(scheduler->module); in ip_vs_scheduler_put()
145 struct ip_vs_scheduler *sched = rcu_dereference(svc->scheduler); in ip_vs_scheduler_err()
167 int register_ip_vs_scheduler(struct ip_vs_scheduler *scheduler) in register_ip_vs_scheduler() argument
[all …]
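
The ip_vs_sched.c hits above show the classic RCU publish/read pattern: the scheduler is installed with rcu_assign_pointer() and readers fetch it with rcu_dereference(). A minimal kernel-style sketch of that pattern, with my_service/my_sched as hypothetical stand-ins for the ipvs structures:

    #include <linux/rcupdate.h>
    #include <linux/errno.h>

    /* Hypothetical stand-ins, not the ipvs types themselves. */
    struct my_sched { int (*schedule)(void); };
    struct my_service { struct my_sched __rcu *scheduler; };

    /* Writer: publish the fully initialized scheduler; concurrent
     * readers see either the old pointer or the new one, never a
     * half-initialized object. */
    static void bind_sched(struct my_service *svc, struct my_sched *sched)
    {
        rcu_assign_pointer(svc->scheduler, sched);
    }

    /* Reader: dereference only inside an RCU read-side section. */
    static int run_sched(struct my_service *svc)
    {
        struct my_sched *sched;
        int ret = -ENOENT;

        rcu_read_lock();
        sched = rcu_dereference(svc->scheduler);
        if (sched)
            ret = sched->schedule();
        rcu_read_unlock();
        return ret;
    }
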
/linux-6.12.1/drivers/gpu/drm/i915/gvt/
sched_policy.c
134 struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; in try_to_schedule_next_vgpu() local
143 if (scheduler->next_vgpu == scheduler->current_vgpu) { in try_to_schedule_next_vgpu()
144 scheduler->next_vgpu = NULL; in try_to_schedule_next_vgpu()
152 scheduler->need_reschedule = true; in try_to_schedule_next_vgpu()
156 if (scheduler->current_workload[engine->id]) in try_to_schedule_next_vgpu()
161 vgpu_update_timeslice(scheduler->current_vgpu, cur_time); in try_to_schedule_next_vgpu()
162 vgpu_data = scheduler->next_vgpu->sched_data; in try_to_schedule_next_vgpu()
166 scheduler->current_vgpu = scheduler->next_vgpu; in try_to_schedule_next_vgpu()
167 scheduler->next_vgpu = NULL; in try_to_schedule_next_vgpu()
169 scheduler->need_reschedule = false; in try_to_schedule_next_vgpu()
[all …]
scheduler.c
292 struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler; in shadow_context_status_change() local
298 spin_lock_irqsave(&scheduler->mmio_context_lock, flags); in shadow_context_status_change()
300 scheduler->engine_owner[ring_id]) { in shadow_context_status_change()
302 intel_gvt_switch_mmio(scheduler->engine_owner[ring_id], in shadow_context_status_change()
304 scheduler->engine_owner[ring_id] = NULL; in shadow_context_status_change()
306 spin_unlock_irqrestore(&scheduler->mmio_context_lock, flags); in shadow_context_status_change()
311 workload = scheduler->current_workload[ring_id]; in shadow_context_status_change()
317 spin_lock_irqsave(&scheduler->mmio_context_lock, flags); in shadow_context_status_change()
318 if (workload->vgpu != scheduler->engine_owner[ring_id]) { in shadow_context_status_change()
320 intel_gvt_switch_mmio(scheduler->engine_owner[ring_id], in shadow_context_status_change()
[all …]
/linux-6.12.1/Documentation/block/
switching-sched.rst
5 Each io queue has a set of io scheduler tunables associated with it. These
6 tunables control how the io scheduler works. You can find these entries
16 It is possible to change the IO scheduler for a given block device on
20 To set a specific scheduler, simply do this::
22 echo SCHEDNAME > /sys/block/DEV/queue/scheduler
24 where SCHEDNAME is the name of a defined IO scheduler, and DEV is the
28 a "cat /sys/block/DEV/queue/scheduler" - the list of valid names
29 will be displayed, with the currently selected scheduler in brackets::
31 # cat /sys/block/sda/queue/scheduler
33 # echo none >/sys/block/sda/queue/scheduler
[all …]
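
The sysfs interface described above can also be driven from a program. A small userspace sketch; the sda device and the none scheduler are just examples, and writing the file needs root:

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/block/sda/queue/scheduler";
        char line[256];
        FILE *f;

        /* List the valid names; the active one is shown in brackets. */
        f = fopen(path, "r");
        if (!f) { perror(path); return 1; }
        if (fgets(line, sizeof(line), f))
            printf("available: %s", line);
        fclose(f);

        /* Select one by writing its name back, like the echo above. */
        f = fopen(path, "w");
        if (!f) { perror(path); return 1; }
        fputs("none\n", f);
        fclose(f);
        return 0;
    }
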
deadline-iosched.rst
2 Deadline IO scheduler tunables
5 This little file attempts to document how the deadline io scheduler works.
12 selecting an io scheduler on a per-device basis.
19 The goal of the deadline io scheduler is to attempt to guarantee a start
21 tunable. When a read request first enters the io scheduler, it is assigned
49 When we have to move requests from the io scheduler queue to the block
60 Sometimes it happens that a request enters the io scheduler that is contiguous
69 rbtree front sector lookup when the io scheduler merge function is called.
kyber-iosched.rst
2 Kyber I/O scheduler tunables
5 The only two tunables for the Kyber scheduler are the target latencies for
/linux-6.12.1/Documentation/scheduler/
sched-ext.rst
5 sched_ext is a scheduler class whose behavior can be defined by a set of BPF
6 programs - the BPF scheduler.
11 * The BPF scheduler can group CPUs however it sees fit and schedule them
14 * The BPF scheduler can be turned on and off dynamically anytime.
16 * The system integrity is maintained no matter what the BPF scheduler does.
21 * When the BPF scheduler triggers an error, debug information is dumped to
23 scheduler binary. The debug dump can also be accessed through the
25 triggers a debug dump. This doesn't terminate the BPF scheduler and can
47 sched_ext is used only when the BPF scheduler is loaded and running.
50 treated as ``SCHED_NORMAL`` and scheduled by CFS until the BPF scheduler is
[all …]
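
To make "behavior defined by a set of BPF programs" concrete, here is a minimal sched_ext sketch: a single enqueue callback that sends every task to the built-in global dispatch queue. It assumes the scx/common.bpf.h helpers shipped under tools/sched_ext in this tree; minimal_ops and minimal_enqueue are illustrative names, not anything from the kernel:

    #include <scx/common.bpf.h>

    char _license[] SEC("license") = "GPL";

    /* Every runnable task goes to the global FIFO DSQ with the
     * default time slice; the core picks tasks from there. */
    void BPF_STRUCT_OPS(minimal_enqueue, struct task_struct *p, u64 enq_flags)
    {
        scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
    }

    SEC(".struct_ops.link")
    struct sched_ext_ops minimal_ops = {
        .enqueue = (void *)minimal_enqueue,
        .name    = "minimal",
    };
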
sched-design-CFS.rst
12 scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. When
14 scheduler's SCHED_OTHER interactivity code. Nowadays, CFS is making room
16 Documentation/scheduler/sched-eevdf.rst.
63 previous vanilla scheduler and RSDL/SD are affected).
83 schedules (or a scheduler tick happens) the task's CPU usage is "accounted
97 other HZ detail. Thus the CFS scheduler has no notion of "timeslices" in the
98 way the previous scheduler had, and has no heuristics whatsoever. There is
103 which can be used to tune the scheduler from "desktop" (i.e., low latencies) to
105 for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
110 Due to its design, the CFS scheduler is not prone to any of the "attacks" that
[all …]
sched-nice-design.rst
6 nice-levels implementation in the new Linux scheduler.
12 scheduler, (otherwise we'd have done it long ago) because nice level
16 In the O(1) scheduler (in 2003) we changed negative nice levels to be
77 With the old scheduler, if you for example started a niced task with +1
88 The new scheduler in v2.6.23 addresses all three types of complaints:
91 enough), the scheduler was decoupled from 'time slice' and HZ concepts
94 support: with the new scheduler nice +19 tasks get a HZ-independent
96 scheduler.
99 the new scheduler makes nice(1) have the same CPU utilization effect on
101 scheduler, running a nice +10 and a nice +11 task has the same CPU
[all …]
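
The "same CPU utilization split" claim follows from CFS's geometric weight table: each nice step multiplies a task's load weight by roughly 1.25, so only the relative nice distance matters. A quick check using the nice 0, +1, +10 and +11 weights from sched_prio_to_weight[] (values quoted from memory; verify against kernel/sched/core.c):

    #include <stdio.h>

    int main(void)
    {
        double w0 = 1024, w1 = 820;    /* nice 0 vs nice +1    */
        double w10 = 110, w11 = 87;    /* nice +10 vs nice +11 */

        printf("nice 0 vs +1  : %.1f%% / %.1f%%\n",
               100 * w0 / (w0 + w1), 100 * w1 / (w0 + w1));
        printf("nice 10 vs 11 : %.1f%% / %.1f%%\n",
               100 * w10 / (w10 + w11), 100 * w11 / (w10 + w11));
        /* Both pairs land near 55% / 45%: the absolute nice level
         * drops out, only the relative distance matters. */
        return 0;
    }
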
sched-energy.rst
8 Energy Aware Scheduling (or EAS) gives the scheduler the ability to predict
23 The actual EM used by EAS is _not_ maintained by the scheduler, but by a
50 scheduler. This alternative considers two objectives: energy-efficiency and
53 The idea behind introducing an EM is to allow the scheduler to evaluate the
56 time, the EM must be as simple as possible to minimize the scheduler latency
60 for the scheduler to decide where a task should run (during wake-up), the EM
71 EAS (as well as the rest of the scheduler) uses the notion of 'capacity' to
87 The scheduler manages references to the EM objects in the topology code when the
89 scheduler maintains a singly linked list of all performance domains intersecting
115 Please note that the scheduler will create two duplicate list nodes for
[all …]
/linux-6.12.1/block/
Kconfig.iosched
5 tristate "MQ deadline I/O scheduler"
8 MQ version of the deadline IO scheduler.
11 tristate "Kyber I/O scheduler"
14 The Kyber I/O scheduler is a low-overhead scheduler suitable for
20 tristate "BFQ I/O scheduler"
23 BFQ I/O scheduler for BLK-MQ. BFQ distributes the bandwidth of
/linux-6.12.1/Documentation/gpu/rfc/
i915_scheduler.rst
8 i915 with the DRM scheduler is:
14 * Lots of rework will need to be done to integrate with DRM scheduler so
32 * Convert the i915 to use the DRM scheduler
33 * GuC submission backend fully integrated with DRM scheduler
35 handled in DRM scheduler)
36 * Resets / cancels hook in DRM scheduler
37 * Watchdog hooks into DRM scheduler
39 integrated with DRM scheduler (e.g. state machine gets
41 * Execlists backend will do the minimum required to hook into the DRM scheduler
44 be difficult to integrate with the DRM scheduler and these
[all …]
/linux-6.12.1/drivers/gpu/drm/panthor/
panthor_sched.c
359 struct drm_gpu_scheduler scheduler; member
678 if (!queue_work((group)->ptdev->scheduler->wq, &(group)->wname ## _work)) \
835 if (queue->scheduler.ops) in group_free_queue()
836 drm_sched_fini(&queue->scheduler); in group_free_queue()
910 lockdep_assert_held(&ptdev->scheduler->lock); in group_bind_locked()
913 ptdev->scheduler->csg_slots[csg_id].group)) in group_bind_locked()
920 csg_slot = &ptdev->scheduler->csg_slots[csg_id]; in group_bind_locked()
951 lockdep_assert_held(&ptdev->scheduler->lock); in group_unbind_locked()
959 slot = &ptdev->scheduler->csg_slots[group->csg_id]; in group_unbind_locked()
990 struct panthor_queue *queue = ptdev->scheduler->csg_slots[csg_id].group->queues[cs_id]; in cs_slot_prog_locked()
[all …]
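
panthor_sched.c embeds struct drm_gpu_scheduler directly in its queue object (the member hit at line 359) and, like pvr_queue.c below, recovers the enclosing queue with container_of(). A self-contained userspace illustration of that pattern, with stand-in struct names:

    #include <stddef.h>
    #include <stdio.h>

    /* Userspace stand-in for the kernel's container_of() macro. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct gpu_scheduler { int id; };       /* stand-in for drm_gpu_scheduler */
    struct queue {                          /* stand-in for panthor_queue     */
        int priority;
        struct gpu_scheduler scheduler;     /* embedded, not a pointer        */
    };

    int main(void)
    {
        struct queue q = { .priority = 3, .scheduler = { .id = 7 } };
        struct gpu_scheduler *sched = &q.scheduler;

        /* Callbacks receive only `sched`; step back to the outer queue. */
        struct queue *owner = container_of(sched, struct queue, scheduler);
        printf("priority=%d id=%d\n", owner->priority, sched->id);
        return 0;
    }
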
/linux-6.12.1/tools/sched_ext/
README.md
51 In order to run a sched_ext scheduler, you'll have to run a kernel compiled
93 example, using vmlinux.h allows a scheduler to access fields defined directly
108 bpf_printk("Task %s enabled in example scheduler", p->comm);
119 The scheduler build system will generate this vmlinux.h file as part of the
120 scheduler build pipeline. It looks for a vmlinux file in the following
163 For more scheduler implementations, tools and documentation, visit
168 A simple scheduler that provides an example of a minimal sched_ext scheduler.
171 Though very simple, in limited scenarios, this scheduler can perform reasonably
176 Another simple, yet slightly more complex scheduler that provides an example of
184 A "central" scheduler where scheduling decisions are made from a single CPU.
[all …]
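
The bpf_printk() line quoted above (README line 108) belongs inside one of the sched_ext_ops callbacks; ops.enable, which runs when a task comes under the BPF scheduler's control, is a natural spot. A hedged sketch with illustrative names, again assuming the scx/common.bpf.h helpers:

    #include <scx/common.bpf.h>

    char _license[] SEC("license") = "GPL";

    /* Called when a task starts being scheduled by this BPF scheduler. */
    void BPF_STRUCT_OPS(example_enable, struct task_struct *p)
    {
        bpf_printk("Task %s enabled in example scheduler", p->comm);
    }

    SEC(".struct_ops.link")
    struct sched_ext_ops example_ops = {
        .enable = (void *)example_enable,
        .name   = "example",
    };
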
/linux-6.12.1/sound/pci/mixart/
mixart_core.h
218 u64 scheduler; member
231 u64 scheduler; member
240 u64 scheduler; member
388 u64 scheduler; member
438 u64 scheduler; member
498 u64 scheduler; member
543 u64 scheduler; member
/linux-6.12.1/drivers/md/dm-vdo/
action-manager.c
59 vdo_action_scheduler_fn scheduler; member
106 vdo_action_scheduler_fn scheduler, struct vdo *vdo, in vdo_make_action_manager() argument
117 .scheduler = in vdo_make_action_manager()
118 ((scheduler == NULL) ? no_default_action : scheduler), in vdo_make_action_manager()
247 manager->scheduler(manager->context)); in vdo_schedule_default_action()
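
vdo_make_action_manager() substitutes a no-op when the caller passes a NULL scheduler callback, so vdo_schedule_default_action() can invoke the pointer unconditionally. The same NULL-to-default pattern in a self-contained sketch (names are stand-ins for the vdo types):

    #include <stdio.h>

    typedef void (*action_scheduler_fn)(void *context);

    struct action_manager {
        action_scheduler_fn scheduler;
        void *context;
    };

    /* Default used when the caller passes NULL, so call sites never
     * have to test the pointer before invoking it. */
    static void no_default_action(void *context) { (void)context; }

    static void make_action_manager(struct action_manager *m,
                                    action_scheduler_fn scheduler, void *ctx)
    {
        m->scheduler = (scheduler == NULL) ? no_default_action : scheduler;
        m->context = ctx;
    }

    static void schedule_default_action(struct action_manager *m)
    {
        m->scheduler(m->context);   /* always safe to call */
    }

    int main(void)
    {
        struct action_manager m;

        make_action_manager(&m, NULL, NULL);
        schedule_default_action(&m);   /* quietly does nothing */
        puts("ok");
        return 0;
    }
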
/linux-6.12.1/drivers/gpu/drm/xe/
Kconfig.profile
6 be forcefully taken away from scheduler.
12 be forcefully taken away from scheduler.
48 bool "Default configuration of limitation on scheduler timeout"
51 Configures the enablement of limitation on scheduler timeout
/linux-6.12.1/net/mptcp/
ctrl.c
40 char scheduler[MPTCP_SCHED_NAME_MAX]; member
87 return mptcp_get_pernet(net)->scheduler; in mptcp_get_scheduler()
101 strscpy(pernet->scheduler, "default", sizeof(pernet->scheduler)); in mptcp_pernet_set_defaults()
114 strscpy(pernet->scheduler, name, MPTCP_SCHED_NAME_MAX); in mptcp_set_scheduler()
267 table[6].data = &pernet->scheduler; in mptcp_pernet_new_table()
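
The mptcp code stores the scheduler name with strscpy(), which, unlike strncpy(), always NUL-terminates and reports truncation. A userspace approximation of that contract (the real strscpy() lives in the kernel's lib/string.c and returns -E2BIG on truncation; this stand-in returns -1):

    #include <stdio.h>
    #include <string.h>

    #define SCHED_NAME_MAX 16   /* stand-in for MPTCP_SCHED_NAME_MAX */

    /* Copy at most size-1 bytes, always NUL-terminate, and report
     * truncation, mirroring the kernel strscpy() contract. */
    static long my_strscpy(char *dst, const char *src, size_t size)
    {
        size_t len = strnlen(src, size);

        if (len == size) {               /* would not fit: truncate */
            if (size) {
                memcpy(dst, src, size - 1);
                dst[size - 1] = '\0';
            }
            return -1;
        }
        memcpy(dst, src, len + 1);       /* includes the NUL */
        return (long)len;
    }

    int main(void)
    {
        char name[SCHED_NAME_MAX];

        if (my_strscpy(name, "default", sizeof(name)) >= 0)
            printf("scheduler: %s\n", name);
        return 0;
    }
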
/linux-6.12.1/drivers/gpu/drm/imagination/
pvr_queue.c
603 struct pvr_queue *queue = container_of(job->base.sched, struct pvr_queue, scheduler); in pvr_queue_submit_job_to_cccb()
746 struct pvr_queue, scheduler); in pvr_queue_run_job()
758 drm_sched_stop(&queue->scheduler, bad_job ? &bad_job->base : NULL); in pvr_queue_stop()
770 list_for_each_entry(job, &queue->scheduler.pending_list, base.list) { in pvr_queue_start()
785 drm_sched_start(&queue->scheduler); in pvr_queue_start()
802 struct pvr_queue *queue = container_of(sched, struct pvr_queue, scheduler); in pvr_queue_timedout_job()
907 spin_lock(&queue->scheduler.job_list_lock); in pvr_queue_signal_done_fences()
909 list_for_each_entry_safe(job, tmp_job, &queue->scheduler.pending_list, base.list) { in pvr_queue_signal_done_fences()
919 spin_unlock(&queue->scheduler.job_list_lock); in pvr_queue_signal_done_fences()
1151 struct pvr_queue *queue = container_of(job->base.sched, struct pvr_queue, scheduler); in pvr_queue_job_push()
[all …]
/linux-6.12.1/tools/testing/kunit/test_data/
test_is_test_passed-no_tests_run_no_header.log
33 io scheduler noop registered
34 io scheduler deadline registered
35 io scheduler cfq registered (default)
36 io scheduler mq-deadline registered
37 io scheduler kyber registered
/linux-6.12.1/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/
tracepoints.rst
110 - mlx5_esw_vport_qos_create: trace creation of transmit scheduler arbiter for vport::
117 - mlx5_esw_vport_qos_config: trace configuration of transmit scheduler arbiter for vport::
124 - mlx5_esw_vport_qos_destroy: trace deletion of transmit scheduler arbiter for vport::
131 - mlx5_esw_group_qos_create: trace creation of transmit scheduler arbiter for rate group::
138 - mlx5_esw_group_qos_config: trace configuration of transmit scheduler arbiter for rate group::
145 - mlx5_esw_group_qos_destroy: trace deletion of transmit scheduler arbiter for group::
/linux-6.12.1/Documentation/virt/kvm/
halt-polling.rst
12 before giving up the cpu to the scheduler in order to let something else run.
15 very quickly by at least saving us a trip through the scheduler, normally on
18 interval or some other task on the runqueue is runnable the scheduler is
21 savings of not invoking the scheduler are distinguishable.
34 The maximum time for which to poll before invoking the scheduler, referred to
77 whether the scheduler is invoked within that function).
/linux-6.12.1/kernel/
Kconfig.preempt
80 low level and critical code paths (entry code, scheduler, low
82 execution contexts under scheduler control.
141 This option enables a new scheduler class sched_ext (SCX), which
150 - Rapid scheduler deployments: Non-disruptive swap outs of
160 Documentation/scheduler/sched-ext.rst
/linux-6.12.1/Documentation/translations/zh_CN/scheduler/
schedutil.rst
4 :Original: Documentation/scheduler/schedutil.rst
89 …- Documentation/translations/zh_CN/scheduler/sched-capacity.rst:"1. CPU Capacity + 2. Task utiliza…
/linux-6.12.1/Documentation/admin-guide/mm/
multigen_lru.rst
100 When a new job comes in, the job scheduler needs to find out whether
103 scheduler needs to estimate the working sets of the existing jobs.
133 A typical use case is that a job scheduler runs this command at a
142 comes in, the job scheduler wants to proactively reclaim cold pages on
157 A typical use case is that a job scheduler runs this command before it
