======================================
NO_HZ: Reducing Scheduling-Clock Ticks
======================================


This document describes Kconfig options and boot parameters that can
reduce the number of scheduling-clock interrupts, thereby improving energy
efficiency and reducing OS jitter.  Reducing OS jitter is important for
some types of computationally intensive high-performance computing (HPC)
applications and for real-time applications.

There are three main ways of managing scheduling-clock interrupts
(also known as "scheduling-clock ticks" or simply "ticks"):

1.	Never omit scheduling-clock ticks (CONFIG_HZ_PERIODIC=y or
	CONFIG_NO_HZ=n for older kernels).  You normally will -not-
	want to choose this option.

2.	Omit scheduling-clock ticks on idle CPUs (CONFIG_NO_HZ_IDLE=y or
	CONFIG_NO_HZ=y for older kernels).  This is the most common
	approach, and should be the default.

3.	Omit scheduling-clock ticks on CPUs that are either idle or that
	have only one runnable task (CONFIG_NO_HZ_FULL=y).  Unless you
	are running realtime applications or certain types of HPC
	workloads, you will normally -not- want this option.

These three cases are described in the following three sections, followed
by a fourth section on RCU-specific considerations, a fifth section
discussing testing, and a sixth and final section listing known issues.


Never Omit Scheduling-Clock Ticks
=================================

Very old versions of Linux from the 1990s and the very early 2000s
are incapable of omitting scheduling-clock ticks.  It turns out that
there are some situations where this old-school approach is still the
right approach, for example, in heavy workloads with lots of tasks
that use short bursts of CPU, where there are very frequent idle
periods, but where these idle periods are also quite short (tens or
hundreds of microseconds).  For these types of workloads, scheduling-clock
interrupts will normally be delivered anyway because there
will frequently be multiple runnable tasks per CPU.  In these cases,
attempting to turn off the scheduling-clock interrupt will have no effect
other than increasing the overhead of switching to and from idle and
transitioning between user and kernel execution.

This mode of operation can be selected using CONFIG_HZ_PERIODIC=y (or
CONFIG_NO_HZ=n for older kernels).

However, if you are instead running a light workload with long idle
periods, failing to omit scheduling-clock interrupts will result in
excessive power consumption.  This is especially bad on battery-powered
devices, where it results in extremely short battery lifetimes.  If you
are running light workloads, you should therefore read the following
section.

In addition, if you are running either a real-time workload or an HPC
workload with short iterations, the scheduling-clock interrupts can
degrade your application's performance.  If this describes your workload,
you should read the following two sections.


Omit Scheduling-Clock Ticks For Idle CPUs
=========================================

If a CPU is idle, there is little point in sending it a scheduling-clock
interrupt.  After all, the primary purpose of a scheduling-clock interrupt
is to force a busy CPU to shift its attention among multiple duties,
and an idle CPU has no duties to shift its attention among.

An idle CPU that is not receiving scheduling-clock interrupts is said to
be "dyntick-idle", "in dyntick-idle mode", "in nohz mode", or "running
tickless".  The remainder of this document will use "dyntick-idle mode".

The CONFIG_NO_HZ_IDLE=y Kconfig option causes the kernel to avoid sending
scheduling-clock interrupts to idle CPUs, which is critically important
both to battery-powered devices and to highly virtualized mainframes.
A battery-powered device running a CONFIG_HZ_PERIODIC=y kernel would
drain its battery very quickly, easily 2-3 times as fast as would the
same device running a CONFIG_NO_HZ_IDLE=y kernel.  A mainframe running
1,500 OS instances might find that half of its CPU time was consumed by
unnecessary scheduling-clock interrupts.  In these situations, there
is strong motivation to avoid sending scheduling-clock interrupts to
idle CPUs.  That said, dyntick-idle mode is not free:

1.	It increases the number of instructions executed on the path
	to and from the idle loop.

2.	On many architectures, dyntick-idle mode also increases the
	number of expensive clock-reprogramming operations.

Therefore, systems with aggressive real-time response constraints often
run CONFIG_HZ_PERIODIC=y kernels (or CONFIG_NO_HZ=n for older kernels)
in order to avoid degrading from-idle transition latencies.

There is also a boot parameter "nohz=" that can be used to disable
dyntick-idle mode in CONFIG_NO_HZ_IDLE=y kernels by specifying "nohz=off".
By default, CONFIG_NO_HZ_IDLE=y kernels boot with "nohz=on", enabling
dyntick-idle mode.


Omit Scheduling-Clock Ticks For CPUs With Only One Runnable Task
================================================================

If a CPU has only one runnable task, there is little point in sending it
a scheduling-clock interrupt because there is no other task to switch to.
Note that omitting scheduling-clock ticks for CPUs with only one runnable
task implies also omitting them for idle CPUs.

The CONFIG_NO_HZ_FULL=y Kconfig option causes the kernel to avoid
sending scheduling-clock interrupts to CPUs with a single runnable task,
and such CPUs are said to be "adaptive-ticks CPUs".  This is important
for applications with aggressive real-time response constraints because
it allows them to improve their worst-case response times by the maximum
duration of a scheduling-clock interrupt.  It is also important for
computationally intensive short-iteration workloads:  If any CPU is
delayed during a given iteration, all the other CPUs will be forced to
wait idle while the delayed CPU finishes.  Thus, the delay is multiplied
by one less than the number of CPUs.  In these situations, there is
again strong motivation to avoid sending scheduling-clock interrupts.

By default, no CPU will be an adaptive-ticks CPU.  The "nohz_full="
boot parameter specifies the adaptive-ticks CPUs.  For example,
"nohz_full=1,6-8" says that CPUs 1, 6, 7, and 8 are to be adaptive-ticks
CPUs.  Note that you are prohibited from marking all of the CPUs as
adaptive-tick CPUs:  At least one non-adaptive-tick CPU must remain
online to handle timekeeping tasks in order to ensure that system
calls like gettimeofday() return accurate values on adaptive-tick CPUs.
(This is not an issue for CONFIG_NO_HZ_IDLE=y because there are no running
user processes to observe slight drifts in clock rate.)  Note that this
means that your system must have at least two CPUs in order for
CONFIG_NO_HZ_FULL=y to do anything for you.
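
For example, on an eight-CPU system, booting with:

	nohz_full=1-7

would make CPUs 1-7 adaptive-ticks CPUs and leave CPU 0 as the
non-adaptive-tick CPU that handles timekeeping.  On kernels built with
CONFIG_NO_HZ_FULL=y, the set of adaptive-ticks CPUs chosen at boot can
usually be read back from /sys/devices/system/cpu/nohz_full.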

Finally, adaptive-ticks CPUs must have their RCU callbacks offloaded.
This is covered in the "RCU Implications" section below.

Normally, a CPU remains in adaptive-ticks mode as long as possible.
In particular, transitioning to kernel mode does not automatically change
the mode.  Instead, the CPU will exit adaptive-ticks mode only if needed,
for example, if that CPU enqueues an RCU callback.

Just as with dyntick-idle mode, the benefits of adaptive-tick mode do
not come for free:

1.	CONFIG_NO_HZ_FULL selects CONFIG_NO_HZ_COMMON, so you cannot run
	adaptive ticks without also running dyntick idle.  This dependency
	extends down into the implementation, so that all of the costs
	of CONFIG_NO_HZ_IDLE are also incurred by CONFIG_NO_HZ_FULL.

2.	The user/kernel transitions are slightly more expensive due
	to the need to inform kernel subsystems (such as RCU) about
	the change in mode.

3.	POSIX CPU timers prevent CPUs from entering adaptive-tick mode.
	Real-time applications needing to take actions based on CPU time
	consumption need to use other means of doing so.

4.	If there are more perf events pending than the hardware can
	accommodate, they are normally round-robined so as to collect
	all of them over time.  Adaptive-tick mode may prevent this
	round-robining from happening.  This will likely be fixed by
	preventing CPUs with large numbers of perf events pending from
	entering adaptive-tick mode.

5.	Scheduler statistics for adaptive-tick CPUs may be computed
	slightly differently than those for non-adaptive-tick CPUs.
	This might in turn perturb load-balancing of real-time tasks.

Although improvements are expected over time, adaptive ticks is quite
useful for many types of real-time and compute-intensive applications.
However, the drawbacks listed above mean that adaptive ticks should not
(yet) be enabled by default.


RCU Implications
================

There are situations in which idle CPUs cannot be permitted to
enter either dyntick-idle mode or adaptive-tick mode, the most
common being when that CPU has RCU callbacks pending.

Avoid this by offloading RCU callback processing to "rcuo" kthreads
using the CONFIG_RCU_NOCB_CPU=y Kconfig option.  The specific CPUs to
offload may be selected using the "rcu_nocbs=" kernel boot parameter,
which takes a comma-separated list of CPUs and CPU ranges, for example,
"1,3-5" selects CPUs 1, 3, 4, and 5.  Note that CPUs specified by
the "nohz_full" kernel boot parameter are also offloaded.

The offloaded CPUs will never queue RCU callbacks, and therefore RCU
never prevents offloaded CPUs from entering either dyntick-idle mode
or adaptive-tick mode.  That said, note that it is up to userspace to
pin the "rcuo" kthreads to specific CPUs if desired.  Otherwise, the
scheduler will decide where to run them, which might or might not be
where you want them to run.
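
For example, one way to confine the "rcuo" kthreads to a housekeeping
CPU (CPU 0, say) is a shell loop along these lines, run as root after
boot:

	for p in $(pgrep rcuo); do taskset -c -p 0 "$p"; done

Any other affinity-setting mechanism works just as well; the point is
that the placement is left entirely to userspace.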


Testing
=======

So you enable all the OS-jitter features described in this document,
but do not see any change in your workload's behavior.  Is this because
your workload isn't affected that much by OS jitter, or is it because
something else is in the way?  This section helps answer this question
by providing a simple OS-jitter test suite, which is available on branch
master of the following git archive:

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git

Clone this archive and follow the instructions in the README file.
This test procedure will produce a trace that will allow you to evaluate
whether or not you have succeeded in removing OS jitter from your system.
If this trace shows that you have removed OS jitter as much as is
possible, then you can conclude that your workload is not all that
sensitive to OS jitter.
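
For reference, the test suite can be obtained with:

	git clone git://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git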

Note: this test requires that your system have at least two CPUs.
We do not currently have a good way to remove OS jitter from single-CPU
systems.


Known Issues
============

*	Dyntick-idle slows transitions to and from idle slightly.
	In practice, this has not been a problem except for the most
	aggressive real-time workloads, which have the option of disabling
	dyntick-idle mode, an option that most of them take.  However,
	some workloads will no doubt want to use adaptive ticks to
	eliminate scheduling-clock interrupt latencies.  Here are some
	options for these workloads:

	a.	Use PM QoS from userspace to inform the kernel of your
		latency requirements (preferred).  A sketch of this
		approach appears after this list of options.

	b.	On x86 systems, use the "idle=mwait" boot parameter.

	c.	On x86 systems, use the "intel_idle.max_cstate=" boot
		parameter to limit the maximum C-state depth.

	d.	On x86 systems, use the "idle=poll" boot parameter.
		However, please note that use of this parameter can cause
		your CPU to overheat, which may cause thermal throttling
		to degrade your latencies -- and that this degradation can
		be even worse than that of dyntick-idle.  Furthermore,
		this parameter effectively disables Turbo Mode on Intel
		CPUs, which can significantly reduce maximum performance.
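
	One way to express the PM QoS latency requirement mentioned in
	option (a) above is to write the maximum tolerable wakeup latency
	(in microseconds) to /dev/cpu_dma_latency and keep the resulting
	file descriptor open for as long as the requirement applies.
	A minimal C sketch:

		#include <fcntl.h>
		#include <stdint.h>
		#include <stdio.h>
		#include <unistd.h>

		int main(void)
		{
			int32_t latency_us = 0;	/* 0: avoid deep idle states */
			int fd = open("/dev/cpu_dma_latency", O_WRONLY);

			if (fd < 0) {
				perror("open /dev/cpu_dma_latency");
				return 1;
			}
			if (write(fd, &latency_us, sizeof(latency_us)) !=
			    sizeof(latency_us)) {
				perror("write");
				return 1;
			}

			/*
			 * The request stays in effect only while the file
			 * descriptor remains open, so a real application
			 * would do its latency-sensitive work here.
			 */
			pause();
			return 0;
		}

	The kernel drops the request as soon as the file descriptor is
	closed, for example when the process exits.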

*	Adaptive-ticks slows user/kernel transitions slightly.
	This is not expected to be a problem for computationally intensive
	workloads, which have few such transitions.  Careful benchmarking
	will be required to determine whether or not other workloads
	are significantly affected by this effect.

*	Adaptive-ticks does not do anything unless there is only one
	runnable task for a given CPU, even though there are a number
	of other situations where the scheduling-clock tick is not
	needed.  To give but one example, consider a CPU that has one
	runnable high-priority SCHED_FIFO task and an arbitrary number
	of low-priority SCHED_OTHER tasks.  In this case, the CPU is
	required to run the SCHED_FIFO task until it either blocks or
	some other higher-priority task awakens on (or is assigned to)
	this CPU, so there is no point in sending a scheduling-clock
	interrupt to this CPU.  However, the current implementation
	nevertheless sends scheduling-clock interrupts to CPUs having a
	single runnable SCHED_FIFO task and multiple runnable SCHED_OTHER
	tasks, even though these interrupts are unnecessary.

	And even when there are multiple runnable tasks on a given CPU,
	there is little point in interrupting that CPU until the current
	running task's timeslice expires, which is almost always far
	longer than the time until the next scheduling-clock interrupt.

	Better handling of these sorts of situations is future work.

*	A reboot is required to reconfigure both adaptive ticks and RCU
	callback offloading.  Runtime reconfiguration could be provided
	if needed; however, due to the complexity of reconfiguring RCU at
	runtime, there would need to be an earthshakingly good reason,
	especially given that you have the straightforward option of
	simply offloading RCU callbacks from all CPUs and pinning them
	where you want them whenever you want them pinned.

*	Additional configuration is required to deal with other sources
	of OS jitter, including interrupts and system-utility tasks
	and processes.  This configuration normally involves binding
	interrupts and tasks to particular CPUs.
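
	For example, interrupts can often be steered away from the
	adaptive-ticks CPUs through the /proc/irq/ interface, and tasks
	can be confined with taskset, along these lines (with <irq> and
	<pid> standing in for the interrupt number and process ID of
	interest; both examples bind their target to CPU 0):

		echo 0 > /proc/irq/<irq>/smp_affinity_list
		taskset -c -p 0 <pid>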

*	Some sources of OS jitter can currently be eliminated only by
	constraining the workload.  For example, the only way to eliminate
	OS jitter due to global TLB shootdowns is to avoid the unmapping
	operations (such as kernel module unload operations) that
	result in these shootdowns.  For another example, page faults
	and TLB misses can be reduced (and in some cases eliminated) by
	using huge pages and by constraining the amount of memory used
	by the application.  Pre-faulting the working set can also be
	helpful, especially when combined with the mlock() and mlockall()
	system calls.
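
	As a minimal sketch of that last technique, an application can
	lock its address space and touch every page of its working set
	before the latency-sensitive phase of the run begins (the 64 MB
	buffer size below is purely illustrative):

		#include <stdio.h>
		#include <stdlib.h>
		#include <sys/mman.h>
		#include <unistd.h>

		#define WORKING_SET_BYTES (64UL * 1024 * 1024)

		int main(void)
		{
			long page = sysconf(_SC_PAGESIZE);
			char *buf;

			/* Pin current and future mappings in RAM. */
			if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
				perror("mlockall");
				return 1;
			}

			buf = malloc(WORKING_SET_BYTES);
			if (!buf) {
				perror("malloc");
				return 1;
			}

			/* Touch each page once so later accesses do not fault. */
			for (size_t i = 0; i < WORKING_SET_BYTES; i += page)
				buf[i] = 0;

			/* ... latency-sensitive work would run here ... */

			free(buf);
			return 0;
		}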

*	Unless all CPUs are idle, at least one CPU must keep the
	scheduling-clock interrupt going in order to support accurate
	timekeeping.

*	If there might potentially be some adaptive-ticks CPUs, there
	will be at least one CPU keeping the scheduling-clock interrupt
	going, even if all CPUs are otherwise idle.

	Better handling of this situation is ongoing work.

*	Some process-handling operations still require the occasional
	scheduling-clock tick.  These operations include calculating CPU
	load, maintaining sched average, computing CFS entity vruntime,
	computing avenrun, and carrying out load balancing.  They are
	currently accommodated by a scheduling-clock tick every second
	or so.  Ongoing work will eliminate the need even for these
	infrequent scheduling-clock ticks.