CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that
is, the part of the kernel responsible for the distribution of computational
work in the system). In its view, CPUs are *logical* units. That is, they need
not be separate physical entities: a CPU is simply an entity which appears to be
fetching instructions that belong to one sequence (a program) from memory and
executing them, whether or not it works that way physically.

First, if the whole processor can only follow one sequence of instructions (one
program) at a time, it is a CPU. In that case, if the hardware is asked to
enter an idle state, that applies to the processor as a whole.

Second, if the processor is multi-core and each core can follow one program at a
time. The entire cores are CPUs in that case, and if the hardware is asked to
enter an idle state, that applies to the core that asked for it in the first
place, but it may also apply to a larger unit (say a "package" or a "cluster")
that the core belongs to (in fact, it may apply to an entire hierarchy of larger
units containing the core). Namely, if all of the cores in a larger unit except
one are already idle and the remaining core asks the processor to enter an idle
state, that may trigger it to put the whole larger unit into an idle state,
which will also affect the other cores in that unit.

Finally, each core in a multi-core processor may be able to follow more than one
program in the same time frame (that is, each core may be able to fetch
instructions belonging to multiple sequences of them in the same time
frame, but not necessarily entirely in parallel with each other). In that case
the cores present themselves to software as "bundles" of individual
*hardware threads* (or hyper-threads, specifically on Intel hardware), that each
can follow one sequence of instructions. Then the hardware threads are the CPUs,
and if an idle state is asked for
by one of them, the hardware thread (or CPU) that asked for it is stopped, but
nothing more happens, unless all of the other hardware threads within the same
core also have asked the processor to enter an idle state. In that situation,
the core may be put into an idle state as a whole (and possibly a larger unit
containing it as well).
Tasks are the CPU scheduler's representations of work in the system; each one
consists of code to execute, data to be manipulated while
running that code, and some context information that needs to be loaded into the
processor whenever the task runs. Tasks are *runnable* when nothing prevents
their code from being run as long as
there is a CPU available for that (for example, they are not waiting for any
events to occur). CPUs with no runnable tasks assigned to them are idle, and
in Linux idle CPUs run the code of the "idle" task called *the idle loop*.
In the idle loop, the CPU
calls into a code module referred to as the *governor* that belongs to the CPU
idle time management subsystem called ``CPUIdle``, in order to select an idle
state for the hardware to enter. Governors are generic and select idle states
on the basis of the
conditions at hand. For this purpose, the idle states that the hardware can be
asked to enter are represented in an abstract, platform-independent way and
organized in a
(linear) array. That array has to be prepared and supplied by the ``CPUIdle``
*driver* in use. Drivers, in turn, are specific to the given
hardware, while governors are expected to work with any platforms that the
Linux kernel can run on.
Each idle state present in that array is characterized by two parameters to be
taken into account by the governor: the *target residency* and the (worst-case)
*exit latency*. The target residency is the minimum time the hardware must
spend in the given state, including the time needed to enter it, in order to
save more energy than it would save by entering one of the shallower idle
states instead. [The "depth" of an idle state roughly
corresponds to the power drawn by the processor in that state.] The exit
latency, in turn, is the maximum time it will take a CPU asking the hardware to
enter an idle state to start executing the first instruction after a
wakeup from that state. Note that in general the exit latency also must cover
the time needed to enter the given state, in case the wakeup occurs while the
hardware is still entering it and the state must be entered completely before
it can be exited in an orderly way.
There are two types of information that can influence the governor's decisions.
First of all, the governor knows the time until the closest timer event. That
time is known exactly, because the kernel programs timers itself, so it knows
precisely
when they will trigger, and it is the maximum time the hardware that the given
CPU depends on can spend in an idle state. However, the CPU may be woken up by
a non-timer event at any time, and in general it is not known in advance
when that may happen. The governor can only see how much time the CPU actually
was idle after it has been woken up (that time will be referred to as the *idle
duration* from now on) and it can use that information somehow, along with the
time until the closest timer event, to estimate future idle durations. How the
governor uses that information depends on what algorithm is implemented by it,
and that is the primary reason for having more than one governor in the
``CPUIdle`` subsystem.
Which ``CPUIdle`` driver is used depends on the platform, and each platform
needs a matching driver. For example, there are two drivers that can work with
the majority of Intel platforms: one with
hardcoded idle states information and the other able to read that information
from the ACPI tables.
The scheduler tick is a timer that triggers periodically in order to implement
the time sharing strategy of the CPU scheduler: in rough approximation, each
runnable task assigned to a CPU is given a slice of the CPU time to run its
code, subject to the scheduling class,
prioritization and so on, and when that time slice is used up, the CPU should be
switched over to running another task. The currently running task may not want
to give the CPU away voluntarily, however, and the scheduler tick
is there to make the switch happen regardless. That is not the only role of the
tick, but it is the primary reason for using it.

If the tick is allowed to trigger on idle CPUs, it makes no sense for them to
ask the hardware to enter idle states with target residencies above
the tick period length. Moreover, in that case the idle duration of any CPU
will never exceed the tick period length. For this reason it may be a good idea
to stop
the scheduler tick entirely on idle CPUs in principle, even though that may not
always be worth the effort.

There are reasons not to stop it, too. First, the tick timer may need to be
reprogrammed in that case, which is not free. Second, if the governor is
expecting a non-timer
wakeup within the tick period range, stopping the tick may even
be harmful. Namely, in that case the governor will select an idle state with
the target residency within the time until the expected wakeup, so that state is
going to be relatively shallow. The governor really cannot select a deep idle
state then, as that would contradict its own expectation of a wakeup in short
order. The shallow state selected in that case does not require the tick to be
stopped; it only needs the tick timer, like any other timer, to be programmed
so that it does not wake up the CPU too early. On the other hand, the platform
may not allow the tick to be stopped at all, in which case the governor has
to leave it as is and needs to take that into account.

Kernels can also be configured to never stop the scheduler tick in the idle
loop altogether. That can be done through the build-time configuration (by
unsetting the ``CONFIG_NO_HZ_IDLE`` option) or by passing ``nohz=off`` on the
kernel command line.
The systems that run kernels configured to allow the scheduler tick to be
stopped on idle CPUs are referred to as *tickless* systems.
The ``menu`` governor attempts to predict the idle duration and uses the
predicted value for idle state selection.
Namely, when invoked to select an idle state for a CPU (i.e. an idle state that
the CPU will ask the processor hardware to enter), it first obtains the time
until the closest timer event, with the assumption
that the scheduler tick will be stopped. That time, referred to as the *sleep
length* in what follows, is the upper bound on the time before the next CPU
wakeup.

The sleep length is then corrected using statistics collected previously. The
governor maintains two arrays of correction factors: one of them is used when
tasks previously running on the given CPU are waiting
for some I/O operations to complete and the other one is used when that is not
the case. Each array contains several correction factor values that correspond
to different sleep length ranges, organized so that each range represented in
the array is approximately 10 times wider than the previous one.
The sleep length is multiplied by the correction factor for the range that it
falls into, yielding the first approximation of the predicted idle duration.
In addition, the governor looks at the most recent observed idle duration
values and, if their spread around the average is small enough (within
6 times the standard deviation), the average is regarded as the "typical
interval" and taken into account for the prediction as well.
Then, the governor computes an extra latency limit to help "interactive"
workloads. It uses the observation that if the exit latency of the selected
idle state is comparable with the predicted idle duration, the total time spent
in that state probably will be very short and the amount of energy to save by
entering it will be relatively small, so likely it is better to avoid the
overhead related to entering that state and exiting it. Thus selecting a
shallower state is likely to be a better option then. Accordingly, the first
approximation of the extra latency limit is the predicted idle duration itself,
which
additionally is divided by a value depending on the number of tasks that
previously ran on the given CPU and are now waiting for I/O operations to
complete. The result of that division is compared with the latency limit coming
from the power management quality of service (PM QoS) framework, and the
minimum of the two is taken as the limit for the idle states' exit latency.

Next, the governor selects the idle state with the target residency closest to
the predicted
idle duration, but still below it, and with an exit latency that does not
exceed the limit computed as described above. However,
if it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_ and
the tick will wake up the CPU within
that time, the governor may need to select a shallower state with a suitable
target residency instead.
The ``teo`` (timer events oriented) governor, like ``menu``, always tries to
find the deepest idle state suitable for the
given conditions. However, it applies a different approach to that problem.
On some systems, asking for an idle state at the "core" level may also cause
larger units to enter idle states of their own, higher up in
the hierarchy. In that case, the target residency and exit latency parameters
of the idle state object must reflect the properties of the deepest idle state
that may be reached. For example, suppose that two cores belong to a larger
unit referred to as
a "module", and suppose that asking the hardware to enter a specific idle state
(say "X") at the "core" level gives it a license to go as deep as a certain
idle state (say "MX") at the
"module" level, but there is no guarantee that this is going to happen (the core
asking for idle state "X" may just end up in that state by itself instead).
Then, the target residency of the object representing idle state "X" must cover
the minimum time to spend in idle state "MX" of
the module (including the time needed to enter it), because that is the minimum
time the CPU needs to be idle to save any energy in case the hardware enters
that state. Analogously, the exit latency parameter of that object must cover
the exit time of idle state "MX" of the module (and usually its entry time too),
because that is the maximum delay between a wakeup signal and the time the CPU
will start to execute the first new instruction (assuming that both cores in the
module will always be ready to execute instructions as soon as the module
becomes operational as a whole).

In other words, the idle state
that the processor hardware finally goes into must always follow the parameters
used by the governor for idle state selection (for example, the exit
latency of that idle state must not exceed the exit latency parameter of the
idle state object selected by the governor).
Each idle state object also holds the callback invoked by the ``CPUIdle``
driver in order to ask the hardware to enter that state. Also, for each
idle state object the kernel maintains usage
statistics of the given idle state. That information is exposed by the kernel
via ``sysfs``: for every CPU there is a
:file:`/sys/devices/system/cpu/cpu<N>/cpuidle/` directory, registered for the
CPU at the initialization time. That directory contains a set of subdirectories
called :file:`state0`, :file:`state1` and so on, one for each idle state object
in the array supplied by the driver.

Each of those subdirectories contains, among others, the :file:`name` and
:file:`desc` files describing the given idle state; the difference
between them is that the name is expected to be more concise, while the
description may be longer. Writing 1 to the :file:`disable` file means that the
given idle state is disabled for this particular CPU, which means that the
governors will never select it and the
driver will never ask the hardware to enter it for that CPU as a result.
Disabling an idle state this way applies to all governors, so the state will
never be asked for by any of them. [Note that, due to the way the ``ladder``
governor is implemented, disabling an idle state prevents that governor from
selecting any idle states deeper than the disabled one too.] An idle state
disabled this way for one CPU can still be used by the other CPUs,
unless that state was disabled globally in the driver (in which case it cannot
be used at all).

The statistics files include :file:`usage` and :file:`time`, reflecting how
often the given idle state was asked for and the total time spent in it. Note
that the :file:`time` value may not be very accurate, because the kernel cannot
always know how much time the hardware actually spent in the state. Also,
kernel code asking the hardware to enter an idle state
may return an error code to indicate that this was the case; the :file:`usage`
file counts the times the state was asked for by the given CPU.
User space can affect CPU idle time management through the PM QoS framework,
which maintains a priority list of requests that have been made so far for the
global CPU latency limit. Opening the :file:`/dev/cpu_dma_latency` special
device file causes a new PM QoS request to be created and added to that list,
and the file descriptor obtained this way
represents that request. If that file descriptor is then used for writing, the
number written to it becomes the new requested limit value, the effective value
for the entire list of requests is determined, and
that effective value will be set as a new CPU latency limit. Thus requesting a
new limit consists of writing the number to the file descriptor and keeping the
file descriptor open for as long as the limit is to stay in effect. Closing
the file descriptor causes the PM QoS request
associated with that file descriptor to be removed from the global priority
list of CPU
latency limit requests and destroyed. If that happens, the priority list
mechanism is used again to determine the new effective value for the whole
list, and that value will become the new limit.

In turn, the resume latency constraint for an individual CPU can be set via the
:file:`power/pm_qos_resume_latency_us` file in ``sysfs``. Since there is only
one such file per CPU, the value written to it last takes effect regardless of
which
process does that. In other words, this PM QoS request is shared by the entire
user space
(there may be other requests coming from kernel code in that list).
CPU idle time governors are expected to regard the effective PM QoS limit for
the given CPU as the upper limit for the exit latency of the idle states that
they are allowed to select for that CPU. They should never select any idle
states with exit latency beyond that limit.
In the absence of a ``CPUIdle`` driver, the idle loop relies on the architecture
support code that is expected to provide a default mechanism for this purpose.
That default mechanism usually is the least common denominator for all of the
processors implementing the given architecture.

The ``cpuidle.governor=`` kernel command line option can be used to select a
specific governor. It takes
the name of an available governor (e.g. ``cpuidle.governor=menu``) and that
governor will be used instead of the default one. For example, this can force
the ``menu`` governor to be used on the systems that use the ``ladder`` governor
by default.
Some kernel command line options are handled by the
architecture support code to deal with idle CPUs. How it does that depends on
the architecture; for example, ``idle=poll`` on x86 causes idle CPUs to spin in
a tight loop instead of asking the hardware to enter any idle states. Note
that using ``idle=poll`` is somewhat drastic in many cases, as preventing idle
CPUs from saving almost any energy at all may not be the only effect of it.
For example, on Intel hardware it effectively prevents CPUs from using
P-states (see ``CPUFreq``) that require any number of CPUs in a package to be
idle.

In turn, ``intel_idle.max_cstate=0`` disables the ``intel_idle`` driver
and forces the use of the ``acpi_idle`` driver instead. Note that in either
case the default ``CPUIdle`` governor is still used. Moreover, there are
parameters for both
drivers that can be passed to them via the kernel command line. Specifically,
``intel_idle.max_cstate=<n>`` and ``processor.max_cstate=<n>`` cause the
respective drivers to discard all of the
idle states deeper than idle state ``<n>``. In that case, they will never ask
the hardware to enter any of those states.
Also, the ``acpi_idle`` driver is part of the ``processor`` kernel module, so
it can be loaded separately and ``max_cstate`` can be set via a module
parameter in that case.
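Putting the options above together, a boot configuration that forces the
``menu`` governor and discards the deeper ``intel_idle`` states might look like
this (the state index is an illustrative value; on most distributions the line
would go into the bootloader configuration):

```text
# /etc/default/grub (illustrative): force the menu governor and
# make intel_idle discard idle states deeper than state 2
GRUB_CMDLINE_LINUX="cpuidle.governor=menu intel_idle.max_cstate=2"
```

After editing the bootloader configuration, the new command line takes effect
on the next boot; the currently effective one can be checked in
:file:`/proc/cmdline`.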