Lines Matching full:all in drivers/cpuidle/coupled.c

36 * WFI state until all cpus are ready to enter a coupled state, at
37 * which point the coupled state function will be called on all
40 * Once all cpus are ready to enter idle, they are woken by an smp
43 * final pass is needed to guarantee that all cpus will call the
56 * and only read after all the cpus are ready for the coupled idle
68 * Set struct cpuidle_device.coupled_cpus to the mask of all
69 * coupled cpus, usually the same as cpu_possible_mask if all cpus
81 * called on all cpus at approximately the same time. The driver
82 * should ensure that the cpus all abort together if any cpu tries
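
The header comment above (lines 36-82) states both the algorithm and the driver-side contract: a coupled state is entered on all cpus at approximately the same time, the driver must make the cpus abort together if any one of them backs out, and coupled_cpus on each per-cpu device must hold the mask of cpus that share the state. The kernel-style sketch below is illustrative only: every foo_* name and the cluster power-down helper are invented, and it assumes CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED (which provides the coupled_cpus field) plus the arch header that declares cpu_do_idle().

    #include <linux/cpuidle.h>
    #include <linux/cpumask.h>
    #include <linux/init.h>
    #include <linux/percpu.h>
    /* plus the arch header that declares cpu_do_idle() */

    void foo_power_down_cluster(void);          /* hypothetical platform helper */

    static int foo_enter_wfi(struct cpuidle_device *dev,
                             struct cpuidle_driver *drv, int index)
    {
            cpu_do_idle();                      /* safe, per-cpu idle */
            return index;
    }

    static int foo_enter_coupled(struct cpuidle_device *dev,
                                 struct cpuidle_driver *drv, int index)
    {
            /* Runs on all coupled cpus at about the same time (lines 81-82);
             * platform code must make every cpu abort together if any one
             * of them bails out. */
            foo_power_down_cluster();
            return index;
    }

    static struct cpuidle_driver foo_idle_driver = {
            .name = "foo_idle",
            .states = {
                    [0] = {
                            .name = "WFI",
                            .enter = foo_enter_wfi,
                            .exit_latency = 1,
                            .target_residency = 1,
                    },
                    [1] = {
                            .name = "CLUSTER-OFF",
                            .enter = foo_enter_coupled,
                            .exit_latency = 5000,
                            .target_residency = 10000,
                            .flags = CPUIDLE_FLAG_COUPLED,
                    },
            },
            .state_count = 2,
            .safe_state_index = 0,      /* cpus wait here while others are busy */
    };

    static DEFINE_PER_CPU(struct cpuidle_device, foo_cpuidle_device);

    static int __init foo_cpuidle_init(void)
    {
            int cpu, ret;

            ret = cpuidle_register_driver(&foo_idle_driver);
            if (ret)
                    return ret;

            for_each_possible_cpu(cpu) {
                    struct cpuidle_device *dev = &per_cpu(foo_cpuidle_device, cpu);

                    dev->cpu = cpu;
                    /* Lines 68-69: every cpu that must enter and exit the
                     * coupled state together goes into coupled_cpus. */
                    cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);

                    ret = cpuidle_register_device(dev);
                    if (ret)
                            return ret;
            }

            return 0;
    }
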
132 * cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
136 * No caller to this function will return from this function until all online
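
Lines 132-136 state the barrier's contract: no caller returns until every online coupled cpu has also called it. The kernel routine spins on an atomic_t the caller passes in; the program below is only a userspace model of that contract, using a classic sense-reversing barrier over C11 atomics with threads standing in for cpus (build with cc -pthread).

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NCPUS 4

    static atomic_int count;                    /* arrivals in the current round */
    static atomic_int sense;                    /* flips when a round completes */
    static _Thread_local int local_sense;       /* this thread's expected sense */

    /* Model of the contract on line 136: nobody returns until all NCPUS
     * participants have called parallel_barrier() for this round. */
    static void parallel_barrier(void)
    {
            local_sense = !local_sense;

            if (atomic_fetch_add(&count, 1) + 1 == NCPUS) {
                    atomic_store(&count, 0);            /* last arrival resets... */
                    atomic_store(&sense, local_sense);  /* ...and releases everyone */
            } else {
                    while (atomic_load(&sense) != local_sense)
                            ;                           /* spin until released */
            }
    }

    static void *cpu_thread(void *arg)
    {
            long cpu = (long)arg;

            for (int round = 0; round < 3; round++) {
                    printf("cpu%ld reached the barrier, round %d\n", cpu, round);
                    parallel_barrier();
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t t[NCPUS];

            for (long i = 0; i < NCPUS; i++)
                    pthread_create(&t[i], NULL, cpu_thread, (void *)i);
            for (int i = 0; i < NCPUS; i++)
                    pthread_join(t[i], NULL);
            return 0;
    }
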
228 int all; in cpuidle_coupled_set_not_ready() local
231 all = coupled->online_count | (coupled->online_count << WAITING_BITS); in cpuidle_coupled_set_not_ready()
233 -MAX_WAITING_CPUS, all); in cpuidle_coupled_set_not_ready()
242 * Returns true if all of the cpus in a coupled set are out of the ready loop.
251 * cpuidle_coupled_cpus_ready - check if all cpus in a coupled set are ready
254 * Returns true if all cpus coupled to this target state are in the ready loop
263 * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
266 * Returns true if all cpus coupled to this target state are in the wait loop
278 * Returns true if all of the cpus in a coupled set are out of the waiting loop.
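
The fragments at lines 228-233 and the predicates described around lines 242-278 all operate on one packed atomic word: the low WAITING_BITS bits count cpus in the waiting loop and the bits above them count cpus in the ready loop, so the value `all` built on line 231 means every online cpu is both waiting and ready. The standalone model below is illustrative only (the constants and names mirror the idea, not necessarily the exact kernel definitions); it shows the four checks plus the set_not_ready back-out, which must be refused once the counter already equals `all` because by then it is too late to abort.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define WAITING_BITS     16
    #define MAX_WAITING_CPUS (1 << WAITING_BITS)
    #define WAITING_MASK     (MAX_WAITING_CPUS - 1)

    static atomic_int ready_waiting_counts;     /* ready count above, waiting count below */
    static const int online_count = 4;

    static int waiting_count(void) { return atomic_load(&ready_waiting_counts) & WAITING_MASK; }
    static int ready_count(void)   { return atomic_load(&ready_waiting_counts) >> WAITING_BITS; }

    /* The four predicates from lines 242-278, expressed over the model. */
    static bool cpus_waiting(void)    { return waiting_count() == online_count; }
    static bool no_cpus_waiting(void) { return waiting_count() == 0; }
    static bool cpus_ready(void)      { return ready_count() == online_count; }
    static bool no_cpus_ready(void)   { return ready_count() == 0; }

    /* Back one cpu out of the ready loop, unless every online cpu is already
     * both waiting and ready (the `all` value from line 231), in which case
     * the back-out is refused and the caller must go through with it. */
    static bool set_not_ready(void)
    {
            int all = online_count | (online_count << WAITING_BITS);
            int old = atomic_load(&ready_waiting_counts);

            while (old != all)
                    if (atomic_compare_exchange_weak(&ready_waiting_counts, &old,
                                                     old - MAX_WAITING_CPUS))
                            return true;        /* left the ready loop */

            return false;                       /* everyone was ready already */
    }

    int main(void)
    {
            for (int cpu = 0; cpu < online_count; cpu++)
                    atomic_fetch_add(&ready_waiting_counts, 1);                /* enter waiting loop */
            printf("all waiting: %d  no cpu ready: %d\n", cpus_waiting(), no_cpus_ready());

            for (int cpu = 0; cpu < online_count; cpu++)
                    atomic_fetch_add(&ready_waiting_counts, MAX_WAITING_CPUS); /* enter ready loop */
            printf("all ready: %d  no cpu waiting: %d\n", cpus_ready(), no_cpus_waiting());

            printf("back-out with all ready: %d\n", set_not_ready());

            atomic_fetch_sub(&ready_waiting_counts, MAX_WAITING_CPUS);         /* one cpu drops out */
            printf("back-out with one cpu not ready: %d\n", set_not_ready());
            return 0;
    }
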
291 * Returns the deepest idle state that all coupled cpus can enter
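
Line 291 is the selection rule: the deepest state the whole cluster can use is the shallowest (lowest-index) state requested by any coupled cpu, because a cpu that asked for a light state cannot be forced into a deeper one. A minimal standalone illustration, with invented names:

    #include <limits.h>
    #include <stdio.h>

    /* Deepest state every coupled cpu can enter = minimum requested index
     * across the online coupled cpus (higher index = deeper state). */
    static int deepest_common_state(const int *requested, const int *online, int ncpus)
    {
            int state = INT_MAX;

            for (int cpu = 0; cpu < ncpus; cpu++)
                    if (online[cpu] && requested[cpu] < state)
                            state = requested[cpu];

            return state;
    }

    int main(void)
    {
            int requested[4] = { 2, 1, 2, 2 };  /* cpu1 only asked for state 1 */
            int online[4]    = { 1, 1, 1, 1 };

            printf("coupled state to enter: %d\n",
                   deepest_common_state(requested, online, 4));  /* prints 1 */
            return 0;
    }
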
341 * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
345 * Calls cpuidle_coupled_poke on all other online cpus.
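
Lines 341-345 cover the poke path: an smp cross call knocks every other online coupled cpu out of its wait so the set can move forward together. The toy model below stands in for that with a per-cpu atomic flag instead of an IPI; the last thread to start waiting pokes the rest. It mirrors the shape of the mechanism, not the kernel code (build with cc -pthread).

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NCPUS 4

    static atomic_bool poke_pending[NCPUS];     /* stands in for a pending IPI */
    static atomic_int  waiting_count;

    static void poke_others(int me)
    {
            for (int cpu = 0; cpu < NCPUS; cpu++)
                    if (cpu != me)
                            atomic_store(&poke_pending[cpu], true);
    }

    static void *cpu_thread(void *arg)
    {
            int me = (int)(long)arg;

            /* The last cpu to start waiting pokes all the others (the cross
             * call from lines 341-345), then itself, so every cpu leaves the
             * wait loop together. */
            if (atomic_fetch_add(&waiting_count, 1) + 1 == NCPUS) {
                    poke_others(me);
                    atomic_store(&poke_pending[me], true);
            }

            while (!atomic_load(&poke_pending[me]))
                    ;                           /* "wait in the safe state" until poked */

            atomic_store(&poke_pending[me], false);     /* acknowledge the poke */
            printf("cpu%d poked awake, %d cpus waiting\n", me, atomic_load(&waiting_count));
            return NULL;
    }

    int main(void)
    {
            pthread_t t[NCPUS];

            for (long i = 0; i < NCPUS; i++)
                    pthread_create(&t[i], NULL, cpu_thread, (void *)i);
            for (int i = 0; i < NCPUS; i++)
                    pthread_join(t[i], NULL);
            return 0;
    }
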
457 * all the other cpus to call this function. Once all coupled cpus are idle,
458 * the second stage will start. Each coupled cpu will spin until all cpus have
495 * all the other cpus out of their waiting state so they can in cpuidle_enter_state_coupled()
507 * Wait for all coupled cpus to be idle, using the deepest state in cpuidle_enter_state_coupled()
547 * All coupled cpus are probably idle. There is a small chance that in cpuidle_enter_state_coupled()
549 * and spin until all coupled cpus have incremented the counter. Once a in cpuidle_enter_state_coupled()
551 * spin until either all cpus have incremented the ready counter, or in cpuidle_enter_state_coupled()
566 * Make sure read of all cpus ready is done before reading pending pokes in cpuidle_enter_state_coupled()
572 * cpu saw that all cpus were waiting. The cpu that reentered idle will in cpuidle_enter_state_coupled()
578 * coupled idle state of all cpus and retry. in cpuidle_enter_state_coupled()
582 /* Wait for all cpus to see the pending pokes */ in cpuidle_enter_state_coupled()
587 /* all cpus have acked the coupled state */ in cpuidle_enter_state_coupled()
599 * exiting the idle enter function and decrementing ready_count. All in cpuidle_enter_state_coupled()
602 * all other cpus will loop back into the safe idle state instead of in cpuidle_enter_state_coupled()
612 * Wait until all coupled cpus have exited idle. There is no risk that in cpuidle_enter_state_coupled()
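
Lines 457-612 walk through the two stages of cpuidle_enter_state_coupled: each cpu first increments a waiting counter and idles in the safe state until every coupled cpu is waiting, then increments a ready counter and spins until every cpu is ready, at which point the coupled state is entered everywhere at roughly the same time; on the way out each cpu leaves the ready loop and waits for the others before leaving the waiting loop. The single-pass thread model below sketches only that happy path; the pokes, aborts, retries and memory barriers the surrounding lines describe are deliberately left out (build with cc -pthread).

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NCPUS 4

    static atomic_int waiting_count;    /* stage 1: cpus idling in the safe state */
    static atomic_int ready_count;      /* stage 2: cpus committed to the coupled state */

    static void *cpu_thread(void *arg)
    {
            int me = (int)(long)arg;

            /* Stage 1: announce we are idle, then wait (in the real code, in
             * the safe state) until all coupled cpus are waiting. */
            atomic_fetch_add(&waiting_count, 1);
            while (atomic_load(&waiting_count) < NCPUS)
                    ;

            /* Stage 2: spin until every cpu has also incremented the ready
             * counter, so the state is entered on all cpus together. */
            atomic_fetch_add(&ready_count, 1);
            while (atomic_load(&ready_count) < NCPUS)
                    ;

            printf("cpu%d entering the coupled state\n", me);

            /* Exit: leave the ready loop, wait until everyone has left it
             * (compare line 612), then drop out of the waiting count. */
            atomic_fetch_sub(&ready_count, 1);
            while (atomic_load(&ready_count) > 0)
                    ;
            atomic_fetch_sub(&waiting_count, 1);

            printf("cpu%d exited the coupled state\n", me);
            return NULL;
    }

    int main(void)
    {
            pthread_t t[NCPUS];

            for (long i = 0; i < NCPUS; i++)
                    pthread_create(&t[i], NULL, cpu_thread, (void *)i);
            for (int i = 0; i < NCPUS; i++)
                    pthread_join(t[i], NULL);
            return 0;
    }
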
706 /* Force all cpus out of the waiting loop. */ in cpuidle_coupled_prevent_idle()