Lines Matching full:rcu
6 * This is used by RCU to remove its dependency on the timer tick while a CPU
16 * RCU extended quiescent state bits imported from kernel/rcu/tree.c
26 #include <trace/events/rcu.h>
41 /* Record the current task on exiting RCU-tasks (dyntick-idle entry). */
49 /* Record no current task on entering RCU-tasks (dyntick-idle exit). */
57 /* Turn on heavyweight RCU tasks trace readers on kernel exit. */
66 /* Turn off heavyweight RCU tasks trace readers on kernel entry. */
78 * RCU is watching prior to the call to this function and is no longer
86 * CPUs seeing atomic_add_return() must see prior RCU read-side in ct_kernel_exit_state()
92 // RCU is no longer watching. Better be in extended quiescent state! in ct_kernel_exit_state()
98 * called from an extended quiescent state, that is, RCU is not watching
107 * and we also must force ordering with the next RCU read-side in ct_kernel_enter_state()
111 // RCU is now watching. Better not be in an extended quiescent state! in ct_kernel_enter_state()
117 * Enter an RCU extended quiescent state, which can be either the
133 // RCU will still be watching, so just do accounting and leave. in ct_kernel_exit()
149 // RCU is watching here ... in ct_kernel_exit()
156 * Exit an RCU extended quiescent state, which can be either the
172 // RCU was already watching, so just do accounting and leave. in ct_kernel_enter()
177 // RCU is not watching here ... in ct_kernel_enter()
194 * ct_nmi_exit - inform RCU of exit from NMI context
197 * RCU-idle period, update ct->state and ct->nmi_nesting
198 * to let the RCU grace-period handling know that the CPU is back to
199 * being RCU-idle.
211 * (We are exiting an NMI handler, so RCU better be paying attention in ct_nmi_exit()
218 * If the nesting level is not 1, the CPU wasn't RCU-idle, so in ct_nmi_exit()
219 * leave it in non-RCU-idle state. in ct_nmi_exit()
230 /* This NMI interrupted an RCU-idle CPU, restore RCU-idleness. */ in ct_nmi_exit()
238 // RCU is watching here ... in ct_nmi_exit()
247 * ct_nmi_enter - inform RCU of entry to NMI context
249 * If the CPU was idle from RCU's viewpoint, update ct->state and
250 * ct->nmi_nesting to let the RCU grace-period handling know
267 * If idle from RCU's viewpoint, atomically increment CT state in ct_nmi_enter()
271 * to be in the outermost NMI handler that interrupted an RCU-idle in ct_nmi_enter()
279 // RCU is not watching here ... in ct_nmi_enter()
307 * ct_idle_enter - inform RCU that current CPU is entering idle
309 * Enter idle mode, in other words, -leave- the mode in which RCU
310 * read-side critical sections can occur. (Though RCU read-side
325 * ct_idle_exit - inform RCU that current CPU is leaving idle
327 * Exit idle mode, in other words, -enter- the mode in which RCU
344 * ct_irq_enter - inform RCU that current CPU is entering irq away from idle
355 * irq_exit() functions), RCU will give you what you deserve, good and hard.
372 * ct_irq_exit - inform RCU that current CPU is exiting irq towards idle
380 * architecture's idle loop violates this assumption, RCU will give you what
465 * instructions to execute won't use any RCU read-side critical section
466 * because this function puts RCU in an extended quiescent state.
484 * any RCU read-side critical section until the next call to in __ct_user_enter()
485 * user_exit() or ct_irq_enter(). Let's remove RCU's dependency in __ct_user_enter()
502 * Enter RCU idle mode right before resuming userspace. No use of RCU in __ct_user_enter()
504 * CPU doesn't need to maintain the tick for RCU maintenance purposes in __ct_user_enter()
511 * cputime accounting but we don't support RCU extended quiescent state. in __ct_user_enter()
527 * OTOH we can spare the calls to vtime and RCU when context_tracking.active in __ct_user_enter()
531 /* Tracking for vtime only, no concurrent RCU EQS accounting */ in __ct_user_enter()
535 * Tracking for vtime and RCU EQS. Make sure we don't race in __ct_user_enter()
537 * RCU only requires CT_RCU_WATCHING increments to be fully in __ct_user_enter()
551 * unsafe because it involves illegal RCU uses through tracing and lockdep.
566 * helpers are enough to protect RCU uses inside the exception. So in ct_user_enter()
585 * local_irq_restore(), involving illegal RCU uses through tracing and lockdep.
603 * guest space before any use of an RCU read-side critical section. This
620 * Exit RCU idle mode while entering the kernel because it can in __ct_user_exit()
621 * run an RCU read-side critical section at any time. in __ct_user_exit()
633 * cputime accounting but we don't support RCU extended quiescent state. in __ct_user_exit()
641 /* Tracking for vtime only, no concurrent RCU EQS accounting */ in __ct_user_exit()
645 * Tracking for vtime and RCU EQS. Make sure we don't race in __ct_user_exit()
647 * RCU only requires CT_RCU_WATCHING increments to be fully in __ct_user_exit()
661 * unsafe because it involves illegal RCU uses through tracing and lockdep.
687 * involving illegal RCU uses through tracing and lockdep. This is unlikely