Lines Matching full:rcu
50 // not-yet-completed RCU grace periods.
164 * Note a quasi-voluntary context switch for RCU-tasks' benefit.
234 * rcu_trace_implies_rcu_gp - does an RCU Tasks Trace grace period imply an RCU grace period?
236 * As an accident of implementation, an RCU Tasks Trace grace period also
237 * acts as an RCU grace period. However, this could change at any time.
246 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
249 * report potential quiescent states to RCU-tasks even if the cond_resched()
259 * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
266 * provide both RCU and RCU-Tasks quiescent states. Note that this macro
269 * Because regions of code that have disabled softirq act as RCU read-side
276 * effect because cond_resched() does not provide RCU-Tasks quiescent states.
299 #error "Unknown RCU implementation specified to kernel configuration"
400 * ("rcu: Reject RCU_LOCKDEP_WARN() false positives") for more detail.
416 "Illegal context switch in RCU read-side critical section"); in rcu_preempt_sleep_check()
427 "Illegal context switch in RCU-bh read-side critical section"); \
429 "Illegal context switch in RCU-sched read-side critical section"); \
470 * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader
472 * Splats if lockdep is enabled and there is no RCU reader of any
475 * as RCU readers.
503 * multiple pointer markings to match different RCU implementations
521 * unrcu_pointer - mark a pointer as not being RCU protected
527 #define unrcu_pointer(p) __unrcu_pointer(p, __UNIQUE_ID(rcu))
555 #define rcu_dereference_raw(p) __rcu_dereference_raw(p, __UNIQUE_ID(rcu))
558 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
564 * rcu_assign_pointer() - assign to RCU-protected pointer
568 * Assigns the specified value to the specified RCU-protected
569 * pointer, ensuring that any concurrent RCU readers will see
576 * will be dereferenced by RCU read-side code.
606 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
607 * @rcu_ptr: RCU pointer, whose old value is returned
611 * Perform a replacement, where @rcu_ptr is an RCU-annotated
624 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
627 * Return the value of the specified RCU-protected pointer, but omit the
628 * lockdep checks for being in an RCU read-side critical section. This is
630 * not dereferenced, for example, when testing an RCU-protected pointer
634 * Within an RCU read-side critical section, there is little reason to
645 * the case in the context of the RCU callback that is freeing up the data,
650 #define rcu_access_pointer(p) __rcu_access_pointer((p), __UNIQUE_ID(rcu), __rcu)
661 * An implicit check for being in an RCU read-side critical section
682 * which pointers are protected by RCU and checks that the pointer is
686 __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
694 * This is the RCU-bh counterpart to rcu_dereference_check(). However,
695 * please note that starting in v5.0 kernels, vanilla RCU grace periods
702 __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
710 * This is the RCU-sched counterpart to rcu_dereference_check().
711 * However, please note that starting in v5.0 kernels, vanilla RCU grace
718 __rcu_dereference_check((p), __UNIQUE_ID(rcu), \
723 * The tracing infrastructure traces RCU (we want that), but unfortunately
724 * some of the RCU checks cause tracing to lock up the system.
730 __rcu_dereference_check((p), __UNIQUE_ID(rcu), 1, __rcu)
733 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
737 * Return the value of the specified RCU-protected pointer, but omit
749 __rcu_dereference_protected((p), __UNIQUE_ID(rcu), (c), __rcu)
753 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
761 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
769 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
777 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
781 * is handed off from RCU to some other synchronization mechanism, for
799 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
802 * are within RCU read-side critical sections, then the
805 * on one CPU while other CPUs are within RCU read-side critical
806 * sections, invocation of the corresponding RCU callback is deferred
815 * Note, however, that RCU callbacks are permitted to run concurrently
816 * with new RCU read-side critical sections. One way that this can happen
817 * is via the following sequence of events: (1) CPU 0 enters an RCU
819 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
820 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
821 * callback is invoked. This is legal, because the RCU read-side critical
823 * therefore might be referencing something that the corresponding RCU
825 * RCU callback is invoked.
827 * RCU read-side critical sections may be nested. Any deferred actions
828 * will be deferred until the outermost RCU read-side critical section
832 * following this rule: don't put anything in an rcu_read_lock() RCU
836 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
837 * it is illegal to block while in an RCU read-side critical section.
838 * In preemptible RCU implementations (PREEMPT_RCU) in CONFIG_PREEMPTION
839 * kernel builds, RCU read-side critical sections may be preempted,
840 * but explicit blocking is illegal. Finally, in preemptible RCU
841 * implementations in real-time (with -rt patchset) kernel builds, RCU
848 __acquire(RCU); in rcu_read_lock()
856 * way for writers to lock out RCU readers. This is a feature, not
857 * a bug -- this property is what provides RCU's performance benefits.
860 * used as well. RCU does not care how the writers keep out of each
865 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
881 __release(RCU); in rcu_read_unlock()
886 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
889 * Note that anything else that disables softirqs can also serve as an RCU
909 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
923 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
954 * rcu_read_unlock_sched() - marks the end of an RCU-sched critical section
975 * RCU_INIT_POINTER() - initialize an RCU protected pointer
979 * Initialize an RCU-protected pointer in special cases where readers
985 * RCU readers from concurrently accessing this pointer *or*
998 * will look OK in crash dumps, but any concurrent RCU readers might
1002 * If you are creating an RCU-protected linked structure that is accessed
1003 * by a single external-to-structure RCU-protected pointer, then you may
1004 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
1019 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
1023 * GCC-style initialization for an RCU-protected pointer in a structure field.
1039 * Many RCU callback functions just call kfree() on the base structure.
1143 * in an RCU read-side critical section that includes a read-side fetch
1161 DEFINE_LOCK_GUARD_0(rcu,
1171 __release(RCU);