
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * This implements a refcount with similar semantics to atomic_t - atomic_inc(),
 * atomic_dec_and_test() - but percpu.
 *
 * There's one important difference between percpu refs and normal atomic_t
 * refcounts: you have to keep track of your initial refcount, and when you
 * start shutting down you call percpu_ref_kill() *before* dropping it.
 *
 * The refcount will have a range of 0 to ((1U << 31) - 1), i.e. one bit less
 * than an atomic_t - this is because of the way shutdown works, see
 * percpu_ref_kill() and PERCPU_COUNT_BIAS.
 *
 * Until percpu_ref_kill() is called, percpu_ref_put() does not check for the
 * refcount hitting 0 - it can't, if it was in percpu mode. percpu_ref_kill()
 * puts the ref back in single atomic_t mode, collecting the per cpu refs and
 * issuing the required barriers; after that, puts do check for zero and it is
 * safe to drop the initial ref.
 *
 * In the aio code, kill_ioctx() is called when we wish to destroy a kioctx; it
 * removes the kioctx from the process's table of kioctxs and kills the percpu
 * ref, after which no new users can appear and the initial ref can be dropped
 * with percpu_ref_put().
 *
 * Code that does a two stage shutdown like this often needs some kind of
 * explicit synchronization to ensure the initial refcount can only be dropped
 * once - percpu_ref_kill() does this for you, it returns true once and false if
 * someone else already called it.
 */
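
/*
 * Illustration (not part of this header): a minimal sketch of the two stage
 * shutdown pattern described above.  The names (struct my_obj, my_obj_create(),
 * my_obj_destroy(), my_obj_release()) are invented for this example.
 */
struct my_obj {
        struct percpu_ref       ref;
        /* ... payload ... */
};

static void my_obj_release(struct percpu_ref *ref)
{
        struct my_obj *obj = container_of(ref, struct my_obj, ref);

        percpu_ref_exit(&obj->ref);     /* release is a common place to exit the ref */
        kfree(obj);                     /* last reference is gone */
}

static int my_obj_create(struct my_obj *obj)
{
        /* initial ref == 1, percpu (fast) mode */
        return percpu_ref_init(&obj->ref, my_obj_release, 0, GFP_KERNEL);
}

static void my_obj_destroy(struct my_obj *obj)
{
        percpu_ref_kill(&obj->ref);     /* switch to atomic mode, mark dying */
        percpu_ref_put(&obj->ref);      /* drop the initial ref */
}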
/* flags set in the lower bits of percpu_ref->percpu_count_ptr */
enum {
        __PERCPU_REF_ATOMIC     = 1LU << 0,     /* operating in atomic mode */
        __PERCPU_REF_DEAD       = 1LU << 1,     /* (being) killed */
        __PERCPU_REF_ATOMIC_DEAD = __PERCPU_REF_ATOMIC | __PERCPU_REF_DEAD,
};

/* percpu_ref_init() flags */
enum {
        /*
         * Start w/ ref == 1 in atomic mode.  Can be switched to percpu
         * operation with percpu_ref_switch_to_percpu(); if initialized
         * with this flag, the ref will stay in atomic mode until that
         * switch is requested.
         */
        PERCPU_REF_INIT_ATOMIC  = 1 << 0,

        /*
         * Start dead w/ ref == 0 in atomic mode.  Must be revived with
         * percpu_ref_reinit() before it can be used.  Implies INIT_ATOMIC.
         */
        PERCPU_REF_INIT_DEAD    = 1 << 1,
};
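
/*
 * Illustration only: starting a ref in atomic mode and switching it to percpu
 * mode once setup is done.  The names here are invented; the pattern is
 * similar in spirit to how the block layer manages a queue usage counter.
 */
static int my_usage_ref_setup(struct percpu_ref *my_usage_ref,
                              percpu_ref_func_t *release)
{
        int ret;

        /* start with ref == 1 in (slower) atomic mode */
        ret = percpu_ref_init(my_usage_ref, release,
                              PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
        if (ret)
                return ret;

        /* ... finish initialization ... */

        /* switch to percpu mode for the object's active lifetime */
        percpu_ref_switch_to_percpu(my_usage_ref);
        return 0;
}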
struct percpu_ref {
        /*
         * The low bit of the pointer indicates whether the ref is in percpu
         * mode; if set, then get/put will manipulate the atomic_t.
         */
        unsigned long           percpu_count_ptr;

        /*
         * 'percpu_count_ptr' is the only field required in the fast path, so
         * the remaining fields live in 'percpu_ref_data' to reduce the memory
         * footprint of the fast path.
         */
        struct percpu_ref_data  *data;
};
/**
 * percpu_ref_kill - drop the initial ref
 * @ref: percpu_ref to kill
 *
 * Must be used to drop the initial ref on a percpu refcount; must be called
 * precisely once before shutdown.
 *
 * Switches @ref into atomic mode before gathering up the percpu counters
 * and dropping the initial ref.
 *
 * There are no implied RCU grace periods between kill and release.
 */
static inline void percpu_ref_kill(struct percpu_ref *ref)
{
        percpu_ref_kill_and_confirm(ref, NULL);
}
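
/*
 * Illustration only: because no RCU grace period is implied between kill and
 * release, an object that RCU-protected lookups may still be dereferencing
 * has to defer its own freeing.  struct my_rcu_obj, my_rcu_obj_free() and
 * my_rcu_obj_release() are invented names.
 */
struct my_rcu_obj {
        struct percpu_ref       ref;
        struct rcu_head         rcu;
        /* ... payload ... */
};

static void my_rcu_obj_free(struct rcu_head *rcu)
{
        struct my_rcu_obj *obj = container_of(rcu, struct my_rcu_obj, rcu);

        percpu_ref_exit(&obj->ref);
        kfree(obj);
}

static void my_rcu_obj_release(struct percpu_ref *ref)
{
        struct my_rcu_obj *obj = container_of(ref, struct my_rcu_obj, ref);

        /* no implied grace period: wait out concurrent RCU lookups ourselves */
        call_rcu(&obj->rcu, my_rcu_obj_free);
}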
/*
 * Internal helper.  Don't use outside percpu-refcount proper.  The function
 * doesn't return the percpu pointer for the caller to NULL-test because that
 * would force the compiler to generate two conditional branches, as it can't
 * assume that @ref->percpu_count is not NULL.
 */
static inline bool __ref_is_percpu(struct percpu_ref *ref,
                                   unsigned long __percpu **percpu_countp)
{
        unsigned long percpu_ptr;

        /*
         * The value of @ref->percpu_count_ptr is tested for !__PERCPU_REF_ATOMIC
         * and then used as a pointer.  If the compiler generated a separate
         * fetch when using it as a pointer, __PERCPU_REF_ATOMIC could be set in
         * between, contaminating the pointer value; READ_ONCE() is therefore
         * required when fetching it.  The dependency ordering from the
         * READ_ONCE() pairs with smp_store_release() in
         * __percpu_ref_switch_to_percpu().
         */
        percpu_ptr = READ_ONCE(ref->percpu_count_ptr);

        /* DEAD implies ATOMIC; testing both bits keeps the race with kill simple */
        if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC_DEAD))
                return false;

        *percpu_countp = (unsigned long __percpu *)percpu_ptr;
        return true;
}
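
/*
 * Illustration only: how the flag bits share the low bits of percpu_count_ptr.
 * my_ref_mode() is a made-up debugging helper, not a kernel API, and it
 * ignores the synchronization that real users need.
 */
static const char *my_ref_mode(struct percpu_ref *ref)
{
        unsigned long v = READ_ONCE(ref->percpu_count_ptr);

        if (v & __PERCPU_REF_DEAD)
                return "atomic, dying/dead";
        if (v & __PERCPU_REF_ATOMIC)
                return "atomic";
        return "percpu";        /* v is the (aligned) percpu counter pointer */
}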
/**
 * percpu_ref_get_many - increment a percpu refcount
 * @ref: percpu_ref to get
 * @nr: number of references to get
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
{
        /* percpu-mode fast path omitted here; in atomic mode: */
        atomic_long_add(nr, &ref->data->count);
}
/**
 * percpu_ref_get - increment a percpu refcount
 * @ref: percpu_ref to get
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_get(struct percpu_ref *ref)
{
        percpu_ref_get_many(ref, 1);
}
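
/*
 * Illustration only, reusing the invented struct my_obj from the earlier
 * sketch: a caller that already holds a reference can take extra references
 * with percpu_ref_get() and have them dropped later with percpu_ref_put().
 */
static void hand_off_obj(struct my_obj *obj, struct work_struct *work)
{
        percpu_ref_get(&obj->ref);      /* pin obj for the async work */
        schedule_work(work);            /* worker does percpu_ref_put(&obj->ref) */
}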
/**
 * percpu_ref_tryget_many - try to increment a percpu refcount
 * @ref: percpu_ref to try-get
 * @nr: number of references to get
 *
 * Increment a percpu refcount unless its count already reached zero.
 * Returns %true on success; %false on failure.
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline bool percpu_ref_tryget_many(struct percpu_ref *ref,
                                          unsigned long nr)
{
        bool ret;

        /* percpu-mode fast path omitted here; in atomic mode: */
        ret = atomic_long_add_unless(&ref->data->count, nr, 0);

        return ret;
}
/**
 * percpu_ref_tryget - try to increment a percpu refcount
 * @ref: percpu_ref to try-get
 *
 * Increment a percpu refcount unless its count already reached zero.
 * Returns %true on success; %false on failure.
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline bool percpu_ref_tryget(struct percpu_ref *ref)
{
        return percpu_ref_tryget_many(ref, 1);
}
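
/*
 * Illustration only: unlike percpu_ref_tryget_live() below, percpu_ref_tryget()
 * keeps succeeding after percpu_ref_kill() until the count actually reaches
 * zero.  The ref here is assumed to be embedded in a longer-lived structure
 * ("struct my_parent", an invented name), so calling it while the child is
 * dying is still within init/exit.
 */
static bool my_parent_grab_child(struct my_parent *parent)
{
        if (!percpu_ref_tryget(&parent->child_ref))
                return false;           /* child already fully released */
        /* ... operate on the child state, even during teardown ... */
        percpu_ref_put(&parent->child_ref);
        return true;
}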
/**
 * percpu_ref_tryget_live_rcu - same as percpu_ref_tryget_live() but the
 * caller is responsible for taking RCU.
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
        bool ret = false;

        /*
         * percpu-mode fast path omitted here; in atomic mode, refuse refs
         * that have already been killed:
         */
        if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
                ret = atomic_long_inc_not_zero(&ref->data->count);
        }
        return ret;
}
/**
 * percpu_ref_tryget_live - try to increment a live percpu refcount
 * @ref: percpu_ref to try-get
 *
 * Increment a percpu refcount unless it has already been killed.  Returns
 * %true on success; %false on failure.
 *
 * Completion of percpu_ref_kill() in itself doesn't guarantee that this
 * function will fail.  For such guarantee, percpu_ref_kill_and_confirm()
 * should be used; after the confirm_kill callback is invoked, no new
 * reference will be given out by percpu_ref_tryget_live().
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
        bool ret;

        rcu_read_lock();
        ret = percpu_ref_tryget_live_rcu(ref);
        rcu_read_unlock();
        return ret;
}
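
/*
 * Illustration only: the usual lookup pattern.  New users are only admitted
 * while the ref is alive; once percpu_ref_kill() has run, the lookup fails
 * and teardown can proceed.  The xarray "my_objects" and my_obj_lookup() are
 * invented; percpu_ref_tryget_live() would work equally well if the caller
 * did not already hold rcu_read_lock().
 */
static DEFINE_XARRAY(my_objects);       /* id -> struct my_obj */

static struct my_obj *my_obj_lookup(unsigned long id)
{
        struct my_obj *obj;

        rcu_read_lock();
        obj = xa_load(&my_objects, id);
        if (obj && !percpu_ref_tryget_live_rcu(&obj->ref))
                obj = NULL;             /* found, but already being killed */
        rcu_read_unlock();

        return obj;                     /* caller does percpu_ref_put() when done */
}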
/**
 * percpu_ref_put_many - decrement a percpu refcount
 * @ref: percpu_ref to put
 * @nr: number of references to put
 *
 * Decrement the refcount, and if 0, call the release function (which was
 * passed to percpu_ref_init()).
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
{
        /* percpu-mode fast path omitted here; in atomic mode: */
        if (unlikely(atomic_long_sub_and_test(nr, &ref->data->count)))
                ref->data->release(ref);
}
/**
 * percpu_ref_put - decrement a percpu refcount
 * @ref: percpu_ref to put
 *
 * Decrement the refcount, and if 0, call the release function (which was
 * passed to percpu_ref_init()).
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_put(struct percpu_ref *ref)
{
        percpu_ref_put_many(ref, 1);
}
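
/*
 * Illustration only: references taken in bulk with percpu_ref_get_many() can
 * be dropped in bulk with percpu_ref_put_many(), e.g. when completing a batch
 * of requests that each pinned the same invented struct my_obj.
 */
static void my_complete_batch(struct my_obj *obj, unsigned int nr_done)
{
        /* ... complete the nr_done requests ... */
        percpu_ref_put_many(&obj->ref, nr_done);        /* may invoke release */
}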
/**
 * percpu_ref_is_dying - test whether a percpu refcount is dying or dead
 * @ref: percpu_ref to test
 *
 * Returns %true if @ref is dying or dead.
 *
 * This function is safe to call as long as @ref is between init and exit
 * and the caller is responsible for synchronizing against state changes.
 */
static inline bool percpu_ref_is_dying(struct percpu_ref *ref)
{
        return ref->percpu_count_ptr & __PERCPU_REF_DEAD;
}
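
/*
 * Illustration only: percpu_ref_is_dying() is typically consulted under the
 * same lock that serializes kill and reinit, e.g. to make a freeze operation
 * idempotent.  The mutex field "my_freeze_lock" and my_obj_freeze() are
 * invented names.
 */
static void my_obj_freeze(struct my_obj *obj)
{
        mutex_lock(&obj->my_freeze_lock);
        if (!percpu_ref_is_dying(&obj->ref))
                percpu_ref_kill(&obj->ref);     /* only the first freezer kills */
        mutex_unlock(&obj->my_freeze_lock);
}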