Lines Matching full:readers
8 * 2) Remove the reader BIAS to force readers into the slow path
9 * 3) Wait until all readers have left the critical section
14 * 2) Set the reader BIAS, so readers can use the fast path again
15 * 3) Unlock rtmutex, to release blocked readers
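
These hits appear to come from kernel/locking/rwbase_rt.c, the common base of the PREEMPT_RT rw_semaphore and rwlock implementations, and the two step lists above are its file-header description of the write side. As a rough sketch of that protocol, here is a minimal userspace model using C11 atomics and a pthread mutex in place of the kernel's atomic_t and rtmutex; every name here (rwbase_model, model_write_lock, ...) is illustrative, not the kernel's, and the busy-wait stands in for sleeping on the rtmutex:

#include <limits.h>
#include <pthread.h>
#include <stdatomic.h>

#define READER_BIAS	INT_MIN		/* the kernel uses (1U << 31) in an int */
#define WRITER_BIAS	(1 << 30)

struct rwbase_model {
	atomic_int	readers;	/* READER_BIAS while writer-free */
	pthread_mutex_t	rtmutex;	/* stand-in for the RT mutex */
};

/* static initializer: bias in place, reader fast path open */
#define RWBASE_MODEL_INIT	{ READER_BIAS, PTHREAD_MUTEX_INITIALIZER }

static void model_write_lock(struct rwbase_model *rwb)
{
	pthread_mutex_lock(&rwb->rtmutex);		/* 1) lock rtmutex */
	atomic_fetch_sub(&rwb->readers, READER_BIAS);	/* 2) drop BIAS: readers >= 0 */
	/* 3) wait until all readers have left; the kernel sleeps here */
	while (atomic_load_explicit(&rwb->readers, memory_order_acquire) != 0)
		;
	atomic_store(&rwb->readers, WRITER_BIAS);	/* 4) mark it write locked */
}

static void model_write_unlock(struct rwbase_model *rwb)
{
	/* 1)+2) clear the write marker and restore the BIAS in one atomic op;
	 * the delta is computed in unsigned arithmetic to avoid signed overflow */
	atomic_fetch_add_explicit(&rwb->readers,
				  (int)((unsigned)READER_BIAS - WRITER_BIAS),
				  memory_order_release);
	pthread_mutex_unlock(&rwb->rtmutex);		/* 3) release blocked readers */
}

Because the bias is only ever removed while the rtmutex is held, a reader can treat "readers < 0" as "no writer owns or is acquiring the lock", which is what keeps the reader fast path a single atomic operation.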
34 * active readers. A blocked writer would force all newly incoming readers to block on the rtmutex
45 * The lock/unlock of readers can run in fast paths: lock and unlock are only atomic ops
58 * Increment reader count, if sem->readers < 0, i.e. READER_BIAS is set in rwbase_read_trylock()
61 for (r = atomic_read(&rwb->readers); r < 0;) { in rwbase_read_trylock()
62 if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1))) in rwbase_read_trylock()
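
The loop above is the entire reader fast path promised by the header comment: one try_cmpxchg with acquire ordering and no inner lock. A self-contained C11 rendering of the same loop (model_read_trylock is an illustrative name, not the kernel's):

#include <stdatomic.h>
#include <stdbool.h>

static bool model_read_trylock(atomic_int *readers)
{
	int r = atomic_load(readers);

	/* r < 0 means READER_BIAS is intact: no writer owns the lock or is
	 * draining readers, so bump the count and enter */
	while (r < 0) {
		/* on failure, r is reloaded with the current value */
		if (atomic_compare_exchange_weak_explicit(readers, &r, r + 1,
							  memory_order_acquire,
							  memory_order_relaxed))
			return true;
	}
	return false;	/* bias gone: the caller must take the rtmutex slow path */
}

The acquire on success pairs with the release half of the writer's unlock, so a reader that wins the exchange observes everything the previous writer wrote.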
123 atomic_inc(&rwb->readers); in __rwbase_read_lock()
155 * clean up rwb->readers it needs to acquire rtm->wait_lock. The in __rwbase_read_unlock()
172 * rwb->readers can only hit 0 when a writer is waiting for the in rwbase_read_unlock()
173 * active readers to leave the critical section. in rwbase_read_unlock()
177 if (unlikely(atomic_dec_and_test(&rwb->readers))) in rwbase_read_unlock()
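
Reading the comment and the dec_and_test together: a reader's decrement can only bring the count to zero after a writer has already subtracted READER_BIAS, so "hit zero" doubles as "I was the last reader the writer is waiting for". A sketch under the same modelling assumptions, where wake_writer stands in for the kernel's __rwbase_read_unlock() slow path:

#include <stdatomic.h>

static void model_read_unlock(atomic_int *readers, void (*wake_writer)(void))
{
	/* fetch_sub returning 1 means the count just hit 0, which is only
	 * possible once a writer has removed READER_BIAS and is draining */
	if (atomic_fetch_sub_explicit(readers, 1, memory_order_release) == 1)
		wake_writer();
}

The kernel's atomic_dec_and_test() is fully ordered; the release half is the part that pairs with the waiting writer's acquire of readers == 0.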
190 (void)atomic_add_return_release(READER_BIAS - bias, &rwb->readers); in __rwbase_write_unlock()
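
This single add_return_release is what the bias argument exists for: one atomic addition simultaneously removes whatever write-side marker is in place and restores the reader bias. With the kernel's values, READER_BIAS == 0x80000000 and WRITER_BIAS == 0x40000000, computed in unsigned arithmetic, the cases work out as follows (the downgrade and failed-trylock bias values are my reading of the callers, not shown in these hits):

/*
 * Full write unlock, bias == WRITER_BIAS:
 *	readers = WRITER_BIAS + (READER_BIAS - WRITER_BIAS)
 *		= 0x40000000 + 0x40000000 = 0x80000000 = READER_BIAS
 *	-> readers < 0 again, the reader fast path reopens.
 *
 * Downgrade write to read, bias == WRITER_BIAS - 1:
 *	readers = WRITER_BIAS + (READER_BIAS - WRITER_BIAS + 1)
 *		= READER_BIAS + 1
 *	-> bias restored with the caller already counted as one reader.
 *
 * Failed write_trylock cleanup, bias == 0, N readers still active:
 *	readers = N + READER_BIAS
 *	-> the N readers keep their count, bias restored.
 */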
223 if (!atomic_read_acquire(&rwb->readers)) { in __rwbase_write_trylock()
224 atomic_set(&rwb->readers, WRITER_BIAS); in __rwbase_write_trylock()
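
Claiming the lock is a plain load-then-store: the acquire load pairs with the readers' release decrements, so seeing readers == 0 means every reader's critical section is finished, and writing WRITER_BIAS marks the lock write locked. In the kernel this pair runs with rtm->wait_lock held, which is what makes check-then-set safe; the model below simply assumes a single writer:

#include <stdatomic.h>
#include <stdbool.h>

#define WRITER_BIAS	(1 << 30)

static bool model_write_claim(atomic_int *readers)
{
	/* acquire pairs with the release in the readers' unlock path */
	if (atomic_load_explicit(readers, memory_order_acquire) == 0) {
		atomic_store(readers, WRITER_BIAS);	/* mark write locked */
		return true;
	}
	return false;
}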
241 /* Force readers into slow path */ in rwbase_write_lock()
242 atomic_sub(READER_BIAS, &rwb->readers); in rwbase_write_lock()
288 atomic_sub(READER_BIAS, &rwb->readers); in rwbase_write_trylock()
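
The trylock variant removes the bias exactly like the blocking path, but it cannot wait for the count to drain: if readers are still inside, it has to restore READER_BIAS and fail, which I read as the bias == 0 call into __rwbase_write_unlock() covered above. A model of that shape, with the same illustrative names (the kernel serializes this whole sequence by trylocking the rtmutex first, so only one writer runs it at a time):

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

#define READER_BIAS	INT_MIN		/* (1U << 31) as an int, as in the kernel */
#define WRITER_BIAS	(1 << 30)

static bool model_write_trylock(atomic_int *readers)
{
	atomic_fetch_sub(readers, READER_BIAS);	/* force readers into slow path */

	if (atomic_load_explicit(readers, memory_order_acquire) == 0) {
		atomic_store(readers, WRITER_BIAS);	/* claim: write locked */
		return true;
	}
	/* readers still active: put the bias back instead of waiting */
	atomic_fetch_add(readers, READER_BIAS);
	return false;
}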