
Motivation
----------
- more efficient memory utilization by sharing a ring buffer across CPUs;
- preserving ordering of events that happen sequentially in time, even across
  multiple CPUs (e.g., fork/exec/exit events for a task).
Both are a result of the choice to have a per-CPU perf ring buffer. Both can
also be solved with an MPSC (multi-producer, single-consumer) ring buffer
implementation. The ordering problem could technically be solved for the perf
buffer with some in-kernel counting, but given that the first problem requires
an MPSC buffer anyway, the same solution covers both.
Semantics and APIs
------------------
The chosen approach has the advantage of re-using the existing BPF map
infrastructure. A ``BPF_MAP_TYPE_RINGBUF`` map can also be
combined with ``ARRAY_OF_MAPS`` and ``HASH_OF_MAPS`` map-in-maps to implement
a wide variety of topologies, from one ring buffer per CPU (mirroring the perf
buffer setup) to application-defined hashing/sharding of ring buffers. A
minimal map declaration is sketched below.
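For illustration, a ring buffer is declared on the BPF side like any other
BTF-defined map; the map name ``events`` and the 256 KB size below are
arbitrary choices for this sketch:

.. code-block:: c

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* A single ring buffer shared by all CPUs. max_entries defines the
     * size of the data area; it must be a power of 2 and a multiple of
     * the page size. */
    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 256 * 1024);
    } events SEC(".maps");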
``BPF_MAP_TYPE_RINGBUF`` shares a number of features with the perf buffer:

- variable-length records;
- if there is no more space left in the ring buffer, reservation fails and
  there is no blocking;
- memory-mappable data area for user-space applications, for ease of
  consumption and high performance;
- epoll notifications for new incoming data;
- but still the ability to do busy polling for new data to achieve the
  lowest possible latency, if necessary; both modes are shown in the sketch
  after this list.
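A minimal user-space consumer sketch built on libbpf's ``ring_buffer`` API,
showing both the epoll-based and the busy-polling modes; ``handle_event()``
and the error handling are simplified placeholders:

.. code-block:: c

    #include <bpf/libbpf.h>

    /* called once per record; a negative return stops consumption */
    static int handle_event(void *ctx, void *data, size_t len)
    {
            return 0;
    }

    int consume_events(int map_fd)
    {
            struct ring_buffer *rb;

            rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
            if (!rb)
                    return -1;

            /* epoll mode: sleep until the kernel signals new data,
             * for up to 100 ms */
            ring_buffer__poll(rb, 100);

            /* busy-polling mode: consume whatever is available right
             * now, without blocking, for the lowest latency */
            ring_buffer__consume(rb);

            ring_buffer__free(rb);
            return 0;
    }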
BPF ringbuf provides two sets of APIs to BPF programs:

- ``bpf_ringbuf_output()`` allows a BPF program to *copy* data into the ring
  buffer, similarly to ``bpf_perf_event_output()``;
- ``bpf_ringbuf_reserve()``/``bpf_ringbuf_submit()``/``bpf_ringbuf_discard()``
  split the process into two steps: a fixed amount of space is reserved
  first, and the record is later submitted (or discarded) once filled in.
``bpf_ringbuf_output()`` incurs an extra memory copy, because the record has
to be prepared somewhere else first. In exchange, it allows submitting records
of a length that's not known to the verifier beforehand. It also closely
matches the ``bpf_perf_event_output()`` API, which simplifies migrating
existing perf buffer code.
Records are often larger than the BPF stack allows, so many programs have to
use an extra per-CPU array as a temporary heap for preparing a sample before
copying it out. ``bpf_ringbuf_reserve()`` avoids this need completely, because
the returned pointer points directly into ring buffer memory. In exchange, it
only allows a known, constant size of memory to be reserved, so that the
verifier can check that the program stays within the bounds of the record.
Both styles are sketched below.
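A sketch contrasting the two styles on the BPF side; the ``event`` layout, the
``rb`` map name, and the tracepoint attach points are arbitrary choices, not
taken from the kernel tree:

.. code-block:: c

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    char LICENSE[] SEC("license") = "GPL";

    struct event {
            int pid;
            char comm[16];
    };

    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 4096);
    } rb SEC(".maps");

    SEC("tp/sched/sched_process_exec")
    int two_step(void *ctx)
    {
            struct event *e;

            /* reserve a fixed-size record directly in ring buffer
             * memory; on failure nothing blocks, the sample is lost */
            e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
            if (!e)
                    return 0;

            e->pid = bpf_get_current_pid_tgid() >> 32;
            bpf_get_current_comm(e->comm, sizeof(e->comm));
            bpf_ringbuf_submit(e, 0);
            return 0;
    }

    SEC("tp/sched/sched_process_exit")
    int one_step(void *ctx)
    {
            struct event e = {};

            e.pid = bpf_get_current_pid_tgid() >> 32;
            bpf_get_current_comm(e.comm, sizeof(e.comm));
            /* prepare the sample elsewhere (here, on the stack), then
             * copy it into the ring buffer in one step */
            bpf_ringbuf_output(&rb, &e, sizeof(e), 0);
            return 0;
    }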
Discard is useful for some advanced use-cases, such as ensuring
all-or-nothing multi-record submission, or emulating temporary
``malloc()``/``free()`` semantics within a single BPF program invocation.
Each reserved record is tracked by the verifier through its existing
reference-tracking logic, similar to socket ref-tracking. It is thus
impossible to reserve a record and forget to submit or discard it; a discard
sketch follows.
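Continuing the hypothetical ``rb`` map and ``struct event`` from the sketch
above, a reserved record that turns out to be unwanted is discarded instead of
submitted, and the consumer skips over it:

.. code-block:: c

    SEC("tp/sched/sched_process_fork")
    int maybe_emit(void *ctx)
    {
            struct event *e;
            int pid;

            e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
            if (!e)
                    return 0;

            pid = bpf_get_current_pid_tgid() >> 32;
            if (pid == 1) {
                    /* arbitrary filtering decision made after the
                     * reservation: the record must still be released */
                    bpf_ringbuf_discard(e, 0);
                    return 0;
            }

            e->pid = pid;
            bpf_get_current_comm(e->comm, sizeof(e->comm));
            bpf_ringbuf_submit(e, 0);
            return 0;
    }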
The ``bpf_ringbuf_query()`` helper reports various properties of the ring
buffer, depending on the flag passed to it:

- ``BPF_RB_AVAIL_DATA`` returns the amount of unconsumed data in the ring
  buffer;
- ``BPF_RB_RING_SIZE`` returns the size of the ring buffer;
- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` return the current logical
  positions of the consumer and producer, respectively.
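A sketch of a producer-side heuristic built on ``bpf_ringbuf_query()``, again
assuming the hypothetical ``rb`` map from above; the half-full threshold is an
arbitrary policy choice:

.. code-block:: c

    /* returns true if more than half of the ring buffer is unconsumed;
     * a producer might use this to shed low-priority events */
    static __always_inline int ringbuf_backlogged(void)
    {
            __u64 avail = bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA);
            __u64 size = bpf_ringbuf_query(&rb, BPF_RB_RING_SIZE);

            return avail > size / 2;
    }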
Any heuristic built on these values has to take into account the highly
changeable nature of some of those characteristics: the returned values are
momentary snapshots that may already be stale by the time the helper returns.
One such heuristic might involve more fine-grained control over poll/epoll
notifications about new data availability in the ring buffer. Together with
the ``BPF_RB_NO_WAKEUP``/``BPF_RB_FORCE_WAKEUP`` flags accepted by the
output/submit/discard helpers, this gives a BPF program a high degree of
control, enabling, e.g., more efficient batched notifications (sketched after
this paragraph). The default self-balancing strategy, though, should be
adequate for most applications and works reliably and efficiently out of the
box.
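A sketch of such batched notifications, reusing the hypothetical ``rb`` map
and ``struct event``; the global ``seq`` counter (racy across CPUs, which is
acceptable for a heuristic) and the batch size of 64 are illustrative:

.. code-block:: c

    static __u64 seq;

    static __always_inline void submit_batched(struct event *e)
    {
            /* suppress the wakeup for most records, but force one for
             * every 64th, so the consumer wakes up in batches */
            if (++seq % 64 == 0)
                    bpf_ringbuf_submit(e, BPF_RB_FORCE_WAKEUP);
            else
                    bpf_ringbuf_submit(e, BPF_RB_NO_WAKEUP);
    }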
Design and Implementation
-------------------------
The ring buffer can be used from BPF programs running in any context; this
applies to NMI context as well, except that, because a spinlock is taken
during reservation, ``bpf_ringbuf_reserve()`` called in NMI context might
fail to acquire the lock, in which case the reservation is aborted even if
the ring buffer is not full.
The ring buffer itself is internally implemented as a power-of-2 sized
circular buffer, with two logical, ever-increasing counters (which might
wrap around on 32-bit architectures; that's not a problem):
- the consumer counter shows the logical position up to which the consumer
  has consumed the data;
- the producer counter denotes the amount of data reserved by all producers.
Each record has an 8-byte header, which contains the length of the reserved
record, as well as two extra bits: a busy bit to denote that the record is
still being worked on, and a discard bit, which is set if the record is
discarded, telling the consumer to skip over it.
The record header also encodes enough information to restore the ring
buffer's memory location from the record pointer alone, so
``bpf_ringbuf_submit()``/``bpf_ringbuf_discard()`` accept only the pointer to
the record itself, without requiring the ring buffer pointer as well. This
significantly simplifies the verifier and improves API usability.
One interesting implementation detail, which significantly simplifies (and
thus also speeds up) both the producer and the consumer code, is that the
data area is mapped twice, contiguously back-to-back, in virtual memory. This
means a record that wraps around the end of the circular buffer can still be
accessed as a single contiguous chunk of virtual memory, with no special
handling; see the consumer sketch below.
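A heavily simplified user-space consumer sketch showing how the counters, the
record header bits, and the double mapping fit together; ``handle_record()``
is a hypothetical callback, and the atomics/memory barriers that a real
consumer needs (libbpf's ``ring_buffer`` API handles them) are omitted:

.. code-block:: c

    #include <linux/bpf.h> /* BPF_RINGBUF_BUSY_BIT, BPF_RINGBUF_DISCARD_BIT,
                            * BPF_RINGBUF_HDR_SZ */

    void handle_record(void *data, __u32 len); /* hypothetical */

    /* cons_pos/prod_pos point into the mmap()'ed counter pages; data is
     * the start of the double-mapped data area; mask is ring size - 1 */
    void consume_all(unsigned long *cons_pos, unsigned long *prod_pos,
                     void *data, unsigned long mask)
    {
            while (*cons_pos < *prod_pos) {
                    /* the double mapping makes the record contiguous in
                     * virtual memory even when it wraps around the end
                     * of the data area */
                    __u32 *hdr = (__u32 *)((char *)data + (*cons_pos & mask));
                    __u32 len = *hdr; /* real code: smp_load_acquire() */
                    __u32 sz;

                    if (len & BPF_RINGBUF_BUSY_BIT)
                            break; /* reserved but not yet submitted */

                    sz = len & ~BPF_RINGBUF_DISCARD_BIT;
                    if (!(len & BPF_RINGBUF_DISCARD_BIT))
                            handle_record((char *)hdr + BPF_RINGBUF_HDR_SZ, sz);

                    /* consume header + payload, rounded up to 8 bytes */
                    *cons_pos += (BPF_RINGBUF_HDR_SZ + sz + 7) & ~7UL;
            }
    }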
The kernel sends an epoll notification only when the consumer has caught up
with the producer, which results in self-pacing notifications of new data
availability.