Lines matching "flow"
19 - RFS: Receive Flow Steering
20 - Accelerated Receive Flow Steering
31 of logical flows. Packets for each flow are steered to a separate receive
51 both directions of the flow to land on the same Rx queue (and CPU). The
143 RX flow hash indirection table for eth0 with 13 RX ring(s):
149 RX flow hash indirection table for eth0 with 13 RX ring(s):
157 # ethtool -N eth0 flow-type tcp6 dst-port 22 context 1
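
The three fragments above come from the RSS configuration example. A minimal sketch of the surrounding workflow, assuming an ethtool and driver recent enough to support additional RSS contexts, a device named eth0, and a newly allocated context ID of 1, might look like::

  # create an extra RSS context alongside the default indirection table
  ethtool -X eth0 hfunc toeplitz context new
  # inspect the new context's indirection table (ID assumed to be 1)
  ethtool -x eth0 context 1
  # steer inbound tcp6 traffic to port 22 into that context
  ethtool -N eth0 flow-type tcp6 dst-port 22 context 1
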
188 flow hash over the packet’s addresses or ports (2-tuple or 4-tuple hash
190 associated flow of the packet. The hash is either provided by hardware
195 packet’s flow.
199 an index into the list is computed from the flow hash modulo the size
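
The RPS fragments above describe how the flow hash indexes into the per-queue list of target CPUs. That list is configured through sysfs; a minimal sketch, assuming eth0 and its first receive queue, with the CPU bitmap chosen purely for illustration::

  # allow RPS on rx-0 to steer packets onto CPUs 0-3 (bitmap 0xf)
  echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
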
240 RPS Flow Limit
244 reordering. The trade-off to sending all packets from the same flow
246 In the extreme case a single flow dominates traffic. Especially on
251 Flow Limit is an optional RPS feature that prioritizes small flows
256 net.core.netdev_max_backlog), the kernel starts a per-flow packet
257 count over the last 256 packets. If a flow exceeds a set ratio (by
262 the threshold, so flow limit does not sever connections outright:
269 Flow limit is compiled in by default (CONFIG_NET_FLOW_LIMIT), but not
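
As the fragment above says, flow limit is built by default but left off; it is switched on per CPU through sysctl. A minimal sketch, with the CPU bitmap and table size chosen only for illustration::

  # enable flow limit on CPUs 0-3
  echo f > /proc/sys/net/core/flow_limit_cpu_bitmap
  # optionally resize the per-CPU flow hashtable (default 4096 buckets)
  echo 8192 > /proc/sys/net/core/flow_limit_table_len
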
277 Per-flow rate is calculated by hashing each packet into a hashtable
280 be much larger than the number of CPUs, flow limit has finer-grained
293 Flow limit is useful on systems with many concurrent connections,
299 the flow limit threshold (50%) + the flow history length (256).
304 RFS: Receive Flow Steering
309 application locality. This is accomplished by Receive Flow Steering
317 but the hash is used as index into a flow lookup table. This table maps
318 flows to the CPUs where those flows are being processed. The flow hash
320 The CPU recorded in each entry is the one which last processed the flow.
324 a single application thread handles flows with many different flow hashes.
326 rps_sock_flow_table is a global flow table that contains the *desired* CPU
327 for flows: the CPU that is currently processing the flow in userspace.
334 avoid this, RFS uses a second flow table to track outstanding packets
335 for each flow: rps_dev_flow_table is a table specific to each hardware
338 for this flow are enqueued for further kernel processing. Ideally, kernel
345 CPU's backlog when a packet in this flow was last enqueued. Each backlog
348 in rps_dev_flow[i] records the last element in flow i that has
349 been enqueued onto the currently designated CPU for flow i (of course,
356 are compared. If the desired CPU for the flow (found in the
368 CPU. These rules aim to ensure that a flow only moves to a new CPU when
379 configured. The number of entries in the global flow table is set through::
383 The number of entries in the per-queue flow table is set through::
393 suggested flow count depends on the expected number of active connections
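
These fragments cover RFS table sizing. The scaling document suggests 32768 global entries for a moderately loaded server, with each queue's table sized to the global count divided by the number of queues. A sketch assuming eth0 with 16 receive queues::

  # global socket flow table
  echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
  # per-queue flow table: 32768 / 16 queues
  echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
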
412 the application thread consuming the packets of each flow is running.
420 queue for packets matching a particular flow. The network stack
421 automatically calls this function every time a flow entry in
425 The hardware queue for a flow is derived from the CPU recorded in
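
The accelerated RFS fragments refer to the driver callback (ndo_rx_flow_steer in current kernels) that programs the NIC's hardware flow table. From userspace the prerequisite, besides a kernel built with CONFIG_RFS_ACCEL and a supporting driver, is that n-tuple filtering be enabled on the device::

  # enable n-tuple filters, required before accelerated RFS can steer flows
  ethtool -K eth0 ntuple on
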
497 transmitting the first packet in a flow, the function get_xps_queue() is
503 queues match, one is selected by using the flow hash to compute an index
508 The queue chosen for transmitting a particular flow is saved in the
509 corresponding socket structure for the flow (e.g. a TCP connection).
510 This transmit queue is used for subsequent packets sent on the flow to
512 of calling get_xps_queue() over all packets in the flow. To avoid
513 ooo packets, the queue for a flow can subsequently only be changed if
514 skb->ooo_okay is set for a packet in the flow. This flag indicates that
515 there are no outstanding packets in the flow, so the transmit queue can
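
The closing fragments describe XPS transmit-queue selection and the skb->ooo_okay guard against reordering. XPS itself is configured per transmit queue through sysfs; a minimal sketch assuming eth0 with two transmit queues, bitmaps chosen only for illustration::

  # CPUs 0-1 transmit on tx-0, CPUs 2-3 on tx-1
  echo 3 > /sys/class/net/eth0/queues/tx-0/xps_cpus
  echo c > /sys/class/net/eth0/queues/tx-1/xps_cpus
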