.. SPDX-License-Identifier: GPL-2.0

===============
Multi-PF Netdev
===============

Contents
========

- `Background`_
- `Overview`_
- `mlx5 implementation`_
- `Channels distribution`_
- `Observability`_
- `Steering`_
- `Mutually exclusive features`_

Background
==========

The Multi-PF NIC technology enables several CPUs within a multi-socket server to connect directly to
the network, each through its own dedicated PCIe interface.

Overview
========

The feature adds support for combining multiple PFs of the same port in a Multi-PF environment under
one netdev instance. It is implemented in the netdev layer. Lower-layer instances like pci func,
sysfs entry, and devlink are kept separate.
Passing traffic through different devices belonging to different NUMA sockets saves cross-NUMA
traffic, and lets applications running on the same netdev from different NUMAs still feel a sense of
proximity to the device and achieve improved performance.

mlx5 implementation
===================

Multi-PF or Socket-direct in mlx5 is achieved by grouping together PFs that belong to the same
NIC and have the socket-direct property enabled. Once all PFs are probed, we create a single netdev
to represent all of them; symmetrically, we destroy the netdev whenever any of the PFs is removed.

Secondary PFs run in a "silent" mode: no south <-> north traffic flows directly through a secondary
PF; it needs the assistance of the leader PF (east <-> west traffic) to function. All Rx/Tx traffic
is steered through the primary to/from the secondaries.

Channels distribution
=====================

Each combined channel works against one specific PF, creating all its datapath queues against it. We
distribute channels to PFs in a round-robin policy.

::

        Example for 2 PFs and 5 channels:

        +--------+--------+
        | ch idx | PF idx |
        +--------+--------+
        |    0   |    0   |
        |    1   |    1   |
        |    2   |    0   |
        |    3   |    1   |
        |    4   |    0   |
        +--------+--------+

The reason we prefer round-robin is that it is less influenced by changes in the number of channels:
the mapping between a channel and a PF is fixed, no matter how many channels the user configures. As
channels are usually used for NUMA node binding, this keeps the NUMA node of each channel stable.
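
As a rough illustration of this policy (a minimal sketch, not the driver's actual code; the helper
channel_to_pf() is hypothetical), the mapping and its stability look as follows::

  #include <stdio.h>

  /* Round-robin channel -> PF mapping: a channel's PF (and hence its
   * NUMA node) depends only on its own index, never on the total
   * channel count.
   */
  static int channel_to_pf(int ch_idx, int num_pfs)
  {
          return ch_idx % num_pfs;
  }

  int main(void)
  {
          const int num_pfs = 2;

          /* Growing the channel count never remaps existing channels. */
          for (int n_channels = 3; n_channels <= 5; n_channels++) {
                  printf("%d channels:", n_channels);
                  for (int ch = 0; ch < n_channels; ch++)
                          printf(" ch%d->PF%d", ch, channel_to_pf(ch, num_pfs));
                  printf("\n");
          }
          return 0;
  }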

all using the same instance under "priv->mdev".

Observability
=============

The relation between a PF, its IRQs, NAPIs, and queues can be observed via netlink spec::

 $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml --dump queue-get --json='{"ifindex": 13}'
 [{'id': 0, 'ifindex': 13, 'napi-id': 539, 'type': 'rx'},
  {'id': 1, 'ifindex': 13, 'napi-id': 540, 'type': 'rx'},
  {'id': 2, 'ifindex': 13, 'napi-id': 541, 'type': 'rx'},
  {'id': 3, 'ifindex': 13, 'napi-id': 542, 'type': 'rx'},
  {'id': 4, 'ifindex': 13, 'napi-id': 543, 'type': 'rx'},
  {'id': 0, 'ifindex': 13, 'napi-id': 539, 'type': 'tx'},
  {'id': 1, 'ifindex': 13, 'napi-id': 540, 'type': 'tx'},
  {'id': 2, 'ifindex': 13, 'napi-id': 541, 'type': 'tx'},
  {'id': 3, 'ifindex': 13, 'napi-id': 542, 'type': 'tx'},
  {'id': 4, 'ifindex': 13, 'napi-id': 543, 'type': 'tx'}]

 $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml --dump napi-get --json='{"ifindex": 13}'

 $ ls /proc/irq/{36,39,40,41,42}/mlx5* -d -1

Steering
========

In Rx, a PF that receives packets destined for a channel of another PF forwards the
traffic to the other PFs, via cross-vhca steering capabilities. We still maintain a single default
RSS table that is capable of pointing to the receive queues of a different PF.

In Tx, the primary PF creates a new Tx flow table, which is aliased by the secondaries, so they can
go out to the network through it.
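
To make the cross-PF Rx path concrete, here is a small conceptual model (plain C, not mlx5 API;
the struct and macro names are illustrative only): each RSS indirection entry names both a PF and
a receive queue, so a hash bucket resolved on the receiving PF can point at another PF's RQ::

  #include <stdio.h>

  #define NUM_PFS      2
  #define NUM_CHANNELS 5
  #define RSS_TBL_SZ   8

  struct rss_entry {
          int pf_idx; /* PF owning the receive queue */
          int rq_idx; /* receive queue index (channel) */
  };

  int main(void)
  {
          struct rss_entry tbl[RSS_TBL_SZ];

          /* Spread hash buckets over all channels; with the round-robin
           * channel distribution, buckets alternate between the PFs.
           */
          for (int i = 0; i < RSS_TBL_SZ; i++) {
                  int ch = i % NUM_CHANNELS;

                  tbl[i].pf_idx = ch % NUM_PFS;
                  tbl[i].rq_idx = ch;
          }

          for (int i = 0; i < RSS_TBL_SZ; i++)
                  printf("hash bucket %d -> PF%d, rq %d\n",
                         i, tbl[i].pf_idx, tbl[i].rq_idx);
          return 0;
  }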

In addition, we set a default XPS configuration that, based on the CPU, selects an SQ belonging to
the PF on the same NUMA node as the CPU.

XPS default config example:

NUMA node0 CPU(s): 0-11
NUMA node1 CPU(s): 12-23

- /sys/class/net/eth2/queues/tx-0/xps_cpus:000001
- /sys/class/net/eth2/queues/tx-1/xps_cpus:001000
- /sys/class/net/eth2/queues/tx-2/xps_cpus:000002
- /sys/class/net/eth2/queues/tx-3/xps_cpus:002000
- /sys/class/net/eth2/queues/tx-4/xps_cpus:000004
- /sys/class/net/eth2/queues/tx-5/xps_cpus:004000
- /sys/class/net/eth2/queues/tx-6/xps_cpus:000008
- /sys/class/net/eth2/queues/tx-7/xps_cpus:008000
- /sys/class/net/eth2/queues/tx-8/xps_cpus:000010
- /sys/class/net/eth2/queues/tx-9/xps_cpus:010000
- /sys/class/net/eth2/queues/tx-10/xps_cpus:000020
- /sys/class/net/eth2/queues/tx-11/xps_cpus:020000
- /sys/class/net/eth2/queues/tx-12/xps_cpus:000040
- /sys/class/net/eth2/queues/tx-13/xps_cpus:040000
- /sys/class/net/eth2/queues/tx-14/xps_cpus:000080
- /sys/class/net/eth2/queues/tx-15/xps_cpus:080000
- /sys/class/net/eth2/queues/tx-16/xps_cpus:000100
- /sys/class/net/eth2/queues/tx-17/xps_cpus:100000
- /sys/class/net/eth2/queues/tx-18/xps_cpus:000200
- /sys/class/net/eth2/queues/tx-19/xps_cpus:200000
- /sys/class/net/eth2/queues/tx-20/xps_cpus:000400
- /sys/class/net/eth2/queues/tx-21/xps_cpus:400000
- /sys/class/net/eth2/queues/tx-22/xps_cpus:000800
- /sys/class/net/eth2/queues/tx-23/xps_cpus:800000
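
The pattern above (even tx queues bound to node0 CPUs, odd tx queues bound to node1 CPUs) follows
from the round-robin channel-to-PF distribution. A minimal sketch that reproduces these masks,
assuming 2 nodes with 12 CPUs each as in the example (illustrative only, not driver code)::

  #include <stdio.h>

  #define NUM_NODES     2
  #define CPUS_PER_NODE 12

  int main(void)
  {
          int num_txqs = NUM_NODES * CPUS_PER_NODE;

          for (int q = 0; q < num_txqs; q++) {
                  /* Queue q belongs to the PF on node (q % NUM_NODES);
                   * pick the matching CPU on that same node.
                   */
                  int node = q % NUM_NODES;
                  int cpu = node * CPUS_PER_NODE + q / NUM_NODES;
                  unsigned int mask = 1u << cpu; /* one CPU per tx queue */

                  printf("tx-%d/xps_cpus:%06x\n", q, mask);
          }
          return 0;
  }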

Mutually exclusive features
===========================

The nature of Multi-PF, where different channels work with different PFs, conflicts with
stateful features whose state is maintained in one of the PFs.
For example, in the TLS device-offload feature, special context objects are created per connection
and maintained in the PF. Transitioning between different RQs/SQs would break the feature; hence,
we disable this combination for now.