.. SPDX-License-Identifier: GPL-2.0

PCI pass-thru devices
=====================
In a Hyper-V guest VM, PCI pass-thru devices (also called
virtual PCI devices, or vPCI devices) are physical PCI devices
that are mapped directly into the VM's physical address space.
Guest device drivers can interact directly with the hardware
without intermediation by the host hypervisor. This approach
provides higher bandwidth access to the device with lower
latency, compared with devices that are virtualized by the
hypervisor. The device should appear to the guest just as it
would when running on bare metal, so no changes are required
to the Linux device drivers for the device.

Hyper-V terminology for vPCI devices is "Discrete Device
Assignment" (DDA). Public documentation for Hyper-V DDA is
available here: `DDA`_

.. _DDA: https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

DDA is typically used for storage controllers, such as NVMe,
and for GPUs. A similar mechanism for NICs is called SR-IOV
and produces the same benefits by allowing a guest device
driver to interact directly with the hardware. See Hyper-V
public documentation here: `SR-IOV`_

.. _SR-IOV: https://learn.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-single-root-i-o-virtualization--sr-iov-

This discussion of vPCI devices includes DDA and SR-IOV
devices.

Device Presentation
-------------------
Hyper-V provides full PCI functionality for a vPCI device when
it is operating, so the Linux device driver for the device can
be used unchanged, provided it uses the correct Linux kernel
APIs for accessing PCI config space and for other integration
with Linux. But the initial detection of the PCI device and
its integration with the Linux PCI subsystem must use Hyper-V
specific mechanisms. Consequently, vPCI devices on Hyper-V
have a dual identity. They are initially presented to Linux
guests as VMBus devices via the standard VMBus "offer"
mechanism, so they have a VMBus identity and appear under
/sys/bus/vmbus/devices. The VMBus vPCI driver in Linux at
drivers/pci/controller/pci-hyperv.c handles a newly introduced
vPCI device by fabricating a PCI bus topology and creating all
the normal PCI device data structures in Linux that would
exist if the PCI device were discovered via ACPI on a
bare-metal system. Once those data structures are set up, the
device also has a normal PCI identity in Linux, and the normal
Linux device driver for the vPCI device can function as if it
were running in Linux on bare metal. Because vPCI devices are
presented dynamically through the VMBus offer mechanism, they
do not appear in the Linux guest's ACPI tables. vPCI devices
may be added to a VM or removed from a VM at any time during
the life of the VM, and not just during initial boot.

With this approach, the vPCI device is a VMBus device and a
PCI device at the same time. In response to the VMBus offer
message, the hv_pci_probe() function runs and establishes a
VMBus connection to the vPCI VSP on the Hyper-V host. That
connection has a single VMBus channel. The channel is used to
exchange messages with the vPCI VSP for the purpose of setting
up and configuring the vPCI device in Linux. Once the device
is fully configured in Linux as a PCI device, the VMBus
channel is used only if Linux changes the vCPU to be
interrupted in the guest, or if the vPCI device is removed
from the VM while the VM is running. The ongoing operation of
the device happens directly between the Linux device driver
for the device and the hardware, with VMBus and the VMBus
channel playing no role.
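
As an illustration of this "normal PCI identity", a completely
generic Linux PCI driver skeleton like the following binds to
and operates a vPCI device exactly as it would on bare metal.
The vendor/device IDs and driver name are hypothetical
placeholders, not taken from any real device::

  #include <linux/module.h>
  #include <linux/pci.h>

  #define EXAMPLE_VENDOR_ID  0x1234   /* hypothetical IDs */
  #define EXAMPLE_DEVICE_ID  0x5678

  static const struct pci_device_id example_ids[] = {
      { PCI_DEVICE(EXAMPLE_VENDOR_ID, EXAMPLE_DEVICE_ID) },
      { }
  };
  MODULE_DEVICE_TABLE(pci, example_ids);

  static int example_probe(struct pci_dev *pdev,
                           const struct pci_device_id *id)
  {
      /* Standard enablement; nothing Hyper-V specific is needed */
      return pcim_enable_device(pdev);
  }

  static struct pci_driver example_driver = {
      .name     = "example-vpci",
      .id_table = example_ids,
      .probe    = example_probe,
  };
  module_pci_driver(example_driver);

  MODULE_DESCRIPTION("Illustrative skeleton only");
  MODULE_LICENSE("GPL");
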

PCI Device Setup
----------------
PCI device setup follows a sequence that Hyper-V originally
created for Windows guests, and that can be ill-suited for
Linux guests due to differences in the overall structure of
the Linux PCI subsystem compared with Windows. Nonetheless,
with a bit of hackery in the Hyper-V virtual PCI driver for
Linux, the virtual PCI device is set up in Linux so that
generic Linux PCI subsystem code and the Linux driver for the
device "just work".

Each vPCI device is set up in Linux to be in its own PCI
domain with a host bridge. The PCI domainID is derived from
bytes 4 and 5 of the instance GUID assigned to the VMBus vPCI
device. The Hyper-V host does not guarantee that these bytes
are unique, so hv_pci_probe() has an algorithm to resolve
collisions. The collision resolution is intended to be stable
across reboots of the same VM so that the PCI domainIDs don't
change, as the domainID appears in the user space
configuration of some NIC-related entities.
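
As a simplified sketch (not the actual driver code, which also
handles collisions and other corner cases), the initial domain
candidate can be pictured as::

  #include <linux/types.h>
  #include <linux/uuid.h>

  /*
   * Derive a candidate PCI domain ID from bytes 4 and 5 of the
   * VMBus instance GUID. The real logic is in hv_pci_probe() in
   * pci-hyperv.c, which additionally resolves collisions when two
   * devices yield the same candidate value.
   */
  static u16 example_domain_candidate(const guid_t *instance)
  {
      return ((u16)instance->b[5] << 8) | instance->b[4];
  }
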

hv_pci_probe() allocates a guest MMIO range to be used as PCI
config space for the device. This MMIO range is communicated
to the Hyper-V host over the VMBus channel as part of telling
the host that the device is ready to enter d0. See
hv_pci_enter_d0(). When the guest subsequently accesses this
MMIO range, the Hyper-V host intercepts the accesses and maps
them to the physical device PCI config space.

hv_pci_probe() also gets BAR information for the device from
the Hyper-V host, and uses this information to allocate MMIO
space for the BARs. That MMIO space is then set up to be
associated with the host bridge so that it works when generic
PCI subsystem code in Linux processes the BARs.

Finally, hv_pci_probe() creates the root PCI bus. At this
point the Hyper-V virtual PCI driver hackery is done, and the
normal Linux PCI machinery for scanning the root bus works to
detect the device, to perform driver matching, and to
initialize the driver and device.
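
After this point, a device driver's use of config space and
BARs is entirely standard. A hedged sketch (the register
offset and function name are hypothetical, and the device is
assumed to have been enabled with pcim_enable_device())::

  #include <linux/io.h>
  #include <linux/pci.h>

  #define EXAMPLE_REG_STATUS  0x10    /* hypothetical register */

  static int example_map_bar0(struct pci_dev *pdev)
  {
      void __iomem *regs;

      /*
       * BAR 0 was sized and assigned by generic PCI code using
       * the MMIO space that hv_pci_probe() associated with the
       * host bridge.
       */
      regs = pcim_iomap(pdev, 0, 0);
      if (!regs)
          return -ENOMEM;

      dev_info(&pdev->dev, "status: %#x\n",
               readl(regs + EXAMPLE_REG_STATUS));
      return 0;
  }
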

PCI Device Removal
------------------
A Hyper-V host may initiate removal of a vPCI device from a
guest VM at any time during the life of the VM. The removal
is instigated by an admin action taken on the Hyper-V host and
is not under the control of the guest OS.

A guest VM is notified of the removal by an unsolicited
"Eject" message sent from the host to the guest over the VMBus
channel associated with the vPCI device. Upon receipt of such
a message, the Hyper-V virtual PCI driver in Linux
asynchronously invokes Linux kernel PCI subsystem calls to
shut down and remove the device. When those calls are
complete, an "Ejection Complete" message is sent back to
Hyper-V over the VMBus channel indicating that the device has
been removed. At this point, Hyper-V sends a VMBus rescind
message to the Linux guest, which the VMBus driver in Linux
processes by removing the VMBus identity for the device. Once
that processing is complete, all vestiges of the device having
been present are gone from the Linux kernel. The rescind
message also indicates to the guest that Hyper-V has stopped
providing support for the vPCI device in the guest. If the
guest were to attempt to access that device's MMIO space, it
would be an invalid reference. Hypercalls affecting the device
return errors, and any further messages sent in the VMBus
channel are ignored.

After sending the Eject message, Hyper-V allows the guest VM
60 seconds to cleanly shut down the device and respond with
Ejection Complete before sending the VMBus rescind message.
If for any reason the Eject steps don't complete within the
allowed 60 seconds, the Hyper-V host forcibly performs the
rescind steps, which will likely result in cascading errors in
the guest because the device is now no longer present from the
guest standpoint and accessing the device MMIO space will
fail.

Because ejection is asynchronous and can happen at any point
during the guest VM lifecycle, proper synchronization in the
Hyper-V virtual PCI driver is very tricky. Ejection has been
observed even before a newly offered vPCI device has been
fully set up. The Hyper-V virtual PCI driver has been updated
several times over the years to fix race conditions when
ejections happen at inopportune times. Care must be taken when
modifying this code to prevent re-introducing such problems.

Interrupt Assignment
--------------------
The Hyper-V virtual PCI driver supports vPCI devices using
MSI, multi-MSI, or MSI-X. Assigning the guest vCPU that will
receive the interrupt for a particular MSI or MSI-X message is
complex because of the way the Linux setup of IRQs maps onto
the Hyper-V interfaces. For the single-MSI and MSI-X cases,
Linux calls hv_compose_msi_msg() twice, with the first call
containing a dummy vCPU and the second call containing the
real vCPU. Furthermore, hv_irq_unmask() is finally called (on
x86) or the GICD registers are set (on arm64) to specify the
real vCPU again. Each of these three calls interacts with
Hyper-V, which must decide which physical CPU should receive
the interrupt before it is forwarded to the guest VM.
Unfortunately, the Hyper-V decision-making process is a bit
limited, and can result in concentrating the physical
interrupts on a single CPU, causing a performance bottleneck.
See details about how this is resolved in the extensive
comment above the function hv_compose_msi_req_get_cpu().

The Hyper-V virtual PCI driver implements the
irq_chip.irq_compose_msi_msg function as hv_compose_msi_msg().
Unfortunately, on Hyper-V the implementation requires sending
a VMBus message to the Hyper-V host and awaiting an interrupt
indicating receipt of a reply message. Since
irq_chip.irq_compose_msi_msg can be called with IRQ locks
held, it doesn't work to do the normal sleep until awakened by
the interrupt. Instead hv_compose_msi_msg() must send the
VMBus message, and then poll for the completion message. As a
further complexity, the vPCI device could be ejected/rescinded
while the polling is in progress, so this scenario must be
detected as well.
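
The resulting structure is roughly a "send, then busy-poll"
pattern. The sketch below is hypothetical: the structure and
helper names are placeholders, and the real driver instead
polls the VMBus channel directly for the reply and for a
rescind indication::

  #include <linux/compiler.h>
  #include <linux/delay.h>
  #include <linux/errno.h>
  #include <linux/types.h>

  struct example_comp_ctx {
      bool replied;      /* reply message has arrived */
      bool rescinded;    /* device ejected while waiting */
  };

  static int example_compose_polled(struct example_comp_ctx *ctx)
  {
      /* ... post the request message on the VMBus channel ... */

      for (;;) {
          if (READ_ONCE(ctx->replied))
              return 0;
          if (READ_ONCE(ctx->rescinded))
              return -ENODEV;
          /* cannot sleep: IRQ locks may be held by the caller */
          udelay(100);
      }
  }
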

Most of the code in the Hyper-V virtual PCI driver
(pci-hyperv.c) applies to Hyper-V and Linux guests running on
x86 and on arm64 architectures. But there are differences in
how interrupt assignments are managed. On x86, the Hyper-V
virtual PCI driver in the guest must make a hypercall to tell
Hyper-V which guest vCPU should be interrupted by each
MSI/MSI-X interrupt, and the x86 interrupt vector number that
the x86_vector IRQ domain has picked for the interrupt. This
hypercall is made by hv_arch_irq_unmask(). On arm64, the
Hyper-V virtual PCI driver manages the allocation of an SPI
for each MSI/MSI-X interrupt. The Hyper-V virtual PCI driver
stores the allocated SPI in the architectural GICD registers,
which Hyper-V emulates, so no hypercall is necessary as with
x86. Hyper-V does not support using LPIs for vPCI devices in
arm64 guest VMs because it does not emulate a GICv3 ITS.

The Hyper-V virtual PCI driver in Linux supports vPCI devices
whose drivers create managed or unmanaged Linux IRQs. If the
smp_affinity for an unmanaged IRQ is updated via the /proc/irq
interface, the Hyper-V virtual PCI driver is called to tell
the Hyper-V host to change the interrupt targeting and
everything works properly. However, on x86 if the x86_vector
IRQ domain needs to reassign an interrupt vector due to
running out of vectors on a CPU, there's no path to inform the
Hyper-V host of the change, and things break. Fortunately,
guest VMs operate in a constrained device environment where
using all the vectors on a CPU doesn't happen. Since such a
problem is only a theoretical concern rather than a practical
concern, it has been left unaddressed.

DMA
---
By default, Hyper-V pins all guest VM memory in the host
when the VM is created, and programs the physical IOMMU to
allow the VM to have DMA access to all its memory. Hence
it is safe to assign PCI devices to the VM, and to allow the
guest operating system to program the DMA transfers. The
physical IOMMU prevents a malicious guest from initiating
DMA to memory belonging to the host or to other VMs on the
host. From the Linux guest standpoint, such DMA transfers
are in "direct" mode since Hyper-V does not provide a virtual
IOMMU in the guest.
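
From a guest driver's perspective this is just the ordinary
streaming DMA API; the returned DMA addresses are direct, with
no virtual IOMMU translation in the guest. A minimal sketch
with a hypothetical buffer and device::

  #include <linux/dma-mapping.h>
  #include <linux/pci.h>

  static int example_dma_to_device(struct pci_dev *pdev,
                                   void *buf, size_t len)
  {
      dma_addr_t dma;

      /* Direct mapping: no virtual IOMMU in the guest */
      dma = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
      if (dma_mapping_error(&pdev->dev, dma))
          return -ENOMEM;

      /* ... program the device with 'dma' and await completion ... */

      dma_unmap_single(&pdev->dev, dma, len, DMA_TO_DEVICE);
      return 0;
  }
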

Hyper-V assumes that physical PCI devices always perform
cache-coherent DMA. When running on x86, this behavior is
required by the architecture. When running on arm64, the
architecture allows for both cache-coherent and
non-cache-coherent devices, with the behavior of each device
specified in the ACPI DSDT. But when a PCI device is assigned
to a guest VM, that device does not appear in the DSDT, so the
Hyper-V VMBus driver propagates cache-coherency information
from the VMBus node in the ACPI DSDT to all VMBus devices,
including vPCI devices (since they have a dual identity as a
VMBus device and as a PCI device). See vmbus_dma_configure().
Current Hyper-V versions always indicate that the VMBus is
cache coherent, so vPCI devices on arm64 always get marked as
cache coherent, and the CPU does not perform any sync
operations as part of dma_map/unmap_*() calls.

vPCI protocol versions
----------------------
As previously described, during vPCI device setup and
teardown, messages are passed over a VMBus channel between the
Hyper-V host and the Hyper-V vPCI driver in the Linux guest.
Some messages have been revised in newer versions of Hyper-V,
so the guest and host must agree on the vPCI protocol version
to be used. The version is negotiated when communication over
the VMBus channel is first established. See
hv_pci_protocol_negotiation(). Newer versions of the protocol
extend support to VMs with more than 64 vCPUs, and provide
additional information about the vPCI device, such as the
guest virtual NUMA node to which it is most closely affined in
the underlying hardware.
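
The negotiation follows the usual "newest first" pattern: the
guest proposes the protocol versions it understands, from
newest to oldest, and uses the first one the host accepts. The
sketch below is purely illustrative; the constant names, the
helper, and the hbus structure are placeholders rather than
the real pci-hyperv.c definitions::

  #include <linux/errno.h>
  #include <linux/kernel.h>
  #include <linux/types.h>

  #define EXAMPLE_PCI_VERSION(major, minor)  (((major) << 16) | (minor))

  static const u32 example_versions[] = {
      EXAMPLE_PCI_VERSION(1, 4),  /* newest understood by the guest */
      EXAMPLE_PCI_VERSION(1, 3),
      EXAMPLE_PCI_VERSION(1, 2),
      EXAMPLE_PCI_VERSION(1, 1),  /* oldest */
  };

  struct example_hbus {
      u32 protocol_version;
  };

  /* Placeholder: sends a version request and reports host acceptance */
  bool example_host_accepts(struct example_hbus *hbus, u32 version);

  static int example_negotiate(struct example_hbus *hbus)
  {
      int i;

      for (i = 0; i < ARRAY_SIZE(example_versions); i++) {
          if (example_host_accepts(hbus, example_versions[i])) {
              hbus->protocol_version = example_versions[i];
              return 0;
          }
      }
      return -EPROTO;     /* no mutually supported version */
  }
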

Guest NUMA node affinity
------------------------
When the vPCI protocol version provides it, the guest NUMA
node affinity of the vPCI device is stored as part of the
Linux device information for subsequent use by the Linux
driver. See hv_pci_assign_numa_node(). If the negotiated
protocol version does not support the host providing NUMA
affinity information, the Linux guest defaults the device NUMA
node to 0. But even when the negotiated protocol version
includes NUMA affinity information, the ability of the host to
provide such information depends on certain host configuration
options. If the guest receives NUMA node value "0", it could
mean NUMA node 0, or it could mean "no information is
available". Unfortunately it is not possible to distinguish
the two cases from the guest side.
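
Device drivers consume this affinity through the normal NUMA
APIs, with no Hyper-V specific code. A brief sketch (the
allocation itself is a hypothetical example)::

  #include <linux/pci.h>
  #include <linux/slab.h>

  static void *example_alloc_ring(struct pci_dev *pdev, size_t size)
  {
      /*
       * dev_to_node() reflects the node stored by
       * hv_pci_assign_numa_node(); it defaults to node 0 when the
       * host provided no affinity information.
       */
      int node = dev_to_node(&pdev->dev);

      return kzalloc_node(size, GFP_KERNEL, node);
  }
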

PCI config space access in a CoCo VM
------------------------------------
Linux PCI device drivers access PCI config space using a
standard set of functions provided by the Linux PCI subsystem.
In Hyper-V guests these standard functions map to functions
hv_pcifront_read_config() and hv_pcifront_write_config()
in the Hyper-V virtual PCI driver. In normal VMs,
these hv_ functions directly access the PCI config
space, and the accesses trap to Hyper-V to be handled.
But in CoCo VMs, memory encryption prevents Hyper-V
from reading the guest instruction stream to emulate the
access, so the hv_ functions must invoke hypercalls with
explicit arguments describing the access to be made.
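
Device drivers are unaffected by this difference because they
use the standard config space accessors in both cases; only
the path underneath them changes. For example (the choice of
register is arbitrary)::

  #include <linux/pci.h>

  static u16 example_read_vendor(struct pci_dev *pdev)
  {
      u16 vendor;

      /*
       * Routed through hv_pcifront_read_config(), which either
       * accesses the emulated config space MMIO range (normal VMs)
       * or issues an explicit hypercall (CoCo VMs).
       */
      pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor);
      return vendor;
  }
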

Config Block back-channel
-------------------------
The Hyper-V host and Hyper-V virtual PCI driver in Linux
together implement a non-standard back-channel communication
path between the host and guest. The back-channel path uses
messages sent over the VMBus channel associated with the vPCI
device. The functions hyperv_read_cfg_blk() and
hyperv_write_cfg_blk() are the primary interfaces provided to
other parts of the Linux kernel. As of this writing, these
interfaces are used only by the Mellanox mlx5 driver to pass
diagnostic data to a Hyper-V host running in the Azure public
cloud. The functions hyperv_read_cfg_blk() and
hyperv_write_cfg_blk() are implemented in a separate module
(pci-hyperv-intf.c, under CONFIG_PCI_HYPERV_INTERFACE) that
effectively stubs them out when running in non-Hyper-V
environments.
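
A hedged usage sketch, assuming the read prototype exported by
pci-hyperv-intf.c takes a buffer, a block ID, and a returned
byte count (the block ID and buffer size below are arbitrary
example values)::

  #include <linux/hyperv.h>
  #include <linux/pci.h>

  #define EXAMPLE_BLOCK_ID    1     /* arbitrary example block */
  #define EXAMPLE_BLOCK_SIZE  128

  static int example_read_diag_block(struct pci_dev *pdev)
  {
      u8 buf[EXAMPLE_BLOCK_SIZE];
      unsigned int bytes_returned;
      int ret;

      ret = hyperv_read_cfg_blk(pdev, buf, sizeof(buf),
                                EXAMPLE_BLOCK_ID, &bytes_returned);
      if (ret)
          return ret;    /* stubbed out on non-Hyper-V systems */

      /* ... consume 'bytes_returned' bytes of diagnostic data ... */
      return 0;
  }
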