In a Hyper-V guest VM, PCI pass-thru devices (also called
virtual PCI devices, or vPCI devices) are physical PCI devices
that are mapped directly into the VM's physical address space.
Guest device drivers can interact directly with the hardware
without intermediation by the host hypervisor, giving higher
bandwidth and lower latency than hypervisor-emulated devices.

Hyper-V terminology for vPCI devices is "Discrete Device
Assignment" (DDA). Public documentation for Hyper-V DDA is
available from Microsoft. DDA is typically used for storage
controllers and GPUs; a similar mechanism for NICs, called
SR-IOV, produces the same benefits by allowing a guest device
driver to interact directly with the hardware. See Hyper-V
public documentation for details. This discussion covers both
DDA and SR-IOV devices.

Hyper-V provides full PCI functionality for a vPCI device when
it is operating, so the Linux device driver for the device can
be used unchanged. But the initial detection of the device and
its integration with the Linux PCI subsystem must use Hyper-V
specific mechanisms. Consequently, vPCI devices on Hyper-V
have a dual identity: each is first offered to the guest as a
VMBus device, and the Hyper-V virtual PCI driver then
fabricates the data structures that give it a normal PCI
identity in Linux as well.

In response to the VMBus offer, the driver's probe function
establishes a VMBus connection to the vPCI VSP on the Hyper-V
host. That connection has a single VMBus channel, which
carries the messages used to set up and tear down the vPCI
device. Once the device is fully configured, ongoing operation
happens directly between the Linux device driver and the
hardware, with VMBus playing no role.

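A rough sketch of that first step, with assumed ring-buffer
sizes and a placeholder callback (vmbus_open() is the real
VMBus API; everything else here is illustrative only)::

  #include <linux/hyperv.h>

  /* Parse vPCI setup/teardown messages arriving from the host. */
  static void vpci_chan_callback(void *context)
  {
          /* message handling elided */
  }

  static int vpci_open_channel(struct hv_device *hdev)
  {
          /* One channel per vPCI device, opened at probe time. */
          return vmbus_open(hdev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE,
                            NULL, 0, vpci_chan_callback, hdev);
  }
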
PCI device setup follows a sequence that Hyper-V originally
created for Windows guests and that can be ill-suited for
Linux. Nonetheless, with a bit of hackery in the Hyper-V
virtual PCI driver for Linux, the device is set up so that
generic Linux PCI subsystem code and the device's normal
driver "just work".

Each vPCI device is placed in its own PCI domain, with the
domain ID derived from bytes 4 and 5 of the instance GUID
assigned to the VMBus vPCI device. The Hyper-V host does not
guarantee that these bytes are unique, so the driver must
resolve collisions, and must do so stably across reboots
because the domain ID appears in some user-space device
configuration.

The driver allocates a guest MMIO range to be used as PCI
config space for the device. This MMIO range is communicated
to the Hyper-V host over the VMBus channel as part of telling
the host that the device is ready to be powered on. When the
guest subsequently accesses this MMIO range, the Hyper-V host
intercepts the accesses and maps them to the physical device's
config space. The driver also gets BAR information for the
device from the Hyper-V host, and uses this information to
allocate MMIO space for the BARs. Finally the root PCI bus is
created. At that point the Hyper-V virtual PCI driver hackery
is done, and the normal Linux PCI machinery for scanning the
root bus takes over to detect the device, match it with a
driver, and initialize both.

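The domain-ID derivation can be pictured with this standalone
sketch (the linear-probe collision handling and all names here
are illustrative assumptions, not the driver's actual
algorithm, which also keeps results stable across reboots)::

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool domain_in_use(uint16_t dom, const uint16_t *used, int n)
  {
          for (int i = 0; i < n; i++)
                  if (used[i] == dom)
                          return true;
          return false;
  }

  static uint16_t pick_pci_domain(const uint8_t guid[16],
                                  const uint16_t *used, int n)
  {
          /* Bytes 4 and 5 of the instance GUID form the candidate. */
          uint16_t dom = (uint16_t)guid[5] << 8 | guid[4];

          /* The host doesn't guarantee uniqueness, so probe for
           * a free domain on collision (simplified policy). */
          while (domain_in_use(dom, used, n))
                  dom++;
          return dom;
  }

  int main(void)
  {
          uint8_t guid[16] = { [4] = 0x34, [5] = 0x12 };
          uint16_t used[] = { 0x1234 };   /* pretend this collides */

          printf("domain 0x%04x\n", pick_pci_domain(guid, used, 1));
          return 0;
  }
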
A Hyper-V host may initiate removal of a vPCI device from a
guest VM at any time during the life of the VM. The removal
is instigated by an admin action taken on the Hyper-V host and
is not under the control of the guest OS.

The guest is notified of the removal by an unsolicited "Eject"
message sent over the device's VMBus channel. Upon receipt of
such a message, the Hyper-V virtual PCI driver in Linux
asynchronously invokes PCI subsystem calls to shut down and
remove the device, and then sends an "Ejection Complete"
message back to Hyper-V over the VMBus channel indicating that
the device has been removed. At this point, Hyper-V sends a
VMBus rescind message, which removes the device's VMBus
identity. The rescind message also indicates to the guest that
Hyper-V has stopped providing support for the vPCI device, so
any further guest access to the device fails.

After sending the Eject message, Hyper-V allows the guest VM
60 seconds to shut down the device and respond with Ejection
Complete. If the eject steps don't complete within the allowed
60 seconds, the Hyper-V host forcibly performs the rescind,
which typically causes cascading errors in the guest because
the device has disappeared out from under it.

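The asynchronous handling can be sketched as a work item
queued from the channel callback (condensed; the struct and
send_ejection_complete() are hypothetical stand-ins for the
driver's real data structures and message send)::

  #include <linux/hyperv.h>
  #include <linux/pci.h>
  #include <linux/workqueue.h>

  struct vpci_dev_sketch {
          struct work_struct eject_work;
          struct pci_dev *pdev;
          struct vmbus_channel *chan;
  };

  static void eject_work_fn(struct work_struct *work)
  {
          struct vpci_dev_sketch *hpdev =
                  container_of(work, struct vpci_dev_sketch, eject_work);

          /* Remove the PCI identity under the rescan/remove lock. */
          pci_lock_rescan_remove();
          pci_stop_and_remove_bus_device(hpdev->pdev);
          pci_unlock_rescan_remove();

          /* Tell the host we're done so it can send the rescind. */
          send_ejection_complete(hpdev->chan);    /* hypothetical */
  }

  /* Called from the VMBus channel callback when Eject arrives;
   * the removal itself must happen in process context. */
  static void on_eject_message(struct vpci_dev_sketch *hpdev)
  {
          INIT_WORK(&hpdev->eject_work, eject_work_fn);
          schedule_work(&hpdev->eject_work);
  }
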
Because ejection can happen at any point in the VM's life,
synchronization in the Hyper-V virtual PCI driver is very
tricky. Ejection has been observed even before a newly offered
vPCI device has been fully setup. The Hyper-V virtual PCI
driver has been updated several times over the years to fix
race conditions when ejections happen at inopportune times;
care is needed when modifying this code to avoid
re-introducing such problems.

The Hyper-V virtual PCI driver supports vPCI devices using
MSI, multi-MSI, or MSI-X. Assigning the guest vCPU that
receives each interrupt is complex because of the way Linux
IRQ setup maps onto the Hyper-V interfaces. For the single-MSI
and MSI-X cases, the setup sequence interacts multiple times
with Hyper-V, which must decide which physical CPU should
receive the interrupt before it is forwarded to the guest VM.
Unfortunately, the Hyper-V decision-making process is a bit
limited, and can concentrate the physical interrupts on a
single CPU, causing a performance bottleneck.

The Hyper-V virtual PCI driver implements the
irq_chip.irq_compose_msi_msg function as hv_compose_msi_msg().
Unfortunately, on Hyper-V the implementation requires sending
a VMBus message to the Hyper-V host and awaiting an interrupt
that signals the host's reply. Since irq_compose_msi_msg can
be called with IRQ locks held, the driver cannot sleep waiting
for that interrupt; it must send the VMBus message and then
poll for the completion, while also detecting the case where
the device is rescinded mid-poll.

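That constraint leads to a "send, then busy-poll" pattern,
sketched here with a simplified completion flag assumed to be
set by the channel callback when the reply arrives (details
differ in the real driver)::

  #include <linux/delay.h>
  #include <linux/hyperv.h>

  static int send_and_poll(struct vmbus_channel *chan, void *msg,
                           u32 len, bool *done)
  {
          int ret;

          ret = vmbus_sendpacket(chan, msg, len, 0 /* request id */,
                                 VM_PKT_DATA_INBAND,
                                 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
          if (ret)
                  return ret;

          /* Can't sleep: IRQ locks may be held by our caller. */
          while (!READ_ONCE(*done)) {
                  if (chan->rescind)
                          return -ENODEV; /* device vanished mid-poll */
                  udelay(100);
          }
          return 0;
  }
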
Most of the code in the Hyper-V virtual PCI driver (pci-
hyperv.c) applies to Hyper-V and Linux guests running on x86
and on arm64, but the two architectures differ in how
interrupt assignments are managed. On x86, the Hyper-V
virtual PCI driver must make a hypercall to tell
Hyper-V which guest vCPU should be interrupted by each
MSI/MSI-X interrupt, and which interrupt vector to use. On
arm64, the Hyper-V virtual PCI driver manages the allocation
of an SPI for each MSI/MSI-X interrupt. The Hyper-V virtual
PCI driver stores the allocated SPI in the architectural GICD
registers, which Hyper-V emulates, so no hypercall is
necessary as with x86. Hyper-V does not support using LPIs for
vPCI devices in arm64 guests, as that would require an
emulated GICv3 ITS, which Hyper-V does not provide.

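The split can be pictured as one operation with two
per-architecture implementations (both wrapper names below are
hypothetical; they stand in for the hypercall path and the
GICD write path described above)::

  #include <linux/irq.h>

  #ifdef CONFIG_X86
  /* x86: only a hypercall can tell Hyper-V which vCPU and
   * vector an MSI/MSI-X interrupt should target. */
  static void vpci_target_interrupt(struct irq_data *d, u32 vcpu, u32 vector)
  {
          hv_retarget_interrupt_hypercall(d, vcpu, vector); /* hypothetical */
  }
  #else
  /* arm64: write the allocated SPI into the GICD registers,
   * which Hyper-V emulates, so the host sees the change
   * directly and no hypercall is needed. */
  static void vpci_target_interrupt(struct irq_data *d, u32 spi)
  {
          gicd_program_spi(d, spi);               /* hypothetical */
  }
  #endif
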
The Hyper-V virtual PCI driver in Linux supports vPCI devices
whose drivers create managed or unmanaged Linux IRQs. When the
smp_affinity of an unmanaged IRQ is changed via the /proc/irq
interface, the Hyper-V virtual PCI driver is called to tell
the Hyper-V host to change the interrupt targeting, and
everything works properly. However, if on x86 the IRQ
subsystem reassigns an interrupt vector on its own, for
example because a CPU has run out of vectors, there is no path
to inform the Hyper-V host of the change, and things break.
Fortunately, guest VMs operate in a constrained device
environment where exhausting a CPU's vectors does not happen
in practice, so the problem remains theoretical.

By default, Hyper-V pins all guest VM memory in the host when
the VM is created, and programs the physical IOMMU to allow
the VM to have DMA access to all of its memory. Hence it is
safe to assign PCI devices to the VM and let the guest OS
program DMA transfers directly, while the physical IOMMU
prevents a malicious guest from doing DMA to memory it does
not own. From the Linux guest standpoint, DMA transfers
are in "direct" mode since Hyper-V does not provide a virtual
IOMMU in the guest.

Hyper-V assumes that physical PCI devices always perform
cache-coherent DMA. On x86 this is required by the
architecture. On arm64 the architecture allows both
cache-coherent and non-cache-coherent devices, with the
behavior of each device normally specified in the ACPI DSDT;
but a vPCI device does not appear in the guest's DSDT, so the
Hyper-V VMBus driver propagates cache-coherency information
from the VMBus node in the DSDT to all VMBus devices,
including vPCI devices (which have a dual identity as VMBus
and PCI devices). Current Hyper-V versions always indicate
that the VMBus is cache coherent, so vPCI devices on arm64 are
always marked cache coherent, and the CPU does no cache sync
operations as part of dma_map/unmap_*() calls.

As described above, during vPCI device setup and teardown,
messages are passed over a VMBus channel between the Hyper-V
host and the Hyper-V virtual PCI driver in the Linux guest.
Some messages have been revised in newer versions of Hyper-V,
so the guest and host must agree on the vPCI protocol version
to be used; the version is negotiated when communication over
the VMBus channel is first established.

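Negotiation amounts to offering versions newest-first until
the host accepts one (a sketch; the version values and the
try_version() helper are illustrative, not the driver's actual
tables or message exchange)::

  #include <linux/errno.h>
  #include <linux/hyperv.h>
  #include <linux/kernel.h>

  static const u32 vpci_versions[] = {
          /* newest ... oldest; values illustrative only */
          0x00010004, 0x00010003, 0x00010002, 0x00010001,
  };

  static int negotiate_version(struct vmbus_channel *chan, u32 *agreed)
  {
          int i;

          for (i = 0; i < ARRAY_SIZE(vpci_versions); i++) {
                  if (try_version(chan, vpci_versions[i]) == 0) {
                          *agreed = vpci_versions[i];
                          return 0;       /* host accepted this one */
                  }
          }
          return -EPROTO;                 /* no common version */
  }
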
Linux PCI device drivers access PCI config space using a
standard set of functions provided by the Linux PCI subsystem.
In Hyper-V guests these standard functions map to functions
in the Hyper-V virtual PCI driver. In normal VMs, those
functions make MMIO accesses to the device's config space, and
the accesses trap to Hyper-V to be handled. But in CoCo
(confidential computing) VMs, memory encryption prevents
Hyper-V from emulating the trapped access, so the driver must
instead make an explicit hypercall describing the config space
access to be performed.

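The choice between the two access paths might look like this
(a sketch; the struct and the hypercall wrapper are
assumptions, though cc_platform_has() is the real API for
detecting memory encryption in the guest)::

  #include <linux/cc_platform.h>
  #include <linux/io.h>

  struct vpci_bus_sketch {
          void __iomem *cfg_base; /* MMIO window for config space */
          u64 cfg_gpa;            /* its guest physical address */
  };

  static u32 vpci_config_read32(struct vpci_bus_sketch *bus, u32 where)
  {
          if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
                  /* hypothetical wrapper around the hypercall that
                   * asks Hyper-V to perform the access for us */
                  return hv_config_read_hypercall(bus->cfg_gpa + where, 4);

          /* Normal VM: this MMIO access traps to Hyper-V, which
           * forwards it to the physical device's config space. */
          return readl(bus->cfg_base + where);
  }
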
The Hyper-V host and Hyper-V virtual PCI driver in Linux also
implement a back-channel that lets a device driver send
diagnostic data to a Hyper-V host running in the Azure public
cloud. The entry points are structured so that the kernel
links cleanly whether or not the Hyper-V virtual PCI driver is
present, which effectively stubs them out when running in
non-Hyper-V environments.

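That decoupling follows a common kernel pattern: a tiny
always-built shim exports the entry points and forwards to ops
registered at runtime, failing harmlessly when nothing has
registered (all names below are illustrative, modeled on the
general style of such interface shims)::

  #include <linux/errno.h>
  #include <linux/export.h>

  struct hv_diag_ops {
          int (*send_diag)(void *data, unsigned int len);
  };

  /* Left empty unless the Hyper-V vPCI driver registers itself. */
  static struct hv_diag_ops diag_ops;

  int hv_send_diag_data(void *data, unsigned int len)
  {
          if (!diag_ops.send_diag)
                  return -EOPNOTSUPP;     /* non-Hyper-V: stubbed out */
          return diag_ops.send_diag(data, len);
  }
  EXPORT_SYMBOL_GPL(hv_send_diag_data);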