Lines Matching full:ve

200 implemented using a Virtualization Exception (#VE) that is handled by the
201 guest kernel. A #VE is handled entirely inside the guest kernel, but some
212 #VE or #GP exceptions.
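
The split described in lines 200-201 above (most #VEs are handled inside the
guest, some require consulting the hypervisor) relies on TDX's hypercall-like
TDG.VP.VMCALL, usually called a TDVMCALL. Below is a minimal, hedged sketch of
that call's shape. The tdvmcall() helper is a hypothetical stub standing in for
the kernel's real assembly wrapper (__tdx_hypercall() under arch/x86/coco/tdx/),
and the register convention shown (R10 = 0 for a "standard" call, R11 = a
VMX-exit-reason-like sub-function, R12-R15 = arguments, status back in R10) is
my reading of the GHCI and should be checked against the spec::

  #include <stdint.h>

  /* Register image passed to/from a standard TDG.VP.VMCALL (TDVMCALL). */
  struct tdvmcall_args {
      uint64_t r10;   /* in: 0 selects a "standard" call; out: call status */
      uint64_t r11;   /* in: sub-function (mirrors a VMX exit reason)      */
      uint64_t r12;   /* in/out: call-specific arguments and results       */
      uint64_t r13;
      uint64_t r14;
      uint64_t r15;
  };

  /*
   * Hypothetical stub.  The real guest code executes the TDCALL
   * instruction with the TDG.VP.VMCALL leaf and this register image.
   */
  static long tdvmcall(struct tdvmcall_args *args)
  {
      (void)args;
      return 0;           /* pretend the hypervisor handled the request */
  }

  /* Example: ask the hypervisor to halt the vCPU instead of executing HLT. */
  static void forward_hlt(void)
  {
      struct tdvmcall_args args = {
          .r10 = 0,       /* standard TDVMCALL            */
          .r11 = 12,      /* EXIT_REASON_HLT (assumed)    */
      };

      tdvmcall(&args);
  }
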
217 Instruction-based #VE
245 - #VE generated
252 The #VE MSRs can typically be handled by the hypervisor. Guests
253 can make a hypercall to the hypervisor to handle the #VE.
276 A #VE is generated for CPUID leaves and sub-leaves that the TDX module does
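
Lines 252-253 and 276 describe the same recovery pattern for #VE-classified
RDMSR/WRMSR and for CPUID leaves the TDX module does not know how to handle:
the guest forwards the operation to the hypervisor with a hypercall. A hedged
sketch of both paths follows, with a compact restatement of the hypothetical
tdvmcall() stub so the block stands alone; the sub-function numbers (31 for MSR
read, 10 for CPUID) and the argument/result registers are assumptions based on
how the Linux guest code appears to use the GHCI::

  #include <stdint.h>

  /* Hypothetical stub; stands in for the TDVMCALL wrapper sketched above.
   * Unnamed fields default to 0, so r10 = 0 always selects a standard call. */
  struct tdvmcall_args { uint64_t r10, r11, r12, r13, r14, r15; };
  static long tdvmcall(struct tdvmcall_args *a) { (void)a; return 0; }

  /* RDMSR hit a #VE: ask the hypervisor for the value (sub-function 31,
   * MSR index in R12, value returned in R11 -- assumed layout). */
  static int emulate_rdmsr(uint32_t msr, uint64_t *val)
  {
      struct tdvmcall_args args = { .r11 = 31, .r12 = msr };

      if (tdvmcall(&args))
          return -1;      /* hypervisor refused; the #VE stays unhandled */
      *val = args.r11;
      return 0;
  }

  /* CPUID leaf the TDX module does not virtualize: ask the hypervisor
   * (sub-function 10, leaf/sub-leaf in R12/R13, EAX..EDX back in R12..R15). */
  static int emulate_cpuid(uint32_t leaf, uint32_t subleaf, uint32_t out[4])
  {
      struct tdvmcall_args args = { .r11 = 10, .r12 = leaf, .r13 = subleaf };

      if (tdvmcall(&args))
          return -1;
      out[0] = (uint32_t)args.r12;  /* EAX */
      out[1] = (uint32_t)args.r13;  /* EBX */
      out[2] = (uint32_t)args.r14;  /* ECX */
      out[3] = (uint32_t)args.r15;  /* EDX */
      return 0;
  }
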
280 #VE on Memory Accesses
294 #VE on Shared Memory
297 Access to shared mappings can cause a #VE. The hypervisor ultimately
298 controls whether a shared memory access causes a #VE, so the guest must be
299 careful to reference only shared pages for which it can safely handle a #VE. For
301 #VE handler before it reads the #VE info structure (TDG.VP.VEINFO.GET).
312 handle a #VE.
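
Taken together, lines 297-312 amount to a usage rule: pages are shared only for
explicit guest/hypervisor communication, any access to them (including MMIO)
must be made while prepared to take a #VE, and whatever the hypervisor wrote
there is untrusted, much like memory mapped from userspace. A hedged sketch of
that pattern; share_pages() is a hypothetical stand-in for the guest's
private-to-shared conversion (the Linux guest goes through
set_memory_decrypted()), and copying the reply into private memory before
validating it is the "treat it like userspace memory" step::

  #include <stdint.h>
  #include <string.h>

  #define COMM_BUF_SIZE 4096

  /* Page-aligned buffer handed to the hypervisor for communication only;
   * nothing sensitive (stacks, keys, ...) may ever live here. */
  static uint8_t comm_buf[COMM_BUF_SIZE] __attribute__((aligned(4096)));

  /* Hypothetical: flip the page-table "shared" bit and notify the host. */
  static int share_pages(void *addr, int numpages)
  {
      (void)addr; (void)numpages;
      return 0;
  }

  static int comm_init(void)
  {
      return share_pages(comm_buf, 1);     /* share exactly one page */
  }

  struct reply { uint32_t len; uint8_t data[256]; };

  /* Copy a hypervisor reply out of shared memory into private memory and
   * validate it there; never trust or re-read the shared copy afterwards. */
  static int read_reply(struct reply *out)
  {
      memcpy(out, comm_buf, sizeof(*out));  /* may #VE; must be expected */
      if (out->len > sizeof(out->data))
          return -1;                        /* hypervisor lied; reject   */
      return 0;
  }
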
314 #VE on Private Pages
317 An access to private mappings can also cause a #VE. Since all kernel
319 handle a #VE on arbitrary kernel memory accesses. This is not feasible, so
325 being subjected to a #VE.
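
The mechanism behind lines 317-325 is TDX's accept-before-use rule: private
memory must be accepted (via the TDG.MEM.PAGE.ACCEPT TDCALL) before the kernel
ever touches it, so ordinary kernel memory accesses never #VE, and a modest
amount is pre-accepted by the firmware so the kernel can boot. A minimal sketch
of accepting one page; tdcall() is a hypothetical stub for the TDCALL
instruction, and the leaf number (6) and the GPA-plus-level operand encoding are
recollections of the TDX module interface, not verified here::

  #include <stdint.h>

  #define TDG_MEM_PAGE_ACCEPT  6      /* TDCALL leaf (assumed) */

  /* Hypothetical stub for the TDCALL instruction: leaf in RAX, arg in RCX. */
  static long tdcall(uint64_t leaf, uint64_t rcx)
  {
      (void)leaf; (void)rcx;
      return 0;
  }

  /*
   * Accept a 4K private page at the given guest-physical address before
   * it is ever mapped and used by the kernel.  Touching an unaccepted
   * page would result in a #VE (or a TD Exit) instead of a normal access.
   */
  static int accept_page_4k(uint64_t gpa)
  {
      /* Low bits of the operand encode the page level; 0 = 4K (assumed). */
      return tdcall(TDG_MEM_PAGE_ACCEPT, gpa) ? -1 : 0;
  }
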
329 #VE. It will, instead, cause a "TD Exit" where the hypervisor is required
332 Linux #VE handler
335 Just like page faults or #GPs, #VE exceptions can be either handled or
336 fatal. Typically, an unhandled userspace #VE results in a SIGSEGV.
337 An unhandled kernel #VE results in an oops.
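
Lines 335-337 give the two possible outcomes for a #VE. A sketch of that
top-level split; every helper here is a hypothetical stand-in (the real decision
is made by the kernel's #VE exception entry and the handlers in
arch/x86/coco/tdx/, roughly corresponding to user_mode(), a SIGSEGV via
force_sig_fault(), and an oops via die())::

  #include <stdbool.h>

  struct pt_regs;     /* CPU register state at the time of the #VE         */
  struct ve_info;     /* exit reason/qualification from TDG.VP.VEINFO.GET  */

  /* Hypothetical stand-ins for the real kernel facilities. */
  bool ve_handle(struct pt_regs *regs, struct ve_info *ve);   /* emulate/forward */
  bool regs_from_user(struct pt_regs *regs);                  /* user_mode()     */
  void do_sigsegv(struct pt_regs *regs);                      /* SIGSEGV         */
  void kernel_oops(struct pt_regs *regs, struct ve_info *ve); /* oops/die()      */

  /* Top-level #VE disposition: handled, or fatal for the faulting context. */
  void ve_exception(struct pt_regs *regs, struct ve_info *ve)
  {
      if (ve_handle(regs, ve))
          return;                   /* emulated locally or via hypercall */

      if (regs_from_user(regs))
          do_sigsegv(regs);         /* unhandled userspace #VE */
      else
          kernel_oops(regs, ve);    /* unhandled kernel #VE    */
  }
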
339 Handling nested exceptions on x86 is typically nasty business. A #VE
340 could be interrupted by an NMI, which triggers another #VE, and hilarity
341 ensues. The TDX #VE architecture anticipated this scenario and includes a
344 During #VE handling, the TDX module ensures that all interrupts (including
347 or a new #VE can be delivered.
350 #VE-triggering actions (discussed above) while this block is in place.
351 While the block is in place, any #VE is elevated to a double fault (#DF)
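
Lines 339-351 describe the anti-nesting mechanism: from #VE delivery until the
guest issues the TDG.VP.VEINFO.GET TDCALL, the TDX module keeps NMIs and further
#VEs suppressed, and anything that would #VE in that window is promoted to an
unrecoverable #DF. Retrieving the #VE info therefore has to be the very first
thing the handler does, before touching any shared memory or doing anything else
that might fault. A hedged sketch; the leaf number (3) and the register layout of
the returned info (exit reason in RCX, exit qualification in RDX, guest
linear/physical address in R8/R9, instruction length/info packed in R10) reflect
my reading of the Linux guest code and may need checking::

  #include <stdint.h>

  #define TDG_VP_VEINFO_GET  3        /* TDCALL leaf (assumed) */

  struct ve_info {
      uint64_t exit_reason;
      uint64_t exit_qual;
      uint64_t gla;                   /* guest linear address   */
      uint64_t gpa;                   /* guest physical address */
      uint32_t instr_len;
      uint32_t instr_info;
  };

  /* Register image returned by a TDCALL; hypothetical stub below. */
  struct tdcall_out { uint64_t rcx, rdx, r8, r9, r10; };
  static long tdcall(uint64_t leaf, struct tdcall_out *out)
  {
      (void)leaf; (void)out;
      return 0;
  }

  /*
   * Must be the FIRST action in the #VE handler: it both fetches the exit
   * information and tells the TDX module to re-allow NMIs and further #VEs.
   * Any #VE taken before this call is elevated to #DF.
   */
  static int get_ve_info(struct ve_info *ve)
  {
      struct tdcall_out out = { 0 };

      if (tdcall(TDG_VP_VEINFO_GET, &out))
          return -1;

      ve->exit_reason = out.rcx;
      ve->exit_qual   = out.rdx;
      ve->gla         = out.r8;
      ve->gpa         = out.r9;
      ve->instr_len   = (uint32_t)out.r10;
      ve->instr_info  = (uint32_t)(out.r10 >> 32);
      return 0;
  }
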
363 In TDX, MMIO regions typically trigger a #VE exception in the guest. The
364 guest #VE handler then emulates the MMIO instruction inside the guest and
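
Lines 363-364 describe the MMIO path: the #VE handler decodes the faulting MMIO
instruction and turns it into a controlled hypercall, so the host never sees raw
guest register state. A hedged sketch of the read path only, assuming the same
hypothetical tdvmcall() shape as above, sub-function 48 (EXIT_REASON_EPT_VIOLATION,
which the GHCI appears to reuse for MMIO), and a hypothetical decode_mmio_load()
decoder; the real kernel uses its in-kernel instruction decoder and
__tdx_hypercall() in arch/x86/coco/tdx/tdx.c::

  #include <stdint.h>

  /* Hypothetical TDVMCALL stub; see the earlier sketch for the layout. */
  struct tdvmcall_args { uint64_t r10, r11, r12, r13, r14, r15; };
  static long tdvmcall(struct tdvmcall_args *a) { (void)a; return 0; }

  struct pt_regs;     /* register state of the faulting context */

  /* Hypothetical decoder results for a memory-to-register MMIO load. */
  struct mmio_insn {
      int       size;        /* access width in bytes: 1, 2, 4, 8 */
      int       insn_len;    /* length of the faulting instruction */
      uint64_t *dest_reg;    /* where the loaded value must land   */
  };

  int decode_mmio_load(struct pt_regs *regs, struct mmio_insn *insn);
  void advance_rip(struct pt_regs *regs, int insn_len);

  /* Emulate an MMIO read: ask the host for the value via TDVMCALL instead
   * of letting it see (or touch) guest registers directly. */
  static int handle_mmio_read(struct pt_regs *regs, uint64_t mmio_gpa)
  {
      struct mmio_insn insn;
      struct tdvmcall_args args = {
          .r11 = 48,            /* EXIT_REASON_EPT_VIOLATION (assumed) */
          .r13 = 0,             /* direction: read (assumed encoding)  */
          .r14 = mmio_gpa,      /* which MMIO location to access       */
      };

      if (decode_mmio_load(regs, &insn))
          return -1;
      args.r12 = insn.size;

      if (tdvmcall(&args))
          return -1;

      *insn.dest_reg = args.r11;          /* value supplied by the host    */
      advance_rip(regs, insn.insn_len);   /* skip the emulated instruction */
      return 0;
  }
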