Lines matching full:nmi (arch/x86/kernel/nmi.c: the x86 NMI dispatch and error-reporting code)

18 #include <linux/nmi.h>
31 #include <asm/nmi.h>
41 #include <trace/events/nmi.h>
91 * Prevent the NMI reason port (0x61) from being accessed simultaneously; can
92 * only be used in an NMI handler.
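The matched comment sits above nmi_reason_lock, the raw spinlock that serializes access to the legacy NMI reason port at I/O address 0x61. A minimal sketch of that pattern, assuming the usual NMI_REASON_PORT definition from <asm/mach_traps.h>; the helper name is hypothetical, and the kernel itself reads the port through x86_platform.get_nmi_reason():

    #include <linux/spinlock.h>
    #include <asm/io.h>          /* inb() */
    #include <asm/mach_traps.h>  /* NMI_REASON_PORT (0x61) and the reason bits */

    /* One lock for the whole system: port 0x61 is shared by all CPUs, and the
     * lock is only ever taken from NMI context, so no IRQ-save variant is needed. */
    static DEFINE_RAW_SPINLOCK(nmi_reason_lock);

    static void handle_reason_port_nmi(void)        /* hypothetical helper */
    {
        unsigned char reason;

        raw_spin_lock(&nmi_reason_lock);
        reason = inb(NMI_REASON_PORT);
        /* ... dispatch on NMI_REASON_SERR / NMI_REASON_IOCHK bits ... */
        raw_spin_unlock(&nmi_reason_lock);
    }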
128 "INFO: NMI handler (%ps) took too long to run: %lld.%03d msecs\n", in nmi_check_duration()
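The "%lld.%03d msecs" format in the matched warning implies the handler runtime is tracked in nanoseconds and split into whole milliseconds plus a three-digit fraction before printing. A small sketch of that split, assuming a u64 nanosecond duration; the function and variable names are illustrative:

    #include <linux/kernel.h>
    #include <linux/math64.h>   /* do_div() */

    static void report_slow_nmi_handler(void *handler, u64 duration_ns)
    {
        u64 whole_msecs = duration_ns;
        u32 remainder_ns;
        int decimal_msecs;

        /* do_div() leaves the quotient in whole_msecs and returns the remainder. */
        remainder_ns = do_div(whole_msecs, 1000 * 1000);
        decimal_msecs = remainder_ns / 1000;

        printk_ratelimited(KERN_INFO
            "INFO: NMI handler (%ps) took too long to run: %lld.%03d msecs\n",
            handler, whole_msecs, decimal_msecs);
    }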
161 /* return total number of NMI events handled */ in nmi_handle()
178 * internal NMI handler call chains (SERR and IO_CHECK). in __register_nmi_handler()
207 * the name passed in to describe the NMI handler in unregister_nmi_handler()
212 "Trying to free NMI (%s) from NMI context!\n", n->name); in unregister_nmi_handler()
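The hits above cover the handler-list machinery: nmi_handle() walks the registered handlers for a given type and returns how many events were claimed, __register_nmi_handler() also feeds the internal SERR and IO_CHECK chains, and unregister_nmi_handler() warns if called from NMI context. A minimal sketch of the consumer-facing API as a module; the handler body, its name, and the "demo_nmi" string are made up for illustration:

    #include <linux/module.h>
    #include <linux/ptrace.h>
    #include <linux/nmi.h>
    #include <asm/nmi.h>

    /* Called for every NMI on the NMI_LOCAL chain; return NMI_HANDLED only for
     * events this handler actually owns, NMI_DONE otherwise. */
    static int demo_nmi_handler(unsigned int cmd, struct pt_regs *regs)
    {
        /* ... check the device/PMU state here ... */
        return NMI_DONE;
    }

    static int __init demo_init(void)
    {
        /* Flags 0 here; NMI_FLAG_FIRST would request being called ahead of the
         * other NMI_LOCAL handlers. */
        return register_nmi_handler(NMI_LOCAL, demo_nmi_handler, 0, "demo_nmi");
    }

    static void __exit demo_exit(void)
    {
        /* Must not be called from NMI context, as the matched warning says. */
        unregister_nmi_handler(NMI_LOCAL, "demo_nmi");
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

Handlers on the NMI_LOCAL chain are polled for every CPU-specific NMI, so they must be cheap and must only claim events they can actually attribute to their own source.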
234 pr_emerg("NMI: PCI system error (SERR) for reason %02x on CPU %d.\n", in pci_serr_error()
238 nmi_panic(regs, "NMI: Not continuing"); in pci_serr_error()
258 "NMI: IOCK error (debug interrupt?) for reason %02x on CPU %d.\n", in io_check_error()
263 nmi_panic(regs, "NMI IOCK error: Not continuing"); in io_check_error()
266 * If we end up here, it means we have received an NMI while in io_check_error()
297 * if it caused the NMI) in unknown_nmi_error()
307 pr_emerg_ratelimited("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n", in unknown_nmi_error()
311 nmi_panic(regs, "NMI: Not continuing"); in unknown_nmi_error()
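pci_serr_error(), io_check_error() and unknown_nmi_error() are the three destinations for a reason-port NMI. A hedged sketch of how the reason byte picks among them, assuming the usual NMI_REASON_* bit and clear-bit macros from <asm/mach_traps.h>; the dispatch helper itself is illustrative, not the file's code:

    #include <linux/ptrace.h>
    #include <asm/io.h>
    #include <asm/mach_traps.h>  /* assumed: NMI_REASON_SERR/IOCHK and the clear bits */

    static void dispatch_reason_nmi(unsigned char reason, struct pt_regs *regs)
    {
        if (reason & NMI_REASON_SERR) {
            /* PCI SERR# path (pci_serr_error above): report, optionally
             * nmi_panic(), then clear and disable the SERR line. */
            reason = (reason & NMI_REASON_CLEAR_MASK) | NMI_REASON_CLEAR_SERR;
            outb(reason, NMI_REASON_PORT);
        } else if (reason & NMI_REASON_IOCHK) {
            /* IOCHK path (io_check_error): report or nmi_panic(regs, ...),
             * then clear and re-enable the IOCHK line. */
            reason = (reason & NMI_REASON_CLEAR_MASK) | NMI_REASON_CLEAR_IOCHK;
            outb(reason, NMI_REASON_PORT);
        } else {
            /* No reason bit set: unknown_nmi_error() prints the ratelimited
             * "Uhhuh." message, or panics when unknown_nmi_panic is set. */
        }
    }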
327 * CPU-specific NMI must be processed before non-CPU-specific in default_do_nmi()
328 * NMI, otherwise we may lose it, because the CPU-specific in default_do_nmi()
329 * NMI can not be detected/processed on other CPUs. in default_do_nmi()
334 * be two NMIs or more than two NMIs (anything over two is dropped in default_do_nmi()
335 * due to NMI being edge-triggered). If this is the second half in default_do_nmi()
336 * of the back-to-back NMI, assume we dropped things and process in default_do_nmi()
337 * more handlers. Otherwise reset the 'swallow' NMI behaviour in default_do_nmi()
355 * There are cases when an NMI handler handles multiple in default_do_nmi()
356 * events in the current NMI. One of these events may in default_do_nmi()
357 * be queued for the next NMI. Because the event is in default_do_nmi()
358 * already handled, the next NMI will result in an unknown in default_do_nmi()
359 * NMI. Instead let's flag this for a potential NMI to in default_do_nmi()
368 * Non-CPU-specific NMI: NMI sources can be processed on any CPU. in default_do_nmi()
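Lines 327-368 describe the dispatch order inside default_do_nmi(): CPU-specific sources are offered the NMI first, because they cannot be seen from any other CPU, and only if none of them claims the event does the code fall back to the shared reason port. A simplified sketch of that shape; the per-CPU flag mirrors the kernel's swallow_nmi name, while handle_cpu_specific_sources() is a stand-in for the file-internal nmi_handle(NMI_LOCAL, ...) call:

    #include <linux/percpu.h>
    #include <linux/ptrace.h>

    static DEFINE_PER_CPU(bool, swallow_nmi);   /* mirrors the kernel's name; sketch only */

    /* Stand-in for nmi_handle(NMI_LOCAL, regs): walk the registered CPU-specific
     * handlers and return how many events they claimed. */
    static int handle_cpu_specific_sources(struct pt_regs *regs)
    {
        return 0;
    }

    static void do_nmi_dispatch(struct pt_regs *regs)   /* simplified default_do_nmi() shape */
    {
        int handled;

        /* 1) CPU-specific sources (perf, watchdog, ...) first: they cannot be
         *    detected or processed on any other CPU. */
        handled = handle_cpu_specific_sources(regs);
        if (handled) {
            /* One NMI delivered more than one event: the next latched NMI may
             * turn out to be "unknown", so arm the swallow logic for it. */
            if (handled > 1)
                __this_cpu_write(swallow_nmi, true);
            return;
        }

        /* 2) Only then touch the shared 0x61 reason port (under nmi_reason_lock)
         *    and route to pci_serr_error()/io_check_error()/unknown_nmi_error(). */
    }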
389 * Reassert NMI in case it became active in default_do_nmi()
401 * Only one NMI can be latched at a time. To handle in default_do_nmi()
402 * this we may process multiple NMI handlers at once to in default_do_nmi()
403 * cover the case where an NMI is dropped. The downside in default_do_nmi()
404 * to this approach is that we may process an NMI prematurely, in default_do_nmi()
405 * while its real NMI is sitting latched. This will cause in default_do_nmi()
406 * an unknown NMI on the next run of the NMI processing. in default_do_nmi()
411 * of a back-to-back NMI, so we flag that condition too. in default_do_nmi()
414 * NMI previously and we swallow it. Otherwise we reset in default_do_nmi()
418 * a 'real' unknown NMI. For example, while processing in default_do_nmi()
419 * a perf NMI, another perf NMI comes in along with a in default_do_nmi()
420 * 'real' unknown NMI. These two NMIs get combined into in default_do_nmi()
421 * one (as described above). When the next NMI gets in default_do_nmi()
423 * no one will know that there was a 'real' unknown NMI sent in default_do_nmi()
425 * perf NMI returns two events handled then the second in default_do_nmi()
426 * NMI will get eaten by the logic below, again losing a in default_do_nmi()
427 * 'real' unknown NMI. But this is the best we can do in default_do_nmi()
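The long comment at lines 401-427 explains the trade-off: only one NMI can be latched, so the handler chain may over-process, and an "unknown" NMI that arrives back-to-back with a multi-event NMI is assumed to be that artifact and silently dropped. A hedged sketch of the per-CPU state this needs; the variable names mirror the kernel's, the helper is illustrative:

    #include <linux/percpu.h>
    #include <linux/ptrace.h>

    /* Per-CPU state mirroring the names the comment block refers to (sketch only). */
    static DEFINE_PER_CPU(bool, swallow_nmi);
    static DEFINE_PER_CPU(unsigned long, last_nmi_rip);

    /* Should this otherwise-unknown NMI be dropped as the leftover of an earlier
     * multi-event NMI?  Illustrative helper, not a function in the file. */
    static bool swallow_unknown_nmi(struct pt_regs *regs)
    {
        bool b2b = false;

        /* Back-to-back heuristic: the CPU was interrupted at the very same RIP
         * as by the previous NMI, i.e. nothing ran in between. */
        if (regs->ip == __this_cpu_read(last_nmi_rip))
            b2b = true;
        else
            __this_cpu_write(swallow_nmi, false);   /* stale flag, forget it */

        __this_cpu_write(last_nmi_rip, regs->ip);

        return b2b && __this_cpu_read(swallow_nmi);
    }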
441 * its NMI context with the CPU when the breakpoint or page fault does an IRET.
444 * NMI processing. On x86_64, the asm glue protects us from nested NMIs
445 * if the outer NMI came from kernel mode, but we can still nest if the
446 * outer NMI came from user mode.
454 * When no NMI is in progress, it is in the "not running" state.
455 * When an NMI comes in, it goes into the "executing" state.
456 * Normally, if another NMI is triggered, it does not interrupt
457 * the running NMI and the HW will simply latch it so that when
458 * the first NMI finishes, it will restart the second NMI.
460 * when one is running, are ignored. Only one NMI is restarted.)
462 * If an NMI executes an iret, another NMI can preempt it. We do not
463 * want to allow this new NMI to run, but we want to execute it when the
465 * the first NMI will perform a dec_return; if the result is zero
466 * (NOT_RUNNING), then it will simply exit the NMI handler. If not, the
469 * rerun the NMI handler again, and restart the 'latched' NMI.
476 * In case the NMI takes a page fault, we need to save off the CR2
477 * because the NMI could have preempted another page fault and corrupt
480 * CR2 must be done before converting the nmi state back to NOT_RUNNING.
481 * Otherwise, there would be a race of another nested NMI coming in
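Lines 441-481 are the nested-NMI protection the IDT entry path needs: a per-CPU three-state variable plus a saved CR2. A condensed sketch of the state machine those comments describe; the enum and per-CPU names mirror the kernel's, and the surrounding function is heavily simplified:

    #include <linux/percpu.h>
    #include <linux/ptrace.h>
    #include <asm/special_insns.h>   /* read_cr2()/write_cr2() */

    enum nmi_states { NMI_NOT_RUNNING = 0, NMI_EXECUTING, NMI_LATCHED };
    static DEFINE_PER_CPU(enum nmi_states, nmi_state);
    static DEFINE_PER_CPU(unsigned long, nmi_cr2);

    static void nmi_entry_sketch(struct pt_regs *regs)   /* simplified entry shape */
    {
        /* A nested NMI (one that preempted us after a breakpoint/page-fault
         * IRET) only records that it happened; the outer NMI replays it. */
        if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
            this_cpu_write(nmi_state, NMI_LATCHED);
            return;
        }
        this_cpu_write(nmi_state, NMI_EXECUTING);
        this_cpu_write(nmi_cr2, read_cr2());   /* we may have preempted a #PF */

    nmi_restart:
        /* ... default_do_nmi(regs) ... */

        /* Restore CR2 before dropping out of NMI_EXECUTING, so a new nested
         * NMI cannot slip in between the state change and the CR2 update. */
        if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
            write_cr2(this_cpu_read(nmi_cr2));

        /* EXECUTING(1) -> 0: done.  LATCHED(2) -> 1: rerun for the dropped NMI. */
        if (this_cpu_dec_return(nmi_state))
            goto nmi_restart;
    }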
583 /* +--------- nmi_seq & 0x1: CPU is currently in NMI handler. */
586 /* | | | NMI handler has been invoked. */
630 msgp = "CPU entered NMI handler function, but has not exited"; in nmi_backtrace_stall_check()
640 msghp = " (CPU exited one NMI handler function)"; in nmi_backtrace_stall_check()
642 msghp = " (CPU currently in NMI handler function)"; in nmi_backtrace_stall_check()
644 msghp = " (CPU was never in an NMI handler function)"; in nmi_backtrace_stall_check()
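The annotated printout and the messages above hinge on the parity of a per-CPU NMI sequence number: it is bumped on handler entry and again on exit, so an odd value means the CPU is inside the handler, and comparing a snapshot against the current value shows whether a handler ran since the snapshot was taken. A generic sketch of that parity trick; the counter name and the classification helper are illustrative, not the file's actual bookkeeping:

    #include <linux/percpu.h>

    /* Illustrative per-CPU sequence counter: odd while inside the NMI handler. */
    static DEFINE_PER_CPU(unsigned long, demo_nmi_seq);

    static void demo_nmi_enter(void) { this_cpu_inc(demo_nmi_seq); }   /* even -> odd  */
    static void demo_nmi_exit(void)  { this_cpu_inc(demo_nmi_seq); }   /* odd  -> even */

    /* Classify a CPU from a snapshot taken when the stalled backtrace started. */
    static const char *demo_classify(unsigned long snap, unsigned long now)
    {
        if (now & 0x1)
            return "currently in an NMI handler";
        if (now != snap)
            return "ran at least one NMI handler since the snapshot";
        return "no NMI handler activity since the snapshot";
    }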
660 * And NMI unblocking only happens when the stack frame indicates
663 * Thus, the NMI entry stub for FRED is really straightforward and
665 * during NMI handling.
678 * Save CR2 for eventual restore to cover the case where the NMI in DEFINE_FREDENTRY_NMI()
680 * prevents guest state corruption in case the NMI handler in DEFINE_FREDENTRY_NMI()
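The FRED hits (lines 660-680) note that the FRED NMI stub can stay simple because NMI unblocking is driven by the stack frame, so none of the IDT-era nesting dance is needed; only the CR2 save/restore around the handler remains. A simplified sketch of that shape; the function name is made up and the real body lives inside DEFINE_FREDENTRY_NMI(exc_nmi):

    #include <linux/compiler.h>
    #include <linux/ptrace.h>
    #include <asm/special_insns.h>   /* read_cr2()/write_cr2() */

    static void fred_nmi_body_sketch(struct pt_regs *regs)
    {
        /* Save CR2 around the handler: the NMI may have interrupted a path (a
         * page fault, or a VM entry/exit window) whose CR2 value must not be
         * clobbered if the NMI handler itself takes a page fault. */
        unsigned long this_cr2 = read_cr2();

        /* ... irqentry_nmi_enter(), default_do_nmi(regs), irqentry_nmi_exit() ... */

        if (unlikely(this_cr2 != read_cr2()))
            write_cr2(this_cr2);
    }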
707 /* reset the back-to-back NMI logic */