Lines Matching refs:kprobe
37 (also called return probes). A kprobe can be inserted on virtually
64 When a kprobe is registered, Kprobes makes a copy of the probed
71 associated with the kprobe, passing the handler the addresses of the
72 kprobe struct and the saved registers.
81 "post_handler," if any, that is associated with the kprobe.
110 When you call register_kretprobe(), Kprobes establishes a kprobe at
115 At boot time, Kprobes registers a kprobe at the trampoline.
147 field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
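The fragments above describe the return-probe (kretprobe) mechanism: a kprobe at function entry swaps the return address for a trampoline, and the handler runs when the function returns. A minimal kernel-module sketch of registering one; the target symbol `kernel_clone` and the printed message are illustrative assumptions, not taken from the source:

```c
#include <linux/module.h>
#include <linux/kprobes.h>

/* Runs when the probed function returns; regs_return_value()
 * pulls the return value out of the saved registers. */
static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	pr_info("probed function returned %ld\n", regs_return_value(regs));
	return 0;
}

static struct kretprobe my_kretprobe = {
	.handler        = ret_handler,
	.maxactive      = 20,             /* concurrent activations to track */
	.kp.symbol_name = "kernel_clone", /* illustrative target */
};

static int __init rp_init(void)
{
	return register_kretprobe(&my_kretprobe);
}

static void __exit rp_exit(void)
{
	unregister_kretprobe(&my_kretprobe);
}

module_init(rp_init);
module_exit(rp_exit);
MODULE_LICENSE("GPL");
```

`maxactive` bounds how many simultaneous invocations of the probed function can be tracked; extra hits are counted in the probe's `nmissed` field rather than handled.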
182 Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
235 If the kprobe can be optimized, Kprobes enqueues the kprobe to an
236 optimizing list, and kicks the kprobe-optimizer workqueue to optimize
251 of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_.
260 When an optimized kprobe is unregistered, disabled, or blocked by
261 another kprobe, it will be unoptimized. If this happens before
262 the optimization is complete, the kprobe is just dequeued from the
278 The jump optimization changes the kprobe's pre_handler behavior.
285 - Specify an empty function for the kprobe's post_handler.
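Per the fragment above, attaching a post_handler, even an empty one, is the documented way to keep a probe breakpoint-based instead of jump-optimized. A sketch of that trick; the probed symbol is an illustrative assumption:

```c
/* Intentionally empty: its mere presence prevents Kprobes
 * from jump-optimizing this probe. */
static void empty_post_handler(struct kprobe *p, struct pt_regs *regs,
			       unsigned long flags)
{
}

static struct kprobe kp = {
	.symbol_name  = "kernel_clone",   /* illustrative target */
	.post_handler = empty_post_handler,
};
```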
340 kprobe address resolution code.
363 int register_kprobe(struct kprobe *kp);
373 1. With the introduction of the "symbol_name" field to struct kprobe,
382 2. Use the "offset" field of struct kprobe if the offset into the symbol
386 3. Specify either the kprobe "symbol_name" OR the "addr". If both are
387 specified, kprobe registration will fail with -EINVAL.
390 does not validate whether kprobe.addr lies on an instruction boundary.
399 int pre_handler(struct kprobe *p, struct pt_regs *regs);
401 Called with p pointing to the kprobe associated with the breakpoint,
409 void post_handler(struct kprobe *p, struct pt_regs *regs,
438 regs is as described for kprobe.pre_handler. ri points to the
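The fragments above give the registration rules (set either "symbol_name" or "addr", never both, on pain of -EINVAL) and the handler signatures. A self-contained kernel-module sketch tying them together; the probed symbol is an illustrative assumption:

```c
#include <linux/module.h>
#include <linux/kprobes.h>

/* Runs just before the probed instruction executes. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("pre: probe at %p hit\n", p->addr);
	return 0; /* 0 lets Kprobes proceed with the original instruction */
}

/* Runs after the probed instruction has been executed. */
static void handler_post(struct kprobe *p, struct pt_regs *regs,
			 unsigned long flags)
{
	pr_info("post: probe at %p done\n", p->addr);
}

static struct kprobe kp = {
	.symbol_name  = "kernel_clone", /* illustrative; do not also set .addr,
					 * or registration fails with -EINVAL */
	.pre_handler  = handler_pre,
	.post_handler = handler_post,
};

static int __init probe_init(void)
{
	return register_kprobe(&kp);
}

static void __exit probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");
```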
460 void unregister_kprobe(struct kprobe *kp);
477 int register_kprobes(struct kprobe **kps, int num);
499 void unregister_kprobes(struct kprobe **kps, int num);
517 int disable_kprobe(struct kprobe *kp);
529 int enable_kprobe(struct kprobe *kp);
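The batch and enable/disable entry points above can be combined; a sketch, assuming two illustrative target symbols, of registering an array of probes and temporarily muting one without unregistering it:

```c
static struct kprobe kp1 = { .symbol_name = "kernel_clone" }; /* illustrative */
static struct kprobe kp2 = { .symbol_name = "do_exit" };      /* illustrative */
static struct kprobe *kps[] = { &kp1, &kp2 };

static int __init batch_init(void)
{
	/* Registers every probe in the array in one call. */
	int ret = register_kprobes(kps, ARRAY_SIZE(kps));
	if (ret)
		return ret;

	/* Mute kp1: it stays registered but its handlers stop firing... */
	disable_kprobe(&kp1);
	/* ...and re-arm it later. */
	enable_kprobe(&kp1);
	return 0;
}

static void __exit batch_exit(void)
{
	unregister_kprobes(kps, ARRAY_SIZE(kps));
}
```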
540 So if you install a kprobe with a post_handler, at an optimized
568 handlers won't be run in that instance, and the kprobe.nmissed member
579 kretprobe handlers and optimized kprobe handlers run without interrupt
629 of the kprobe, because the bytes in DCR are replaced by
643 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
647 hit typically takes 50-75% longer than a kprobe hit.
648 When you have a return probe set on a function, adding a kprobe at
653 k = kprobe; r = return probe; kr = kprobe + return probe
668 Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
671 k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
719 - Use ftrace dynamic events (kprobe event) with perf-probe.
745 The second column identifies the type of probe (k - kprobe and r - kretprobe)
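The last two fragments point at the ftrace kprobe-event interface and the debugfs probe listing. A hedged shell sketch; the mount points assume debugfs is at its conventional location, and `do_sys_open` is an illustrative target:

```shell
# Create a dynamic kprobe event (prefix p = kprobe; use r for a kretprobe)
echo 'p:myprobe do_sys_open' >> /sys/kernel/debug/tracing/kprobe_events

# List installed probes; the second column is the probe type (k or r)
cat /sys/kernel/debug/kprobes/list
```

On newer kernels the same files are also available under /sys/kernel/tracing once tracefs is mounted.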