Lines Matching +full:auto +full:- +full:detects
1 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
100 * @brief **libbpf_set_print()** sets user-provided log callback function to
108 * This function is thread-safe.
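A minimal sketch of hooking this callback (the function name and the debug-filtering policy are illustrative, not part of libbpf):

    #include <stdarg.h>
    #include <stdio.h>
    #include <bpf/libbpf.h>

    /* Forward libbpf warnings and infos to stderr, drop debug chatter. */
    static int my_print(enum libbpf_print_level level, const char *fmt, va_list args)
    {
        if (level == LIBBPF_DEBUG)
            return 0;
        return vfprintf(stderr, fmt, args);
    }

    int main(void)
    {
        /* returns the previously installed callback */
        libbpf_print_fn_t prev = libbpf_set_print(my_print);
        (void)prev;
        /* ... open/load BPF objects; their log output now goes through my_print() ... */
        return 0;
    }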
119 * - for object open from file, this will override setting object
121 * - for object open from memory buffer, this will specify an object
122 * name and will override default "<addr>-<buf-size>" name;
125 /* parse map definitions non-strictly, allowing extra attributes/data */
129 * auto-pinned to that path on load; defaults to "/sys/fs/bpf".
139 /* Path to the custom BTF to be used for BPF CO-RE relocations.
141 * for the purpose of CO-RE relocations.
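A hedged sketch of how these open-time options combine (object name, pin root, BTF path, and object file are all hypothetical):

    LIBBPF_OPTS(bpf_object_open_opts, opts,
        .object_name = "my_obj",               /* overrides the default name */
        .relaxed_maps = true,                  /* tolerate extra map attributes */
        .pin_root_path = "/sys/fs/bpf/my_app", /* maps auto-pin under this root */
        .btf_custom_path = "/tmp/vmlinux.btf", /* custom BTF for CO-RE relocations */
    );
    struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);
    if (!obj)
        fprintf(stderr, "open failed: %d\n", -errno); /* libbpf 1.0+: NULL + errno */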
148 * passed through to the bpf() syscall. Keep in mind that the kernel might
149 * fail the operation with -ENOSPC if the provided buffer is too small
155 * - each BPF program load (BPF_PROG_LOAD) attempt, unless overridden
156 * with bpf_program__set_log() on per-program level, to get
158 * - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
162 * previous contents, so if you need more fine-grained control, set
163 * per-program buffer with bpf_program__set_log_buf() to preserve each
177 * could be either libbpf's own auto-allocated log buffer, if
178 * kernel_log_buf is NULL, or user-provided custom kernel_log_buf.
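A sketch of both buffer strategies (file name and buffer sizes are arbitrary; `prog` is assumed to come from the opened object):

    static char obj_log[64 * 1024];
    static char prog_log[256 * 1024];

    LIBBPF_OPTS(bpf_object_open_opts, opts,
        .kernel_log_buf = obj_log,
        .kernel_log_size = sizeof(obj_log),
        .kernel_log_level = 1,                 /* request verifier log */
    );
    struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);

    /* Give one program its own buffer so its verifier log isn't
     * overwritten by subsequent BPF_PROG_LOAD/BPF_BTF_LOAD commands.
     * Must be done before bpf_object__load().
     */
    bpf_program__set_log_buf(prog, prog_log, sizeof(prog_log));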
301 * @return BPF token FD or -1 if it wasn't set
348 * @brief **bpf_program__insns()** gives read-only access to BPF program's
362 * instructions will be CO-RE-relocated, BPF subprogram instructions will be
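A short sketch of reading the final instruction stream (assumes `prog` belongs to an already-loaded object, so relocations have been applied):

    const struct bpf_insn *insns = bpf_program__insns(prog);
    size_t cnt = bpf_program__insn_cnt(prog);

    for (size_t i = 0; i < cnt; i++)
        printf("insn %zu: code=0x%02x\n", i, insns[i].code);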
465 * a BPF program based on auto-detection of program type, attach type,
473 * - kprobe/kretprobe (depends on SEC() definition)
474 * - uprobe/uretprobe (depends on SEC() definition)
475 * - tracepoint
476 * - raw tracepoint
477 * - tracing programs (typed raw TP/fentry/fexit/fmod_ret)
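The generic attach call needs no extra parameters because everything is inferred from SEC(); a minimal sketch:

    struct bpf_link *link = bpf_program__attach(prog); /* method chosen from SEC() */
    if (!link) {
        fprintf(stderr, "auto-attach failed: %d\n", -errno);
        return -errno;
    }
    /* ... */
    bpf_link__destroy(link); /* detaches and frees the link */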
485 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
501 * enum probe_attach_mode - the mode to attach kprobe/uprobe
503 * force libbpf to attach kprobe/uprobe in a specific mode; -ENOTSUP will
520 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
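A sketch of forcing an attach mode while also passing a cookie (the kernel function name is illustrative):

    LIBBPF_OPTS(bpf_kprobe_opts, opts,
        .attach_mode = PROBE_ATTACH_MODE_LINK, /* demand bpf_link-based attach */
        .bpf_cookie = 0x1234,                  /* read back via bpf_get_attach_cookie() */
    );
    struct bpf_link *link = bpf_program__attach_kprobe_opts(prog, "do_unlinkat", &opts);
    if (!link && errno == ENOTSUP)
        /* kernel cannot satisfy the requested attach mode */;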
547 /* array of user-provided values fetchable through bpf_get_attach_cookie */
596 * - syms and offsets are mutually exclusive
597 * - ref_ctr_offsets and cookies are optional
602 * -1 for all processes
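A sketch honoring those constraints (library path and symbols are hypothetical):

    const char *syms[] = { "malloc", "free" };
    __u64 cookies[] = { 1, 2 };               /* optional, parallel to syms */

    LIBBPF_OPTS(bpf_uprobe_multi_opts, opts,
        .syms = syms,                         /* mutually exclusive with .offsets */
        .cookies = cookies,
        .cnt = 2,
    );
    struct bpf_link *link = bpf_program__attach_uprobe_multi(
        prog, -1 /* all processes */, "/usr/lib/libc.so.6",
        NULL /* no glob pattern */, &opts);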
619 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
644 * system supports compat syscalls or defines 32-bit syscalls in 64-bit
649 * compat and 32-bit interfaces is required.
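On hosts without that complication, a single call suffices; a minimal sketch (syscall name illustrative):

    LIBBPF_OPTS(bpf_ksyscall_opts, opts, .bpf_cookie = 42);
    struct bpf_link *link = bpf_program__attach_ksyscall(prog, "unlinkat", &opts);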
666 * a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")
669 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
673 /* Function name to attach to. Could be an unqualified ("abc") or library-qualified
697 * -1 for all processes
715 * -1 for all processes
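A sketch resolving the target by name instead of a raw offset (library path and symbol are hypothetical):

    LIBBPF_OPTS(bpf_uprobe_opts, uopts,
        .func_name = "malloc",                /* resolved to an offset by libbpf */
        .retprobe = false,
        .bpf_cookie = 7,
    );
    struct bpf_link *link = bpf_program__attach_uprobe_opts(
        prog, -1 /* all processes */, "/usr/lib/libc.so.6",
        0 /* func_offset, unused when .func_name is set */, &uopts);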
730 /* custom user-provided value accessible through usdt_cookie() */
738 * bpf_program__attach_uprobe_opts() except it covers USDT (User-space
740 * user-space function entry or exit.
744 * -1 for all processes
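A minimal sketch (binary path, provider, and probe names are hypothetical):

    LIBBPF_OPTS(bpf_usdt_opts, opts, .usdt_cookie = 0xcafe);
    struct bpf_link *link = bpf_program__attach_usdt(
        prog, -1 /* any process */, "/usr/sbin/myapp",
        "myprovider", "myprobe", &opts);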
761 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
794 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
901 * auto-detection of attachment when programs are loaded.
917 /* Per-program log level and log buffer getters/setters.
927 * @brief **bpf_program__set_attach_target()** sets BTF-based attach target
929 * - BTF-aware raw tracepoints (tp_btf);
930 * - fentry/fexit/fmod_ret;
931 * - lsm;
932 * - freplace.
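A sketch for the kernel-function case (the target name is illustrative; for freplace, pass the target program's FD instead of 0):

    /* call after bpf_object__open() but before bpf_object__load() */
    int err = bpf_program__set_attach_target(prog, 0 /* kernel */, "tcp_v4_connect");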
968 * @brief **bpf_map__set_autocreate()** sets whether libbpf should auto-create
972 * @return 0 on success; -EBUSY if BPF object was already loaded
974 * **bpf_map__set_autocreate()** allows opting out of libbpf auto-creating
979 * This API allows opting out of this process for a specific map instance. This
983 * BPF-side code that expects to use such missing BPF map is recognized by BPF
990 * @brief **bpf_map__set_autoattach()** sets whether libbpf should auto-attach
1000 * auto-attach during BPF skeleton attach phase.
1002 * @return true if map is set to auto-attach during skeleton attach phase; false otherwise
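A sketch of both knobs (assumes `map` was obtained from an opened, not yet loaded object):

    /* between bpf_object__open() and bpf_object__load() */
    int err = bpf_map__set_autocreate(map, false);   /* -EBUSY once loaded */

    /* before the skeleton attach phase */
    err = bpf_map__set_autoattach(map, false);
    bool will_attach = bpf_map__autoattach(map);     /* query current setting */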
1010 * @return the file descriptor; or -EINVAL in case of an error
1038 * There is a special case for maps with associated memory-mapped regions, like
1041 * adjust the corresponding BTF info. This attempt is best-effort and can only
1134 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1137 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1143 * **bpf_map__lookup_elem()** is a high-level equivalent of
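A sketch of the per-CPU sizing rule for a hypothetical BPF_MAP_TYPE_PERCPU_ARRAY with 4-byte (__u32) values: each per-CPU slot is padded to 8 bytes, so the buffer holds round_up(4, 8) * libbpf_num_possible_cpus() bytes:

    int ncpus = libbpf_num_possible_cpus();
    size_t slot_sz = 8;                       /* round_up(sizeof(__u32), 8) */
    void *values = calloc(ncpus, slot_sz);
    __u32 key = 0;
    int err = bpf_map__lookup_elem(map, &key, sizeof(key),
                                   values, ncpus * slot_sz, 0);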
1158 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1161 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1167 * **bpf_map__update_elem()** is a high-level equivalent of
1183 * **bpf_map__delete_elem()** is a high-level equivalent of
1197 * definition's **value_size**. For per-CPU BPF maps, value size has to be
1200 * per-CPU values, value size has to be aligned up to the closest 8 bytes for
1206 * **bpf_map__lookup_and_delete_elem()** is a high-level equivalent of
1221 * @return 0 on success; -ENOENT if **cur_key** is the last key in BPF map;
1224 * **bpf_map__get_next_key()** is a high-level equivalent of
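A sketch of full-map iteration using these semantics (assumes 4-byte keys):

    __u32 cur_key, next_key;
    int err;

    /* NULL cur_key fetches the very first key */
    for (err = bpf_map__get_next_key(map, NULL, &next_key, sizeof(next_key));
         !err;
         err = bpf_map__get_next_key(map, &cur_key, &next_key, sizeof(next_key))) {
        cur_key = next_key;
        /* ... process cur_key ... */
    }
    /* err == -ENOENT here means iteration passed the last key */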
1337 * manager object. The index is 0-based and corresponds to the order in which
1367 * should still show the correct trend over the long term.
1437 * @return A pointer to an 8-byte aligned reserved region of the user ring
1460 * should block when waiting for a sample. -1 causes the caller to block
1462 * @return A pointer to an 8-byte aligned reserved region of the user ring
1468 * If **timeout_ms** is -1, the function will block indefinitely until a sample
1469 * becomes available. Otherwise, **timeout_ms** must be non-negative, or errno
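A sketch of the blocking reserve/submit cycle (the sample struct is hypothetical; `map` is assumed to be a BPF_MAP_TYPE_USER_RINGBUF map):

    struct my_event { __u64 id; };            /* hypothetical sample layout */

    struct user_ring_buffer *rb = user_ring_buffer__new(bpf_map__fd(map), NULL);
    /* wait up to 100 ms for free space; -1 would block indefinitely */
    struct my_event *e = user_ring_buffer__reserve_blocking(rb, sizeof(*e), 100);
    if (e) {
        e->id = 1;                            /* fill the 8-byte aligned sample */
        user_ring_buffer__submit(rb, e);      /* hand it over to the kernel */
    }
    user_ring_buffer__free(rb);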
1545 * code to send data over to user-space
1546 * @param page_cnt number of memory pages allocated for each per-CPU buffer
1549 * @param ctx user-provided extra context passed into *sample_cb* and *lost_cb*
1560 LIBBPF_PERF_EVENT_ERROR = -1,
1561 LIBBPF_PERF_EVENT_CONT = -2,
1580 /* if cpu_cnt > 0, map_keys specify map keys to set per-CPU FDs for */
1600 * @brief **perf_buffer__buffer()** returns the per-CPU raw mmap()'ed underlying
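A sketch tying the perfbuf pieces together (page count and timeout are arbitrary; `map` is assumed to be a BPF_MAP_TYPE_PERF_EVENT_ARRAY map):

    static void on_sample(void *ctx, int cpu, void *data, __u32 size)
    {
        /* raw bytes emitted by BPF code via bpf_perf_event_output() */
    }

    static void on_lost(void *ctx, int cpu, __u64 cnt)
    {
        fprintf(stderr, "CPU %d lost %llu samples\n", cpu, (unsigned long long)cnt);
    }

    struct perf_buffer *pb = perf_buffer__new(bpf_map__fd(map), 8 /* pages per CPU */,
                                              on_sample, on_lost, NULL /* ctx */, NULL);
    while (perf_buffer__poll(pb, 100 /* timeout, ms */) >= 0)
        ; /* keep consuming */
    perf_buffer__free(pb);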
1639 * @brief **libbpf_probe_bpf_prog_type()** detects if host kernel supports
1652 * @brief **libbpf_probe_bpf_map_type()** detects if host kernel supports
1665 * @brief **libbpf_probe_bpf_helper()** detects if host kernel supports the
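A sketch of feature-gating on these probes (the chosen types and helper are illustrative):

    /* each probe returns 1 (supported), 0 (not supported), or a negative error */
    if (libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_KPROBE, NULL) == 1 &&
        libbpf_probe_bpf_map_type(BPF_MAP_TYPE_RINGBUF, NULL) == 1 &&
        libbpf_probe_bpf_helper(BPF_PROG_TYPE_KPROBE,
                                BPF_FUNC_get_attach_cookie, NULL) == 1) {
        /* safe to rely on this prog type / map type / helper combination */
    }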
1822 * auto-attach is not supported, callback should return 0 and set link to
1834 /* User-provided value that is passed to prog_setup_fn,
1864 * @return Non-negative handler ID on success. This handler ID has
1870 * - if *sec* is just a plain string (e.g., "abc"), it will match only
1873 * - if *sec* is of the form "abc/", proper SEC() form is
1876 * - if *sec* is of the form "abc+", it will successfully match both
1878 * - if *sec* is NULL, custom handler is registered for any BPF program that
1886 * (i.e., it's possible to have custom SEC("perf_event/LLC-load-misses")
1890 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs
1906 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs
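A sketch of registering and later removing a custom handler (the section name is hypothetical; NULL opts keeps default callbacks; per the note above, these calls are not thread-safe):

    /* "my_sec+" matches both SEC("my_sec") and SEC("my_sec/something") */
    int handler_id = libbpf_register_prog_handler("my_sec+", BPF_PROG_TYPE_KPROBE,
                                                  0 /* expected attach type */, NULL);
    if (handler_id >= 0)
        libbpf_unregister_prog_handler(handler_id); /* when no longer needed */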