Searched full:optimized (Results 1 – 25 of 704) sorted by relevance
/linux-6.12.1/Documentation/trace/ |
D | kprobes.rst | 192 instruction (the "optimized region") lies entirely within one function. 197 jump into the optimized region. Specifically: 202 optimized region -- Kprobes checks the exception tables to verify this); 203 - there is no near jump to the optimized region (other than to the first 206 - For each instruction in the optimized region, Kprobes verifies that 218 - the instructions from the optimized region 228 - Other instructions in the optimized region are probed. 235 If the kprobe can be optimized, Kprobes enqueues the kprobe to an 237 it. If the to-be-optimized probepoint is hit before being optimized, 248 optimized region [3]_. As you know, synchronize_rcu() can ensure [all …]
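The kprobes.rst hits above describe when a probe point can be jump-optimized. A minimal sketch of registering a probe that is eligible for optimization follows; the probed symbol kernel_clone is only an illustrative choice, and the module names are not from the tree. Note that only a pre_handler is installed, because (as the kernel/kprobes.c hits further down state) a kprobe with a post_handler can never be optimized.

#include <linux/module.h>
#include <linux/kprobes.h>

/* Probed symbol is an illustrative choice, not mandated by the documentation. */
static struct kprobe kp = {
	.symbol_name = "kernel_clone",
};

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("pre-handler hit at %pS\n", (void *)instruction_pointer(regs));
	return 0;
}

static int __init optprobe_example_init(void)
{
	/* No post_handler is set, so the probe remains eligible for jump optimization. */
	kp.pre_handler = handler_pre;
	return register_kprobe(&kp);
}

static void __exit optprobe_example_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(optprobe_example_init);
module_exit(optprobe_example_exit);
MODULE_LICENSE("GPL");

Whether the probe actually got optimized can then be checked by looking for the OPTIMIZED flag in the kprobes debugfs listing, which is what the ftrace selftest below greps for.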
|
/linux-6.12.1/drivers/opp/ |
D | ti-opp-supply.c | 26 * struct ti_opp_supply_optimum_voltage_table - optimized voltage table 28 * @optimized_uv: Optimized voltage from efuse 37 * @vdd_table: Optimized voltage mapping table 69 * _store_optimized_voltages() - store optimized voltages 73 * Picks up efuse based optimized voltages for VDD unique per device and 158 * Some older samples might not have optimized efuse in _store_optimized_voltages() 193 * Return: if a match is found, return optimized voltage, else return 216 dev_err_ratelimited(dev, "%s: Failed optimized voltage match for %d\n", in _get_optimal_vdd_voltage() 388 /* If we need optimized voltage */ in ti_opp_supply_probe()
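The excerpts above describe a table that maps each nominal OPP voltage to a per-device "optimized" voltage read from efuse, falling back when no match is found. A purely hypothetical sketch of that kind of lookup (struct and function names here are illustrative, not the driver's own) could look like:

#include <linux/types.h>

struct opt_uv_entry {
	unsigned int reference_uv;	/* nominal OPP voltage */
	unsigned int optimized_uv;	/* per-device voltage read from efuse */
};

static unsigned int lookup_optimized_uv(const struct opt_uv_entry *tbl,
					size_t n, unsigned int reference_uv)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (tbl[i].reference_uv == reference_uv)
			return tbl[i].optimized_uv;

	/* No match (e.g. an older sample without optimized efuse data): keep nominal. */
	return reference_uv;
}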
|
/linux-6.12.1/Documentation/devicetree/bindings/opp/ |
D | ti,omap-opp-supply.yaml | 37 - description: OMAP5+ optimized voltages in efuse(Class 0) VDD along with 40 - description: OMAP5+ optimized voltages in efuse(class0) VDD but no VBB 54 optimized efuse configuration. 63 - description: efuse offset where the optimized voltage is located
|
/linux-6.12.1/fs/crypto/ |
D | Kconfig | 26 # algorithms, not any per-architecture optimized implementations. It is 27 # strongly recommended to enable optimized implementations too. It is safe to 28 # disable these generic implementations if corresponding optimized
|
/linux-6.12.1/arch/arm/kernel/ |
D | io.c | 43 * This needs to be optimized. 59 * This needs to be optimized. 75 * This needs to be optimized.
|
/linux-6.12.1/drivers/video/fbdev/aty/ |
D | atyfb.h | 230 /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_ld_le32() 243 /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_le32() 257 /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_le16() 269 /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_ld_8() 281 /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_8()
|
/linux-6.12.1/arch/x86/kernel/kprobes/ |
D | opt.c | 46 /* This function only handles jump-optimized kprobe */ in __recover_optprobed_insn() 49 /* If op is optimized or under unoptimizing */ in __recover_optprobed_insn() 58 * If the kprobe can be optimized, original bytes which can be in __recover_optprobed_insn() 175 /* Optimized kprobe call back function: called from optinsn */ 340 /* Check optimized_kprobe can actually be optimized. */ 355 /* Check the addr is within the optimized instructions. */ 363 /* Free optimized instruction slot */ 545 /* This kprobe is really able to run optimized path. */ in setup_detour_execution()
|
/linux-6.12.1/tools/testing/selftests/ftrace/test.d/kprobe/ |
D | kprobe_opt_types.tc | 4 # description: Register/unregister optimized probe 28 if echo $PROBE | grep -q OPTIMIZED; then
|
/linux-6.12.1/Documentation/locking/ |
D | percpu-rw-semaphore.rst | 6 optimized for locking for reading. 26 The idea of using RCU for optimized rw-lock was introduced by
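The document above describes a reader-optimized lock whose write side relies on RCU to wait out readers. A minimal usage sketch against the in-tree API in <linux/percpu-rwsem.h> (the lock and function names below are illustrative only):

#include <linux/percpu-rwsem.h>

static DEFINE_STATIC_PERCPU_RWSEM(example_sem);

static void example_reader(void)
{
	percpu_down_read(&example_sem);		/* cheap, per-CPU fast path */
	/* read the protected state here */
	percpu_up_read(&example_sem);
}

static void example_writer(void)
{
	percpu_down_write(&example_sem);	/* slow: waits out all readers via RCU */
	/* modify the protected state here */
	percpu_up_write(&example_sem);
}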
|
/linux-6.12.1/Documentation/devicetree/bindings/memory-controllers/ |
D | atmel,ebi.txt | 67 - atmel,smc-tdf-mode: "normal" or "optimized". When set to 68 "optimized" the data float time is optimized
|
/linux-6.12.1/arch/x86/include/asm/ |
D | qspinlock_paravirt.h | 12 * and restored. So an optimized version of __pv_queued_spin_unlock() is 21 * Optimized assembly version of __raw_callee_save___pv_queued_spin_unlock
|
/linux-6.12.1/include/linux/ |
D | omap-gpmc.h | 34 * gpmc_omap_onenand_set_timings - set optimized sync timings. 40 * Sets optimized timings for the @cs region based on @freq and @latency.
|
/linux-6.12.1/drivers/staging/media/atomisp/pci/isp/kernels/tdf/tdf_1.0/ |
D | ia_css_tdf_types.h | 34 s32 thres_flat_table[64]; /** Final optimized strength table of NR for flat region. */ 35 s32 thres_detail_table[64]; /** Final optimized strength table of NR for detail region. */
|
/linux-6.12.1/arch/sparc/lib/ |
D | strlen.S | 2 /* strlen.S: Sparc optimized strlen code 3 * Hand optimized from GNU libc's strlen
|
D | M7memset.S | 2 * M7memset.S: SPARC M7 optimized memset. 8 * M7memset.S: M7 optimized memset. 100 * (can create a more optimized version later.) 114 * (can create a more optimized version later.)
|
/linux-6.12.1/Documentation/devicetree/bindings/clock/ |
D | renesas,5p35023.yaml | 26 must be done via the full register map, including optimized settings. 50 Optimized settings for the device must be provided in full
|
/linux-6.12.1/kernel/ |
D | kprobes.c | 420 * This must be called from arch-dep optimized caller. 436 /* Free optimized instructions and optimized_kprobe */ 488 * Return an optimized kprobe whose optimizing code replaces 675 /* Optimize kprobe if p is ready to be optimized */ 685 /* kprobes with 'post_handler' can not be optimized */ in optimize_kprobe() 691 /* Check there is no other kprobes at the optimized instructions */ in optimize_kprobe() 695 /* Check if it is already optimized. */ in optimize_kprobe() 707 * 'op' must have OPTIMIZED flag in optimize_kprobe() 724 /* Unoptimize a kprobe if p is optimized */ 730 return; /* This is not an optprobe nor optimized */ in unoptimize_kprobe() [all …]
|
/linux-6.12.1/arch/x86/crypto/ |
D | twofish_glue.c | 2 * Glue Code for assembler optimized version of TWOFISH 98 MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");
|
D | camellia_aesni_avx_glue.c | 3 * Glue Code for x86_64/AVX/AES-NI assembler optimized version of Camellia 135 MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX optimized");
|
D | serpent_avx2_glue.c | 3 * Glue Code for x86_64/AVX2 assembler optimized version of Serpent 128 MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX2 optimized");
|
D | camellia_aesni_avx2_glue.c | 3 * Glue Code for x86_64/AVX2/AES-NI assembler optimized version of Camellia 136 MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX2 optimized");
|
D | sm4_aesni_avx2_glue.c | 3 * SM4 Cipher Algorithm, AES-NI/AVX2 optimized. 141 MODULE_DESCRIPTION("SM4 Cipher Algorithm, AES-NI/AVX2 optimized");
|
/linux-6.12.1/mm/ |
D | hugetlb_vmemmap.c | 486 * hugetlb_vmemmap_restore_folio - restore previously optimized (by 533 /* Add non-optimized folios to output list */ in hugetlb_vmemmap_restore_folios() 544 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */ 583 * page could be to the OLD struct pages. Set the vmemmap optimized in __hugetlb_vmemmap_optimize_folio() 614 * @folio: the folio whose vmemmap pages will be optimized. 619 * vmemmap pages have been optimized.
|
/linux-6.12.1/arch/m68k/include/asm/ |
D | delay.h | 72 * the const factor (4295 = 2**32 / 1000000) can be optimized out when 88 * first constant multiplications gets optimized away if the delay is
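The delay.h comment refers to the constant 4295 ≈ 2^32 / 1,000,000, which lets the compiler fold the microseconds-to-loops conversion into a constant multiply with no runtime division. An illustrative, standalone sketch of that fixed-point trick (not the kernel's implementation):

#include <stdint.h>

/*
 * 4295 ~= 2^32 / 1,000,000, so multiplying a loops-per-second rate by 4295
 * gives loops-per-microsecond in 32.32 fixed point; the final >> 32 then
 * stands in for a division by 10^6.  Assumes udelay-sized (small) usecs.
 */
static inline uint32_t usecs_to_loops(uint32_t usecs, uint32_t loops_per_sec)
{
	uint64_t loops_per_usec_q32 = (uint64_t)loops_per_sec * 4295u;

	return (uint32_t)((usecs * loops_per_usec_q32) >> 32);
}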
|
/linux-6.12.1/arch/arc/lib/ |
D | memset-archs.S | 10 * The memset implementation below is optimized to use prefetchw and prealloc 12 * If you want to implement optimized memset for other possible L1 data cache
|