
Searched full:multiply (Results 1 – 25 of 474) sorted by relevance


/linux-6.12.1/tools/perf/pmu-events/arch/x86/amdzen4/
floating-point.json
11 "BriefDescription": "Retired x87 floating-point multiply ops.",
35 "BriefDescription": "Retired SSE and AVX floating-point multiply ops.",
47 …"BriefDescription": "Retired SSE and AVX floating-point multiply-accumulate ops (each operation is…
53 …"BriefDescription": "Retired SSE and AVX floating-point bfloat multiply-accumulate ops (each opera…
149 "BriefDescription": "Retired scalar floating-point multiply ops.",
155 "BriefDescription": "Retired scalar floating-point multiply-accumulate ops.",
215 "BriefDescription": "Retired vector floating-point multiply ops.",
221 "BriefDescription": "Retired vector floating-point multiply-accumulate ops.",
299 "BriefDescription": "Retired MMX integer multiply ops.",
305 "BriefDescription": "Retired MMX integer multiply-accumulate ops.",
[all …]
/linux-6.12.1/tools/perf/pmu-events/arch/x86/amdzen1/
floating-point.json
94 "BriefDescription": "Multiply Ops.",
95 … Ops that have retired. The number of events logged per cycle can vary from 0 to 8. Multiply Ops.",
115 "BriefDescription": "Double precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.",
116 …from 0 to 64. This event can count above 15. Double precision multiply-add FLOPS. Multiply-add cou…
129 "BriefDescription": "Double precision multiply FLOPS.",
130 … per cycle can vary from 0 to 64. This event can count above 15. Double precision multiply FLOPS.",
143 "BriefDescription": "Single precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.",
144 …from 0 to 64. This event can count above 15. Single precision multiply-add FLOPS. Multiply-add cou…
157 "BriefDescription": "Single-precision multiply FLOPS.",
158 … per cycle can vary from 0 to 64. This event can count above 15. Single-precision multiply FLOPS.",
/linux-6.12.1/tools/perf/pmu-events/arch/x86/amdzen5/
floating-point.json
11 "BriefDescription": "Retired x87 floating-point multiply ops.",
35 "BriefDescription": "Retired SSE and AVX floating-point multiply ops.",
47 …"BriefDescription": "Retired SSE and AVX floating-point multiply-accumulate ops (each operation is…
143 "BriefDescription": "Retired scalar floating-point multiply ops.",
149 "BriefDescription": "Retired scalar floating-point multiply-accumulate ops.",
209 "BriefDescription": "Retired vector floating-point multiply ops.",
215 "BriefDescription": "Retired vector floating-point multiply-accumulate ops.",
293 "BriefDescription": "Retired MMX integer multiply ops.",
299 "BriefDescription": "Retired MMX integer multiply-accumulate ops.",
341 "BriefDescription": "Retired MMX integer multiply ops of other types.",
[all …]
/linux-6.12.1/arch/parisc/math-emu/
fmpyfadd.c
15 * Double Floating-point Multiply Fused Add
16 * Double Floating-point Multiply Negate Fused Add
17 * Single Floating-point Multiply Fused Add
18 * Single Floating-point Multiply Negate Fused Add
41 * Double Floating-point Multiply Fused Add
68 * set sign bit of result of multiply in dbl_fmpyfadd()
75 * Generate multiply exponent in dbl_fmpyfadd()
100 * sign opposite of the multiply result in dbl_fmpyfadd()
178 * invalid since multiply operands are in dbl_fmpyfadd()
191 * sign opposite of the multiply result in dbl_fmpyfadd()
[all …]
sfmpy.c
15 * Single Precision Floating-point Multiply
33 * Single Precision Floating-point Multiply
192 /* Multiply two source mantissas together */ in sgl_fmpy()
198 * simple shift and add multiply algorithm is used. in sgl_fmpy()
dfmpy.c
15 * Double Precision Floating-point Multiply
33 * Double Precision Floating-point Multiply
194 /* Multiply two source mantissas together */ in dbl_fmpy()
201 * simple shift and add multiply algorithm is used. in dbl_fmpy()
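
The sfmpy.c and dfmpy.c hits above both refer to a "simple shift and add multiply algorithm" for forming the product of the two source mantissas. A rough, self-contained sketch of that idea (the function name and the 32-bit operand width are illustrative, not the emulator's actual layout):

#include <stdint.h>

/* Shift-and-add multiply: add a shifted copy of the multiplicand for
 * every set bit of the multiplier, producing a double-width product
 * without relying on a hardware multiply instruction. */
static uint64_t shift_add_mul(uint32_t multiplicand, uint32_t multiplier)
{
        uint64_t product = 0;
        uint64_t addend = multiplicand;   /* shifted left one bit per step */

        while (multiplier) {
                if (multiplier & 1)
                        product += addend;
                addend <<= 1;
                multiplier >>= 1;
        }
        return product;
}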
/linux-6.12.1/arch/m68k/include/asm/
delay.h
50 * multiply instruction. So we need to handle them a little differently.
51 * We use a bit of shifting and a single 32*32->32 multiply to get close.
109 * multiply instruction. So we need to handle them a little differently.
110 * We use a bit of shifting and a single 32*32->32 multiply to get close.
112 * multiply and shift.
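
The delay.h comments above describe replacing the usec-to-loops division with "a bit of shifting and a single 32*32->32 multiply". A minimal sketch of that multiply-and-shift trick, with hypothetical names and a hypothetical scale factor rather than the kernel's actual constants:

/* Dividing by 10^6 is avoided by multiplying with a precomputed
 * fixed-point reciprocal and shifting back down, so the loop count
 * needs only one multiply at run time. Assumes the product fits in the
 * word size, which holds because udelay() callers keep delays short. */
#define USEC_SCALE_SHIFT 16

/* usec_scale would be computed once, roughly as
 * (loops_per_jiffy * HZ << USEC_SCALE_SHIFT) / 1000000. */
static unsigned long usecs_to_loops(unsigned long usecs, unsigned long usec_scale)
{
        return (usecs * usec_scale) >> USEC_SCALE_SHIFT;
}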
hash.h
13 * entirely, let's keep it simple and just use an optimized multiply
16 * The best way to do that appears to be to multiply by 0x8647 with
17 * shifts and adds, and use mulu.w to multiply the high half by 0x61C8.
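
These hash.h entries (like the microblaze and parisc ones further down) all implement the same operation: a multiply by GOLDEN_RATIO_32 = 0x61C88647, decomposed into shifts, adds and narrow multiplies on machines without a fast 32x32 multiplier. A sketch of the generic form, which keeps only the top bits of the product:

#include <stdint.h>

#define GOLDEN_RATIO_32 0x61C88647u

/* Multiplicative hash: multiply by the 32-bit golden-ratio constant
 * and keep the top "bits" bits of the product. On machines without a
 * fast multiplier the constant multiply itself is built from shifts
 * and adds, the same way x * 9 can be computed as (x << 3) + x. */
static inline uint32_t hash_32_sketch(uint32_t val, unsigned int bits)
{
        return (val * GOLDEN_RATIO_32) >> (32 - bits);
}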
/linux-6.12.1/arch/microblaze/lib/
mulsi3.S
5 * Multiply operation for 32 bit integers.
18 beqi r5, result_is_zero /* multiply by zero */
19 beqi r6, result_is_zero /* multiply by zero */
/linux-6.12.1/lib/crypto/mpi/
mpih-mul.c
37 /* Multiply the natural numbers u (pointed to by UP) and v (pointed to by VP),
61 /* Multiply by the first limb in V separately, as the result can be in mul_n_basecase()
76 /* For each iteration in the outer loop, multiply one limb from in mul_n_basecase()
100 * Multiply the least significant (size - 1) limbs with a recursive in mul_n()
213 /* Multiply by the first limb in V separately, as the result can be in mpih_sqr_n_basecase()
228 /* For each iteration in the outer loop, multiply one limb from in mpih_sqr_n_basecase()
249 * Multiply the least significant (size - 1) limbs with a recursive in mpih_sqr_n()
411 /* Multiply the natural numbers u (pointed to by UP, with USIZE limbs)
443 /* Multiply by the first limb in V separately, as the result can be in mpihelp_mul()
458 /* For each iteration in the outer loop, multiply one limb from in mpihelp_mul()
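
The mpih-mul.c hits describe the base-case (schoolbook) path of the MPI multiply: the first limb of V is handled separately, and each further outer-loop iteration multiplies one more limb and accumulates the partial products. A simplified sketch with 32-bit limbs (the real code works on machine-word limbs and special-cases that first iteration; this version just zero-initialises the result and uses one uniform loop):

#include <stdint.h>
#include <stddef.h>

typedef uint32_t limb_t;

/* Schoolbook multiply of u (usize limbs) by v (vsize limbs) into prod,
 * which must have room for usize + vsize limbs, least significant limb
 * first. Each partial product is accumulated in a 64-bit intermediate
 * so the carry never overflows. */
static void mul_basecase(limb_t *prod, const limb_t *u, size_t usize,
                         const limb_t *v, size_t vsize)
{
        size_t i, j;

        for (i = 0; i < usize + vsize; i++)
                prod[i] = 0;

        for (j = 0; j < vsize; j++) {
                uint64_t carry = 0;

                for (i = 0; i < usize; i++) {
                        uint64_t t = (uint64_t)u[i] * v[j] + prod[i + j] + carry;

                        prod[i + j] = (limb_t)t;
                        carry = t >> 32;
                }
                prod[usize + j] = (limb_t)carry;
        }
}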
/linux-6.12.1/arch/mips/lib/
multi3.c
14 /* multiply 64-bit values, low 64-bits returned */
23 /* multiply 64-bit unsigned values, high 64-bits of 128-bit result returned */
32 /* multiply 128-bit values, low 128-bits returned */
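
multi3.c composes the 128-bit multiply out of narrower helpers; the interesting one is the helper that returns the high 64 bits of an unsigned 64x64 product. A portable sketch of how that value can be assembled from 32x32->64 partial products (illustrative code, not the MIPS file's implementation):

#include <stdint.h>

/* High 64 bits of an unsigned 64x64 multiply, built from four 32x32->64
 * partial products so no 128-bit type or widening instruction is
 * needed. The middle terms are summed in two steps so no carry is lost. */
static uint64_t umul64_hi(uint64_t a, uint64_t b)
{
        uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
        uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

        uint64_t lo_lo = a_lo * b_lo;
        uint64_t hi_lo = a_hi * b_lo;
        uint64_t lo_hi = a_lo * b_hi;
        uint64_t hi_hi = a_hi * b_hi;

        uint64_t mid  = hi_lo + (lo_lo >> 32);  /* cannot overflow */
        uint64_t mid2 = lo_hi + (uint32_t)mid;  /* cannot overflow */

        return hi_hi + (mid >> 32) + (mid2 >> 32);
}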
/linux-6.12.1/tools/perf/pmu-events/arch/s390/cf_z16/
pai_crypto.json
839 "BriefDescription": "PCC SCALAR MULTIPLY P256",
840 "PublicDescription": "PCC-Scalar-Multiply-P256 function ending with CC=0"
846 "BriefDescription": "PCC SCALAR MULTIPLY P384",
847 "PublicDescription": "PCC-Scalar-Multiply-P384 function ending with CC=0"
853 "BriefDescription": "PCC SCALAR MULTIPLY P521",
854 "PublicDescription": "PCC-Scalar-Multiply-P521 function ending with CC=0"
860 "BriefDescription": "PCC SCALAR MULTIPLY ED25519",
861 "PublicDescription": "PCC-Scalar-Multiply-Ed25519 function ending with CC=0"
867 "BriefDescription": "PCC SCALAR MULTIPLY ED448",
868 "PublicDescription": "PCC-Scalar-Multiply-Ed448 function ending with CC=0"
[all …]
/linux-6.12.1/include/crypto/internal/
ecc.h
233 * @left: vli number to multiply with @right
234 * @right: vli number to multiply with @left
282 * @x: scalar to multiply with @p
283 * @p: point to multiply with @x
284 * @y: scalar to multiply with @q
285 * @q: point to multiply with @y
/linux-6.12.1/arch/m68k/fpsp040/
binstr.S
28 | A3. Multiply the fraction in d2:d3 by 8 using bit-field
32 | A4. Multiply the fraction in d4:d5 by 2 using shifts. The msb
87 | A3. Multiply d2:d3 by 8; extract msbs into d1.
95 | A4. Multiply d4:d5 by 2; add carry out to d1.
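
Read together, steps A3 and A4 above look like the standard multiply-by-10-with-shifts step (8x plus 2x) used to peel decimal digits off a binary fraction, with the bits that spill out of the top accumulating as the next digit. A C sketch of one such step under that reading, on a 64-bit fraction f interpreted as f / 2^64 (names illustrative):

#include <stdint.h>

/* One digit-extraction step: compute f * 10 as (f << 3) + (f << 1),
 * collect the bits shifted out of the top plus the addition carry as
 * the integer part, and leave the fractional part for the next round. */
static unsigned int next_decimal_digit(uint64_t *f)
{
        uint64_t x8 = *f << 3;
        uint64_t x2 = *f << 1;
        unsigned int digit = (unsigned int)(*f >> 61)   /* bits lost by << 3 */
                           + (unsigned int)(*f >> 63);  /* bit lost by << 1 */
        uint64_t sum = x8 + x2;

        if (sum < x8)           /* carry out of the 64-bit addition */
                digit++;
        *f = sum;
        return digit;           /* integer part of f * 10: the next digit */
}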
/linux-6.12.1/tools/include/linux/
hash.h
38 * which is very slightly easier to multiply by and makes no
77 /* 64x64-bit multiply is efficient on all 64-bit processors */ in hash_64_generic()
80 /* Hash 64 bits using only 32x32-bit multiply. */ in hash_64_generic()
/linux-6.12.1/include/linux/
hash.h
38 * which is very slightly easier to multiply by and makes no
77 /* 64x64-bit multiply is efficient on all 64-bit processors */ in hash_64_generic()
80 /* Hash 64 bits using only 32x32-bit multiply. */ in hash_64_generic()
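
Both copies of hash.h above distinguish the same two cases: on 64-bit machines a single 64x64 multiply by GOLDEN_RATIO_64 is cheap, while 32-bit machines fold the value down and reuse the 32x32-multiply hash. A simplified sketch along those lines, building on hash_32_sketch() from earlier (the fold here is simplified relative to the kernel's version):

#include <stdint.h>

#define GOLDEN_RATIO_64 0x61C8864680B583EBull

static inline uint32_t hash_64_sketch(uint64_t val, unsigned int bits)
{
#if UINTPTR_MAX > 0xffffffffu   /* crude 64-bit check, good enough for a sketch */
        /* 64x64-bit multiply is efficient on all 64-bit processors */
        return (uint32_t)((val * GOLDEN_RATIO_64) >> (64 - bits));
#else
        /* Hash 64 bits using only 32x32-bit multiply: fold to 32 bits
         * first, then apply the 32-bit multiplicative hash. */
        return hash_32_sketch((uint32_t)val ^ (uint32_t)(val >> 32), bits);
#endif
}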
/linux-6.12.1/arch/sparc/include/asm/
elf_64.h
73 #define AV_SPARC_MUL32 0x00000100 /* 32x32 multiply is efficient */
81 #define AV_SPARC_FMAF 0x00010000 /* fused multiply-add */
86 #define AV_SPARC_FJFMAU 0x00200000 /* unfused multiply-add */
87 #define AV_SPARC_IMA 0x00400000 /* integer multiply-add */
/linux-6.12.1/arch/xtensa/lib/
umulsidi3.S
47 /* a0 and a8 will be clobbered by calling the multiply function
97 #else /* no multiply hardware */
118 #endif /* no multiply hardware */
190 /* For Xtensa processors with no multiply hardware, this simplified
/linux-6.12.1/arch/parisc/include/asm/
hash.h
6 * HP-PA only implements integer multiply in the FPU. However, for
19 * This is a multiply by GOLDEN_RATIO_32 = 0x61C88647 optimized for the
109 * Multiply by GOLDEN_RATIO_64 = 0x0x61C8864680B583EB using a heavily
112 * Without the final shift, the multiply proper is 19 instructions,
/linux-6.12.1/arch/m68k/ifpsp060/
ilsp.doc
34 module can be used to emulate 64-bit divide and multiply,
78 For example, to use a 64-bit multiply instruction,
81 for unsigned multiply could look like:
90 bsr.l _060LISP_TOP+0x18 # branch to multiply routine
/linux-6.12.1/Documentation/arch/arm/nwfpe/
Dnotes.rst22 emulator sees a multiply of a double and extended, it promotes the double to
23 extended, then does the multiply in extended precision.
/linux-6.12.1/arch/arc/include/asm/
delay.h
43 * -Mathematically if we multiply and divide a number by same value the
50 * -We simply need to ensure that the multiply per above eqn happens in
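
The arc delay.h comment is describing the classic trick of multiplying and dividing by 2^32: the exact loop count is usecs * loops_per_jiffy * HZ / 10^6, and scaling numerator and denominator by 2^32 turns the division into a multiply by 2^32/10^6 (about 4295) followed by a cheap right shift. A hedged sketch of that arithmetic (names hypothetical):

/* loops  = usecs * loops_per_jiffy * HZ / 1000000
 *        = usecs * loops_per_jiffy * HZ * (2^32 / 1000000) / 2^32
 *       ~= (usecs * 4295 * HZ * loops_per_jiffy) >> 32,
 * since 2^32 / 1000000 ~= 4294.97. The 64-bit intermediate keeps the
 * product from overflowing before the shift. */
static inline unsigned long usecs_to_delay_loops(unsigned long usecs,
                                                 unsigned long lpj,
                                                 unsigned long hz)
{
        return (unsigned long)(((unsigned long long)usecs * 4295 * hz * lpj) >> 32);
}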
/linux-6.12.1/tools/perf/pmu-events/arch/x86/amdzen2/
floating-point.json
46 …"BriefDescription": "Multiply-add FLOPS. Multiply-add counts as 2 FLOPS. This is a retire-based ev…
59 …"BriefDescription": "Multiply FLOPS. This is a retire-based event. The number of retired SSE/AVX F…
/linux-6.12.1/arch/microblaze/include/asm/
hash.h
11 * multiply using shifts and adds. GCC can find a 9-step solution, but
31 /* Multiply by GOLDEN_RATIO_32 = 0x61C88647 */
/linux-6.12.1/drivers/media/platform/renesas/vsp1/
vsp1_rpf.c
175 * The Gen3+ RPF has extended alpha capability and can both multiply the in rpf_configure_stream()
176 * alpha channel by a fixed global alpha value, and multiply the pixel in rpf_configure_stream()
204 * need to multiply both the alpha channel and the pixel in rpf_configure_stream()
206 * premultiplied. Otherwise multiply the alpha channel in rpf_configure_stream()
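
The rpf_configure_stream() comments distinguish premultiplied from non-premultiplied input: when the pixels are premultiplied, the colour components already carry the per-pixel alpha, so the fixed global alpha must scale both colour and alpha; otherwise only the alpha channel is scaled. A small illustrative sketch of that distinction (types and helper are hypothetical, not the driver's):

#include <stdint.h>

struct argb8888 {
        uint8_t a, r, g, b;
};

/* Scale an 8-bit component by an 8-bit alpha, rounding to nearest. */
static uint8_t scale8(uint8_t v, uint8_t alpha)
{
        return (uint8_t)((v * alpha + 127) / 255);
}

static struct argb8888 apply_global_alpha(struct argb8888 px,
                                          uint8_t global_alpha,
                                          int premultiplied)
{
        px.a = scale8(px.a, global_alpha);
        if (premultiplied) {
                /* Premultiplied input: the colour components already
                 * include the per-pixel alpha, so they must be scaled
                 * by the global alpha as well to stay consistent. */
                px.r = scale8(px.r, global_alpha);
                px.g = scale8(px.g, global_alpha);
                px.b = scale8(px.b, global_alpha);
        }
        return px;
}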
