
Searched refs:workloads (Results 1 – 25 of 116) sorted by relevance

Page 1 of 5

/linux-6.12.1/drivers/crypto/intel/qat/Kconfig
  24: for accelerating crypto and compression workloads.
  35: for accelerating crypto and compression workloads.
  46: for accelerating crypto and compression workloads.
  57: for accelerating crypto and compression workloads.
  68: for accelerating crypto and compression workloads.
  81: Virtual Function for accelerating crypto and compression workloads.
  93: Virtual Function for accelerating crypto and compression workloads.
  105: Virtual Function for accelerating crypto and compression workloads.
/linux-6.12.1/tools/perf/tests/shell/lib/perf_metric_validation.py
  13: self.workloads = [wl] # multiple workloads possible
  33: .format(self.metric, self.collectedValue, self.workloads,
  47: self.workloads = [x for x in workload.split(",") if x]
  202: … [TestError([m], self.workloads[self.wlidx], negmetric[m], 0) for m in negmetric.keys()])
  277: …self.errlist.append(TestError([m['Name'] for m in rule['Metrics']], self.workloads[self.wlidx], [],
  280: …self.errlist.append(TestError([m['Name'] for m in rule['Metrics']], self.workloads[self.wlidx], [v…
  332: self.errlist.extend([TestError([name], self.workloads[self.wlidx], val,
  344: allres = [{"Workload": self.workloads[i], "Results": self.allresults[i]}
  345: for i in range(0, len(self.workloads))]
  423: workload = self.workloads[self.wlidx]
  [all …]
/linux-6.12.1/Documentation/mm/damon/index.rst
  16: of the size of target workloads).
  21: their workloads can write personalized applications for better understanding
  22: and optimizations of their workloads and systems.
/linux-6.12.1/drivers/accel/qaic/Kconfig
  15: designed to accelerate Deep Learning inference workloads.
  18: for users to submit workloads to the devices.
/linux-6.12.1/drivers/accel/habanalabs/Kconfig
  18: designed to accelerate Deep Learning inference and training workloads.
  21: the user to submit workloads to the devices.
/linux-6.12.1/Documentation/driver-api/dma-buf.rst
  292: randomly hangs workloads until the timeout kicks in. Workloads, which from
  305: workloads. This also means no implicit fencing for shared buffers in these
  327: faults on GPUs are limited to pure compute workloads.
  343: - Compute workloads can always be preempted, even when a page fault is pending
  346: - DMA fence workloads and workloads which need page fault handling have
  349: reservations for DMA fence workloads.
  352: hardware resources for DMA fence workloads when they are in-flight. This must
  357: all workloads must be flushed from the GPU when switching between jobs
  361: made visible anywhere in the system, all compute workloads must be preempted
  372: Note that workloads that run on independent hardware like copy engines or other
  [all …]
/linux-6.12.1/Documentation/accel/qaic/aic100.rst
  13: inference workloads. They are AI accelerators.
  16: (x8). An individual SoC on a card can have up to 16 NSPs for running workloads.
  20: performance. AIC100 cards are multi-user capable and able to execute workloads
  82: the processors that run the workloads on AIC100. Each NSP is a Qualcomm Hexagon
  85: one workload, AIC100 is limited to 16 concurrent workloads. Workload
  93: in and out of workloads. AIC100 has one of these. The DMA Bridge has 16
  103: This DDR is used to store workloads, data for the workloads, and is used by the
  114: for generic compute workloads.
  160: ready to process workloads.
  210: | | | | managing workloads. |
  [all …]
/linux-6.12.1/Documentation/timers/no_hz.rst
  26: workloads, you will normally -not- want this option.
  39: right approach, for example, in heavy workloads with lots of tasks
  42: hundreds of microseconds). For these types of workloads, scheduling
  56: are running light workloads, you should therefore read the following
  118: computationally intensive short-iteration workloads: If any CPU is
  228: aggressive real-time workloads, which have the option of disabling
  230: some workloads will no doubt want to use adaptive ticks to
  232: options for these workloads:
  252: workloads, which have few such transitions. Careful benchmarking
  253: will be required to determine whether or not other workloads
/linux-6.12.1/drivers/cpuidle/Kconfig
  33: Some workloads benefit from using it and it generally should be safe
  45: Some virtualized workloads benefit from using it.
/linux-6.12.1/Documentation/admin-guide/workload-tracing.rst
  34: to evaluate safety considerations. We use strace tool to trace workloads.
  67: We used strace to trace the perf, stress-ng, paxtest workloads to illustrate
  69: be applied to trace other workloads.
  101: paxtest workloads to show how to analyze a workload and identify Linux
  102: subsystems used by these workloads. Let's start with an overview of these
  103: three workloads to get a better understanding of what they do and how to
  173: by three workloads we have chose for this analysis.
  312: Tracing workloads
  315: Now that we understand the workloads, let's start tracing them.
  595: information on the resources in use by workloads using strace.
/linux-6.12.1/drivers/crypto/cavium/nitrox/Kconfig
  18: for accelerating crypto workloads.
/linux-6.12.1/drivers/infiniband/hw/mana/Kconfig
  8: for workloads (e.g. DPDK, MPI etc) that uses RDMA verbs to directly
/linux-6.12.1/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
  81: Driver defaults are meant to fit a wide variety of workloads, but if further
  89: is tuned for general workloads. The user can customize the interrupt rate
  90: control for specific workloads, via ethtool, adjusting the number of
/linux-6.12.1/tools/perf/tests/builtin-test.c
  148: static struct test_workload *workloads[] = { variable
  510: for (i = 0; i < ARRAY_SIZE(workloads); i++) { in run_workload()
  511: twl = workloads[i]; in run_workload()
/linux-6.12.1/drivers/gpu/drm/i915/gvt/scheduler.c
  1330: kmem_cache_destroy(s->workloads); in intel_vgpu_clean_submission()
  1422: s->workloads = kmem_cache_create_usercopy("gvt-g_vgpu_workload", in intel_vgpu_setup_submission()
  1429: if (!s->workloads) { in intel_vgpu_setup_submission()
  1538: kmem_cache_free(s->workloads, workload); in intel_vgpu_destroy_workload()
  1547: workload = kmem_cache_zalloc(s->workloads, GFP_KERNEL); in alloc_workload()
  1721: kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
  1735: kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
  1746: kmem_cache_free(s->workloads, workload); in intel_vgpu_create_workload()
/linux-6.12.1/Documentation/admin-guide/pm/intel_uncore_frequency_scaling.rst
  23: Users may have some latency sensitive workloads where they do not want any
  24: change to uncore frequency. Also, users may have workloads which require
  123: latency sensitive workloads further tuning can be done by SW to
/linux-6.12.1/Documentation/accounting/psi.rst
  10: When CPU, memory or IO devices are contended, workloads experience
  19: such resource crunches and the time impact it has on complex workloads
  23: scarcity aids users in sizing workloads to hardware--or provisioning
/linux-6.12.1/security/Kconfig.hardening
  177: sees a 1% slowdown, other systems and workloads may vary and you
  217: your workloads.
  238: workloads have measured as high as 7%.
  256: synthetic workloads have measured as high as 8%.
  276: workloads. Image size growth depends on architecture, and should
/linux-6.12.1/Documentation/scheduler/sched-design-CFS.rst
  104: "server" (i.e., good batching) workloads. It defaults to a setting suitable
  105: for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
  108: base_slice_ns will have little to no impact on the workloads.
  116: than the previous vanilla scheduler: both types of workloads are isolated much
/linux-6.12.1/Documentation/gpu/drm-vm-bind-async.rst
  103: exec functions. For long-running workloads, such pipelining of a bind
  109: operations for long-running workloads will not allow for pipelining
  110: anyway since long-running workloads don't allow for dma-fences as
  121: deeply pipelined behind other VM_BIND operations and workloads
/linux-6.12.1/Documentation/tools/rtla/common_timerlat_options.rst
  40: Set timerlat to run without a workload, and then dispatches user-space workloads
/linux-6.12.1/Documentation/filesystems/ext4/orphan.rst
  18: global single linked list is a scalability bottleneck for workloads that result
/linux-6.12.1/drivers/cpufreq/Kconfig.x86
  191: the CPUs' workloads are. CPU-bound workloads will be more sensitive
  193: workloads will be less sensitive -- they will not necessarily perform
/linux-6.12.1/Documentation/driver-api/md/raid5-cache.rst
  58: completely avoid the overhead, so it's very helpful for some workloads. A
  74: mode depending on the workloads. It's recommended to use a cache disk with at
/linux-6.12.1/kernel/configs/hardening.config
  5: # no) performance impact on most workloads, and have a reasonable level
