
.. SPDX-License-Identifier: GPL-2.0
The CONFIG_RCU_TORTURE_TEST config option creates an rcutorture kernel
module that can be loaded to run a torture test.  The test periodically
outputs status messages via printk(), which may be examined via the
dmesg command.  The test is started when the module is loaded, and
stops when the module is unloaded.  Module parameters are prefixed by
"rcutorture." in Documentation/admin-guide/kernel-parameters.txt.
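For example, the rcutorture stat_interval module parameter would appear
on the kernel boot command line as follows (the value shown here is
illustrative)::

    rcutorture.stat_interval=15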
The statistics output is as follows::

    rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
    rcu-torture: rtc:           (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
    rcu-torture: Reader Pipe:  727860534 34213 0 0 0 0 0 0 0 0 0
    rcu-torture: Reader Batch:  727877838 17003 0 0 0 0 0 0 0 0 0
    rcu-torture: Free-Block Circulation:  155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
    rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
The command "dmesg | grep torture:" will extract this information on
most systems.  On more esoteric configurations, it may be necessary to
use other commands to access the output of the printk()s used by the
RCU torture test.  The printk()s use KERN_ALERT, so they should
be evident.  ;-)
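That is, on most systems::

    dmesg | grep torture:

Where dmesg supports the -w (follow) flag, ``dmesg -w | grep torture:``
can be used to watch the test's output as it runs.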
51 * "tfle": If non-zero, indicates that the "torture freelist"
54 that RCU is working when it is not. :-/
60 to be non-zero, but it is bad for it to be a large fraction of
65 * "rtmbe": A non-zero value indicates that rcutorture believes that
69 * "rtbe": A non-zero value indicates that one of the rcu_barrier()
72 * "rtbke": rcutorture was unable to create the real-time kthreads
77 to the real-time priority level of 1. This value should be zero.
85 value should be non-zero.
87 * "nt": The number of times rcutorture ran RCU read-side code from
88 within a timer handler. This value should be non-zero only
92 If any entries past the first two are non-zero, RCU is broken.
96 incremented once per grace period subsequently -- and is freed
97 after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.
101 it yourself. ;-)
105 than in terms of grace periods. The legal number of non-zero
110 * "Free-Block Circulation": Shows the number of torture structures
Different implementations of RCU can provide implementation-specific
additional information.  For example, Tree SRCU provides the following
additional line::

    srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)

This line shows the per-CPU counter state, in this case for Tree SRCU
using a dynamically allocated srcu_struct (hence "srcud-" rather than
"srcu-").  The numbers in parentheses are the values of the "old" and
"current" counters for the corresponding CPU, and the final "T" entry
contains the totals of the counters.
The output can then be manually inspected for the error flag of "!!!".
The "rmmod" command forces a "SUCCESS", "FAILURE", or "RCU_HOTPLUG"
indication to be printk()ed.  The first two are self-explanatory, while
the last indicates that while there were no RCU failures, CPU-hotplug
problems were detected.
When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters.  In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.
Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh script
is available for mainline testing for x86, arm64, and powerpc.  By
default, it will run the series of tests specified by the CFLIST file,
each test running in its own guest OS.
On larger systems, rcutorture testing can be accelerated by passing
the --cpus argument to kvm.sh.  For example, on a 64-CPU system,
"--cpus 43" would use up to 43 CPUs to run tests concurrently, which
as of v5.4 would complete all the scenarios in two batches, reducing
the time to complete from about eight hours to about one hour (not
counting the time to build the sixteen kernels).  The "--dryrun sched"
argument will not run tests, but rather tell you how the tests would
be scheduled into batches.  This can be useful when working out how
many CPUs to specify in the --cpus argument.
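For example, the following sketch shows the batching for a 43-CPU
budget without running any tests::

    kvm.sh --cpus 43 --dryrun sched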
Not all changes require that all scenarios be run.  For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows: "--configs 'SRCU-N SRCU-P'".
Large systems can run multiple copies of the full set of scenarios,
for example, a system with 448 hardware threads can run five instances
of the full set concurrently::

    kvm.sh --cpus 448 --configs '5*CFLIST'
Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario::

    kvm.sh --cpus 448 --configs '56*TREE04'
Or 28 concurrent instances of each of two eight-CPU scenarios::

    kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'
Of course, each concurrent instance consumes memory, which can be
limited using the --memory argument, which defaults to 512M.  Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.
Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, ``--kconfig 'CONFIG_RCU_EQS_DEBUG=y'``.
In addition, there are the --gdb, --kasan, and --kcsan parameters.
Note that --gdb limits you to one scenario per kvm.sh run and requires
that you have another window open from which to run ``gdb`` as
instructed by kvm.sh.
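For example, a debugging run might combine these parameters as in this
sketch (the scenario choice is illustrative)::

    kvm.sh --cpus 64 --configs TREE03 --kconfig 'CONFIG_RCU_EQS_DEBUG=y' --kasan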
Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters.  For example, to test a change to RCU's
CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning.
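A full command line might look like this (the scenario is illustrative,
and the intentional stall will be reported as a failure)::

    kvm.sh --configs TREE03 --bootargs 'rcutorture.stall_cpu=30'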
As noted earlier, small values of --memory may in turn
require disabling rcutorture's callback-flooding tests::

    kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
        --bootargs 'rcutorture.fwd_progress=0'
Sometimes all that is needed is a full set of kernel builds, which is
what the --buildonly parameter does.
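For example (the CPU count is illustrative)::

    kvm.sh --cpus 64 --buildonly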
The --duration parameter can override the default run time of 30 minutes.
For example, ``--duration 2d`` would run for two days, ``--duration 3h``
would run for three hours, ``--duration 5m`` would run for five minutes,
and ``--duration 45s`` would run for 45 seconds.  This last can be useful
for tracking down rare boot-time failures.
Finally, the --trust-make parameter allows each kernel build to reuse
what it can from the previous kernel build.  Please note that without
the --trust-make parameter, your tags files may be demolished.
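Putting several of these parameters together, a quick iterative run
might look like this sketch (all values are illustrative)::

    kvm.sh --cpus 64 --duration 10m --configs TREE03 --trust-make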
If a run contains failures, the number of buildtime and runtime
failures is listed at the end of the kvm.sh output, which is also
written to a file.  The build products and console output of each run
are kept in that run's results directory.  Any given directory can be
supplied to kvm-find-errors.sh in order to have it cycle you through
summaries of errors and full error logs.  For example::
    tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
        tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04").  If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for
example, "TREE04.2", "TREE04.3", and so on.
The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, this file contains the commit that
was tested, along with any uncommitted changes in diff format.
The most frequently used files in each per-scenario-run directory are
.config (the Kconfig options used for that scenario), Make.out (the
build output), and console.log (the console output, which can be
examined once the kernel has booted).
As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system::
    SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
    SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
    SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
    SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
    TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
    TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
    TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
    TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
    TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
    TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
    TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
    TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
    CPU count limited from 16 to 12
    TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
    TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
    TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
    CPU count limited from 16 to 12
    TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011
Suppose that you are chasing down a rare boot-time failure.  Although
you could use kvm.sh, doing so will rebuild the kernel on each run.  If
you need (say) 1,000 runs to have confidence that you have fixed the
bug, these pointless rebuilds can become extremely annoying.

This is why kvm-again.sh exists.
Suppose that a previous kvm.sh run left its output in this directory::

    tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

Then this run can be re-run without rebuilding as follows::

    kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28
A few of the original run's kvm.sh parameters may be overridden,
perhaps most notably --duration and --bootargs.  For example::

    kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28 \
        --duration 45s
would re-run the previous test, but for only 45 seconds, thus
facilitating tracking down the aforementioned rare boot-time failure.
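Similarly, boot arguments can be added to the re-run, for example (the
parameter shown here is illustrative)::

    kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28 \
        --bootargs 'rcutorture.fwd_progress=0'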
If you have several systems at your disposal, you could of course script
(say) 5 instances of kvm.sh to run on your 5 systems, but this will very
likely unnecessarily rebuild kernels.  In addition, manually distributing
the desired rcutorture scenarios across the available systems can be
painstaking and error-prone.

And this is why the kvm-remote.sh script exists.
For example, the following would distribute five instances of the full
set of scenarios across six 64-CPU systems::

    kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
        --cpus 64 --duration 8h --configs "5*CFLIST"
Any argument that kvm.sh will accept can be passed to kvm-remote.sh,
but the list of systems must come first.
The kvm.sh ``--dryrun scenarios`` argument is useful for working out
how many scenarios may be run in one batch across a group of systems.
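For example, this sketch would show how the scenarios from the
distributed run above would be batched, without running anything::

    kvm.sh --cpus 64 --dryrun scenarios --configs '5*CFLIST'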
You can also re-run a previous remote run in a manner similar to
kvm.sh::

    kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
        tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28-remote \
        --duration 24h
In this case, most of the kvm-again.sh parameters may be supplied
following the pathname of the old run-results directory.