Lines Matching +full:test +full:- +full:cpu
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Any bug related to task migration is likely to be timing-dependent; perform
 * a large number of migrations to reduce the odds of a false negative.
 */
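The worker loop shown further down iterates NR_TASK_MIGRATIONS times; a sketch of the define this comment plausibly precedes, assuming a value in line with the upstream test (the exact constant may differ across trees):

#define NR_TASK_MIGRATIONS 100000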
static int next_cpu(int cpu)
{
	/*
	 * Advance to the next CPU, skipping those that weren't in the original
	 * affinity set.  Note, if this task is pinned to a small set of
	 * discontiguous CPUs, this loop will burn a lot of cycles and the test
	 * will take longer than normal to complete.
	 */
	do {
		cpu++;
		if (cpu > max_cpu) {
			cpu = min_cpu;
			TEST_ASSERT(CPU_ISSET(cpu, &possible_mask),
				    "Min CPU = %d must always be usable", cpu);
			break;
		}
	} while (!CPU_ISSET(cpu, &possible_mask));

	return cpu;
}
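next_cpu() walks possible_mask, a global cpu_set_t the test fills in before the migration worker starts. A minimal sketch of how such a mask is typically captured, assuming glibc's CPU_* affinity API (the helper name here is hypothetical, not the test's own):

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <string.h>

static cpu_set_t possible_mask;

static void snapshot_affinity(void)
{
	/* Record this task's initial affinity; next_cpu() never leaves it. */
	int r = sched_getaffinity(0, sizeof(possible_mask), &possible_mask);

	TEST_ASSERT(!r, "sched_getaffinity failed, errno = %d (%s)",
		    errno, strerror(errno));
}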
/* In migration_worker(): */
	int r, i, cpu;
	...
	for (i = 0, cpu = min_cpu; i < NR_TASK_MIGRATIONS; i++, cpu = next_cpu(cpu)) {
		CPU_SET(cpu, &allowed_mask);
		/*
		 * ... between the rseq and sched CPU ID reads.  An odd sequence count
		 * indicates a migration is in-progress, while a completely different
		 * count indicates a migration occurred since the count was last read.
		 */
		...
		/*
		 * Ensure the odd count is visible while getcpu() isn't stable, i.e.
		 * while changing affinity is in-progress.
		 */
		...
		CPU_CLR(cpu, &allowed_mask);
		/*
		 * Wait 1-10us before proceeding to the next iteration, and more
		 * specifically before bumping the sequence count again.  A delay is
		 * needed on three fronts:
		 *
		 *  1. ... an exit to userspace is necessary to give the test a
		 *     chance to check the rseq CPU ID (see #2).
		 *
		 *  2. To let ioctl(KVM_RUN) make its way back to the test before the
		 *     next round of migration.  The test's check on the rseq CPU ID
		 *     must wait for migration to complete in order to avoid a false
		 *     positive.
		 *
		 *  3. To ensure the read-side makes efficient forward progress, e.g.
		 *     if getcpu() involves a syscall.  Stalling the read-side means
		 *     the test spends more time waiting for getcpu() to stabilize and
		 *     less time trying to hit the timing-dependent bug.
		 *
		 * Because any bug in this area is likely to be timing-dependent, run
		 * with a range of delays ... as a best effort to avoid tuning the
		 * test to the point where it can hit only the original bug.
		 *
		 * The original bug reproduces ... on x86-64, but starts to require
		 * more iterations to reproduce as the delay grows.  Cap the delay at
		 * 10us to keep test runtime reasonable while minimizing the coverage
		 * lost at longer delays.
		 *
		 * The lower bound is likely below 1us; e.g. failures occur on x86-64
		 * with nanosleep(0), but at that point syscall overhead likely
		 * dominates the delay.
		 */
		...
	}
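The even/odd protocol those comments describe is a classic sequence-count handshake between the migration worker and the reader in main(). A minimal self-contained sketch of the idea in C11 atomics; the names and barriers here are illustrative, not the selftest's own helpers:

#include <stdatomic.h>

static atomic_int seq_cnt;

/* Writer: make the count odd while the task's CPU placement is unstable. */
static void writer_migrate_once(void)
{
	atomic_fetch_add_explicit(&seq_cnt, 1, memory_order_release); /* odd */
	/* ... change CPU affinity here; getcpu() vs. rseq may disagree ... */
	atomic_fetch_add_explicit(&seq_cnt, 1, memory_order_release); /* even */
}

/* Reader: retry until both reads land inside one even, unchanged window. */
static void reader_check_once(int (*read_a)(void), int (*read_b)(void),
			      int *a, int *b)
{
	int snap;

	do {
		/* Clearing bit 0 forces a retry if the live count is odd. */
		snap = atomic_load_explicit(&seq_cnt, memory_order_acquire) & ~1;
		*a = read_a();
		*b = read_b();
	} while (snap != atomic_load_explicit(&seq_cnt, memory_order_acquire));
}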
/* In calc_min_max_cpu(): */
	/*
	 * CPU_SET doesn't provide a FOR_EACH helper; get the min/max CPU that
	 * this task is affined to so as to reduce the time spent querying
	 * unusable CPUs.
	 */
	min_cpu = -1;
	max_cpu = -1;
	...
		if (min_cpu == -1)
			min_cpu = i;
	...
		    "Only one usable CPU, task migration not possible");
199 printf("usage: %s [-h] [-u]\n", name); in help()
200 printf(" -u: Don't sanity check the number of successful KVM_RUNs\n"); in help()
/* In main(): */
	u32 cpu, rseq_cpu;
	...
	while ((opt = getopt(argc, argv, "hu")) != -1) {
	...
	/* ... while concurrently migrating the process by setting its CPU affinity. */
	...
		/*
		 * Verify rseq's CPU matches sched's CPU.  Ensure migration doesn't
		 * occur between getcpu() and reading rseq's CPU ID, by rereading both
		 * if the sequence count changes or if the count is odd (migration
		 * in-progress).
		 */
		...
		/* Drop bit 0 to force a mismatch if the count is odd, i.e. if a migration is in-progress. */
		...
		r = sys_getcpu(&cpu, NULL);
		...
		TEST_ASSERT(rseq_cpu == cpu,
			    "rseq CPU = %d, sched CPU = %d", rseq_cpu, cpu);
	/*
	 * Sanity check that the test was able to enter the guest a reasonable
	 * number of times; ... a conservative ratio on x86-64, which can do
	 * _more_ KVM_RUNs than migrations given the delays in the migration task.
	 * Another reason the ratio can end up small is that, on systems with high
	 * CPU wakeup latency, it happens quite often that the scheduler is not
	 * able to wake up the target CPU before the vCPU thread is scheduled to
	 * another CPU.
	 */
	TEST_ASSERT(...,
		    "...\n"
		    "  Try disabling deep sleep states to reduce CPU wakeup latency,\n"
		    "  ...\n"
		    "  or run with -u to disable this sanity check.", i);