Lines Matching full:p0
199    +- p0       +- p3                               +- p4
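The match at line 199 looks like a fragment of the ASCII diagram of the per-rq uclamp buckets, with tasks p0, p3 and p4 attached to their buckets. Assuming that is what it is, the sketch below shows how a clamp value selects a bucket; it is modelled on the kernel's uclamp_bucket_id() helper and assumes the default CONFIG_UCLAMP_BUCKETS_COUNT of 5, so treat it as an illustration rather than the kernel code itself:

	/* Userspace sketch: map a clamp value to a uclamp bucket. */
	#include <stdio.h>

	#define SCHED_CAPACITY_SCALE	1024
	#define UCLAMP_BUCKETS		5	/* assumed CONFIG_UCLAMP_BUCKETS_COUNT */
	#define UCLAMP_BUCKET_DELTA	((SCHED_CAPACITY_SCALE + UCLAMP_BUCKETS / 2) / UCLAMP_BUCKETS)

	static unsigned int uclamp_bucket_id(unsigned int clamp_value)
	{
		unsigned int id = clamp_value / UCLAMP_BUCKET_DELTA;

		return id < UCLAMP_BUCKETS - 1 ? id : UCLAMP_BUCKETS - 1;
	}

	int main(void)
	{
		/* 300 and 900 appear as p0's clamps further down the listing. */
		unsigned int values[] = { 0, 300, 512, 900, 1024 };

		for (unsigned int i = 0; i < sizeof(values) / sizeof(values[0]); i++)
			printf("clamp %4u -> bucket %u\n", values[i], uclamp_bucket_id(values[i]));
		return 0;
	}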
279 p0->uclamp[UCLAMP_MIN] = 300
280 p0->uclamp[UCLAMP_MAX] = 900
285 then assuming both p0 and p1 are enqueued to the same rq, both UCLAMP_MIN
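Lines 279-285 above come from the example of combining per-task clamps at the runqueue level: while tasks are enqueued, the rq takes the maximum of their requests. The sketch below reproduces that max-aggregation rule; p0's 300/900 are taken from the listing, while p1's 500/500 are invented purely for illustration:

	/* Userspace sketch of rq-level max aggregation (not kernel code). */
	#include <stdio.h>

	enum uclamp_id { UCLAMP_MIN, UCLAMP_MAX, UCLAMP_CNT };

	struct task { int uclamp[UCLAMP_CNT]; };

	static int max(int a, int b) { return a > b ? a : b; }

	int main(void)
	{
		/* p0's clamps as listed above; p1's values are assumed. */
		struct task p0 = { .uclamp = { [UCLAMP_MIN] = 300, [UCLAMP_MAX] = 900 } };
		struct task p1 = { .uclamp = { [UCLAMP_MIN] = 500, [UCLAMP_MAX] = 500 } };

		/* The rq clamp is the max over all currently enqueued tasks. */
		printf("rq->uclamp[UCLAMP_MIN] = %d\n",
		       max(p0.uclamp[UCLAMP_MIN], p1.uclamp[UCLAMP_MIN]));	/* 500 */
		printf("rq->uclamp[UCLAMP_MAX] = %d\n",
		       max(p0.uclamp[UCLAMP_MAX], p1.uclamp[UCLAMP_MAX]));	/* 900 */
		return 0;
	}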
404 p0->uclamp[UCLAMP_MIN] = // system default;
405 p0->uclamp[UCLAMP_MAX] = // system default;
416 when p0 and p1 are attached to cgroup0, the values become:
420 p0->uclamp[UCLAMP_MIN] = cgroup0->cpu.uclamp.min = 20% * 1024;
421 p0->uclamp[UCLAMP_MAX] = cgroup0->cpu.uclamp.max = 60% * 1024;
426 when p0 and p1 are attached to cgroup1, these instead become:
430 p0->uclamp[UCLAMP_MIN] = cgroup1->cpu.uclamp.min = 60% * 1024;
431 p0->uclamp[UCLAMP_MAX] = cgroup1->cpu.uclamp.max = 100% * 1024;
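Lines 404-431 are from the cgroup interface example: cpu.uclamp.min and cpu.uclamp.max act as a range that the task's own request is clamped into. The sketch below reproduces the listed numbers under the assumption that the system default request means no boost (0) and no cap (1024); it illustrates the restriction rule rather than the kernel's uclamp_tg_restrict() itself:

	/* Userspace sketch: a cgroup's cpu.uclamp.{min,max} restrict the
	 * task's own uclamp request into the cgroup's range. */
	#include <stdio.h>

	#define SCHED_CAPACITY_SCALE 1024

	static int clamp_val(int v, int lo, int hi)
	{
		return v < lo ? lo : (v > hi ? hi : v);
	}

	static void show(const char *cg, int cg_min_pct, int cg_max_pct)
	{
		/* p0 uses the defaults: no boost requested, no cap requested. */
		int req_min = 0;
		int req_max = SCHED_CAPACITY_SCALE;

		int cg_min = cg_min_pct * SCHED_CAPACITY_SCALE / 100;
		int cg_max = cg_max_pct * SCHED_CAPACITY_SCALE / 100;

		/* Effective value = task request clamped into the cgroup range. */
		printf("%s: UCLAMP_MIN = %d, UCLAMP_MAX = %d\n", cg,
		       clamp_val(req_min, cg_min, cg_max),
		       clamp_val(req_max, cg_min, cg_max));
	}

	int main(void)
	{
		show("cgroup0 (20%..60%)",  20, 60);	/* -> 204, 614 */
		show("cgroup1 (60%..100%)", 60, 100);	/* -> 614, 1024 */
		return 0;
	}

Note how a task that asked for nothing still inherits a 20% floor from cgroup0, and how moving it to cgroup1 raises both the floor and the ceiling, which is what the 420/421 versus 430/431 matches show.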
602 If task p0 is capped to run at 512:
606 p0->uclamp[UCLAMP_MAX] = 512
621 Assuming both p0 and p1 have UCLAMP_MIN = 0, then the frequency selection for
624 If p1 is a small task but p0 is a CPU intensive task, then due to the fact that
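Lines 602-624 belong to the discussion of capping p0 to 512 while it shares the CPU with another task: because the rq-level UCLAMP_MAX is the maximum over the runnable tasks, an uncapped companion can effectively lift the cap. The sketch below puts rough numbers on that, ignoring schedutil's headroom margin; p0's utilization of 700, p1's utilization of 50 and p1's uncapped 1024 are all assumptions made for illustration:

	/* Sketch: a per-task cap can be defeated by max aggregation when an
	 * uncapped task is runnable on the same CPU (illustration only). */
	#include <stdio.h>

	static int max(int a, int b) { return a > b ? a : b; }
	static int min(int a, int b) { return a < b ? a : b; }

	int main(void)
	{
		int p0_util = 700, p0_uclamp_max = 512;		/* CPU intensive, capped to 512 */
		int p1_util = 50,  p1_uclamp_max = 1024;	/* small task, uncapped (assumed) */

		/* Alone, p0's frequency request is clamped by its own UCLAMP_MAX. */
		int alone = min(p0_util, p0_uclamp_max);

		/* Together, the rq-level UCLAMP_MAX is the max of the two, so the
		 * large (but capped) p0 utilization drives the frequency request. */
		int rq_uclamp_max = max(p0_uclamp_max, p1_uclamp_max);
		int together = min(p0_util + p1_util, rq_uclamp_max);

		printf("p0 alone:      request ~ %d\n", alone);		/* 512 */
		printf("p0 + small p1: request ~ %d\n", together);	/* 750: cap lifted */
		return 0;
	}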
645 p0->util_avg = 300
646 p0->uclamp[UCLAMP_MAX] = 0
670 p0->util_avg = 300 + small_error
683 p0->util_avg = 1024
693 aggregation rule. But since the capped p0 task was running and throttled
698 p0->util_avg = 1024
704 Hence leading to a frequency spike, since if p0 wasn't throttled we would get:
708 p0->util_avg = 300
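Lines 645-708 come from the caveat that a hard cap (UCLAMP_MAX = 0 here) can leave the capped task with no idle time, so its util_avg inflates toward 1024 instead of settling near its real demand of about 300; when an uncapped task wakes up and max aggregation lifts the rq cap, the inflated signal produces the frequency spike mentioned at line 704. The simplified PELT-like model below illustrates the inflation; the 32-period half-life, the 30% real demand and p1's utilization of 200 are assumptions, not values taken from the kernel (beyond 300 matching p0's uncapped util above). Compile with -lm for pow() and fmin():

	/* Simplified PELT-like model (not the kernel implementation): util_avg
	 * tracks the running fraction of each period, decayed with an assumed
	 * 32-period half-life and scaled to 1024. */
	#include <stdio.h>
	#include <math.h>

	static double track(double running_fraction, int periods)
	{
		double y = pow(0.5, 1.0 / 32.0);	/* per-period decay factor */
		double util = 0.0;

		for (int i = 0; i < periods; i++)
			util = util * y + running_fraction * 1024.0 * (1.0 - y);
		return util;
	}

	int main(void)
	{
		/* Uncapped, p0 needs ~30% of the CPU: util_avg settles near 300. */
		double p0_normal = track(0.30, 1000);

		/* Capped to UCLAMP_MAX = 0 it never catches up, runs 100% of the
		 * time and util_avg inflates toward 1024. */
		double p0_capped = track(1.00, 1000);

		double p1_util = 200.0;	/* assumed util of the waking task p1 */

		printf("p0 util_avg, uncapped: ~%.0f\n", p0_normal);	/* ~307 */
		printf("p0 util_avg, capped:   ~%.0f\n", p0_capped);	/* ~1024 */
		printf("rq util after p1 wakes: %.0f (capped case) vs %.0f\n",
		       fmin(p0_capped + p1_util, 1024.0),	/* ~1024: the spike */
		       p0_normal + p1_util);			/* ~507 expected */
		return 0;
	}

The ~307 steady state in the uncapped case also matches the "300 + small_error" wording at line 670: with idle time available, PELT settles close to the task's true demand.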