Lines Matching refs:cgroup

11 conventions of cgroup v2.  It describes all userland-visible aspects
12 of cgroup including core and specific controller behaviors. All
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
20 1-2. What is cgroup?
70 5.9-1 Miscellaneous cgroup Interface Files
75 5-N-1. CPU controller root cgroup process behaviour
76 5-N-2. IO controller root cgroup process behaviour
100 "cgroup" stands for "control group" and is never capitalized. The
102 qualifier as in "cgroup controllers". When explicitly referring to
106 What is cgroup?
109 cgroup is a mechanism to organize processes hierarchically and
113 cgroup is largely composed of two parts - the core and controllers.
114 cgroup core is primarily responsible for hierarchically organizing
115 processes. A cgroup controller is usually responsible for
121 to one and only one cgroup. All threads of a process belong to the
122 same cgroup. On creation, all processes are put in the cgroup that
124 to another cgroup. Migration of a process doesn't affect already
128 disabled selectively on a cgroup. All controller behaviors are
129 hierarchical - if a controller is enabled on a cgroup, it affects all
131 sub-hierarchy of the cgroup. When a controller is enabled on a nested
132 cgroup, it always restricts the resource distribution further. The
143 Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
156 is no longer referenced in its current hierarchy. Because per-cgroup
173 automount the v1 cgroup filesystem and so hijack all controllers
178 cgroup v2 currently supports the following mount options.
181 Consider cgroup namespaces as delegation boundaries. This
188 Reduce the latencies of dynamic cgroup modifications such as
191 The static usage pattern of creating a cgroup, enabling
196 Only populate memory.events with data for the current cgroup,
214 Count HugeTLB memory usage towards the cgroup's overall
226 memory controller. It is only charged to a cgroup when it is
233 still has pages available (but the cgroup limit is hit and
239 will not be tracked by the memory controller (even if cgroup
244 local (inside cgroup proper) fork failures are counted. Without this
246 cgroup's subtree.
256 Initially, only the root cgroup exists to which all processes belong.
257 A child cgroup can be created by creating a sub-directory::
261 A given cgroup may have multiple child cgroups forming a tree
262 structure. Each cgroup has a read-writable interface file
263 "cgroup.procs". When read, it lists the PIDs of all processes which
264 belong to the cgroup one-per-line. The PIDs are not ordered and the
266 another cgroup and then back or the PID got recycled while reading.
268 A process can be migrated into a cgroup by writing its PID to the
269 target cgroup's "cgroup.procs" file. Only one process can be migrated
275 cgroup that the forking process belongs to at the time of the
276 operation. After exit, a process stays associated with the cgroup
278 zombie process does not appear in "cgroup.procs" and thus can't be
279 moved to another cgroup.
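The creation, migration and removal steps described above can be sketched as a shell session. This is a sketch, not a verbatim transcript: it assumes root privileges, cgroup v2 mounted at /sys/fs/cgroup, and "test-group" is an illustrative name:

```shell
# Assumes cgroup v2 mounted at /sys/fs/cgroup and root privileges (a sketch).
cd /sys/fs/cgroup
mkdir test-group                    # create a child cgroup
echo $$ > test-group/cgroup.procs   # migrate the current shell into it
cat test-group/cgroup.procs         # the shell's PID is now listed
echo $$ > cgroup.procs              # move the shell back out before cleanup
rmdir test-group                    # removable: no children, no live processes
```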
281 A cgroup which doesn't have any children or live processes can be
282 destroyed by removing the directory. Note that a cgroup which doesn't
288 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
289 cgroup is in use in the system, this file may contain multiple lines,
290 one for each hierarchy. The entry for cgroup v2 is always in the
293 # cat /proc/842/cgroup
295 0::/test-cgroup/test-cgroup-nested
297 If the process becomes a zombie and the cgroup it was associated with
300 # cat /proc/842/cgroup
302 0::/test-cgroup/test-cgroup-nested (deleted)
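The cgroup v2 entry is the one whose hierarchy ID is 0 with an empty controller field, so it can be picked out mechanically. A minimal sketch operating on a sample "/proc/$PID/cgroup" (the sample contents are illustrative):

```shell
# Extract the cgroup v2 path (the "0::" entry) from sample /proc/$PID/cgroup
# contents; a v1 line has a non-empty controller list in the second field.
cat <<'EOF' | awk -F: '$1 == "0" && $2 == "" { print $3 }'
12:cpuset:/legacy-group
0::/test-cgroup/test-cgroup-nested
EOF
# prints /test-cgroup/test-cgroup-nested
```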
308 cgroup v2 supports thread granularity for a subset of controllers to
311 process belong to the same cgroup, which also serves as the resource
319 Marking a cgroup threaded makes it join the resource domain of its
320 parent as a threaded cgroup. The parent may be another threaded
321 cgroup whose resource domain is further up in the hierarchy. The root
331 As the threaded domain cgroup hosts all the domain resource
335 root cgroup is not subject to the no internal process constraint, it can
338 The current operation mode or type of the cgroup is shown in the
339 "cgroup.type" file which indicates whether the cgroup is a normal
341 or a threaded cgroup.
343 On creation, a cgroup is always a domain cgroup and can be made
344 threaded by writing "threaded" to the "cgroup.type" file. The
347 # echo threaded > cgroup.type
349 Once threaded, the cgroup can't be made a domain again. To enable the
352 - As the cgroup will join the parent's resource domain. The parent
353 must either be a valid (threaded) domain or a threaded cgroup.
359 Topology-wise, a cgroup can be in an invalid state. Please consider
366 threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
370 A domain cgroup is turned into a threaded domain when one of its child
371 cgroups becomes threaded or threaded controllers are enabled in the
372 "cgroup.subtree_control" file while there are processes in the cgroup.
376 When read, "cgroup.threads" contains the list of the thread IDs of all
377 threads in the cgroup. Except that the operations are per-thread
378 instead of per-process, "cgroup.threads" has the same format and
379 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
380 written to in any cgroup, as it can only move threads inside the same
384 The threaded domain cgroup serves as the resource domain for the whole
386 all the processes are considered to be in the threaded domain cgroup.
387 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
389 However, "cgroup.procs" can be written to from anywhere in the subtree
390 to migrate all threads of the matching process to the cgroup.
395 threads in the cgroup and its descendants. All consumptions which
396 aren't tied to a specific thread belong to the threaded domain cgroup.
400 between threads in a non-leaf cgroup and its child cgroups. Each
404 in a threaded cgroup::
414 Each non-root cgroup has a "cgroup.events" file which contains
415 "populated" field indicating whether the cgroup's sub-hierarchy has
417 the cgroup and its descendants; otherwise, 1. poll and [id]notify
423 in each cgroup::
430 file modified events will be generated on the "cgroup.events" files of
440 Each cgroup has a "cgroup.controllers" file which lists all
441 controllers available for the cgroup to enable::
443 # cat cgroup.controllers
447 disabled by writing to the "cgroup.subtree_control" file::
449 # echo "+cpu +memory -io" > cgroup.subtree_control
451 Only controllers which are listed in "cgroup.controllers" can be
456 Enabling a controller in a cgroup indicates that the distribution of
470 the cgroup's children, enabling it creates the controller's interface
476 "cgroup." are owned by the parent rather than the cgroup itself.
482 Resources are distributed top-down and a cgroup can further distribute
484 parent. This means that all non-root "cgroup.subtree_control" files
486 "cgroup.subtree_control" file. A controller can be enabled only if
497 controllers enabled in their "cgroup.subtree_control" files.
504 The root cgroup is exempt from this restriction. Root contains
507 controllers. How resource consumption in the root cgroup is governed
513 enabled controller in the cgroup's "cgroup.subtree_control". This is
515 populated cgroup. To control resource distribution of a cgroup, the
516 cgroup must create children and transfer all its processes to the
517 children before enabling controllers in its "cgroup.subtree_control"
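The workaround for the no internal process constraint can be sketched as follows. It assumes root, cgroup v2 at /sys/fs/cgroup, and an already populated cgroup named "mygroup" (the name and controllers are illustrative); note that the snapshot-then-move loop is racy against concurrent forks:

```shell
# Sketch: a populated cgroup must push its processes into a leaf child
# before controllers can be enabled in its cgroup.subtree_control.
cd /sys/fs/cgroup/mygroup
mkdir leaf
for pid in $(cat cgroup.procs); do
    echo "$pid" > leaf/cgroup.procs      # transfer every process to the leaf
done
echo "+memory +io" > cgroup.subtree_control  # now allowed: no internal processes
```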
527 A cgroup can be delegated in two ways. First, to a less privileged
528 user by granting write access of the directory and its "cgroup.procs",
529 "cgroup.threads" and "cgroup.subtree_control" files to the user.
531 cgroup namespace on namespace creation.
539 files on a namespace root from inside the cgroup namespace, except for
540 those files listed in "/sys/kernel/cgroup/delegate" (including
541 "cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).
551 Currently, cgroup doesn't impose any restrictions on the number of
564 to migrate a target process into a cgroup by writing its PID to the
565 "cgroup.procs" file.
567 - The writer must have write access to the "cgroup.procs" file.
569 - The writer must have write access to the "cgroup.procs" file of the
581 ~ cgroup ~ \ C01
586 currently in C10 into "C00/cgroup.procs". U0 has write access to the
587 file; however, the common ancestor of the source cgroup C10 and the
588 destination cgroup C00 is above the points of delegation and U0 would
589 not have write access to its "cgroup.procs" files and thus the write
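The containment check hinges on the nearest common ancestor of the source and destination cgroups: the writer needs write access to its "cgroup.procs" file. A sketch of the ancestor lookup itself (a hypothetical helper, not kernel code):

```shell
# Walk the source path upward until it is a prefix directory of the
# destination path; that prefix is the nearest common ancestor.
common_ancestor() {
    a=$1
    while :; do
        case "$2/" in
            "$a"/*) echo "$a"; return ;;   # $a is an ancestor (or equal)
        esac
        a=${a%/*}                          # go one level up
        [ -n "$a" ] || { echo /; return; }
    done
}
common_ancestor /delegated/C10 /delegated/C00   # prints /delegated
```

In the scenario above, the result lies above U0's delegation point, which is why the migration is rejected.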
612 should be assigned to a cgroup according to the system's logical and
621 Interface files for a cgroup and its children cgroups occupy the same
625 All cgroup core interface files are prefixed with "cgroup." and each
633 cgroup doesn't do anything to prevent name collisions and it's the
640 cgroup controllers implement several resource distribution schemes
682 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
690 A cgroup is protected up to the configured amount of the resource
711 A cgroup is exclusively allocated a certain amount of a finite
775 - The root cgroup should be exempt from resource control and thus
813 # cat cgroup-example-interface-file
819 # echo 125 > cgroup-example-interface-file
823 # echo "default 125" > cgroup-example-interface-file
827 # echo "8:16 170" > cgroup-example-interface-file
831 # echo "8:0 default" > cgroup-example-interface-file
832 # cat cgroup-example-interface-file
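A consumer of such a nested keyed file resolves a per-key value with fallback to "default". A sketch ("lookup" is a hypothetical helper; the sample lines mirror the example above):

```shell
# Resolve the effective value for one key in a nested keyed file,
# falling back to the "default" line when the key is absent or reset.
lookup() {  # $1 = key (e.g. a major:minor pair), stdin = file contents
    awk -v k="$1" '
        $1 == "default"            { def = $2 }
        $1 == k && $2 != "default" { val = $2 }
        END { print (val != "" ? val : def) }'
}
printf 'default 125\n8:16 170\n' | lookup 8:16   # prints 170
printf 'default 125\n8:16 170\n' | lookup 8:0    # prints 125
```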
845 All cgroup core files are prefixed with "cgroup."
847 cgroup.type
851 When read, it indicates the current type of the cgroup, which
854 - "domain" : A normal valid domain cgroup.
856 - "domain threaded" : A threaded domain cgroup which is
859 - "domain invalid" : A cgroup which is in an invalid state.
861 be allowed to become a threaded cgroup.
863 - "threaded" : A threaded cgroup which is a member of a
866 A cgroup can be turned into a threaded cgroup by writing
869 cgroup.procs
874 the cgroup one-per-line. The PIDs are not ordered and the
876 to another cgroup and then back or the PID got recycled while
880 the PID to the cgroup. The writer should match all of the
883 - It must have write access to the "cgroup.procs" file.
885 - It must have write access to the "cgroup.procs" file of the
891 In a threaded cgroup, reading this file fails with EOPNOTSUPP
893 supported and moves every thread of the process to the cgroup.
895 cgroup.threads
900 the cgroup one-per-line. The TIDs are not ordered and the
902 another cgroup and then back or the TID got recycled while
906 TID to the cgroup. The writer should match all of the
909 - It must have write access to the "cgroup.threads" file.
911 - The cgroup that the thread is currently in must be in the
912 same resource domain as the destination cgroup.
914 - It must have write access to the "cgroup.procs" file of the
920 cgroup.controllers
925 the cgroup. The controllers are not ordered.
927 cgroup.subtree_control
933 cgroup to its children.
942 cgroup.events
949 1 if the cgroup or its descendants contains any live
952 1 if the cgroup is frozen; otherwise, 0.
954 cgroup.max.descendants
959 an attempt to create a new cgroup in the hierarchy will fail.
961 cgroup.max.depth
964 Maximum allowed descent depth below the current cgroup.
966 an attempt to create a new child cgroup will fail.
968 cgroup.stat
975 Total number of dying descendant cgroups. A cgroup becomes
976 dying after being deleted by a user. The cgroup will remain
980 A process can't enter a dying cgroup under any circumstances,
981 a dying cgroup can't revive.
983 A dying cgroup can consume system resources not exceeding the
984 limits that were active at the moment of cgroup deletion.
987 Total number of live cgroup subsystems (e.g. memory
988 cgroup) at and beneath the current cgroup.
991 Total number of dying cgroup subsystems (e.g. memory
992 cgroup) at and beneath the current cgroup.
994 cgroup.freeze
998 Writing "1" to the file causes freezing of the cgroup and all
1000 be stopped and will not run until the cgroup is explicitly
1001 unfrozen. Freezing of the cgroup may take some time; when this action
1002 is completed, the "frozen" value in the cgroup.events control file
1006 A cgroup can be frozen either by its own settings, or by settings
1008 cgroup will remain frozen.
1010 Processes in the frozen cgroup can be killed by a fatal signal.
1011 They also can enter and leave a frozen cgroup: either by an explicit
1012 move by a user, or if freezing of the cgroup races with fork().
1013 If a process is moved to a frozen cgroup, it stops. If a process is
1014 moved out of a frozen cgroup, it becomes running.
1016 Frozen status of a cgroup doesn't affect any cgroup tree operations:
1017 it's possible to delete a frozen (and empty) cgroup, as well as
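Because freezing is asynchronous, a user should poll "cgroup.events" (or use inotify) for the "frozen" transition. A sketch, assuming root and an existing cgroup named "jobs" (illustrative):

```shell
# Sketch: freeze a cgroup and wait until "frozen 1" appears in cgroup.events.
cd /sys/fs/cgroup/jobs
echo 1 > cgroup.freeze
until grep -q 'frozen 1' cgroup.events; do
    sleep 0.1              # freezing may take some time; poll until complete
done
echo 0 > cgroup.freeze     # thaw the cgroup again
```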
1020 cgroup.kill
1024 Writing "1" to the file causes the cgroup and all descendant cgroups to
1025 be killed. This means that all processes located in the affected cgroup
1028 Killing a cgroup tree will deal with concurrent forks appropriately and
1031 In a threaded cgroup, writing this file fails with EOPNOTSUPP as
1035 cgroup.pressure
1039 Writing "0" to the file will disable the cgroup PSI accounting.
1040 Writing "1" to the file will re-enable the cgroup PSI accounting.
1043 accounting in a cgroup does not affect PSI accounting in descendants
1047 each cgroup separately and aggregates it at each level of the hierarchy.
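The per-cgroup pressure files themselves (e.g. "memory.pressure") use the flat PSI line format with "some" and "full" records. A sketch extracting the 10-second average from a sample (the sample values are made up):

```shell
# Pull avg10 from the "some" line of a sample PSI file.
cat <<'EOF' | awk '$1 == "some" { sub("avg10=", "", $2); print $2 }'
some avg10=1.23 avg60=0.50 avg300=0.10 total=123456
full avg10=0.00 avg60=0.00 avg300=0.00 total=0
EOF
# prints 1.23
```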
1081 when all RT processes are in the root cgroup. This limitation does
1085 to be moved to the root cgroup before the cpu controller can be enabled
1119 If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
1188 This is the cgroup analog of the per-task SCHED_IDLE sched policy.
1190 cgroup SCHED_IDLE. The threads inside the cgroup will retain their
1191 own relative priorities, but the cgroup itself will be treated as
1206 cgroup are tracked so that the total memory consumption can be
1230 The total amount of memory currently being used by the cgroup
1237 Hard memory protection. If the memory usage of a cgroup
1238 is within its effective min boundary, the cgroup's memory
1248 (child cgroup or cgroups are requiring more protected memory
1249 than parent will allow), then each child cgroup will get
1256 If a memory cgroup is not populated with processes,
1264 cgroup is within its effective low boundary, the cgroup's
1274 (child cgroup or cgroups are requiring more protected memory
1275 than parent will allow), then each child cgroup will get
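A toy calculation of the overcommit case (an illustration of the documented proportional scaling, not kernel code; the numbers are made up): the parent's effective protection is 100M, but the children's protected usage sums to 200M, so each child's share is scaled down proportionally.

```shell
awk 'BEGIN {
    parent = 100            # parent effective memory.low, in MB
    c1 = 80; c2 = 120       # each child usage below its own memory.low
    sum = c1 + c2           # 200 > parent: protection is overcommitted
    printf "child1=%dM child2=%dM\n", parent * c1 / sum, parent * c2 / sum
}'
# prints child1=40M child2=60M
```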
1286 Memory usage throttle limit. If a cgroup's usage goes
1287 over the high boundary, the processes of the cgroup are
1293 monitors the limited cgroup to alleviate heavy reclaim
1301 memory usage of a cgroup. If a cgroup's memory usage reaches
1303 the cgroup. Under certain circumstances, the usage may go
1317 target cgroup.
1324 the target cgroup. If fewer bytes are reclaimed than the
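Putting the memory knobs together, a typical configuration sequence looks like the following sketch (assumes root and an existing cgroup named "workload"; all values are illustrative):

```shell
cd /sys/fs/cgroup/workload
echo 512M > memory.high      # throttle limit: reclaim pressure above this
echo 1G   > memory.max       # hard limit: OOM handling beyond this
echo 64M  > memory.reclaim   # best-effort: try to reclaim 64M proactively
cat memory.current           # usage after the reclaim attempt
```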
1329 memory cgroup. Therefore socket memory balancing triggered by
1348 The max memory usage recorded for the cgroup and its descendants since
1349 either the creation of the cgroup or the most recent reset for that FD.
1359 Determines whether the cgroup should be treated as
1361 all tasks belonging to the cgroup or to its descendants
1362 (if the memory cgroup is not a leaf cgroup) are killed
1369 If the OOM killer is invoked in a cgroup, it's not going
1370 to kill any tasks outside of this cgroup, regardless
1381 hierarchy. For the local events at the cgroup level see
1385 The number of times the cgroup is reclaimed due to
1391 The number of times processes of the cgroup are
1394 cgroup whose memory usage is capped by the high limit
1399 The number of times the cgroup's memory usage was
1401 fails to bring it down, the cgroup goes to OOM state.
1404 The number of times the cgroup's memory usage was
1412 The number of processes belonging to this cgroup
1420 to the cgroup i.e. not hierarchical. The file modified event
1426 This breaks down the cgroup's memory footprint into different
1661 This breaks down the cgroup's memory footprint into different
1687 The total amount of swap currently being used by the cgroup
1694 Swap usage throttle limit. If a cgroup's swap usage exceeds
1698 This limit marks a point of no return for the cgroup. It is NOT
1701 prohibits swapping past a set amount, but lets the cgroup
1709 The max swap usage recorded for the cgroup and its descendants since
1710 the creation of the cgroup or the most recent reset for that FD.
1720 Swap usage hard limit. If a cgroup's swap usage reaches this
1721 limit, anonymous memory of the cgroup will not be swapped out.
1730 The number of times the cgroup's swap usage was over
1734 The number of times the cgroup's swap usage was about
1759 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1797 throttles the offending cgroup, a management agent has ample
1801 Determining whether a cgroup has enough memory is not trivial as
1815 A memory area is charged to the cgroup which instantiated it and stays
1816 charged to the cgroup until the area is released. Migrating a process
1817 to a different cgroup doesn't move the memory usages that it
1818 instantiated while in the previous cgroup to the new cgroup.
1821 To which cgroup the area will be charged is non-deterministic; however,
1822 over time, the memory area is likely to end up in a cgroup which has
1825 If a cgroup sweeps a considerable amount of memory which is expected
1866 cgroup.
1921 cgroup.
1958 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1969 the cgroup can use in relation to its siblings.
2041 per-cgroup dirty memory states are examined and the more restrictive
2044 cgroup writeback requires explicit support from the underlying
2045 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
2047 attributed to the root cgroup.
2050 which affects how cgroup ownership is tracked. Memory is tracked per
2052 inode is assigned to a cgroup and all IO requests to write dirty pages
2053 from the inode are attributed to that cgroup.
2055 As cgroup ownership for memory is tracked per page, there can be pages
2059 cgroup becomes the majority over a certain period of time, switches
2060 the ownership of the inode to that cgroup.
2063 mostly dirtied by a single cgroup even when the main writing cgroup
2073 The sysctl knobs which affect writeback behavior are applied to cgroup
2077 These ratios apply the same to cgroup writeback with the
2082 For cgroup writeback, this is calculated into ratio against
2090 This is a cgroup v2 controller for IO workload protection. You provide a group
2169 A single attribute controls the behavior of the I/O priority cgroup policy,
2231 The process number controller is used to allow a cgroup to stop any
2235 The number of tasks in a cgroup can be exhausted in ways which other
2256 The number of processes currently in the cgroup and its
2262 The maximum value that the number of processes in the cgroup and its
2271 The number of times the cgroup's total number of processes hit the pids.max
2276 to the cgroup i.e. not hierarchical. The file modified event
2279 Organisational operations are not blocked by cgroup policies, so it is
2282 processes to the cgroup such that pids.current is larger than
2283 pids.max. However, it is not possible to violate a cgroup PID policy
2285 of a new process would cause a cgroup policy to be violated.
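A short configuration sketch for the pids controller (assumes root and an existing cgroup named "batch"; the limit is illustrative):

```shell
cd /sys/fs/cgroup/batch
echo 64 > pids.max       # forks that would exceed 64 tasks fail
cat pids.current         # processes currently in the cgroup and descendants
grep max pids.events     # how many times the limit was hit
```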
2293 specified in the cpuset interface files in a task's current cgroup.
2311 cgroup. The actual list of CPUs to be granted, however, is
2321 An empty value indicates that the cgroup is using the same
2322 setting as the nearest cgroup ancestor with a non-empty
2333 cgroup by its parent. These CPUs are allowed to be used by
2334 tasks within the current cgroup.
2337 all the CPUs from the parent cgroup that can be available to
2338 be used by this cgroup. Otherwise, it should be a subset of
2350 this cgroup. The actual list of memory nodes granted, however,
2360 An empty value indicates that the cgroup is using the same
2361 setting as the nearest cgroup ancestor with a non-empty
2369 tasks within the cgroup to be migrated to the designated nodes if
2384 this cgroup by its parent. These memory nodes are allowed to
2385 be used by tasks within the current cgroup.
2388 parent cgroup that will be available to be used by this cgroup.
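The requested/effective split above can be observed directly. A sketch, assuming root and an existing cpuset-enabled cgroup named "pinned" (the CPU and node lists are illustrative):

```shell
cd /sys/fs/cgroup/pinned
echo 0-3 > cpuset.cpus          # requested CPUs
echo 0   > cpuset.mems          # requested memory nodes
cat cpuset.cpus.effective       # CPUs actually granted by the parent
cat cpuset.mems.effective       # memory nodes actually granted
```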
2401 unless the cgroup becomes a valid partition root. See the
2405 When the cgroup becomes a partition root, the actual exclusive
2415 of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup
2420 For a parent cgroup, any one of its exclusive CPUs can only
2426 The root cgroup is a partition root and all its available CPUs
2437 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2443 A read-only multiple values file which exists only in the root cgroup.
2451 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2470 partition is one whose parent cgroup is also a valid partition
2471 root. A remote partition is one whose parent cgroup is not a
2476 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2482 the root cgroup cannot be a partition root.
2484 The root cgroup is always a partition root and its state cannot
2487 When set to "root", the current cgroup is the root of a new
2522 1) The parent cgroup is a valid partition root.
2534 moved to a cgroup with empty "cpuset.cpus.effective".
2569 on top of cgroup BPF. To control access to device files, a user may
2617 It exists for all cgroups except the root.
2635 cgroups except the root.
2639 The default value is "max". It exists for all cgroups except the root.
2649 are local to the cgroup i.e. not hierarchical. The file modified event
2654 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2660 The Miscellaneous cgroup provides the resource limiting and tracking
2662 cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
2667 in the kernel/cgroup/misc.c file. The provider of the resource must set its
2680 A read-only flat-keyed file shown only in the root cgroup. It shows
2690 the current usage of the resources in the cgroup and its children.::
2698 historical maximum usage of the resources in the cgroup and its
2707 maximum usage of the resources in the cgroup and its children.::
2731 The number of times the cgroup's resource usage was
2736 cgroup i.e. not hierarchical. The file modified event generated on
2742 A miscellaneous scalar resource is charged to the cgroup in which it is used
2743 first, and stays charged to that cgroup until that resource is freed. Migrating
2744 a process to a different cgroup does not move the charge to the destination
2745 cgroup where the process has moved.
2755 always be filtered by cgroup v2 path. The controller can still be
2766 CPU controller root cgroup process behaviour
2769 When distributing CPU cycles in the root cgroup each thread in this
2770 cgroup is treated as if it was hosted in a separate child cgroup of the
2771 root cgroup. This child cgroup's weight is dependent on its thread's nice
2779 IO controller root cgroup process behaviour
2782 Root cgroup processes are hosted in an implicit leaf child node.
2784 account as if it was a normal child cgroup of the root cgroup with a
2794 cgroup namespace provides a mechanism to virtualize the view of the
2795 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2796 flag can be used with clone(2) and unshare(2) to create a new cgroup
2797 namespace. The process running inside the cgroup namespace will have
2798 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2799 cgroupns root is the cgroup of the process at the time of creation of
2800 the cgroup namespace.
2802 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2803 complete path of the cgroup of a process. In a container setup where
2805 "/proc/$PID/cgroup" file may leak potential system level information
2808 # cat /proc/self/cgroup
2812 and undesirable to expose to the isolated processes. cgroup namespace
2814 creating a cgroup namespace, one would see::
2816 # ls -l /proc/self/ns/cgroup
2817 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2818 # cat /proc/self/cgroup
2823 # ls -l /proc/self/ns/cgroup
2824 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2825 # cat /proc/self/cgroup
2828 When some thread from a multi-threaded process unshares its cgroup
2833 A cgroup namespace is alive as long as there are processes inside or
2834 mounts pinning it. When the last usage goes away, the cgroup
2842 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2844 /batchjobs/container_id1 cgroup calls unshare, cgroup
2846 init_cgroup_ns, this is the real root ('/') cgroup.
2848 The cgroupns root cgroup does not change even if the namespace creator
2849 process later moves to a different cgroup::
2851 # ~/unshare -c # unshare cgroupns in some cgroup
2852 # cat /proc/self/cgroup
2855 # echo 0 > sub_cgrp_1/cgroup.procs
2856 # cat /proc/self/cgroup
2859 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2861 Processes running inside the cgroup namespace will be able to see
2862 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2867 # echo 7353 > sub_cgrp_1/cgroup.procs
2868 # cat /proc/7353/cgroup
2871 From the initial cgroup namespace, the real cgroup path will be
2874 $ cat /proc/7353/cgroup
2877 From a sibling cgroup namespace (that is, a namespace rooted at a
2878 different cgroup), the cgroup path relative to its own cgroup
2879 namespace root will be shown. For instance, if PID 7353's cgroup
2882 # cat /proc/7353/cgroup
2886 it's relative to the cgroup namespace root of the caller.
2892 Processes inside a cgroup namespace can move into and out of the
2898 # cat /proc/7353/cgroup
2900 # echo 7353 > batchjobs/container_id2/cgroup.procs
2901 # cat /proc/7353/cgroup
2904 Note that this kind of setup is not encouraged. A task inside cgroup
2907 setns(2) to another cgroup namespace is allowed when:
2910 (b) the process has CAP_SYS_ADMIN against the target cgroup
2913 No implicit cgroup changes happen with attaching to another cgroup
2915 process under the target cgroup namespace root.
2921 Namespace specific cgroup hierarchy can be mounted by a process
2922 running inside a non-init cgroup namespace::
2926 This will mount the unified cgroup hierarchy with cgroupns root as the
2930 The virtualization of /proc/self/cgroup file combined with restricting
2931 the view of cgroup hierarchy by namespace-private cgroupfs mount
2932 provides a properly isolated cgroup view inside the container.
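The namespace creation and private mount can be combined in one step. A sketch, assuming root, util-linux unshare(2) wrappers, and "/mnt" as an illustrative mount point:

```shell
# Enter a new cgroup + mount namespace, then mount a namespace-private
# cgroup2 hierarchy rooted at the cgroupns root.
unshare --cgroup --mount sh -c '
    mount -t cgroup2 none /mnt
    cat /proc/self/cgroup        # shows 0::/ relative to the cgroupns root
'
```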
2939 where interacting with cgroup is necessary. cgroup core and
2946 A filesystem can support cgroup writeback by updating
2952 associates the bio with the inode's owner cgroup and the
2963 With writeback bio's annotated, cgroup support can be enabled per
2965 selective disabling of cgroup writeback support which is helpful when
2969 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2985 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2987 - "cgroup.clone_children" is removed.
2989 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
2990 "cgroup.stat" files at the root instead.
2999 cgroup v1 allowed an arbitrary number of hierarchies and each
3021 It greatly complicated cgroup core implementation but more importantly
3022 the support for multiple hierarchies restricted how cgroup could be
3026 that a thread's cgroup membership couldn't be described in finite
3052 cgroup v1 allowed threads of a process to belong to different cgroups.
3063 cgroup v1 had an ambiguously defined delegation model which got abused
3067 effectively raised cgroup to the status of a syscall-like API exposed
3070 First of all, cgroup has a fundamentally inadequate interface to be
3072 extract the path on the target hierarchy from /proc/self/cgroup,
3079 cgroup controllers implemented a number of knobs which would never be
3081 system-management pseudo filesystem. cgroup ended up with interface
3085 effectively abusing cgroup as a shortcut to implementing public APIs
3096 cgroup v1 allowed threads to be in any cgroups which created an
3097 interesting problem where threads belonging to a parent cgroup and its
3103 mapped nice levels to cgroup weights. This worked for some cases but
3112 cgroup to host the threads. The hidden leaf had its own copies of all
3128 made cgroup as a whole highly inconsistent.
3130 This clearly is a problem which needs to be addressed from cgroup core
3137 cgroup v1 grew without oversight and developed a large number of
3138 idiosyncrasies and inconsistencies. One issue on the cgroup core side
3139 was how an empty cgroup was notified - a userland helper binary was
3148 cgroup. Some controllers exposed a large amount of inconsistent
3151 There also was no consistency across controllers. When a new cgroup
3159 cgroup v2 establishes common conventions where appropriate and updates
3184 reserve. A cgroup enjoys reclaim protection when it's within its
3227 cgroup design was that global or parental pressure would always be
3236 that cgroup controllers should account and limit specific physical