Lines Matching +full:cpu +full:- +full:read
19 documentation at tools/memory-model/. Nevertheless, even this memory
37 Note also that it is possible that a barrier may be a no-op for an
48 - Device operations.
49 - Guarantees.
53 - Varieties of memory barrier.
54 - What may not be assumed about memory barriers?
55 - Address-dependency barriers (historical).
56 - Control dependencies.
57 - SMP barrier pairing.
58 - Examples of memory barrier sequences.
59 - Read memory barriers vs load speculation.
60 - Multicopy atomicity.
64 - Compiler barrier.
65 - CPU memory barriers.
69 - Lock acquisition functions.
70 - Interrupt disabling functions.
71 - Sleep and wake-up functions.
72 - Miscellaneous functions.
74 (*) Inter-CPU acquiring barrier effects.
76 - Acquires vs memory accesses.
80 - Interprocessor interaction.
81 - Atomic operations.
82 - Accessing devices.
83 - Interrupts.
89 (*) The effects of the cpu cache.
91 - Cache coherency vs DMA.
92 - Cache coherency vs MMIO.
96 - And then there's the Alpha.
97 - Virtual Machine Guests.
101 - Circular buffers.
115 +-------+ : +--------+ : +-------+
118 | CPU 1 |<----->| Memory |<----->| CPU 2 |
121 +-------+ : +--------+ : +-------+
126 | : +--------+ : |
129 +---------->| Device |<----------+
132 : +--------+ :
135 Each CPU executes a program that generates memory access operations. In the
136 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
143 CPU are perceived by the rest of the system as the operations cross the
144 interface between the CPU and rest of the system (the dotted lines).
149 CPU 1 CPU 2
158 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
159 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
160 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
161 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
162 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
163 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
164 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
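The abstract two-CPU example above can be sketched in ordinary userspace C. This is a hedged illustration only: C11 relaxed atomics and pthreads stand in for the abstract CPUs, and the helper `run()` is my own scaffolding, not kernel API. No claim is made about which of the listed combinations any single execution will observe; the assertions only bound the possible values.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Initial state from the example: { A == 1, B == 2 } */
atomic_int A = 1, B = 2;
int x, y;

static void *cpu1(void *unused)
{
	/* CPU 1:  A = 3;  B = 4;  -- no ordering is implied */
	atomic_store_explicit(&A, 3, memory_order_relaxed);
	atomic_store_explicit(&B, 4, memory_order_relaxed);
	return NULL;
}

static void *cpu2(void *unused)
{
	/* CPU 2:  x = B;  y = A;  -- may perceive the stores in any order */
	x = atomic_load_explicit(&B, memory_order_relaxed);
	y = atomic_load_explicit(&A, memory_order_relaxed);
	return NULL;
}

void run(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_create(&t2, NULL, cpu2, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
}
```

Whatever interleaving occurs, x can only be 2 or 4 and y can only be 1 or 3, matching the combination list above.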
176 Furthermore, the stores committed by a CPU to the memory system may not be
177 perceived by the loads made by another CPU in the same order as the stores were
183 CPU 1 CPU 2
190 on the address retrieved from P by CPU 2. At the end of the sequence, any of
197 Note that CPU 2 will never try to load C into D because the CPU will load P

202 -----------------
208 port register (D). To read internal register 5, the following code might then
220 the address _after_ attempting to read the register.
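The address-port/data-port pattern can be sketched as follows. The device, its register layout, and the accessor names here are all hypothetical; a C11 seq_cst fence stands in for the mb() (or ordered I/O accessor) that a real driver would use between the two accesses.

```c
#include <stdatomic.h>

/*
 * Hypothetical simulated card: writing the address port selects which
 * internal register a subsequent data-port read returns.
 */
struct card {
	int regs[16];
	int addr_port;		/* address port (A) */
};

static void card_set_address(struct card *c, int reg)
{
	c->addr_port = reg;
	/*
	 * In a real driver, mb() here prevents the CPU from issuing
	 * the data-port read before the address-port write; a C11
	 * fence stands in for it in this sketch.
	 */
	atomic_thread_fence(memory_order_seq_cst);
}

static int card_read_data(const struct card *c)
{
	return c->regs[c->addr_port];	/* data port (D) */
}
```

Reading internal register 5 then becomes: card_set_address(&c, 5); x = card_read_data(&c);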
224 ----------
226 There are some minimal guarantees that may be expected of a CPU:
228 (*) On any given CPU, dependent memory accesses will be issued in order, with
233 the CPU will issue the following memory operations:
238 emits a memory-barrier instruction, so that a DEC Alpha CPU will
246 (*) Overlapping loads and stores within a particular CPU will appear to be
247 ordered within that CPU. This means that for:
251 the CPU will only issue the following sequence of memory operations:
259 the CPU will only issue:
309 And there are anti-guarantees:
312 generate code to modify these using non-atomic read-modify-write
319 non-atomic read-modify-write sequences can cause an update to one
326 "char", two-byte alignment for "short", four-byte alignment for
327 "int", and either four-byte or eight-byte alignment for "long",
328 on 32-bit and 64-bit systems, respectively. Note that these
330 using older pre-C11 compilers (for example, gcc 4.6). The portion
336 of adjacent bit-fields all having nonzero width
342 NOTE 2: A bit-field and an adjacent non-bit-field member
344 to two bit-fields, if one is declared inside a nested
346 are separated by a zero-length bit-field declaration,
347 or if they are separated by a non-bit-field member
349 bit-fields in the same structure if all members declared
350 between them are also bit-fields, no matter what the
351 sizes of those intervening bit-fields happen to be.
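The memory-location rules for bit-fields can be made concrete with a small illustrative struct (the field names are invented for this sketch):

```c
/*
 * flag1 and flag2 occupy the same memory location, so a non-atomic
 * read-modify-write of one may clobber a concurrent update of the
 * other.  The non-bit-field member "sep" ends that memory location,
 * so flag3 lives in a separate location and may be updated
 * concurrently with flag1 or flag2.
 */
struct flags {
	unsigned int flag1 : 1;	/* shares a location with flag2 */
	unsigned int flag2 : 1;
	char sep;		/* non-bit-field member: new location */
	unsigned int flag3 : 1;	/* separate memory location */
};
```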
359 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
361 CPU to restrict the order.
375 ---------------------------
389 A CPU can be viewed as committing a sequence of store operations to the
393 [!] Note that write barriers should normally be paired with read or
394 address-dependency barriers; see the "SMP barrier pairing" subsection.
397 (2) Address-dependency barriers (historical).
398 [!] This section is marked as HISTORICAL: it covers the long-obsolete
400 implicit in all marked accesses. For more up-to-date information,
404 An address-dependency barrier is a weaker form of read barrier. In the
407 the second load will be directed), an address-dependency barrier would
411 An address-dependency barrier is a partial ordering on interdependent
416 committing sequences of stores to the memory system that the CPU being
417 considered can then perceive. An address-dependency barrier issued by
418 the CPU under consideration guarantees that for any load preceding it,
419 if that load touches one of a sequence of stores from another CPU, then
422 the address-dependency barrier.
431 a full read barrier or better is required. See the "Control dependencies"
434 [!] Note that address-dependency barriers should normally be paired with
437 [!] Kernel release v5.9 removed kernel APIs for explicit address-
440 address-dependency barriers.
442 (3) Read (or load) memory barriers.
444 A read barrier is an address-dependency barrier plus a guarantee that all
449 A read barrier is a partial ordering on loads only; it is not required to
452 Read memory barriers imply address-dependency barriers, and so can
455 [!] Note that read barriers should normally be paired with write barriers;
468 General memory barriers imply both read and write memory barriers, and so
476 This acts as a one-way permeable barrier. It guarantees that all memory
491 This also acts as a one-way permeable barrier. It guarantees that all
502 -not- guaranteed to act as a full memory barrier. However, after an
513 RELEASE variants in addition to fully-ordered and relaxed (no barrier
519 between two CPUs or between a CPU and a device. If it can be guaranteed that
530 ----------------------------------------------
536 instruction; the barrier can be considered to draw a line in that CPU's
539 (*) There is no guarantee that issuing a memory barrier on one CPU will have
540 any direct effect on another CPU or any other hardware in the system. The
541 indirect effect will be the order in which the second CPU sees the effects
542 of the first CPU's accesses occur, but see the next point:
544 (*) There is no guarantee that a CPU will see the correct order of effects
545 from a second CPU's accesses, even _if_ the second CPU uses a memory
546 barrier, unless the first CPU _also_ uses a matching memory barrier (see
549 (*) There is no guarantee that some intervening piece of off-the-CPU
550 hardware[*] will not reorder the memory accesses. CPU cache coherency
554 [*] For information on bus mastering DMA and coherency please read:
556 Documentation/driver-api/pci/pci.rst
557 Documentation/core-api/dma-api-howto.rst
558 Documentation/core-api/dma-api.rst
561 ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
562 ----------------------------------------
563 [!] This section is marked as HISTORICAL: it covers the long-obsolete
565 in all marked accesses. For more up-to-date information, including
571 to this section are those working on DEC Alpha architecture-specific code
574 address-dependency barriers.
576 [!] While address dependencies are observed in both load-to-load and
577 load-to-store relations, address-dependency barriers are not necessary
578 for load-to-store situations.
580 The requirement of address-dependency barriers is a little subtle, and
584 CPU 1 CPU 2
593 [!] READ_ONCE_OLD() corresponds to READ_ONCE() of pre-4.15 kernel, which
594 doesn't imply an address-dependency barrier.
602 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
611 To deal with this, READ_ONCE() provides an implicit address-dependency barrier
614 CPU 1 CPU 2
621 <implicit address-dependency barrier>
630 even-numbered cache lines and the other bank processes odd-numbered cache
631 lines. The pointer P might be stored in an odd-numbered cache line, and the
632 variable B might be stored in an even-numbered cache line. Then, if the
633 even-numbered bank of the reading CPU's cache is extremely busy while the
634 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
638 An address-dependency barrier is not required to order dependent writes
642 But please carefully read the "CONTROL DEPENDENCIES" section and the
646 CPU 1 CPU 2
655 Therefore, no address-dependency barrier is required to order the read into
657 even without an implicit address-dependency barrier of modern READ_ONCE():
662 of dependency ordering is to -prevent- writes to the data structure, along
669 the CPU containing it. See the section on "Multicopy atomicity" for
673 The address-dependency barrier is very important to the RCU system,
681 --------------------
687 A load-load control dependency requires a full read memory barrier, not
688 simply an (implicit) address-dependency barrier to make it work correctly.
692 <implicit address-dependency barrier>
699 dependency, but rather a control dependency that the CPU may short-circuit
706 <read barrier>
710 However, stores are not speculated. This means that ordering -is- provided
711 for load-store control dependencies, as in the following example:
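A minimal sketch of such a load-store control dependency, using volatile-cast macros as userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():

```c
/* Userspace stand-ins for the kernel's marked accesses */
#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

int a, b;

void reader(void)
{
	int q = READ_ONCE(a);

	if (q) {
		/*
		 * Stores are not speculated, so the conditional branch
		 * orders this store after the load from 'a'.
		 */
		WRITE_ONCE(b, 1);
	}
}
```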
726 variable 'a' is always non-zero, it would be well within its rights
731 b = 1; /* BUG: Compiler and CPU can both reorder!!! */
756 /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
759 /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
764 'b', which means that the CPU is within its rights to reorder them:
779 In contrast, without explicit memory barriers, two-legged-if control
815 Given this transformation, the CPU is not required to respect the ordering
836 You must also be careful not to rely too much on boolean short-circuit
851 out-guess your code. More generally, although READ_ONCE() does force
855 In addition, control dependencies apply only to the then-clause and
856 else-clause of the if-statement in question. In particular, it does
857 not necessarily apply to code following the if-statement:
865 WRITE_ONCE(c, 1); /* BUG: No ordering against the read from 'a'. */
871 conditional-move instructions, as in this fanciful pseudo-assembly
881 A weakly ordered CPU would have no dependency of any sort between the load
884 In short, control dependencies apply only to the stores in the then-clause
885 and else-clause of the if-statement in question (including functions
886 invoked by those two clauses), not to code following that if-statement.
890 to the CPU containing it. See the section on "Multicopy atomicity"
897 However, they do -not- guarantee any other sort of ordering:
906 to carry out the stores. Please note that it is -not- sufficient
912 (*) Control dependencies require at least one run-time conditional
924 (*) Control dependencies apply only to the then-clause and else-clause
925 of the if-statement containing the control dependency, including
927 do -not- apply to code following the if-statement containing the
932 (*) Control dependencies do -not- provide multicopy atomicity. If you
940 -------------------
942 When dealing with CPU-CPU interactions, certain types of memory barrier should
949 with an address-dependency barrier, a control dependency, an acquire barrier,
950 a release barrier, a read barrier, or a general barrier. Similarly a
951 read barrier, control dependency, or an address-dependency barrier pairs
955 CPU 1 CPU 2
960 <read barrier>
965 CPU 1 CPU 2
970 <implicit address-dependency barrier>
975 CPU 1 CPU 2
986 Basically, the read barrier always has to be there, even though it can be of
990 match the loads after the read barrier or the address-dependency barrier, and
993 CPU 1 CPU 2
995 WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c);
997 <write barrier> \ <read barrier>
999 WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b);
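The write-barrier/read-barrier pairing above can be sketched as a userspace message-passing test. C11 release/acquire fences stand in for smp_wmb()/smp_rmb(), and the `run()` helper is my own scaffolding; the consumer is guaranteed to observe the payload once it sees the flag.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

int data;			/* plain payload */
atomic_int flag;		/* signalling variable */
int seen;			/* what the consumer observed */

static void *writer(void *unused)
{
	data = 42;					/* STORE data */
	atomic_thread_fence(memory_order_release);	/* ~ <write barrier> */
	atomic_store_explicit(&flag, 1, memory_order_relaxed);
	return NULL;
}

static void *reader(void *unused)
{
	while (!atomic_load_explicit(&flag, memory_order_relaxed))
		;					/* wait for the flag */
	atomic_thread_fence(memory_order_acquire);	/* ~ <read barrier> */
	seen = data;					/* guaranteed to be 42 */
	return NULL;
}

void run(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, reader, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
}
```

Remove either fence and the load of data may observe a stale value on a weakly ordered machine; the barriers must be paired.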
1003 ------------------------------------
1008 CPU 1
1022 +-------+ : :
1023 | | +------+
1024 | |------>| C=3 | } /\
1025 | | : +------+ }----- \ -----> Events perceptible to
1027 | | : +------+ }
1028 | CPU 1 | : | B=2 | }
1029 | | +------+ }
1030 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
1031 | | +------+ } requires all stores prior to the
1033 | | : +------+ } further stores may take place
1034 | |------>| D=4 | }
1035 | | +------+
1036 +-------+ : :
1039 | memory system by CPU 1
1043 Secondly, address-dependency barriers act as partial orderings on address-
1046 CPU 1 CPU 2
1056 Without intervention, CPU 2 may perceive the events on CPU 1 in some
1057 effectively random order, despite the write barrier issued by CPU 1:
1059 +-------+ : : : :
1060 | | +------+ +-------+ | Sequence of update
1061 | |------>| B=2 |----- --->| Y->8 | | of perception on
1062 | | : +------+ \ +-------+ | CPU 2
1063 | CPU 1 | : | A=1 | \ --->| C->&Y | V
1064 | | +------+ | +-------+
1066 | | +------+ | : :
1067 | | : | C=&B |--- | : : +-------+
1068 | | : +------+ \ | +-------+ | |
1069 | |------>| D=4 | ----------->| C->&B |------>| |
1070 | | +------+ | +-------+ | |
1071 +-------+ : : | : : | |
1073 | : : | CPU 2 |
1074 | +-------+ | |
1075 Apparently incorrect ---> | | B->7 |------>| |
1076 perception of B (!) | +-------+ | |
1078 | +-------+ | |
1079 The load of X holds ---> \ | X->9 |------>| |
1080 up the maintenance \ +-------+ | |
1081 of coherence of B ----->| B->2 | +-------+
1082 +-------+
1086 In the above example, CPU 2 perceives that B is 7, despite the load of *C
1089 If, however, an address-dependency barrier were to be placed between the load
1090 of C and the load of *C (ie: B) on CPU 2:
1092 CPU 1 CPU 2
1100 <address-dependency barrier>
1105 +-------+ : : : :
1106 | | +------+ +-------+
1107 | |------>| B=2 |----- --->| Y->8 |
1108 | | : +------+ \ +-------+
1109 | CPU 1 | : | A=1 | \ --->| C->&Y |
1110 | | +------+ | +-------+
1112 | | +------+ | : :
1113 | | : | C=&B |--- | : : +-------+
1114 | | : +------+ \ | +-------+ | |
1115 | |------>| D=4 | ----------->| C->&B |------>| |
1116 | | +------+ | +-------+ | |
1117 +-------+ : : | : : | |
1119 | : : | CPU 2 |
1120 | +-------+ | |
1121 | | X->9 |------>| |
1122 | +-------+ | |
1123 Makes sure all effects ---> \ aaaaaaaaaaaaaaaaa | |
1124 prior to the store of C \ +-------+ | |
1125 are perceptible to ----->| B->2 |------>| |
1126 subsequent loads +-------+ | |
1127 : : +-------+
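The pointer-publication pattern in the diagrams above can be sketched in C11. Here a release store plays the role of CPU 1's write barrier, and memory_order_consume stands in for the address-dependency ordering that the kernel's READ_ONCE() provides on the reading side (most compilers promote consume to acquire, which is strictly stronger).

```c
#include <stdatomic.h>
#include <stddef.h>

struct obj { int b; };

struct obj objB = { .b = 7 };		/* B == 7 initially, as above */
_Atomic(struct obj *) P;

/* CPU 1: initialise the payload, then publish the pointer */
void publish(void)
{
	objB.b = 2;
	/* the release store pairs with the dependency barrier on the
	 * reading side, just as <write barrier> pairs with
	 * <address-dependency barrier> in the diagram */
	atomic_store_explicit(&P, &objB, memory_order_release);
}

/* CPU 2: load the pointer, then load through it */
int read_through(void)
{
	struct obj *q = atomic_load_explicit(&P, memory_order_consume);

	return q ? q->b : -1;	/* dependency-ordered: must see 2 */
}
```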
1130 And thirdly, a read barrier acts as a partial order on loads. Consider the
1133 CPU 1 CPU 2
1142 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1143 some effectively random order, despite the write barrier issued by CPU 1:
1145 +-------+ : : : :
1146 | | +------+ +-------+
1147 | |------>| A=1 |------ --->| A->0 |
1148 | | +------+ \ +-------+
1149 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1150 | | +------+ | +-------+
1151 | |------>| B=2 |--- | : :
1152 | | +------+ \ | : : +-------+
1153 +-------+ : : \ | +-------+ | |
1154 ---------->| B->2 |------>| |
1155 | +-------+ | CPU 2 |
1156 | | A->0 |------>| |
1157 | +-------+ | |
1158 | : : +-------+
1160 \ +-------+
1161 ---->| A->1 |
1162 +-------+
1166 If, however, a read barrier were to be placed between the load of B and the
1167 load of A on CPU 2:
1169 CPU 1 CPU 2
1176 <read barrier>
1179 then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
1182 +-------+ : : : :
1183 | | +------+ +-------+
1184 | |------>| A=1 |------ --->| A->0 |
1185 | | +------+ \ +-------+
1186 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1187 | | +------+ | +-------+
1188 | |------>| B=2 |--- | : :
1189 | | +------+ \ | : : +-------+
1190 +-------+ : : \ | +-------+ | |
1191 ---------->| B->2 |------>| |
1192 | +-------+ | CPU 2 |
1195 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1196 barrier causes all effects \ +-------+ | |
1197 prior to the storage of B ---->| A->1 |------>| |
1198 to be perceptible to CPU 2 +-------+ | |
1199 : : +-------+
1203 contained a load of A either side of the read barrier:
1205 CPU 1 CPU 2
1213 <read barrier>
1219 +-------+ : : : :
1220 | | +------+ +-------+
1221 | |------>| A=1 |------ --->| A->0 |
1222 | | +------+ \ +-------+
1223 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1224 | | +------+ | +-------+
1225 | |------>| B=2 |--- | : :
1226 | | +------+ \ | : : +-------+
1227 +-------+ : : \ | +-------+ | |
1228 ---------->| B->2 |------>| |
1229 | +-------+ | CPU 2 |
1232 | +-------+ | |
1233 | | A->0 |------>| 1st |
1234 | +-------+ | |
1235 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1236 barrier causes all effects \ +-------+ | |
1237 prior to the storage of B ---->| A->1 |------>| 2nd |
1238 to be perceptible to CPU 2 +-------+ | |
1239 : : +-------+
1242 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1243 before the read barrier completes anyway:
1245 +-------+ : : : :
1246 | | +------+ +-------+
1247 | |------>| A=1 |------ --->| A->0 |
1248 | | +------+ \ +-------+
1249 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1250 | | +------+ | +-------+
1251 | |------>| B=2 |--- | : :
1252 | | +------+ \ | : : +-------+
1253 +-------+ : : \ | +-------+ | |
1254 ---------->| B->2 |------>| |
1255 | +-------+ | CPU 2 |
1258 \ +-------+ | |
1259 ---->| A->1 |------>| 1st |
1260 +-------+ | |
1262 +-------+ | |
1263 | A->1 |------>| 2nd |
1264 +-------+ | |
1265 : : +-------+
1273 READ MEMORY BARRIERS VS LOAD SPECULATION
1274 ----------------------------------------
1278 other loads, and so do the load in advance - even though they haven't actually
1280 actual load instruction to potentially complete immediately because the CPU
1283 It may turn out that the CPU didn't actually need the value - perhaps because a
1284 branch circumvented the load - in which case it can discard the value or just
1289 CPU 1 CPU 2
1298 : : +-------+
1299 +-------+ | |
1300 --->| B->2 |------>| |
1301 +-------+ | CPU 2 |
1303 +-------+ | |
1304 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1305 division speculates on the +-------+ ~ | |
1309 Once the divisions are complete --> : : ~-->| |
1310 the CPU can then perform the : : | |
1311 LOAD with immediate effect : : +-------+
1314 Placing a read barrier or an address-dependency barrier just before the second
1317 CPU 1 CPU 2
1322 <read barrier>
1329 : : +-------+
1330 +-------+ | |
1331 --->| B->2 |------>| |
1332 +-------+ | CPU 2 |
1334 +-------+ | |
1335 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1336 division speculates on the +-------+ ~ | |
1343 : : ~-->| |
1345 : : +-------+
1348 but if there was an update or an invalidation from another CPU pending, then
1351 : : +-------+
1352 +-------+ | |
1353 --->| B->2 |------>| |
1354 +-------+ | CPU 2 |
1356 +-------+ | |
1357 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1358 division speculates on the +-------+ ~ | |
1364 +-------+ | |
1365 The speculation is discarded ---> --->| A->1 |------>| |
1366 and an updated value is +-------+ | |
1367 retrieved : : +-------+
1371 --------------------
1380 time to all -other- CPUs. The remainder of this document discusses this
1385 CPU 1 CPU 2 CPU 3
1389 <general barrier> <read barrier>
1392 Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1393 and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1394 to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1395 CPU 3's load from Y. In addition, the memory barriers guarantee that
1396 CPU 2 executes its load before its store, and CPU 3 loads from Y before
1397 it loads from X. The question is then "Can CPU 3's load from X return 0?"
1399 Because CPU 3's load from X in some sense comes after CPU 2's load, it
1400 is natural to expect that CPU 3's load from X must therefore return 1.
1402 on CPU B follows a load from the same variable executing on CPU A (and
1403 CPU A did not originally store the value which it read), then on
1404 multicopy-atomic systems, CPU B's load must return either the same value
1405 that CPU A's load did or some later value. However, the Linux kernel
1409 for any lack of multicopy atomicity. In the example, if CPU 2's load
1410 from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
1413 However, dependencies, read barriers, and write barriers are not always
1414 able to compensate for non-multicopy atomicity. For example, suppose
1415 that CPU 2's general barrier is removed from the above example, leaving
1418 CPU 1 CPU 2 CPU 3
1422 <data dependency> <read barrier>
1425 This substitution allows non-multicopy atomicity to run rampant: in
1426 this example, it is perfectly legal for CPU 2's load from X to return 1,
1427 CPU 3's load from Y to return 1, and its load from X to return 0.
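The original, fully barriered version of this three-CPU example (with CPU 2's general barrier in place) can be sketched as a runnable litmus test. C11 fences stand in for the kernel barriers, and the C11 analogue is if anything stronger than the kernel guarantee being illustrated; the point is that the outcome r1 == 1 && r2 == 1 && r3 == 0 cannot occur.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

atomic_int X, Y;
int r1, r2, r3;

static void *cpu1(void *unused)
{
	atomic_store_explicit(&X, 1, memory_order_relaxed);	/* STORE X=1 */
	return NULL;
}

static void *cpu2(void *unused)
{
	while (!(r1 = atomic_load_explicit(&X, memory_order_relaxed)))
		;						/* r1 = LOAD X */
	atomic_thread_fence(memory_order_seq_cst);		/* <general barrier> */
	atomic_store_explicit(&Y, r1, memory_order_relaxed);	/* STORE Y=r1 */
	return NULL;
}

static void *cpu3(void *unused)
{
	while (!(r2 = atomic_load_explicit(&Y, memory_order_relaxed)))
		;						/* r2 = LOAD Y */
	atomic_thread_fence(memory_order_acquire);		/* <read barrier> */
	r3 = atomic_load_explicit(&X, memory_order_relaxed);	/* r3 = LOAD X */
	return NULL;
}

void run(void)
{
	pthread_t t[3];

	pthread_create(&t[0], NULL, cpu1, NULL);
	pthread_create(&t[1], NULL, cpu2, NULL);
	pthread_create(&t[2], NULL, cpu3, NULL);
	pthread_join(t[0], NULL);
	pthread_join(t[1], NULL);
	pthread_join(t[2], NULL);
}
```

Weakening cpu2()'s fence to a mere data dependency is exactly the substitution the text describes, and it would legalise r3 == 0 on a non-multicopy-atomic machine.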
1429 The key point is that although CPU 2's data dependency orders its load
1430 and store, it does not guarantee to order CPU 1's store. Thus, if this
1431 example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
1432 store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1436 General barriers can compensate not only for non-multicopy atomicity,
1437 but can also generate additional ordering that can ensure that -all-
1438 CPUs will perceive the same order of -all- operations. In contrast, a
1439 chain of release-acquire pairs do not provide this additional ordering,
1480 Furthermore, because of the release-acquire relationship between cpu0()
1486 However, the ordering provided by a release-acquire chain is local
1497 writes in order, CPUs not involved in the release-acquire chain might
1499 the weak memory-barrier instructions used to implement smp_load_acquire()
1502 store to u as happening -after- cpu1()'s load from v, even though
1508 -not- ensure that any particular value will be read. Therefore, the
1529 (*) CPU memory barriers.
1533 ----------------
1540 This is a general barrier -- there are no read-read or write-write
1550 interrupt-handler code and the code that was interrupted.
1556 optimizations that, while perfectly safe in single-threaded code, can
1561 to the same variable, and in some cases, the CPU is within its
1569 Prevent both the compiler and the CPU from doing this as follows:
1585 for single-threaded code, is almost certainly not what the developer
1606 single-threaded code, but can be fatal in concurrent code:
1613 a was modified by some other CPU between the "while" statement and
1624 single-threaded code, so you need to tell the compiler about cases
1638 This transformation is a win for single-threaded code because it
1640 will carry out its proof assuming that the current CPU is the only
1657 the code into near-nonexistence. (It will still load from the
1662 Again, the compiler assumes that the current CPU is the only one
1673 surprise if some other CPU might have stored to variable 'a' in the
1685 between process-level code and an interrupt handler:
1701 win for single-threaded code:
1746 though the CPU of course need not do so.
1762 In single-threaded code, this is not only safe, but also saves
1764 could cause some other CPU to see a spurious value of 42 -- even
1765 if variable 'a' was never zero -- when loading variable 'b'.
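A sketch of that invented-store hazard and its cure, with volatile-cast macros as userspace stand-ins for the kernel's marked accesses:

```c
/* Userspace stand-ins for the kernel's marked accesses */
#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

int a, b;

/*
 * The compiler may rewrite this as "b = 42; if (a) b = a;", briefly
 * exposing the spurious value 42 to other CPUs even if 'a' is never
 * zero:
 */
void unsafe(void)
{
	if (a)
		b = a;
	else
		b = 42;
}

/* WRITE_ONCE() forbids such invented stores: */
void safe(void)
{
	int q = READ_ONCE(a);

	if (q)
		WRITE_ONCE(b, q);
	else
		WRITE_ONCE(b, 42);
}
```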
1774 damaging, but they can result in cache-line bouncing and thus in
1779 with a single memory-reference instruction, prevents "load tearing"
1782 16-bit store instructions with 7-bit immediate fields, the compiler
1783 might be tempted to use two 16-bit store-immediate instructions to
1784 implement the following 32-bit store:
1791 This optimization can therefore be a win in single-threaded code.
1815 implement these three assignment statements as a pair of 32-bit
1816 loads followed by a pair of 32-bit stores. This would result in
1831 Please note that these compiler barriers have no direct effect on the CPU,
1835 CPU MEMORY BARRIERS
1836 -------------------
1838 The Linux kernel has seven basic CPU memory barriers:
1844 READ rmb() smp_rmb()
1848 All memory barriers except the address-dependency barriers imply a compiler
1862 systems because it is assumed that a CPU will appear to be self-consistent,
1873 windows. These barriers are required even on non-SMP systems as they affect
1875 compiler and the CPU from reordering them.
1904 obj->dead = 1;
1905 smp_mb__before_atomic();
1906 atomic_dec(&obj->ref_count);
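A userspace sketch of this idiom: the store to obj->dead must be ordered before the reference-count decrement, and a C11 seq_cst fence stands in for the smp_mb__before_atomic() barrier that the kernel places between the two statements.

```c
#include <stdatomic.h>

struct obj {
	int dead;
	atomic_int ref_count;
};

void kill_obj(struct obj *o)
{
	o->dead = 1;
	/* ~ smp_mb__before_atomic(): make the death mark visible
	 * before the reference count is seen to drop */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_fetch_sub_explicit(&o->ref_count, 1, memory_order_relaxed);
}
```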
1919 of writes or reads of shared memory accessible to both the CPU and a
1920 DMA capable device. See Documentation/core-api/dma-api.rst file for more
1925 to the device or the CPU, and a doorbell to notify it when new
1928 if (desc->status != DEVICE_OWN) {
1929 /* do not read data until we own descriptor */
1930 dma_rmb();
1932 /* read/modify data */
1933 read_data = desc->data;
1934 desc->data = write_data;
1936 /* flush modifications before status update */
1937 dma_wmb();
1939 /* assign ownership */
1940 desc->status = DEVICE_OWN;
1949 before we read the data from the descriptor, and the dma_wmb() allows
1964 For example, after a non-temporal write to pmem region, we use pmem_wmb()
1970 For load from persistent memory, existing read memory barriers are sufficient
1971 to ensure read ordering.
1975 For memory accesses with write-combining attributes (e.g. those returned
1976 by ioremap_wc()), the CPU may wait for prior accesses to be merged with
1978 write-combining memory accesses before this macro with those after it when
1994 --------------------------
2041 one-way barriers is that the effects of instructions outside of a critical
2061 another CPU not holding that lock. In short, an ACQUIRE followed by a
2062 RELEASE may -not- be assumed to be a full memory barrier.
2065 not imply a full memory barrier. Therefore, the CPU's execution of the
2084 One key point is that we are only talking about the CPU doing
2087 -could- occur.
2089 But suppose the CPU reordered the operations. In this case,
2090 the unlock precedes the lock in the assembly code. The CPU
2093 try to sleep, but more on that later). The CPU will eventually
2102 a sleep-unlock race, but the locking primitive needs to resolve
2107 anything at all - especially with respect to I/O accesses - unless combined
2110 See also the section on "Inter-CPU acquiring barrier effects".
2140 -----------------------------
2148 SLEEP AND WAKE-UP FUNCTIONS
2149 ---------------------------
2170 CPU 1
2174 STORE current->state
2213 CPU 1 (Sleeper) CPU 2 (Waker)
2217 STORE current->state ...
2219 LOAD event_indicated if ((LOAD task->state) & TASK_NORMAL)
2220 STORE task->state
2222 where "task" is the thread being woken up and it equals CPU 1's "current".
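The lost-wakeup avoidance above is a store-buffering pattern: each side stores its flag, then loads the other's, and the implied general barriers guarantee that at least one side sees the other. A hedged userspace sketch, with seq_cst atomics standing in for the barriers implied by set_current_state() and wake_up(), and with invented variable names:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

atomic_int task_state, event_indicated;
int sleeper_saw_event, waker_saw_sleeper;

static void *sleeper(void *unused)
{
	/* STORE current->state; the seq_cst store supplies the
	 * general barrier that set_current_state() implies */
	atomic_store(&task_state, 1);
	/* LOAD event_indicated; only if this reads 0 would the
	 * task actually go on to call schedule() */
	sleeper_saw_event = atomic_load(&event_indicated);
	return NULL;
}

static void *waker(void *unused)
{
	/* STORE event_indicated; wake_up() implies a barrier
	 * before it loads task->state */
	atomic_store(&event_indicated, 1);
	waker_saw_sleeper = atomic_load(&task_state);
	return NULL;
}

void run(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, sleeper, NULL);
	pthread_create(&t2, NULL, waker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
}
```

The forbidden outcome is both sides seeing 0: either the sleeper observes the event (and does not sleep), or the waker observes the sleeping state (and issues the wakeup). Without the barriers, both loads could see 0 and the wakeup would be lost.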
2229 CPU 1 CPU 2
2265 order multiple stores before the wake-up with respect to loads of those stored
2301 -----------------------
2309 INTER-CPU ACQUIRING BARRIER EFFECTS
2318 ---------------------------
2323 CPU 1 CPU 2
2332 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2351 be a problem as a single-threaded linear piece of code will still appear to
2365 --------------------------
2367 When there's a system with more than one processor, more than one CPU in the
2392 (1) read the next pointer from this waiter's record to know as to where the
2395 (2) read the pointer to the waiter's task structure;
2405 LOAD waiter->list.next;
2406 LOAD waiter->task;
2407 STORE waiter->task;
2417 if the task pointer is cleared _before_ the next pointer in the list is read,
2418 another CPU might start processing the waiter and might clobber the waiter's
2419 stack before the up*() function has a chance to read the next pointer.
2423 CPU 1 CPU 2
2429 LOAD waiter->task;
2430 STORE waiter->task;
2438 LOAD waiter->list.next;
2439 --- OOPS ---
2446 LOAD waiter->list.next;
2447 LOAD waiter->task;
2448 smp_mb();
2449 STORE waiter->task;
2459 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2461 right order without actually intervening in the CPU. Since there's only one
2462 CPU, that CPU's dependency ordering logic will take care of everything else.
2466 -----------------
2477 -----------------
2479 Many devices can be memory mapped, and so appear to the CPU as if they're just
2483 However, having a clever CPU or a clever compiler creates a potential problem
2485 device in the requisite order if the CPU or the compiler thinks it is more
2486 efficient to reorder, combine or merge accesses - something that would cause
2490 routines - such as inb() or writel() - which know how to make such accesses
2496 See Documentation/driver-api/device-io.rst for more information.
2500 ----------
2506 This may be alleviated - at least in part - by disabling local interrupts (a
2508 the interrupt-disabled section in the driver. While the driver's interrupt
2509 routine is executing, the driver's core may not run on the same CPU, and its
2515 under interrupt-disablement and then the driver's interrupt handler is invoked:
2534 accesses performed in an interrupt - and vice versa - unless implicit or
2544 likely, then interrupt-disabling locks should be used to guarantee ordering.
2552 specific. Therefore, drivers which are inherently non-portable may rely on
2568 by the same CPU thread to a particular device will arrive in program
2571 2. A writeX() issued by a CPU thread holding a spinlock is ordered
2572 before a writeX() to the same peripheral from another CPU thread
2578 3. A writeX() by a CPU thread to the peripheral will first wait for the
2580 propagated to, the same thread. This ensures that writes by the CPU
2582 visible to a DMA engine when the CPU writes to its MMIO control
2585 4. A readX() by a CPU thread from the peripheral will complete before
2587 ensures that reads by the CPU from an incoming DMA buffer allocated
2592 5. A readX() by a CPU thread from the peripheral will complete before
2594 This ensures that two MMIO register writes by the CPU to a peripheral
2595 will arrive at least 1us apart if the first write is immediately read
2604 The ordering properties of __iomem pointers obtained with non-default
2614 bullets 2-5 above) but they are still guaranteed to be ordered with
2615 respect to other accesses from the same CPU thread to the same
2622 register-based, memory-mapped FIFOs residing on peripherals that are not
2628 The inX() and outX() accessors are intended to access legacy port-mapped
2633 Since many CPU architectures ultimately access these peripherals via an
2639 Device drivers may expect outX() to emit a non-posted write transaction
2657 little-endian and will therefore perform byte-swapping operations on big-endian
2665 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2669 of arch-specific code.
2671 This means that it must be considered that the CPU will execute its instruction
2672 stream in any order it feels like - or even in parallel - provided that if an
2678 [*] Some instructions have more than one effect - such as changing the
2679 condition codes, changing registers or changing memory - and different
2682 A CPU may also discard any instruction sequence that winds up having no
2693 THE EFFECTS OF THE CPU CACHE
2700 As far as the way a CPU interacts with another part of the system through the
2701 caches goes, the memory system has to include the CPU's caches, and memory
2702 barriers for the most part act at the interface between the CPU and its cache
2705 <--- CPU ---> : <----------- Memory ----------->
2707 +--------+ +--------+ : +--------+ +-----------+
2708 | | | | : | | | | +--------+
2709 | CPU | | Memory | : | CPU | | | | |
2710 | Core |--->| Access |----->| Cache |<-->| | | |
2711 | | | Queue | : | | | |--->| Memory |
2713 +--------+ +--------+ : +--------+ | | | |
2714 : | Cache | +--------+
2716 : | Mechanism | +--------+
2717 +--------+ +--------+ : +--------+ | | | |
2719 | CPU | | Memory | : | CPU | | |--->| Device |
2720 | Core |--->| Access |----->| Cache |<-->| | | |
2722 | | | | : | | | | +--------+
2723 +--------+ +--------+ : +--------+ +-----------+
2728 CPU that issued it since it may have been satisfied within the CPU's own cache,
2731 cacheline over to the accessing CPU and propagate the effects upon conflict.
2733 The CPU core may execute instructions in any order it deems fit, provided the
2741 accesses cross from the CPU side of things to the memory side of things, and
2745 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2750 the use of any special device communication instructions the CPU may have.
2754 ----------------------
2760 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2764 cache lines being written back to RAM from a CPU's cache after the device has
2765 installed its own data, or cache lines present in the CPU's cache may simply
2767 is discarded from the CPU's cache and reloaded. To deal with this, the
2769 cache on each CPU.
2771 See Documentation/core-api/cachetlb.rst for more information on cache
2776 -----------------------
2779 a window in the CPU's memory space that has different properties assigned than
2794 A programmer might take it for granted that the CPU will perform memory
2795 operations in exactly the order specified, so that if the CPU is, for example,
2804 they would then expect that the CPU will complete the memory operation for each
2825 of the CPU buses and caches;
2832 (*) the CPU's data cache may affect the ordering, and while cache-coherency
2833 mechanisms may alleviate this - once the store has actually hit the cache
2834 - there's no guarantee that the coherency management will be propagated in
2837 So what another CPU, say, might actually observe from the above piece of code
2845 However, it is guaranteed that a CPU will be self-consistent: it will see its
2864 The code above may cause the CPU to generate the full sequence of memory
2872 are -not- optional in the above example, as there are architectures
2873 where a given CPU might reorder successive loads to the same location.
2880 the CPU even sees them.
2903 and the LOAD operation never appear outside of the CPU.
2907 --------------------------
2909 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
2910 some versions of the Alpha CPU have a split data cache, permitting them to have
2911 two semantically-related cache lines updated at separate times. This is where
2912 the address-dependency barrier really becomes necessary as this synchronises
2922 ----------------------
2927 barriers for this use-case would be possible but is often suboptimal.
2929 To handle this case optimally, low-level virt_mb() etc macros are available.
2931 identical code for SMP and non-SMP systems. For example, virtual machine guests
2945 ----------------
2950 Documentation/core-api/circular-buffers.rst
2964 Chapter 5.6: Read/Write Ordering
2967 Chapter 7.1: Memory-Access Ordering
2970 ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
2973 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
2988 Chapter 15: Sparc-V9 Memory Models
3004 Solaris Internals, Core Kernel Architecture, p63-68: