Partial Parity Log (PPL) is a feature available for RAID5 arrays. The issue
addressed by PPL is that after a dirty shutdown, the parity of a particular
stripe may become inconsistent with the data on the other member disks. If the
array is also in a degraded state, there is no way to recalculate parity,
because one of the disks is missing. This can lead to silent data corruption
when rebuilding the array or using it as degraded - data calculated from
parity for array blocks that have not been touched by a write request during
the unclean shutdown can be incorrect. This condition is known as the RAID5
Write Hole. Because of this, md by default does not allow starting a dirty
degraded array.

Partial parity for a write operation is the XOR of the stripe data chunks not
modified by that write. It is just enough data to recover from the write hole:
XORing partial parity with the modified chunks produces parity for the stripe
that is consistent with its state before the write operation, regardless of
which chunk writes have completed. If one of the unmodified data disks of the
stripe is missing, this updated parity can be used to recover its contents.
PPL recovery is also performed when starting an array after an unclean
shutdown with all disks available, eliminating the need to resync the array.
Because of this, using a write-intent bitmap and PPL together is not
supported.
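
To make the arithmetic concrete, here is a minimal user-space sketch of the
partial parity calculation - not the md driver's code. The stripe layout and
chunk values are made up, and each chunk is reduced to a single byte::

  /* Sketch: a 4-chunk stripe where a write modifies chunks 0 and 1. */
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint8_t old[4]  = { 0x11, 0x22, 0x33, 0x44 }; /* data before the write */
          uint8_t disk[4] = { 0xAA, 0x22, 0x33, 0x44 }; /* after a crash: the new
                                                           chunk 0 reached the disk,
                                                           the new chunk 1 did not */

          /* Partial parity: XOR of the chunks NOT modified by this write. */
          uint8_t pp = old[2] ^ old[3];

          /* XOR partial parity with the modified chunks as found on disk.
             The result is consistent with the on-disk data no matter which
             of the in-flight chunk writes completed. */
          uint8_t parity = pp ^ disk[0] ^ disk[1];

          /* If unmodified disk 3 is now lost, its contents can be rebuilt. */
          uint8_t rebuilt = parity ^ disk[0] ^ disk[1] ^ disk[2];

          printf("chunk 3 was 0x%02X, rebuilt as 0x%02X\n", old[3], rebuilt);
          return 0;
  }

The rebuilt value matches the pre-write contents of chunk 3, and it would for
any combination of completed and lost writes to chunks 0 and 1.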

When handling a write request, PPL writes partial parity before the new data
and parity are dispatched to the disks. PPL is a distributed log - it is
stored on the array member drives in the metadata area, on the parity drive of
a particular stripe. It does not require a dedicated journaling drive. Write
performance is reduced by up to 30%-40%, but it scales with the number of
drives in the array, and no single journaling drive can become a bottleneck or
a single point of failure.
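
The ordering is what makes recovery possible: the log entry must be durable
before the stripe writes begin. A schematic sketch of the sequence, with
hypothetical helpers standing in for the real I/O paths::

  #include <stdio.h>

  /* Hypothetical stand-ins for the three I/O phases of one stripe write. */
  static void write_ppl_entry(void) { puts("partial parity -> PPL area on the parity drive"); }
  static void write_data(void)      { puts("new data -> data drives"); }
  static void write_parity(void)    { puts("new parity -> parity drive"); }

  int main(void)
  {
          write_ppl_entry(); /* must complete first: a crash during the      */
          write_data();      /* data/parity writes is recoverable only if    */
          write_parity();    /* the partial parity is already on stable disk */
          return 0;
  }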

Unlike raid5-cache, the other solution in md for closing the write hole, PPL
is not a true journal. It does not protect against losing in-flight data, only
against silent data corruption. If a dirty disk of a stripe is lost, no PPL
recovery is performed for that stripe (parity is not updated), so the written
part of the stripe may contain arbitrary data. In that case the behavior is
the same as in plain raid5.

PPL is available for md version-1 metadata and external (specifically IMSM)
metadata arrays. It can be enabled using the mdadm option
--consistency-policy=ppl.
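
For example (the device names are illustrative; changing the policy of an
existing array assumes an mdadm version whose --grow supports this option)::

  # Create a RAID5 array with PPL enabled:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        --consistency-policy=ppl /dev/sda1 /dev/sdb1 /dev/sdc1

  # Switch an existing array to PPL:
  mdadm --grow /dev/md0 --consistency-policy=ppl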

There is a limit of a maximum of 64 disks in the array for PPL, which keeps
the data structures and implementation simple. RAID5 arrays with that many
disks are unlikely anyway, because of the high risk of multiple disk failures,
so the restriction should not be a limitation in real life.