
The dm-vdo (virtual data optimizer) target provides inline deduplication,
compression, zero-block elimination, and thin provisioning. It was
originally developed at Permabit Technology Corp. and was made open source
after Permabit was acquired by Red Hat. This document describes the design
of dm-vdo. For usage, see vdo.rst in the same directory as this file.

Because deduplication rates fall drastically as the block size increases, a
vdo target has a maximum block size of 4K.

The design of dm-vdo is based on the idea that deduplication is a two-part
problem. The first is to recognize duplicate data. The second is to avoid
storing multiple copies of those duplicates. Therefore, dm-vdo has two main
parts: a deduplication index (called UDS) that is used to discover
duplicate data, and a data store with a reference counted block map that
maps from logical block addresses to the actual storage location of the
data.

Zones and Threading
-------------------

Due to the complexity of data optimization, the number of metadata
structures involved in a single write operation to a vdo target is larger
than most other targets, and acceptable performance on small blocks can
only be achieved through parallelism.

Most of vdo's main data structures are therefore divided into "zones."
During normal operation, each zone is assigned to a specific thread, and
only that thread will access the portion of the data structure in that
zone. Each bio is associated with a request object (the "data_vio") which
will be added to a work queue when the next phase of its operation requires
access to the structures in the zone associated with that queue.

Another way of thinking about this arrangement is that the work queue for
each zone has an implicit lock on the structures it manages for all its
operations, because vdo guarantees that no other thread will alter those
structures.

Although each structure is divided into zones, this division is not
reflected in the on-disk representation of each data structure. Therefore,
the number of zones for each structure, and hence the number of threads,
can be reconfigured each time a vdo target is started.
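
As an illustration of the zoning idea, the sketch below routes a piece of
work to the single thread that owns its zone. The names (ZONE_COUNT,
enqueue_for_zone, and the modulo-based zone choice) are illustrative
stand-ins, not the actual dm-vdo interfaces::

  /* Illustrative sketch: route work for a block to its zone's worker thread.
   * Because only that thread touches the zone's structures, the structures
   * themselves need no further locking. */
  #include <stdint.h>
  #include <stdio.h>

  #define ZONE_COUNT 4    /* chosen each time the target is started */

  struct work_item {
          uint64_t block_number;
  };

  static void enqueue_for_zone(unsigned int zone, struct work_item *item)
  {
          /* Stand-in for adding the item to zone_queues[zone] and waking
           * that zone's thread. */
          printf("block %llu -> zone %u\n",
                 (unsigned long long)item->block_number, zone);
  }

  static void dispatch(struct work_item *item)
  {
          /* A simple modulo stands in for vdo's actual zone assignment. */
          enqueue_for_zone(item->block_number % ZONE_COUNT, item);
  }

  int main(void)
  {
          struct work_item item = { .block_number = 42 };

          dispatch(&item);
          return 0;
  }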

The Deduplication Index
-----------------------

In order to identify duplicate data efficiently, vdo leverages some common
characteristics of duplicate data. From empirical observations, we gathered
two key insights. The first is that in most data sets with significant
amounts of duplicate data, the duplicates tend to have temporal locality:
a block and its duplicates tend to be written at about the same time. This
is why the index keeps records in temporal order. The second insight is
that new data is more likely to duplicate recent data than older data, so
when the index is full, it should cull its oldest records to make space for
new ones. Another important idea behind the design of the index is that the
ultimate goal of deduplication is to reduce storage costs. Since there is a
trade-off between the storage saved and the resources expended to achieve
those savings, vdo does not attempt to find every last duplicate block. It
is sufficient to find and eliminate most of the redundancy.

Each block of data is hashed to produce a block name. An index record
consists of this block name paired with the presumed location of that data
on the underlying storage. However, it is not possible to guarantee that
the index is accurate. In the most common case, this occurs because it is
too costly to update the index when a block is over-written or discarded.
Doing so would require either storing the block name along with the blocks,
which is difficult to do efficiently in block-based storage, or reading and
rehashing each block before overwriting it. Inaccuracy can also result from
a hash collision, where two different blocks have the same name. In
practice, this is extremely unlikely, but because vdo does not use a
cryptographic hash, a malicious workload could be constructed. Because of
these inaccuracies, vdo treats the locations in the index as hints, and
reads each indicated block to verify that it is indeed a duplicate before
sharing the existing block with a new one.
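
Because index locations are only hints, a new duplicate must be verified by
reading the indicated block and comparing it to the incoming data. The
sketch below illustrates that check; read_block and the constants are
assumptions for illustration, not the driver's real helpers::

  /* Illustrative sketch: verify a deduplication hint before sharing a block. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>

  #define VDO_BLOCK_SIZE 4096

  /* Stand-in for a real storage read of the 4K block at 'pbn'. */
  static bool read_block(uint64_t pbn, uint8_t *buffer)
  {
          (void)pbn;
          memset(buffer, 0, VDO_BLOCK_SIZE);
          return true;
  }

  /* Only share the existing block if its contents really match. */
  static bool hint_is_duplicate(uint64_t hinted_pbn, const uint8_t *new_data)
  {
          uint8_t existing[VDO_BLOCK_SIZE];

          if (!read_block(hinted_pbn, existing))
                  return false;   /* treat a failed read as "not a duplicate" */

          return memcmp(existing, new_data, VDO_BLOCK_SIZE) == 0;
  }

  int main(void)
  {
          uint8_t zeroes[VDO_BLOCK_SIZE] = { 0 };

          return hint_is_duplicate(7, zeroes) ? 0 : 1;
  }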

Records are collected into groups called chapters. New records are added to
the newest chapter, called the open chapter. This chapter is stored in a
format optimized for adding and modifying records, and its contents are not
finalized until it runs out of space for new records. When the open chapter
fills up, it is closed and a new open chapter is created to collect new
records.

Closing a chapter converts it to a different format which is optimized for
reading. The records are written to a series of record pages based on the
order in which they were received. This means that records with temporal
locality should be on a small number of pages, reducing the I/O required to
retrieve them. The chapter also compiles an index that indicates which
record page contains any given block name, so that a request can determine
exactly which record page may contain a given record without having to load
the entire chapter from storage. This index uses only a subset of the block
name as its key, so it cannot guarantee that an index entry refers to the
desired block name. It can only guarantee that if there is a record for
this name, it will be on the indicated page. Closed chapters are read-only
and their contents are never altered in any way.

Once enough records have been written to fill up all the available index
space, the oldest chapter is removed to make space for new chapters. Any
time a request finds a matching record in the index, that record is copied
into the open chapter. This ensures that useful block names remain available
in the index, while unreferenced block names are forgotten over time.

In order to find records in older chapters, the index also maintains a
higher level structure called the volume index, which contains entries
mapping each block name to the chapter containing its newest record. This
mapping is updated as records for the block name are copied or updated,
ensuring that only the newest record for a given block name can be found.
An older record for a block name will no longer be found even though it has
not been deleted from its chapter. Like the chapter index, the volume index
uses only a subset of the block name as its key and cannot definitively say
that a record exists for a name. It can only say which chapter would
contain the record if a record exists. The volume index is stored entirely
in memory and is saved to storage only when the vdo target is shut down.

A request for a particular block name will first look up the name in the
volume index. This search will either indicate that the name is new, or
which chapter to search. If it returns a chapter, the request looks up its
name in the chapter index. This will indicate either that the name is new,
or which record page to search. Finally, if it is not new, the request will
look for its name in the indicated record page. This process may require up
to two page reads per request (one for the chapter index page and one for
the record page). However, recently accessed pages are cached so that these
page reads can be amortized across many block name requests.
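
The overall lookup path can be summarized as a short sketch. The helper
names (volume_index_lookup, chapter_index_lookup, search_record_page) and
their toy bodies are stand-ins for the real index components, shown only to
make the narrowing behavior of each stage concrete::

  /* Illustrative sketch: volume index -> chapter index -> record page.
   * The first two stages can only narrow the search; a definitive answer
   * requires reading the record page itself. */
  #include <stdbool.h>
  #include <stdint.h>

  struct block_name { uint8_t bytes[16]; };

  /* Toy stand-ins so the sketch is self-contained. */
  static bool volume_index_lookup(const struct block_name *n, uint32_t *chapter)
  { (void)n; *chapter = 0; return true; }

  static bool chapter_index_lookup(uint32_t c, const struct block_name *n,
                                   uint32_t *page)
  { (void)c; (void)n; *page = 0; return true; }

  static bool search_record_page(uint32_t c, uint32_t page,
                                 const struct block_name *n, uint64_t *location)
  { (void)c; (void)page; (void)n; *location = 0; return false; }

  static bool index_lookup(const struct block_name *name, uint64_t *location)
  {
          uint32_t chapter, page;

          if (!volume_index_lookup(name, &chapter))
                  return false;            /* name is new */
          if (!chapter_index_lookup(chapter, name, &page))
                  return false;            /* name is new */
          return search_record_page(chapter, page, name, location);
  }

  int main(void)
  {
          struct block_name name = { { 0 } };
          uint64_t location;

          return index_lookup(&name, &location) ? 0 : 1;
  }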

The volume index and the chapter indexes are implemented using a
memory-efficient structure called a delta index. Instead of storing the
entire block name (the key) for each entry, the entries are sorted by name
and only the difference between adjacent keys (the delta) is stored.
Because we expect the hashes to be randomly distributed, the deltas follow
an exponential distribution, so they are expressed using a Huffman code to
take up even less space. The entire sorted list of keys is called a delta
list. This structure allows the index to use many fewer bytes per entry
than a traditional hash table, but it is slightly more expensive to look up
entries, because a request must read every entry in a delta list to add up
the deltas in order to find the record it needs. The delta index reduces
this lookup cost by splitting its key space into many delta lists, each of
which is short enough to search quickly.
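
A minimal sketch of the delta-list idea follows. It omits the Huffman
coding and the splitting into many short lists, and the field names are
illustrative rather than the actual UDS structures::

  /* Illustrative sketch: a delta list stores only the gaps between sorted
   * keys, and a lookup walks the list summing deltas to reconstruct them. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  struct delta_list {
          const uint32_t *deltas;   /* gaps between successive keys */
          unsigned int count;
          uint64_t base_key;        /* first key covered by this list */
  };

  static bool delta_list_contains(const struct delta_list *list, uint64_t key)
  {
          uint64_t current = list->base_key;

          for (unsigned int i = 0; i < list->count; i++) {
                  current += list->deltas[i];   /* reconstruct the next key */
                  if (current == key)
                          return true;
                  if (current > key)
                          break;                /* keys are sorted */
          }
          return false;
  }

  int main(void)
  {
          /* Keys 100, 103, 110, 111 stored as deltas from base 100. */
          const uint32_t deltas[] = { 0, 3, 7, 1 };
          const struct delta_list list = { deltas, 4, 100 };

          printf("110 present: %d\n", delta_list_contains(&list, 110));
          printf("105 present: %d\n", delta_list_contains(&list, 105));
          return 0;
  }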

The default index size can hold 64 million records, corresponding to about
256GB of data. This means that the index can identify duplicate data if the
original data was written within the last 256GB of writes. This range is
called the deduplication window. If new writes duplicate data that is older
than that, the index will not be able to find it because the records of the
older data have already been removed. This means that if an application
writes a 200 GB file to a vdo target and then immediately writes it again,
the two copies will deduplicate perfectly. Doing the same with a 500 GB
file will result in no deduplication, because the beginning of the file
will no longer be in the index by the time the second write begins
(assuming there is no duplication within the file itself).

If an application anticipates a data workload that will see useful
deduplication beyond the 256GB threshold, vdo can be configured to use a
larger index with a correspondingly larger deduplication window. (This
configuration can only be set when the target is created, not altered
later. It is important to consider the expected workload for a vdo target
before configuring it.) There are two ways to do this.

One way is to increase the memory size of the index, which also increases
the amount of backing storage required. Doubling the size of the index will
double the length of the deduplication window at the expense of doubling
the storage size and the memory requirements.

The other option is to enable sparse indexing. Sparse indexing increases
the deduplication window by a factor of 10, at the expense of also
increasing the storage size by a factor of 10. However with sparse
indexing, the memory requirements do not increase. The trade-off is
slightly more computation per request and a slight decrease in the amount
of deduplication detected. For most workloads with significant amounts of
duplicate data, sparse indexing will detect 97-99% of the deduplication
that a standard index will detect.

The vio and data_vio Structures
-------------------------------

A vio (short for "vdo I/O") is conceptually similar to a bio. A struct vio
maintains a pointer to a bio but also tracks other fields specific to the
operation of vdo. The vio is kept separate from its related bio because
there are many circumstances where vdo completes the bio but must continue
to do work related to deduplication or compression.

Application reads and writes use a larger structure called a data_vio,
which contains a struct vio along with several other fields related to
deduplication and other vdo features. The data_vio is the primary unit of
application work in vdo. Each data_vio proceeds through a fixed set of
steps to handle the application data, after which it is reset and returned
to a pool for reuse.

The data_vio pool has a fixed size, chosen in part to bound the amount of
work that is required to recover from a crash. In addition, benchmarks have
indicated that increasing the size of the pool does not significantly
improve performance.

The Data Store
--------------

The data store is implemented by three main data structures, all of which
work in concert to amortize metadata updates across as many data writes as
possible.

*The Slab Depot*

Most of the vdo volume belongs to the slab depot. The depot contains a
collection of slabs. The slabs can be up to 32GB, and are divided into
three sections. Most of a slab consists of a linear sequence of 4K blocks.
These blocks are used either to store data, or to hold portions of the
block map (see below). In addition to the data blocks, each slab has a set
of reference counters, using 1 byte for each data block. Finally each slab
has a journal.

Reference updates are written to the slab journal. Slab journal blocks are
written out either when they are full, or when the recovery journal
requests they do so in order to allow the main recovery journal (see below)
to free up space. The slab journal is used both to ensure that the main
recovery journal can regularly free up space, and also to amortize the cost
of updating individual reference blocks. The reference counters are kept in
memory and are written out, a block at a time, only when there is a need to
reclaim slab journal space. The write operations are performed in the
background as needed so they do not add latency to particular I/O
operations.
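
The sketch below illustrates how a reference update is staged through the
slab journal before the counter block is eventually written back. The
structures and sizes are simplified illustrations, not the on-disk formats::

  /* Illustrative sketch: append the intent to the slab journal, update the
   * counter in memory, and defer writing the counter block to storage. */
  #include <stdbool.h>
  #include <stdint.h>

  #define BLOCKS_PER_SLAB 4096u   /* real slabs can be much larger */

  struct slab_journal_entry {
          uint32_t block_offset;  /* block within the slab */
          bool is_increment;
  };

  struct slab {
          uint8_t ref_counts[BLOCKS_PER_SLAB];      /* one byte per block */
          struct slab_journal_entry journal[1024];  /* no wraparound here */
          unsigned int journal_head;
  };

  static void add_reference(struct slab *slab, uint32_t offset, bool increment)
  {
          /* Record the intent in the slab journal first... */
          slab->journal[slab->journal_head++] = (struct slab_journal_entry){
                  .block_offset = offset,
                  .is_increment = increment,
          };

          /* ...then update the in-memory counter. The counter block is only
           * written out later, in the background, when journal space must
           * be reclaimed. */
          if (increment)
                  slab->ref_counts[offset]++;
          else
                  slab->ref_counts[offset]--;
  }

  int main(void)
  {
          static struct slab slab;

          add_reference(&slab, 42, true);
          return slab.ref_counts[42] == 1 ? 0 : 1;
  }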

The slab depot maintains an additional small data structure, the "slab
summary," which is used to reduce the amount of work needed to come back
online after a crash. The slab summary maintains an entry for each slab
indicating whether or not the slab has ever been used, whether all of its
reference count updates have been persisted to storage, and approximately
how full it is. During recovery, each physical zone attempts to recover at
least one slab, stopping whenever it has recovered a slab which has some
free blocks. Once each zone has some space, or has determined that none is
available, the target can resume normal operation in a degraded mode. Read
and write requests can be serviced, perhaps with degraded performance,
while the remainder of the dirty slabs are recovered.
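
A hedged sketch of the kind of per-slab information the slab summary
tracks; the field names and widths are illustrative, not the actual on-disk
layout::

  /* Illustrative sketch: one small summary entry per slab, used at startup
   * to decide which slabs must be recovered before serving I/O. */
  #include <stdbool.h>
  #include <stdint.h>

  struct slab_summary_entry {
          bool ever_used;          /* has the slab ever held data? */
          bool ref_counts_clean;   /* are its reference counts persisted? */
          uint8_t fullness_hint;   /* approximate fullness */
  };

  /* A slab is worth recovering early if it has been used, is dirty, and
   * still appears to have free blocks. */
  static bool needs_early_recovery(const struct slab_summary_entry *entry)
  {
          return entry->ever_used && !entry->ref_counts_clean &&
                 entry->fullness_hint < UINT8_MAX;
  }

  int main(void)
  {
          const struct slab_summary_entry entry = { true, false, 10 };

          return needs_early_recovery(&entry) ? 0 : 1;
  }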

*The Block Map*

The block map contains the logical to physical mapping. It can be thought
of as an array with one entry per logical address. Each entry is 5 bytes,
36 bits of which contain the physical block number which holds the data for
the given logical address. The other 4 bits are used to indicate the nature
of the mapping. Of the 16 possible states, one represents a logical address
which is unmapped (i.e. it has never been written, or has been discarded),
one represents an uncompressed block, and the other 14 states are used to
indicate that the mapped data is compressed, and which of the compression
slots in the compressed block contains the data for this logical address.
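
The 5-byte entry can be pictured as a 36-bit physical block number plus a
4-bit state. The sketch below only demonstrates that bit budget; the actual
on-disk encoding and the specific state values are not shown here, and the
names are illustrative::

  /* Illustrative sketch: pack and unpack a 40-bit block map entry holding
   * a 36-bit physical block number (PBN) and a 4-bit mapping state. */
  #include <assert.h>
  #include <stdint.h>

  #define PBN_BITS 36
  #define PBN_MASK ((UINT64_C(1) << PBN_BITS) - 1)

  /* One of the 16 states marks "unmapped", one marks an uncompressed block,
   * and the other 14 select a slot within a compressed block. */
  enum mapping_state { MAPPING_UNMAPPED = 0, MAPPING_UNCOMPRESSED = 1 };

  static uint64_t pack_entry(uint64_t pbn, unsigned int state)
  {
          assert(pbn <= PBN_MASK && state < 16);
          return (pbn << 4) | state;   /* 40 bits total, i.e. 5 bytes */
  }

  static uint64_t entry_pbn(uint64_t entry)       { return entry >> 4; }
  static unsigned int entry_state(uint64_t entry) { return entry & 0xF; }

  int main(void)
  {
          uint64_t entry = pack_entry(123456789, MAPPING_UNCOMPRESSED);

          assert(entry_pbn(entry) == 123456789);
          assert(entry_state(entry) == MAPPING_UNCOMPRESSED);
          return 0;
  }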

In practice, the array of mapping entries is divided into "block map
pages," each of which fits in a single 4K block and consists of a header
and 812 mapping entries. Each mapping page is a leaf of a radix tree of
block map pages. There are 60 such trees, which are assigned to "logical
zones" in round robin fashion. (If there are L logical zones, tree n
belongs to zone n mod L.) At each level, the trees are interleaved, so
logical addresses 0-811 belong to tree 0, logical addresses 812-1623 belong
to tree 1, and so on. The interleaving is maintained all the way up to the
60 root nodes. Choosing 60 trees, a number with many divisors, keeps the
trees balanced for a large number of possible logical zone counts. The
storage for the 60 tree roots is allocated when a vdo is formatted. All
other block map pages are allocated out of the slabs as needed. This
flexible allocation avoids the need to pre-allocate space for the entire
set of logical mappings and also makes growing the logical size of a vdo
relatively easy.
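
The interleaving can be made concrete with a small calculation that maps a
logical block to the tree (and hence the logical zone) that owns its leaf
page. The helper names are illustrative, not dm-vdo's actual code::

  /* Illustrative sketch: with 60 interleaved radix trees, leaf page n
   * belongs to tree (n % 60), and tree t belongs to logical zone (t % L). */
  #include <stdint.h>
  #include <stdio.h>

  #define ENTRIES_PER_PAGE 812u
  #define TREE_COUNT       60u

  static unsigned int tree_for_logical_block(uint64_t logical_block)
  {
          uint64_t leaf_page = logical_block / ENTRIES_PER_PAGE;

          return (unsigned int)(leaf_page % TREE_COUNT);
  }

  static unsigned int zone_for_tree(unsigned int tree, unsigned int zones)
  {
          return tree % zones;
  }

  int main(void)
  {
          /* Logical blocks 0-811 land in tree 0, blocks 812-1623 in tree 1. */
          printf("block 811 -> tree %u\n", tree_for_logical_block(811));
          printf("block 812 -> tree %u\n", tree_for_logical_block(812));
          printf("tree 59, 3 zones -> zone %u\n", zone_for_tree(59, 3));
          return 0;
  }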

In operation, the block map maintains two caches. It is prohibitive to keep
the entire leaf level of the trees in memory, so each logical zone
maintains its own cache of leaf pages. The size of this cache is
configurable at target start time. The second cache is allocated at start
time, and is large enough to hold all the non-leaf pages of the entire
block map. This cache is populated as pages are needed.

*The Recovery Journal*

The recovery journal is used to amortize updates across the block map and
slab depot. Each write request causes an entry to be made in the journal.
Entries are either "data remappings" or "block map remappings." For a data
remapping, the journal records the logical address affected and its old and
new physical mappings. For a block map remapping, the journal records the
block map page number and the physical block allocated for it. Block map
pages are never reclaimed or repurposed, so the old mapping is always 0.
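
A hedged sketch of the information each kind of entry carries follows; the
field names are illustrative and this is not the on-disk journal format::

  /* Illustrative sketch: a recovery journal entry is an intent record for
   * either a data remapping or a block map remapping. */
  #include <stdint.h>

  enum journal_entry_type {
          DATA_REMAPPING,
          BLOCK_MAP_REMAPPING,
  };

  struct recovery_journal_entry {
          enum journal_entry_type type;
          union {
                  struct {
                          uint64_t logical_address;
                          uint64_t old_physical;    /* 0 if previously unmapped */
                          uint64_t new_physical;
                  } data;
                  struct {
                          uint64_t page_number;
                          uint64_t allocated_block; /* old mapping is always 0 */
                  } block_map;
          };
  };

  int main(void)
  {
          struct recovery_journal_entry entry = {
                  .type = DATA_REMAPPING,
                  .data = { .logical_address = 1, .old_physical = 0,
                            .new_physical = 42 },
          };

          return entry.data.new_physical == 42 ? 0 : 1;
  }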

Each journal entry is an intent record summarizing the metadata updates
that are required for a data_vio. The recovery journal issues a flush
before each journal block write to ensure that the physical data for the
entries in that block is stable on storage, and journal block writes are
all issued with the FUA bit set to ensure the recovery journal entries
themselves are stable. The journal entry and the data write it represents
must be stable on disk before the other metadata structures may be updated
to reflect the operation. These entries allow the vdo device to reconstruct
the logical to physical mappings after an unexpected interruption such as a
loss of power.

*Write Path*

All write I/O to vdo is asynchronous. Each bio will be acknowledged as soon
as vdo has done enough work to guarantee that it can complete the write
eventually. Generally, the data for acknowledged but unflushed write I/O
can be treated as though it is cached in memory. If an application requires
data to be stable on storage, it must issue a flush or write the data with
the FUA bit set like any other asynchronous I/O. Shutting down the vdo
target will also flush any remaining I/O.

Application write bios follow the steps outlined below.

1. A data_vio is obtained from the data_vio pool and associated with the
   application bio. If there are no data_vios available, the incoming bio
   will block until a data_vio is available. This provides back pressure
   to the application. The data_vio pool is protected by a spin lock.

   The newly acquired data_vio is reset and the bio's data is copied into
   the data_vio if it is a write and the data is not all zeroes. The data
   must be copied because the application bio can be acknowledged before
   the data_vio processing is complete, which means later processing steps
   will no longer have access to the application bio. The application bio
   may also be smaller than 4K, in which case the data_vio will have
   already read the underlying block and the data is instead copied over
   the relevant portion of the larger block.

2. The data_vio places a claim (the "logical lock") on the logical address
   of the bio. It is vital to prevent simultaneous modifications of the
   same logical address, because deduplication involves sharing blocks.
   This claim is implemented as an entry in a hashtable where the key is
   the logical address and the value is a pointer to the data_vio
   currently handling that address (a sketch of such a claim table appears
   after this list).

   If a data_vio looks in the hashtable and finds that another data_vio is
   already operating on that logical address, it waits until the previous
   operation finishes. It also sends a message to inform the current lock
   holder that it is waiting. Most notably, a new data_vio waiting for a
   logical lock will flush the previous lock holder out of the compression
   packer rather than allowing it to continue waiting to be packed.

   This stage requires the data_vio to get an implicit lock on the
   appropriate logical zone to prevent concurrent modifications of the
   hashtable. This implicit locking is handled by the zone divisions
   described above.

3. The data_vio traverses the block map tree to ensure that all the
   necessary internal tree nodes have been allocated, by trying to find
   the leaf page for its logical address. If any interior tree page is
   missing, it is allocated at this time out of the same physical storage
   pool used to store application data.

   a. If any page-node in the tree has not yet been allocated, it must be
      allocated before the write can continue. This step requires the
      data_vio to lock the page-node that needs to be allocated. This
      lock, like the logical block lock in step 2, is a hashtable entry
      that causes other data_vios to wait for the allocation process to
      complete.

      The implicit logical zone lock is released while the allocation is
      happening, in order to allow other operations in the same logical
      zone to proceed. The details of allocation are the same as in step
      4. Once a node has been allocated, it is added to the tree using a
      similar process to adding a new data block mapping. The data_vio
      journals the intent to add the new node to the block map tree (step
      10), updates the reference count of the new block (step 11), and
      reacquires the implicit logical zone lock to add the new mapping to
      the parent tree node (step 12). Once the tree is updated, the
      data_vio proceeds down the tree. Any other data_vios waiting on this
      allocation also proceed.

   b. In the steady-state case, the block map tree nodes will already be
      allocated, so the data_vio just traverses the tree until it finds
      the required leaf node. The location of the mapping (the "block map
      slot") is recorded in the data_vio so that later steps do not need
      to traverse the tree again. The data_vio then releases the implicit
      logical zone lock.

4. If the block is a zero block, skip to step 9. Otherwise, an attempt is
   made to allocate a free data block. This allocation ensures that the
   data_vio can write its data somewhere even if deduplication and
   compression are not possible. This stage gets an implicit lock on a
   physical zone to search for free space within that zone.

   The data_vio will search each slab in a zone until it finds a free
   block or decides there are none. If the first zone has no free space,
   it will proceed to search the next physical zone by taking the implicit
   lock for that zone and releasing the previous one until it finds a
   free block or runs out of zones to search. The data_vio will acquire a
   struct pbn_lock (the "physical block lock") on the free block. The
   struct pbn_lock also has several fields to record the various kinds of
   claims that data_vios can have on physical blocks. The pbn_lock is
   added to a hashtable like the logical block locks in step 2. This
   hashtable is also covered by the implicit physical zone lock. The
   reference count of the free block is updated to prevent any other
   data_vio from considering it free. The reference counters are a
   sub-component of the slab and are thus also covered by the implicit
   physical zone lock.

5. If an allocation was obtained, the data_vio has all the resources it
   needs to complete the write. The application bio can safely be
   acknowledged at this point. The acknowledgment happens on a separate
   thread to prevent the application callback from blocking other data_vio
   operations.

   If an allocation could not be obtained, the data_vio continues to
   attempt to deduplicate or compress the data, but the bio is not
   acknowledged because the vdo device may be out of space.

6. At this point vdo must determine where to store the application data.
   The data_vio's data is hashed and the hash (the "record name") is
   recorded in the data_vio.

7. The data_vio reserves or joins a struct hash_lock, which manages all of
   the data_vios currently writing the same data. Active hash locks are
   tracked in a hashtable similar to the way logical block locks are
   tracked in step 2. This hashtable is covered by the implicit lock on
   the hash zone.

   If there is no existing hash lock for this data_vio's record_name, the
   data_vio obtains a hash lock from the pool, adds it to the hashtable,
   and sets itself as the new hash lock's "agent." The hash_lock pool is
   also covered by the implicit hash zone lock. The hash lock agent will
   do all the work to decide where the application data will be written.
   If a hash lock for the data_vio's record_name already exists, and the
   data_vio's data is the same as the agent's data, the new data_vio will
   wait for the agent to complete its work and then share its result.

   In the rare case that a hash lock exists for the data_vio's hash but
   the data does not match the hash lock's agent, the data_vio skips to
   step 8h and writes its data directly. This can happen if two different
   data blocks produce the same hash, for example.

8. The hash lock agent attempts to deduplicate or compress its data with
   the following steps.

   a. The agent initializes and sends its embedded deduplication request
      (struct uds_request) to the deduplication index. This does not
      require the data_vio to get any locks because the index components
      manage their own locking. The data_vio waits until it either gets a
      response from the index or times out.

   b. If the deduplication index returns advice, the data_vio attempts to
      obtain a physical block lock on the indicated physical address, in
      order to read the data and verify that it is the same as the
      data_vio's data, and that it can accept more references. If the
      physical address is already locked by another data_vio, the data at
      that address may soon be overwritten so it is not safe to use the
      address for deduplication.

   c. If the data matches and the physical block can add references, the
      agent and any other data_vios waiting on it will record this
      physical block as their new physical address and proceed to step 9
      to record their new mapping. If there are more data_vios in the hash
      lock than there are references available, one of the remaining
      data_vios becomes the new agent and continues to step 8d as if no
      valid advice was returned.

   d. If no usable duplicate block was found, the agent first checks that
      it has an allocated physical block (from step 4) that it can write
      to. If the agent does not have an allocation, some other data_vio in
      the hash lock that does have an allocation takes over as agent. If
      none of the data_vios have an allocated physical block, these writes
      are out of space, so they proceed to step 13 for cleanup.

   e. The agent attempts to compress its data. If the data does not
      compress, the data_vio will continue to step 8h to write its data
      directly.

      If the compressed size is small enough, the agent will release the
      implicit hash zone lock and go to the packer (struct packer) where
      it will be placed in a bin along with other waiting data_vios. All
      compression operations require the implicit lock on the packer
      zone.

      The packer can combine up to 14 compressed blocks in a single 4k
      data block, and compression is only helpful if at least 2 data_vios
      can share a block. This means that a data_vio may wait in the packer
      for an arbitrarily long time for other data_vios to fill out the
      compressed block. There is a mechanism for vdo to evict waiting
      data_vios when continuing to wait would cause problems, such as an
      application flush, a device shutdown, or a subsequent data_vio
      trying to overwrite the same logical block address. A data_vio may
      also be evicted from the packer if it cannot be paired with any
      other compressed block before its bin is needed for more
      compressible data.

   f. If the agent fills a packer bin, either because all 14 of its slots
      are used or because the bin has no room for more data, the bin is
      written out using the allocated physical block from one of its
      data_vios. Step 8d has already ensured that an allocation is
      available.

   g. Each data_vio sets the compressed block as its new physical address.
      The data_vio obtains an implicit lock on the physical zone and
      acquires the struct pbn_lock for the compressed block, which is
      modified to be a shared lock. Then it releases the implicit physical
      zone lock and proceeds to step 8i.

   h. Any data_vio evicted from the packer will have an allocation from
      step 4, and writes its data to that allocated physical block.

   i. After the data is written, if the data_vio is the agent of a hash
      lock, it will reacquire the implicit hash zone lock and share its
      physical address with as many other data_vios in the hash lock as
      possible. Each data_vio will then proceed to step 9 to record its
      new mapping.

   j. If the agent actually wrote new data (whether compressed or not),
      the deduplication index is updated to reflect the location of the
      new data. The agent then releases the implicit hash zone lock.

9. The data_vio determines the previous mapping of the logical address.
   There is a cache for block map leaf pages (the "block map cache"),
   because there are usually too many block map leaf nodes to store
   entirely in memory. If the desired leaf page is not in the cache, the
   data_vio will reserve a slot in the cache and load the desired page
   into it, possibly evicting an older cached page. The data_vio then
   finds the current physical address for this logical address (the "old
   physical mapping"), if any. This step requires a lock on the block map
   cache structures, covered by the implicit logical zone lock.

10. The data_vio makes an entry in the recovery journal containing the
    logical block address, the old physical mapping, and the new physical
    mapping. Making this journal entry requires holding the implicit
    recovery journal lock. The data_vio will wait in the journal until all
    recovery blocks up to the one containing its entry have been written
    and flushed to ensure the transaction is stable on storage.

11. Once the recovery journal entry is stable, the data_vio makes two slab
    journal entries: an increment entry for the new mapping, and a
    decrement entry for the old mapping. These two operations each require
    holding a lock on the affected physical slab, covered by its implicit
    physical zone lock. For correctness during recovery, the slab journal
    entries in any given slab journal must be in the same order as the
    corresponding recovery journal entries. Therefore, if the two entries
    are in different zones, they are made concurrently, and if they are in
    the same zone, the increment is always made before the decrement in
    order to avoid underflow. After each slab journal entry is made in
    memory, the associated reference count is also updated in memory.

12. Once both of the reference count updates are done, the data_vio
    acquires the implicit logical zone lock and updates the
    logical-to-physical mapping in the block map to point to the new
    physical block. At this point the write operation is complete.

13. If the data_vio has a hash lock, it acquires the implicit hash zone
    lock and releases its hash lock to the pool.

    The data_vio then acquires the implicit physical zone lock and releases
    the struct pbn_lock it holds for its allocated block. If it had an
    allocation that it did not use, it also sets the reference count for
    that block back to zero to free it for use by subsequent data_vios.

    The data_vio then acquires the implicit logical zone lock and releases
    the logical block lock acquired in step 2.

    The application bio is then acknowledged if it has not previously been
    acknowledged, and the data_vio is returned to the pool.
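
As mentioned in step 2, the sketch below illustrates the idea of a per-zone
claim table keyed by logical address. The types and helpers are
illustrative, and because each zone's table is only ever touched by that
zone's thread, no explicit locking appears here; this is a simplification,
not the dm-vdo implementation::

  /* Illustrative sketch of the "logical lock": a per-zone table mapping a
   * logical block address to the data_vio currently operating on it. */
  #include <stdint.h>
  #include <stdlib.h>

  struct data_vio { int id; };     /* trivially small for this sketch */

  struct lock_entry {
          uint64_t logical_address;
          struct data_vio *holder;
          struct lock_entry *next; /* chain for hash collisions */
  };

  #define BUCKET_COUNT 1024u

  struct logical_zone {
          struct lock_entry *buckets[BUCKET_COUNT];
  };

  /* Returns the current holder (the caller must wait for it), or NULL after
   * installing the new claim. Error handling is omitted. */
  static struct data_vio *claim_logical_lock(struct logical_zone *zone,
                                             uint64_t logical_address,
                                             struct data_vio *claimant)
  {
          unsigned int bucket = logical_address % BUCKET_COUNT;
          struct lock_entry *entry;

          for (entry = zone->buckets[bucket]; entry != NULL; entry = entry->next) {
                  if (entry->logical_address == logical_address)
                          return entry->holder;
          }

          entry = calloc(1, sizeof(*entry));
          entry->logical_address = logical_address;
          entry->holder = claimant;
          entry->next = zone->buckets[bucket];
          zone->buckets[bucket] = entry;
          return NULL;
  }

  int main(void)
  {
          static struct logical_zone zone;
          struct data_vio first = { 1 }, second = { 2 };

          /* The first claim succeeds; the second finds the existing holder. */
          if (claim_logical_lock(&zone, 4096, &first) != NULL)
                  return 1;
          return claim_logical_lock(&zone, 4096, &second) == &first ? 0 : 1;
  }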

*Read Path*

An application read bio follows a much simpler set of steps. It does steps
1 and 2 in the write path to obtain a data_vio and lock its logical
address. If there is already a write data_vio in progress for that logical
address that is guaranteed to complete, the read data_vio will copy the
data from the write data_vio and return it. Otherwise, it will look up the
logical-to-physical mapping by traversing the block map tree as in step 3,
and then read and possibly decompress the indicated data at the indicated
physical block address. A read data_vio will not allocate block map tree
nodes if they are missing. If the interior block map nodes do not exist
yet, the logical block map address must still be unmapped and the read
request will be fulfilled with a zero block. A read data_vio handles
cleanup and acknowledgment as in step 13, although it only needs to release
the logical lock and return itself to the pool.

*Small Writes*

All storage within vdo is managed as 4K blocks, but it can accept writes as
small as 512 bytes. Processing a write that is smaller than 4K requires a
read-modify-write operation that reads the relevant 4K block, copies the
new data over the appropriate sectors of the block, and then launches a
write operation for the modified data block. The read and write stages of
this operation are nearly identical to the normal read and write paths, and
a single data_vio is used to perform the entire operation.
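
The copy step of that read-modify-write can be sketched as follows: the
sub-4K data is placed at its sector offset within the 4K block that was
just read. The constants and names are illustrative::

  /* Illustrative sketch: merge a sub-4K write into the 4K block that was
   * read, before the block goes back through the normal write path. */
  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  #define VDO_BLOCK_SIZE 4096u
  #define SECTOR_SIZE    512u

  static void merge_partial_write(uint8_t block[VDO_BLOCK_SIZE],
                                  const uint8_t *new_data,
                                  unsigned int first_sector_in_block,
                                  unsigned int sector_count)
  {
          unsigned int offset = first_sector_in_block * SECTOR_SIZE;
          unsigned int length = sector_count * SECTOR_SIZE;

          assert(offset + length <= VDO_BLOCK_SIZE);
          memcpy(block + offset, new_data, length);
  }

  int main(void)
  {
          static uint8_t block[VDO_BLOCK_SIZE];
          uint8_t sector[SECTOR_SIZE];

          memset(sector, 0xab, sizeof(sector));
          merge_partial_write(block, sector, 3, 1);  /* overwrite sector 3 */
          return block[3 * SECTOR_SIZE] == 0xab ? 0 : 1;
  }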

*Recovery*

When a vdo is restarted after a crash, it will attempt to recover from the
recovery journal. During the pre-resume phase of the next start, the
recovery journal is read. The increment portion of each valid entry is
played into the block map. Next, valid entries are played, in order as
required, into the slab journals. Finally, each physical zone attempts to
replay at least one slab journal to reconstruct the reference counts of one
slab. Once each zone has some free space, or has determined that it has
none, the vdo comes back online, while the remainder of the slab journals
are used to reconstruct the rest of the reference counts in the background.

*Read-only Rebuild*

If a vdo encounters an unrecoverable error, it will enter read-only mode,
which indicates that some previously acknowledged data may have been lost.
The vdo may be instructed to rebuild as best it can in order to return to a
writable state, although this is never done automatically due to the
possibility that data has been lost. During a read-only rebuild, the block
map is recovered from the recovery journal as before. However, the
reference counts are not rebuilt from the slab journals. Instead, the
reference counts are zeroed, the entire block map is traversed, and the
reference counts are updated from the block mappings. While this may lose
some data, it ensures that the block map and reference counts are
consistent with each other, which allows vdo to resume normal operation.
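
The reference-count reconstruction can be pictured as the following sketch,
in which counts are zeroed and then re-derived from the surviving block
map. The helpers and sizes are illustrative placeholders::

  /* Illustrative sketch: rebuild reference counts from the block map alone.
   * Every mapped logical address contributes one reference to the physical
   * block it points at. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>

  #define PHYSICAL_BLOCKS 1024u   /* toy sizes */
  #define LOGICAL_BLOCKS  2048u

  static uint8_t ref_counts[PHYSICAL_BLOCKS];

  /* Stand-in: returns true and sets *pbn if the logical address is mapped
   * in the recovered block map. The toy block map here is empty. */
  static bool block_map_lookup(uint64_t logical, uint64_t *pbn)
  {
          (void)logical;
          (void)pbn;
          return false;
  }

  static void rebuild_reference_counts(void)
  {
          memset(ref_counts, 0, sizeof(ref_counts));

          for (uint64_t logical = 0; logical < LOGICAL_BLOCKS; logical++) {
                  uint64_t pbn;

                  if (block_map_lookup(logical, &pbn))
                          ref_counts[pbn]++;
          }
  }

  int main(void)
  {
          rebuild_reference_counts();
          return 0;
  }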