15 does in the kernel.
21 This document captures the design of the online filesystem check feature for
23 The purpose of this document is threefold:
25 - To help kernel distributors understand exactly what the XFS online fsck
28 - To help people reading the code to familiarize themselves with the relevant
29 concepts and design points before they start digging into the code.
31 - To help developers maintaining the system by capturing the reasons
34 As the online fsck code is merged, the links in this document to topic branches
37 This document is licensed under the terms of the GNU Public License, v2.
38 The primary author is Darrick J. Wong.
41 Part 1 defines what fsck tools are and the motivations for writing a new one.
44 Part 4 discusses the user interface and the intended usage modes of the new
46 Parts 5 and 6 show off the high level components and how they fit together, and
64 - Retrieve the named data blobs at any time.
71 operations internal to the filesystem, such as internal consistency checking
73 Summary metadata, as the name implies, condense information contained in
76 The filesystem check (fsck) tool examines all the metadata in a filesystem
83 As a word of caution -- the primary goal of most Linux fsck tools is to restore
84 the filesystem metadata to a consistent state, not to maximize the data
88 Filesystems of the 20th century generally lacked any redundancy in the ondisk
98 | System administrators avoid data loss by increasing the number of |
99 | separate storage systems through the creation of backups; and they avoid |
100 | downtime by increasing the redundancy of each storage system through the |
102 | fsck tools address only the first problem. |
105 TLDR; Show Me the Code!
108 Code is posted to the kernel.org git trees as follows:
112 Each kernel patchset adding an online repair function will use the same branch
113 name across the kernel, xfsprogs, and fstests git repos.
118 The online fsck tool described here will be the third tool in the history of
122 The first program, ``xfs_check``, was created as part of the XFS debugger
124 It walks all metadata in the filesystem looking for inconsistencies in the
129 The second program, ``xfs_repair``, was created to be faster and more robust
130 than the first program.
134 while it scans the metadata of the entire filesystem.
135 The most important feature of this tool is its ability to respond to
138 Space usage metadata are rebuilt from the observed file metadata.
143 The current XFS tools leave several problems unsolved:
145 1. **User programs** suddenly **lose access** to the filesystem when unexpected
146 shutdowns occur as a result of silent corruptions in the metadata.
149 2. **Users** experience a **total loss of service** during the recovery period
152 3. **Users** experience a **total loss of service** if the filesystem is taken
155 4. **Data owners** cannot **check the integrity** of their stored data without
158 performed by the storage system administrator might suffice.
161 with corruptions if they **lack the means** to assess filesystem health
162 while the filesystem is online.
171 Given this definition of the problems to be solved and the actors who would
172 benefit, the proposed solution is a third fsck tool that acts on a running
178 ``xfs_scrub`` is the name of the driver program.
179 The rest of this document presents the goals and use cases of the new fsck
181 discusses the similarities and differences with existing tools.
186 | Throughout this document, the existing offline fsck tool can also be |
188 | The userspace driver program for the new online fsck tool can be |
190 | The kernel portion of online fsck that validates metadata is called |
191 | "online scrub", and portion of the kernel that fixes metadata is called |
195 The naming hierarchy is broken up into objects known as directories and files
196 and the physical space is split into pieces known as allocation groups.
198 contain the damage when corruptions occur.
199 The division of the filesystem into principal objects (allocation groups and
201 repairs on a subset of the filesystem.
204 Even if a piece of filesystem metadata can only be regenerated by scanning the
205 entire system, the scan can still be done in the background while other file
209 metadata to enable targeted checking and repair operations while the system
219 The first is the userspace driver program ``xfs_scrub``, which is responsible
221 reacting to the outcomes appropriately, and reporting results to the system
223 The second and third are in the kernel, which implements functions to check
229 | For brevity, this document shortens the phrase "online fsck work |
233 Scrub item types are delineated in a manner consistent with the Unix design
241 the offline fsck program can handle.
242 However, online fsck cannot be running 100% of the time, which means that
244 If these errors cause the next mount to fail, offline fsck is the only
246 This limitation means that maintenance of the offline fsck tool will continue.
247 A second limitation of online fsck is that it must follow the same resource
248 sharing and lock acquisition rules as the regular filesystem.
253 However, both of these limitations are acceptable tradeoffs to satisfy the
262 The userspace driver program ``xfs_scrub`` splits the work of checking and
265 on the success of all previous phases.
266 The seven phases are as follows:
268 1. Collect geometry information about the mounted filesystem and computer,
269 discover the online fsck capabilities of the kernel, and open the
275 If corruption is found in the inode header or inode btree and ``xfs_scrub``
278 Repairs are implemented by using the information in the scrub item to
279 resubmit the kernel scrub call with the repair flag enabled; this is
280 discussed in the next section.
283 3. Check all metadata of every file in the filesystem.
292 phase, if the caller permits them.
293 Before starting repairs, the summary counters are checked and any necessary
294 repairs are performed so that subsequent repairs will not fail the resource
297 made somewhere in the filesystem.
298 Free space in the filesystem is trimmed at the end of phase 4 if the
301 5. By the start of this phase, all primary and secondary filesystem metadata
303 Summary counters such as the free space counts and quota resource counts
309 6. If the caller asks for a media scan, read all allocated and written data
310 file extents in the filesystem.
The ability to use hardware-assisted data file integrity checking is new
to online fsck; neither of the previous tools has this capability.
313 If media errors occur, they will be mapped to the owning files and reported.
7. Re-check the summary counters and present the caller with a summary of
   what was found and repaired.
324 The kernel scrub code uses a three-step strategy for checking and repairing
325 the one aspect of a metadata object represented by a scrub item:
327 1. The scrub item of interest is checked for corruptions; opportunities for
328 optimization; and for values that are directly controlled by the system
If the item is not corrupt or does not need optimization, resources are
released and the positive scan results are returned to userspace.
332 If the item is corrupt or could be optimized but the caller does not permit
333 this, resources are released and the negative scan results are returned to
335 Otherwise, the kernel moves on to the second step.
337 2. The repair function is called to rebuild the data structure.
339 rather than try to salvage the existing structure.
340 If the repair fails, the scan results from the first step are returned to
342 Otherwise, the kernel moves on to the third step.
344 3. In the third step, the kernel runs the same checks over the new metadata
345 item to assess the efficacy of the repairs.
346 The results of the reassessment are returned to userspace.
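
The dispatch logic is easiest to see as a sketch.
The names below (``scrub_item``, ``scrub_one_item``, and the outcome flags)
are illustrative stand-ins rather than the kernel's actual interfaces; only
the three-step flow itself comes from the description above.

.. code-block:: c

    /* Illustrative model of the check -> repair -> re-check strategy. */

    #include <stdbool.h>

    #define SCRUB_OFLAG_CORRUPT    (1U << 0)    /* metadata is corrupt */
    #define SCRUB_OFLAG_PREEN      (1U << 1)    /* could be optimized */

    struct scrub_item {
        unsigned int    oflags;     /* outcome of the last check */
        int             (*check)(struct scrub_item *si);
        int             (*repair)(struct scrub_item *si);
    };

    /* Returns 0 with si->oflags describing the scan, or a negative errno. */
    static int scrub_one_item(struct scrub_item *si, bool repair_allowed)
    {
        int error;

        /* Step 1: check for corruption and optimization opportunities. */
        error = si->check(si);
        if (error)
            return error;

        /* Clean, or the caller forbids changes: report the scan results. */
        if (!(si->oflags & (SCRUB_OFLAG_CORRUPT | SCRUB_OFLAG_PREEN)))
            return 0;
        if (!repair_allowed)
            return 0;

        /* Step 2: rebuild the item from other metadata. */
        error = si->repair(si);
        if (error)
            return error;           /* keep the step 1 results */

        /* Step 3: run the same checks again to assess the repair. */
        return si->check(si);
    }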
358 users either because they are directly created by the user or they index
359 objects created by the user
376 Scrub obeys the same rules as regular filesystem accesses for resource and lock
379 Primary metadata objects are the simplest for scrub to process.
380 The principal filesystem object (either an allocation group or an inode) that
381 owns the item being scrubbed is locked to guard against concurrent updates.
382 The check function examines every record associated with the type for obvious
385 Repairs for this class of scrub item are simple, since the repair function
386 starts by holding all the resources acquired in the previous step.
387 The repair function scans available metadata as needed to record all the
388 observations needed to complete the structure.
389 Next, it stages the observations in a new ondisk structure and commits it
390 atomically to complete the repair.
Finally, the storage from the old data structure is carefully reaped.
393 Because ``xfs_scrub`` locks a primary object for the duration of the repair,
394 this is effectively an offline repair operation performed on a subset of the
396 This minimizes the complexity of the repair code because it is not necessary to
398 any other part of the filesystem.
400 trying to access the damaged structure will be blocked until repairs complete.
401 The only infrastructure needed by the repair code are the staging area for
403 Despite these limitations, the advantage that online repair holds is clear:
404 targeted work on individual shards of the filesystem avoids total loss of
413 in-memory array prior to formatting the new ondisk structure, which is very
414 similar to the list-based algorithm discussed in section 2.3 ("List-Based
416 However, any data structure builder that maintains a resource lock for the
417 duration of the repair is *always* an offline algorithm.
425 but are only needed for online fsck or for reorganization of the filesystem.
434 to the secondary object but needs to check primary metadata, which runs counter
435 to the usual order of resource acquisition.
436 Frequently, this means that full filesystems scans are necessary to rebuild the
441 Under these conditions, ``xfs_scrub`` cannot lock resources for the entire
442 duration of the repair.
446 Depending on the requirements of the specific repair function, the staging
447 index will either have the same format as the ondisk structure or a design
449 The next step is to release all locks and start the filesystem scan.
450 When the repair scanner needs to record an observation, the staging data are
451 locked long enough to apply the update.
452 While the filesystem scan is in progress, the repair function hooks the
453 filesystem so that it can apply pending filesystem updates to the staging
455 Once the scan is done, the owning object is re-locked, the live data is used to
456 write a new ondisk structure, and the repairs are committed atomically.
The hooks are disabled and the staging area is freed.
Finally, the storage from the old data structure is carefully reaped.
462 Live filesystem code has to be hooked so that the repair function can observe
464 The staging area has to become a fully functional parallel structure so that
465 updates can be merged from the hooks.
466 Finally, the hook, the filesystem scan, and the inode locking model must be
468 should be applied to the staging structure.
470 In theory, the scrub implementation could apply these same techniques for
473 Programs attempting to access the damaged structures are not blocked from
477 Inspiration for the secondary metadata repair strategy was drawn from section
483 The sidecar index mentioned above bears some resemblance to the side file
486 build the new structure as quickly as possible; and an auxiliary structure that
487 captures all updates that would be committed to the index by other threads were
488 the new index already online.
489 After the index building scan finishes, the updates recorded in the side file
490 are applied to the new index.
491 To avoid conflicts between the index builder and other writer threads, the
492 builder maintains a publicly visible cursor that tracks the progress of the
493 scan through the record space.
494 To avoid duplication of work between the side file and the index builder, side
495 file updates are elided when the record ID for the update is greater than the
496 cursor position within the record ID space.
498 To minimize changes to the rest of the codebase, XFS online repair keeps the
500 In other words, there is no attempt to expose the keyspace of the new index
502 The complexity of such an approach would be very high and perhaps more
505 **Future Work Question**: Can the full scan and live update code used to
509 employed these live scans to build a shadow copy of the metadata and then
510 compared the shadow records to the ondisk records.
511 However, doing that is a fair amount more work than what the checking functions
513 The live scans and hooks were developed much later.
514 That in turn increases the runtime of those scrub functions.
519 Metadata structures in this last category summarize the contents of primary
522 smaller than the primary metadata which they represent.
533 acquisition follow the same paths as regular filesystem accesses.
535 The superblock summary counters have special requirements due to the underlying
536 implementation of the incore counters, and will be treated separately.
537 Check and repair of the other types of summary counters (quota resource counts
538 and file link counts) employ the same filesystem scanning and hooking
539 techniques as outlined above, but because the underlying data are sets of
540 integer counters, the staging data need not be a fully functional mirror of the
550 quotacheck can use the incremental view deltas described in section 2.14 to
551 track pending changes to the block and inode usage counts in each transaction,
552 and commit those changes to a dquot side file when the transaction commits.
553 Delta tracking is necessary for dquots because the index builder scans inodes,
554 whereas the data structure being rebuilt is an index of dquots.
555 Link count checking combines the view deltas and commit step into one because
556 it sets attributes of the objects being scanned instead of writing them to a
564 During the development of online fsck, several risk factors were identified
565 that may make the feature unsuitable for certain distributors and users.
569 - **Decreased performance**: Adding metadata indices to the filesystem
570 increases the time cost of persisting changes to disk, and the reverse space
572 System administrators who require the maximum performance can disable the
574 reduces the ability of online fsck to find inconsistencies and repair them.
576 - **Incorrect repairs**: As with all software, there might be defects in the
577 software that result in incorrect repairs being written to the filesystem.
578 Systematic fuzz testing (detailed in the next section) is employed by the
580 The kernel build system provides Kconfig options (``CONFIG_XFS_ONLINE_SCRUB``
583 The xfsprogs build system has a configure option (``--enable-scrub=no``) that
584 disables building of the ``xfs_scrub`` binary, though this is not a risk
585 mitigation if the kernel functionality remains enabled.
589 If the keyspaces of several metadata indices overlap in some manner but a
590 coherent narrative cannot be formed from records collected, then the repair
592 To reduce the chance that a repair will fail with a dirty transaction and
593 render the filesystem unusable, the online repair functions have been
594 designed to stage and validate all new records before committing the new
599 and the ability to perform administrative changes.
600 Running this automatically in the background scares people, so the systemd
601 background service is configured to run with only the privileges required.
602 Obviously, this cannot address certain problems like the kernel crashing or
603 deadlocking, but it should be sufficient to prevent the scrub process from
604 escaping and reconfiguring the system.
605 The cron job does not have this protection.
609 spraying exploit code onto the public mailing list for instant zero-day
611 In the view of this author, the benefit is realized only when the fuzz
612 operators help to **fix** the flaws, but this opinion apparently is not
614 The XFS maintainers' continuing ability to manage these events presents an
615 ongoing risk to the stability of the development process.
616 Automated testing should front-load some of the risk while the feature is
628 1. Detect inconsistencies in the metadata;
635 that the software behaves within expectations.
637 of every aspect of a fsck tool until the introduction of low-cost virtual
639 With ample hardware availability in mind, the testing strategy for the online
640 fsck project involves differential analysis against the existing fsck tools and
647 The primary goal of any free software QA effort is to make testing as
648 inexpensive and widespread as possible to maximize the scaling advantages of
650 In other words, testing should maximize the breadth of filesystem configuration
652 This improves code quality by enabling the authors of online fsck to find and
656 The Linux filesystem community shares a common QA testing suite,
660 would run both the ``xfs_check`` and ``xfs_repair -n`` commands on the test and
662 This provides a level of assurance that the kernel and the fsck tools stay in
664 During development of the online checking code, fstests was modified to run
665 ``xfs_scrub -n`` between each test to ensure that the new checking code
666 produces the same results as the two existing fsck tools.
669 ``xfs_repair`` to rebuild the filesystem's metadata indices between tests.
671 after it exists, or trigger complaints from the online check.
673 To complete the first phase of development of online repair, fstests was
675 This enables a comparison of the effectiveness of online repair as compared to
676 the existing offline repair tools.
684 to test the rather common fault that entire metadata blocks get corrupted.
685 This required the creation of fstests library code that can create a filesystem
688 a single block of a specific type of metadata object, trash it with the
689 existing ``blocktrash`` command in ``xfs_db``, and test the reaction of a
692 This earlier test suite enabled XFS developers to test the ability of the
693 in-kernel validation functions and the ability of the offline fsck tool to
694 detect and eliminate the inconsistent metadata.
695 This part of the test suite was extended to cover online fsck in exactly the
700 * For each metadata object existing on the filesystem:
704 * Test the reactions of:
706 1. The kernel verifiers to stop obviously bad metadata
713 The testing plan for online fsck includes extending the existing fs testing
715 of every metadata field of every metadata object in the filesystem.
717 block in the filesystem to simulate the effects of memory corruption and
719 Given that fstests already contains the ability to create a filesystem
720 containing every metadata format known to the filesystem, ``xfs_db`` can be
725 * For each metadata object existing on the filesystem...
735 3. Toggle the most significant bit
736 4. Toggle the middle bit
737 5. Toggle the least significant bit
740 8. Randomize the contents
742 * ...test the reactions of:
744 1. The kernel verifiers to stop obviously bad metadata
751 This is quite the combinatoric explosion!
754 check the responses of XFS' fsck tools.
755 Since the introduction of the fuzz testing framework, these tests have been
758 The enhanced testing was used to finalize the deprecation of ``xfs_check`` by
760 the older tool.
762 These tests have been very valuable for ``xfs_scrub`` in the same ways -- they
763 allow the online fsck developers to compare online fsck against offline fsck,
764 and they enable XFS developers to find deficiencies in the code base.
777 A unique requirement to online fsck is the ability to operate on a filesystem
780 impact on the running system, the online repair code should never introduce
781 inconsistencies into the filesystem metadata, and regular workloads should
784 the following ways:
790 * Race ``fsstress`` and ``xfs_scrub -n`` to ensure that checking the whole
793 force-repairing the whole filesystem doesn't cause problems.
795 freezing and thawing the filesystem.
797 remounting the filesystem read-only and read-write.
798 * The same, but running ``fsx`` instead of ``fsstress``. (Not done yet?)
800 Success is defined by the ability to run all of these tests without observing
806 and the `evolution of existing per-function stress testing
812 The primary user of online fsck is the system administrator, just like offline
821 For administrators who want the absolute freshest information about the
824 The program checks every piece of metadata in the filesystem while the
825 administrator waits for the results to be reported, just like the existing
828 option to increase the verbosity of the information reported.
830 A new feature of ``xfs_scrub`` is the ``-x`` option, which employs the error
831 correction capabilities of the hardware to check data file contents.
832 The media scan is not enabled by default because it may dramatically increase
835 The output of a foreground invocation is captured in the system log.
837 The ``xfs_scrub_all`` program walks the list of mounted filesystems and
839 It serializes scans for any filesystems that resolve to the same top level
845 To reduce the workload of system administrators, the ``xfs_scrub`` package
848 The background service configures scrub to run with as little privilege as
849 possible, the lowest CPU and IO priority, and in a CPU-constrained single
851 This can be tuned by the systemd administrator at any time to suit the latency
854 The output of the background service is also captured in the system log.
856 errors) can be emailed automatically by setting the ``EMAIL_ADDR`` environment
857 variable in the following service files:
863 The decision to enable the background scan is left to the system administrator.
864 This can be done by enabling either of the following services:
869 This automatic weekly scan is configured out of the box to perform an
873 redundancy can be provided elsewhere above the filesystem, or the storage
876 The systemd unit file definitions have been subjected to a security audit
877 (as of systemd 249) to ensure that the xfs_scrub processes have as little
878 access to the rest of the system as possible.
were restricted to the minimum required; sandboxing and system call filtering
were set up to the maximal extent possible; and access to the filesystem tree
was restricted to the minimum needed to start the program and access the
filesystem being scanned.
884 The service definition files restrict CPU usage to 80% of one CPU core, and
886 This measure was taken to minimize delays in the rest of the filesystem.
887 No such hardening has been performed for the cron job.
890 `Enabling the xfs_scrub background service
897 The information is updated whenever ``xfs_scrub`` is run, or whenever
898 inconsistencies are detected in the filesystem metadata during regular
900 System administrators should use the ``health`` command of ``xfs_spaceman`` to
902 If problems have been observed, the administrator can schedule a reduced
903 service window to run the online repair tool to correct the problem.
904 Failing that, the administrator can decide to schedule a maintenance window to
905 run the traditional offline repair tool to correct the problem.
907 **Future Work Question**: Should the health reporting integrate with the new
912 *Answer*: These questions remain unanswered, but should be a part of the
925 This section discusses the key algorithms and data structures of the kernel
926 code that provide the ability to check and repair metadata while the system
928 The first chapters in this section reveal the pieces that provide the
930 The remainder of this section presents the mechanisms through which XFS
936 Starting with XFS version 5 in 2012, XFS updated the format of nearly every
938 "unique" identifier (UUID), an owner code, the ondisk address of the block,
940 When loading a block buffer from disk, the magic number, UUID, owner, and
941 ondisk address confirm that the retrieved block matches the specific owner of
942 the current filesystem, and that the information contained in the block is
943 supposed to be found at the ondisk address.
944 The first three components enable checking tools to disregard alleged metadata
945 that doesn't belong to the filesystem, and the fourth component enables the
948 Whenever a file system operation modifies a block, the change is submitted
949 to the log as part of a transaction.
950 The log then processes these transactions marking them done once they are
952 The logging code maintains the checksum and the log sequence number of the last
955 be introduced between the computer and its storage devices.
957 log updates to the filesystem.
960 the filesystem to detect obvious corruption when reading metadata blocks from
964 For more information, please see the documentation for
970 The original design of XFS (circa 1993) is an improvement upon 1980s Unix
975 the filesystem, even at the cost of data integrity.
Filesystem designers in the early 21st century chose different strategies to
980 For XFS, a different redundancy strategy was chosen to modernize the design:
983 By adding a new index, the filesystem retains most of its ability to scale
984 well to heavily threaded workloads involving large datasets, since the primary
985 file metadata (the directory tree, the file block map, and the allocation
987 Like any system that improves redundancy, the reverse-mapping feature increases
989 However, it has two critical advantages: first, the reverse index is key to
992 Second, the different ondisk storage format of the reverse mapping btree
993 defeats device-level deduplication because the filesystem requires real
999 | A criticism of adding the secondary index is that it does nothing to |
1000 | improve the robustness of user data storage itself. |
1003 | copy-writes, which age the filesystem prematurely. |
1006 | As for metadata, the complexity of adding a new secondary index of space |
1010 | layers in the kernel. |
The information captured in a reverse space mapping record is as follows:

.. code-block:: c

    struct xfs_rmap_irec {
        xfs_agblock_t    rm_startblock;   /* extent start block */
        xfs_extlen_t     rm_blockcount;   /* extent length */
        uint64_t         rm_owner;        /* extent owner */
        uint64_t         rm_offset;       /* offset within the owner */
        unsigned int     rm_flags;        /* state flags */
    };

The first two fields capture the location and size of the physical space,
in units of filesystem blocks.
The owner field tells scrub which metadata structure or file inode has been
allocated this space.
For space allocated to files, the offset field tells scrub where the space was
mapped within the file fork.
Finally, the flags field provides extra information about the space usage --
for example, whether the mapping covers an unwritten extent, an extended
attribute fork mapping, or a block mapping btree block.
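
As a purely illustrative example, a sixteen-block extent of file data might be
recorded as follows; the numeric values are invented for the sake of the
example.

.. code-block:: c

    /* Hypothetical record: 16 blocks starting at AG block 40, owned by
     * inode 131, mapped at file block offset 0, with no state flags set. */
    struct xfs_rmap_irec example_rmap = {
        .rm_startblock  = 40,
        .rm_blockcount  = 16,
        .rm_owner       = 131,
        .rm_offset      = 0,
        .rm_flags       = 0,
    };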
1035 Online filesystem checking judges the consistency of each primary metadata
1037 The reverse mapping index plays a key role in the consistency checking process
1040 Program runtime and ease of resource acquisition are the only real limits to
1044 * The absence of an entry in the free space information.
1045 * The absence of an entry in the inode index.
1046 * The absence of an entry in the reference count data if the file is not
1048 * The correspondence of an entry in the reverse mapping information.
1053 the above primary metadata are in doubt.
1054 The checking code for most primary metadata follows a path similar to the
1057 2. Proving the consistency of secondary metadata with the primary metadata is
1061 btree block requires locking the file and searching the entire btree to
1062 confirm the block.
1063 Instead, scrub relies on rigorous cross-referencing during the primary space
1066 3. Consistency scans must use non-blocking lock acquisition primitives if the
1067 required locking order is not the same order used by regular filesystem
1069 For example, if the filesystem normally takes a file ILOCK before taking
1070 the AGF buffer lock but scrub wants to take a file ILOCK while holding
1072 This means that forward progress during this part of a scan of the reverse
1077 The details of how these records are staged, written to disk, and committed
1078 into the filesystem are covered in subsequent sections.
1083 The first step of checking a metadata structure is to examine every record
1084 contained within the structure and its relationship with the rest of the
1087 metadata from wreaking havoc on the system.
1088 Each of these layers contributes information that helps the kernel to make
1089 three decisions about the health of a metadata structure:
1092 - Is this structure inconsistent with the rest of the system
1094 - Is there so much damage around the filesystem that cross-referencing is not
1096 - Can the structure be optimized to improve performance or reduce the size of
1098 - Does the structure contain data that is not inconsistent but deserves review
1099 by the system administrator (``XFS_SCRUB_OFLAG_WARNING``) ?
1101 The following sections describe how the metadata scrubbing process works.
The lowest layer of metadata protection in XFS is the set of metadata verifiers built
1107 into the buffer cache.
1108 These functions perform inexpensive internal consistency checking of the block
1111 - Does the block belong to this filesystem?
1113 - Does the block belong to the structure that asked for the read?
1117 - Is the type of data stored in the block within a reasonable range of what
1120 - Does the physical location of the block match the location it was read from?
1122 - Does the block checksum match the data?
The scope of the protections here is very limited -- verifiers can only
1125 establish that the filesystem code is reasonably free of gross corruption bugs
1126 and that the storage system is reasonably competent at retrieval.
1127 Corruption problems observed at runtime cause the generation of health reports,
1128 failed system calls, and in the extreme case, filesystem shutdowns if the
1129 corrupt metadata force the cancellation of a dirty transaction.
1132 block of a structure in the course of checking the structure.
1135 failure to cross-reference once the full examination is complete.
1142 After the buffer cache, the next level of metadata protection is the internal
1143 record verification code built into the filesystem.
1144 These checks are split between the buffer verifiers, the in-filesystem users of
1145 the buffer cache, and the scrub code itself, depending on the amount of higher
1147 The scope of checking is still internal to the block.
1150 - Does the type of data stored in the block match what scrub is expecting?
1152 - Does the block belong to the owning structure that asked for the read?
1154 - If the block contains records, do the records fit within the block?
1156 - If the block tracks internal free space information, is it consistent with
1157 the record areas?
1159 - Are the records contained inside the block free of obvious corruptions?
1163 within the dynamically allocated parts of an allocation group and within
1164 the filesystem.
1168 Btree records spanning an interval of the btree keyspace are checked for
1179 that a value is within the possible range.
1194 - Quota timer expiration (if resource usage exceeds the soft limit)
1199 After internal block checks, the next higher level of checking is
1201 For regular runtime code, the cost of these checks is considered to be
1204 The exact set of cross-referencing is highly dependent on the context of the
1207 The XFS btree code has keyspace scanning functions that online fsck uses to
1209 Specifically, scrub can scan the key space of an index to determine if that
1211 For the reverse mapping btree, it is possible to mask parts of the key for the
1212 purposes of performing a keyspace scan so that scrub can decide if the rmap
1213 btree contains records mapping a certain extent of physical space without the
sparseness of the rest of the rmap keyspace getting in the way.
1216 Btree blocks undergo the following checks before cross-referencing:
1218 - Does the type of data stored in the block match what scrub is expecting?
1220 - Does the block belong to the owning structure that asked for the read?
1222 - Do the records fit within the block?
1224 - Are the records contained inside the block free of obvious corruptions?
1226 - Are the name hashes in the correct order?
1228 - Do node pointers within the btree point to valid block addresses for the type
1231 - Do child pointers point towards the leaves?
1233 - Do sibling pointers point across the same level?
- For each node block record, does the record key accurately reflect the
  contents of the child block?
1243 - Does the reverse mapping index list only the appropriate owner as the
1246 - Are none of the blocks claimed as free space?
1248 - If these aren't file data blocks, are none of the blocks claimed as space
1255 - If there's a parent node block, do the keys listed for this block match the
1258 - Do the sibling pointers point to valid blocks? Of the same level?
1260 - Do the child pointers point to valid blocks? Of the next level down?
1266 - Does the reverse mapping index list no owners of this space?
1268 - Is this space not claimed by the inode index for inodes?
1270 - Is it not mentioned by the reference count index?
1272 - Is there a matching record in the other free space btree?
1280 - Do cleared bits in the holemask correspond with inode clusters?
1282 - Do set bits in the freemask correspond with inode records with zero link
1289 - Do all the fields that summarize information about the file forks actually
1292 - Does each inode with zero link count correspond to a record in the free
1299 - Is this space not mentioned by the inode btrees?
1301 - If this is a CoW fork mapping, does it correspond to a CoW entry in the
1308 - Within the space subkeyspace of the rmap btree (that is to say, all
1309 records mapped to a particular space extent and ignoring the owner info),
1310 are there the same number of reverse mapping records for each block as the
1313 Proposed patchsets are the series to find gaps in
1333 Both the kernel and userspace can access the keys and values, subject to
1335 Most typically these fragments are metadata about the file -- origins, security
1341 A file's extended attributes are stored in blocks mapped by the attr fork.
1342 The mappings point to leaf blocks, remote value blocks, or dabtree blocks.
1343 Block 0 in the attribute fork is always the top of the structure, but otherwise
1344 each of the three types of blocks can be found at any offset in the attr fork.
1345 Leaf blocks contain attribute key records that point to the name and the value.
1346 Names are always stored elsewhere in the same leaf block.
1347 Values that are less than 3/4 the size of a filesystem block are also stored
1348 elsewhere in the same leaf block.
1350 If the leaf information exceeds a single filesystem block, a dabtree (also
1351 rooted at block 0) is created to map hashes of the attribute names to leaf
1352 blocks in the attr fork.
1354 Checking an extended attribute structure is not so straightforward due to the
1356 Scrub must read each block mapped by the attr fork and ignore the non-leaf
1359 1. Walk the dabtree in the attr fork (if present) to ensure that there are no
1360 irregularities in the blocks or dabtree mappings that do not point to
1363 2. Walk the blocks of the attr fork looking for leaf blocks.
1366 a. Validate that the name does not contain invalid characters.
1368 b. Read the attr value.
1369 This performs a named lookup of the attr name to ensure the correctness
1370 of the dabtree.
1371 If the value is stored in a remote block, this also validates the
1372 integrity of the remote value block.
The filesystem directory tree is a directed acyclic graph structure, with files
1378 constituting the nodes, and directory entries (dirents) constituting the edges.
1382 Each directory file must have exactly one directory pointing to the file.
1389 The first partition contains directory entry data blocks.
1392 If the directory entry data grows beyond one block, the second partition (which
1394 information and an index that maps hashes of the dirent names to directory data
1395 blocks in the first partition.
1397 If this second partition grows beyond one block, the third partition is
1400 If the free space has been separated and the second partition grows again
1406 1. Walk the dabtree in the second partition (if present) to ensure that there
1407 are no irregularities in the blocks or dabtree mappings that do not point to
1410 2. Walk the blocks of the first partition looking for directory entries.
1413 a. Does the name contain no invalid characters?
1415 b. Does the inumber correspond to an actual, allocated inode?
1417 c. Does the child inode have a nonzero link count?
1419 d. If a file type is included in the dirent, does it match the type of the
1422 e. If the child is a subdirectory, does the child's dotdot pointer point
1423 back to the parent?
1425 f. If the directory has a second partition, perform a named lookup of the
1426 dirent name to ensure the correctness of the dabtree.
1428 3. Walk the free space list in the third partition (if present) to ensure that
1429 the free spaces it describes are really unused.
1438 As stated in previous sections, the directory/attribute btree (dabtree) index
1440 Internally, it maps a 32-bit hash of the name to a block offset within the
1443 The internal structure of a dabtree closely resembles the btrees that record
The format of leaf and node records is the same -- each entry points to the
1447 next level down in the hierarchy, with dabtree node records pointing to dabtree
1449 in the fork.
1451 Checking and cross-referencing the dabtree is very similar to what is done for
1454 - Does the type of data stored in the block match what scrub is expecting?
1456 - Does the block belong to the owning structure that asked for the read?
1458 - Do the records fit within the block?
1460 - Are the records contained inside the block free of obvious corruptions?
1462 - Are the name hashes in the correct order?
1464 - Do node pointers within the dabtree point to valid fork offsets for dabtree
1467 - Do leaf pointers within the dabtree point to valid fork offsets for directory
1470 - Do child pointers point towards the leaves?
1472 - Do sibling pointers point across the same level?
- For each dabtree node record, does the record key accurately reflect the
  contents of the child dabtree block?
- For each dabtree leaf record, does the record key accurately reflect the
  contents of the directory or attr block?
1486 In theory, the amount of available resources (data blocks, inodes, realtime
1487 extents) can be found by walking the entire filesystem.
1489 maintain summaries of this information in the superblock.
1490 Cross-referencing these values against the filesystem metadata should be a
1491 simple matter of walking the free space and inode metadata in each AG and the
1501 After performing a repair, the checking code is run a second time to validate
1502 the new structure, and the results of the health assessment are recorded
1503 internally and returned to the calling process.
This step is critical for enabling the system administrator to monitor the status
1505 of the filesystem and the progress of any repairs.
1506 For developers, it is a useful means to judge the efficacy of error detection
1507 and correction in the online and offline checking tools.
1514 These chains, once committed to the log, are restarted during log recovery if
1515 the system crashes while processing the chain.
1516 Because the AG header buffers are unlocked between transactions within a chain,
1520 the metadata are temporarily inconsistent with each other, and rebuilding is
1528 The count should be bumped whenever a new item is added to the chain.
1529 The count should be dropped when the filesystem has locked the AG header
1530 buffers and finished the work.
1532 * When online fsck wants to examine an AG, it should lock the AG header
1534 If the count is zero, proceed with the checking operation.
1535 If it is nonzero, cycle the buffer locks to allow the chain to make forward
1540 Details about the discovery of this situation are presented in the
1541 :ref:`next section <chain_coordination>`, and details about the solution
1546 Discovery of the Problem
1549 Midway through the development of online scrubbing, the fsstress tests
1553 The root cause of these reports is the eventual consistency model introduced by
1554 the expansion of deferred work items and compound transaction chains when
1564 items to commit to freeing some space in one transaction while deferring the
1566 The transaction sequence looks like this:
1568 1. The first transaction contains a physical update to the file's block mapping
1569 structures to remove the mapping from the btree blocks.
1570 It then attaches to the in-memory transaction an action item to schedule
1575 Returning to the example above, the action item tracks the freeing of both
1576 the unmapped space from AG 7 and the block mapping btree (BMBT) block from
1578 Deferred frees recorded in this manner are committed in the log by creating
1579 an EFI log item from the ``struct xfs_extent_free_item`` object and
1580 attaching the log item to the transaction.
1581 When the log is persisted to disk, the EFI item is written into the ondisk
1585 2. The second transaction contains a physical update to the free space btrees
1586 of AG 3 to release the former BMBT block and a second physical update to the
1587 free space btrees of AG 7 to release the unmapped file space.
1588 Observe that the physical updates are resequenced in the correct order
Attached to the transaction is an extent free done (EFD) log item.
1591 The EFD contains a pointer to the EFI logged in transaction #1 so that log
1592 recovery can tell if the EFI needs to be replayed.
1594 If the system goes down after transaction #1 is written back to the filesystem
1595 but before #2 is committed, a scan of the filesystem metadata would show
1597 of the unmapped space.
1600 reconstruct the incore state of the intent item and finish it.
1601 In the example above, the log must replay both frees described in the recovered
1602 EFI to complete the recovery phase.
1606 * Log items must be added to a transaction in the correct order to prevent
1607 conflicts with principal objects that are not held by the transaction.
1609 completed before the last update to free the extent, and extents should not
1610 be reallocated until that last update commits to the log.
1614 but as long as the first subtlety is handled, this should not affect the
1617 * Unmounting the filesystem flushes all pending work to disk, which means that
1618 offline fsck never sees the temporary inconsistencies caused by deferred
1624 During the design phase of the reverse mapping and reflink features, it was
1625 decided that it was impractical to cram all the reverse mapping updates for a
1629 * The block mapping update itself
1630 * A reverse mapping update for the block mapping update
1631 * Fixing the freelist
1632 * A reverse mapping update for the freelist fix
1634 * A shape change to the block mapping btree
1635 * A reverse mapping update for the btree update
1636 * Fixing the freelist (again)
1637 * A reverse mapping update for the freelist fix
1639 * An update to the reference counting information
1640 * A reverse mapping update for the refcount update
1641 * Fixing the freelist (a third time)
1642 * A reverse mapping update for the freelist fix
1645 * Fixing the freelist (a fourth time)
1646 * A reverse mapping update for the freelist fix
1648 * Freeing the space used by the block mapping btree
1649 * Fixing the freelist (a fifth time)
1650 * A reverse mapping update for the freelist fix
1655 remove the space from a staging area and again to map it into the file!
1659 This reduces the worst case size of transaction reservations by breaking the
1660 work into a long chain of small updates, which increases the degree of eventual
1661 consistency in the system.
1665 However, online fsck changes the rules -- remember that although physical
1666 updates to per-AG structures are coordinated by locking the buffers for AG
1669 all the validation work without releasing the lock.
1670 If the main lock for a space btree is an AG header buffer lock, scrub may have
1673 mapping update but not the corresponding refcount update, the two AG btrees
1676 If a repair is attempted in this state, the results will be catastrophic!
1682 acquire the higher level lock in AG order before making any changes.
1685 without simulating the entire operation.
1687 make the filesystem very slow.
1689 2. Make the deferred work coordinator code aware of consecutive intent items
1690 targeting the same AG and have it hold the AG header buffers locked across
1691 the transaction roll between updates.
1692 This would introduce a lot of complexity into the coordinator since it is
1693 only loosely coupled with the actual deferred work items.
1694 It would also fail to solve the problem because deferred work items can
1699 protect the data structure being scrubbed to look for pending operations.
1700 The checking and repair operations must factor these pending operations into
1701 the evaluations being performed.
1702 This solution is a nonstarter because it is *extremely* invasive to the main
1712 There are two key properties to the drain mechanism.
1713 First, the counter is incremented when a deferred work item is *queued* to a
1714 transaction, and it is decremented after the associated intent done log item is
1716 The second property is that deferred work can be added to a transaction without
1718 locking that AG header buffer to log the physical updates and the intent done
1720 The first property enables scrub to yield to running transaction chains, which
1722 The second property of the drain is key to the correct coordination of scrub,
1725 For regular filesystem code, the drain works as follows:
1727 1. Call the appropriate subsystem function to add a deferred work item to a
1730 2. The function calls ``xfs_defer_drain_bump`` to increase the counter.
1732 3. When the deferred item manager wants to finish the deferred work item, it
1735 4. The ``->finish_item`` implementation logs some changes and calls
1736 ``xfs_defer_drain_drop`` to decrease the sloppy counter and wake up any threads
1737 waiting on the drain.
1739 5. The subtransaction commits, which unlocks the resource associated with the
1742 For scrub, the drain works as follows:
1744 1. Lock the resource(s) associated with the metadata being scrubbed.
1745 For example, a scan of the refcount btree would lock the AGI and AGF header
1748 2. If the counter is zero (``xfs_defer_drain_busy`` returns false), there are no
1749 chains in progress and the operation may proceed.
1751 3. Otherwise, release the resources grabbed in step 1.
1753 4. Wait for the intent counter to reach zero (``xfs_defer_drain_intents``), then go
1756 To avoid polling in step 4, the drain provides a waitqueue for scrub threads to
1757 be woken up whenever the intent count drops to zero.
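
A sketch of the scrub-side protocol is shown below.
The ``scrub_ag_lock`` and ``scrub_ag_unlock`` helpers, the ``scrub_ag``
context, and its embedded ``drain`` field are placeholders for whatever
resources a particular scrubber must hold; only the drain calls use the names
referenced in this section, and their exact prototypes are assumed.

.. code-block:: c

    /* Sketch of steps 1-4 above; not the exact kernel implementation. */
    static int scrub_drain_intents(struct scrub_ag *sa)
    {
        int error = 0;

        while (!error) {
            scrub_ag_lock(sa);                         /* step 1 */
            if (!xfs_defer_drain_busy(&sa->drain))     /* step 2 */
                return 0;      /* no chains in progress; locks held */
            scrub_ag_unlock(sa);                       /* step 3 */

            /* Step 4: sleep until the intent count drops to zero. */
            error = xfs_defer_drain_intents(&sa->drain);
        }
        return error;
    }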
1759 The proposed patchset is the
1768 Online fsck for XFS separates the regular filesystem from the checking and
1770 However, there are a few parts of online fsck (such as the intent drains, and
1771 later, live update hooks) where it is useful for the online fsck code to know
1772 what's going on in the rest of the filesystem.
1773 Since it is not expected that online fsck will be constantly running in the
1774 background, it is very important to minimize the runtime overhead imposed by
1775 these hooks when online fsck is compiled into the kernel but not actively
1777 Taking locks in the hot path of a writer thread to access a data structure only
1778 to find that no further action is necessary is expensive -- on the author's
1780 Fortunately, the kernel supports dynamic code patching, which enables XFS to
1783 This sled has an overhead of however long it takes the instruction decoder to
1784 skip past the sled, which seems to be on the order of less than 1ns and
1787 When online fsck enables the static key, the sled is replaced with an
1788 unconditional branch to call the hook code.
1789 The switchover is quite expensive (~22000ns) but is paid entirely by the
1791 enter online fsck at the same time, or if multiple filesystems are being
1792 checked at the same time.
1793 Changing the branch direction requires taking the CPU hotplug lock, and since
1796 accessed in the memory reclaim paths.
1797 To minimize contention on the CPU hotplug lock, care should be taken not to
1801 filesystem operations when xfs_scrub is not running, the intended usage
1804 - The hooked part of XFS should declare a static-scoped static key that
1806 The ``DEFINE_STATIC_KEY_FALSE`` macro takes care of this.
1807 The static key itself should be declared as a ``static`` variable.
1809 - When deciding to invoke code that's only used by scrub, the regular
1810 filesystem should call the ``static_branch_unlikely`` predicate to avoid the
1811 scrub-only hook code if the static key is not enabled.
1813 - The regular filesystem should export helper functions that call
1814 ``static_branch_inc`` to enable and ``static_branch_dec`` to disable the
1816 Wrapper functions make it easy to compile out the relevant code if the kernel
1820 the ``xchk_fsgates_enable`` from the setup function to enable a specific
1824 Callers had better be sure they really need the functionality gated by the
1825 static key; the ``TRY_HARDER`` flag is useful here.
1829 If it detects a conflict between scrub and the running transactions, it will
1831 If the caller of the helper has not enabled the static key, the helper will
1832 return -EDEADLOCK, which should result in the scrub being restarted with the
1834 The scrub setup function should detect that flag, enable the static key, and
1835 try the scrub again.
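
Putting those pieces together, a hooked subsystem might be wired up as in the
sketch below.
The static key macros (``DEFINE_STATIC_KEY_FALSE``, ``static_branch_unlikely``,
``static_branch_inc``, ``static_branch_dec``) are the standard kernel jump
label interfaces; the ``xfs_foo`` names are hypothetical.

.. code-block:: c

    #include <linux/jump_label.h>

    struct xfs_foo;

    static DEFINE_STATIC_KEY_FALSE(xfs_foo_hooks_switch);

    /* Hot path: compiles to a nop sled while scrub is not running. */
    void xfs_foo_update(struct xfs_foo *foo)
    {
        if (static_branch_unlikely(&xfs_foo_hooks_switch)) {
            /* ...notify the scrub hook about this update... */
        }
        /* ...regular update work... */
    }

    /* Wrappers called from the scrub setup and teardown paths. */
    void xfs_foo_hooks_enable(void)
    {
        static_branch_inc(&xfs_foo_hooks_switch);
    }

    void xfs_foo_hooks_disable(void)
    {
        static_branch_dec(&xfs_foo_hooks_switch);
    }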
1838 For more information, please see the kernel documentation of
1846 Some online checking functions work by scanning the filesystem to build a
1847 shadow copy of an ondisk metadata structure in memory and comparing the two
1849 For online repair to rebuild a metadata structure, it must compute the record
1850 set that will be stored in the new structure before it can persist that new
1854 To meet these goals, the kernel needs to collect a large amount of information
1855 in a place that doesn't require the correct operation of the filesystem.
1863 and eliminate the possibility of indexed lookups.
1865 * Kernel memory is pinned, which can drive the system into OOM conditions.
1867 * The system might not have sufficient memory to stage all the information.
1869 At any given time, online fsck does not need to keep the entire record set in
1871 Continued development of online fsck demonstrated that the ability to perform
1873 Fortunately, the Linux kernel already has a facility for byte-addressable and
1878 Hence, the ``xfile`` was born!
1883 | The first edition of online repair inserted records into a new btree as |
1887 | The second edition solved the half-rebuilt structure problem by storing |
1888 | everything in memory, but frequently ran the system out of memory. |
1890 | The third edition solved the OOM problem by using linked lists, but the |
1891 | memory overhead of the list pointers was extreme. |
1897 A survey of the intended uses of xfiles suggested these use cases:
1911 To support the first four use cases, high level data structures wrap the xfile
1913 The rest of this section discusses the interfaces that the xfile presents to
1915 The fifth use case is discussed in the :ref:`realtime summary <rtsummary>` case
1918 XFS is very record-based, which suggests that the ability to load and store
1923 in this manner is an acceptable behavior because the only reaction is to abort
1924 the operation back to userspace.
1926 However, no discussion of file access idioms is complete without answering the
1930 Online fsck must not drive the system into OOM conditions, which means that
1932 tmpfs can only push a pagecache folio to the swap cache if the folio is neither
1933 pinned nor locked, which means the xfile must not pin too many folios.
1935 Short term direct access to xfile contents is done by locking the pagecache
1938 long term direct access to xfile contents is done by bumping the folio refcount,
1939 mapping it into kernel address space, and dropping the folio lock.
1941 the shrinker infrastructure to know when to release folios.
1943 The ``xfile_get_folio`` and ``xfile_put_folio`` functions are provided to
1944 retrieve the (locked) folio that backs part of an xfile and to release it.
The only code to use these folio lease functions are the xfarray
:ref:`sorting<xfarray_sort>` algorithms and the :ref:`in-memory
btrees <xfbtree>`.
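
A sketch of the lease idiom follows; the exact ``xfile_get_folio`` and
``xfile_put_folio`` prototypes are assumptions based on the description above,
and the surrounding function is hypothetical.

.. code-block:: c

    /* Borrow the folio backing part of an xfile, use it, and return it. */
    static int xfoo_peek_bytes(struct xfile *xf, loff_t pos, size_t len)
    {
        struct folio    *folio;
        void            *addr;

        folio = xfile_get_folio(xf, pos, len, 0);
        if (IS_ERR(folio))
            return PTR_ERR(folio);

        addr = folio_address(folio);
        /* ...operate directly on the bytes at addr... */

        xfile_put_folio(xf, folio);
        return 0;
    }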
1952 For security reasons, xfiles must be owned privately by the kernel.
1953 They are marked ``S_PRIVATE`` to prevent interference from the security system,
1957 To avoid locking recursion issues with the VFS, all accesses to the shmfs file
1958 are performed by manipulating the page cache directly.
1959 xfile writers call the ``->write_begin`` and ``->write_end`` functions of the
1960 xfile's address space to grab writable pages, copy the caller's buffer into the
1961 page, and release the pages.
1963 before copying the contents into the caller's buffer.
1964 In other words, xfiles ignore the VFS read and write code paths to avoid
1969 If an xfile is shared between threads to stage repairs, the caller must provide
1972 other threads to provide updates to the scanned data, the scrub function must
1983 Directories have a set of fixed-size dirent records that point to the names,
1987 During a repair, scrub needs to stage new records during the gathering step and
1988 retrieve them during the btree building step.
1990 Although this requirement can be satisfied by calling the read and write
1991 methods of the xfile directly, it is simpler for callers for there to be a
1994 The ``xfarray`` abstraction presents a linear array for fixed-size records atop
1995 the byte-accessible xfile.
2004 covered in the next section.
2006 The first type of caller handles records that are indexed by position.
2008 during the collection step.
2010 The typical use case are quota records or file link count records.
2012 ``xfarray_store`` functions, which wrap the similarly-named xfile functions to
2020 The second type of caller handles records that are not indexed by position
2022 The typical use case here is rebuilding space btrees and key/value btrees.
2023 These callers can add records to the array without caring about array indices
2024 via the ``xfarray_append`` function, which stores a record at the end of the
2027 rebuilding btree data), the ``xfarray_sort`` function can arrange the sorted
2030 The third type of caller is a bag, which is useful for counting records.
2031 The typical use case here is constructing space extent reference counts from
2033 Records can be put in the bag in any order, they can be removed from the bag
2035 The ``xfarray_store_anywhere`` function is used to insert a record in any
2036 null record slot in the bag; and the ``xfarray_unset`` function removes a
2037 record from the bag.
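
For the second type of caller, the gather-then-sort idiom might look like the
sketch below; the prototypes shown for ``xfarray_append`` and ``xfarray_sort``
are assumptions based on the descriptions above, and the scanner and
comparison helpers are hypothetical.

.. code-block:: c

    /* Stage records for a btree rebuild, then sort them into btree order. */
    static int xfoo_stage_records(struct xfarray *array)
    {
        struct xfoo_rec rec;
        int             error;

        while (xfoo_scan_next_record(&rec)) {   /* hypothetical scanner */
            error = xfarray_append(array, &rec);
            if (error)
                return error;
        }

        /* Sort the staged records before formatting the new structure;
         * xfoo_rec_cmp is a hypothetical comparison callback. */
        return xfarray_sort(array, xfoo_rec_cmp, 0);
    }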
2039 The proposed patchset is the
2046 Most users of the xfarray require the ability to iterate the records stored in
2047 the array.
2048 Callers can probe every possible array index with the following:
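
The idiom looks roughly like this, assuming a caller-defined record type and
that ``xfarray_load`` returns a negative error code once the index runs past
the end of the array:

.. code-block:: c

    struct xfoo_rec rec;    /* caller-defined record type */
    xfarray_idx_t   idx;
    int             error;

    for (idx = 0; ; idx++) {
        error = xfarray_load(array, idx, &rec);
        if (error)
            break;          /* ran off the end of the array */
        /* ...do something with rec, which may be a null record... */
    }

All users of this idiom must be prepared to handle null records or must
already know that there are none.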
2062 For xfarray users that want to iterate a sparse array, the ``xfarray_iter``
2063 function ignores indices in the xfarray that have never been written to by
2065 of the array that are not populated with memory pages.
2066 Once it finds a page, it will skip the zeroed areas of the page.
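
A sketch of the sparse iteration idiom, assuming that ``xfarray_iter`` returns
a positive value while records remain:

.. code-block:: c

    struct xfoo_rec rec;    /* caller-defined record type */
    xfarray_idx_t   cur = 0;    /* or the interface's cursor initializer */

    while (xfarray_iter(array, &cur, &rec) == 1) {
        /* ...do something with rec; unpopulated regions are skipped... */
    }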
2080 During the fourth demonstration of online repair, a community reviewer remarked
2084 The btree insertion code in XFS is responsible for maintaining correct ordering
2085 of the records, so naturally the xfarray must also support sorting the record
2091 The sorting algorithm used in the xfarray is actually a combination of adaptive
2092 quicksort and a heapsort subalgorithm in the spirit of
2094 `pdqsort <https://github.com/orlp/pdqsort>`_, with customizations for the Linux
2097 advantage of the binary subpartitioning offered by quicksort, but it also uses
2098 heapsort to hedge against performance collapse if the chosen quicksort pivots
2101 gulf between the two implementations.
2103 The Linux kernel already contains a reasonably fast implementation of heapsort.
2104 It only operates on regular C arrays, which limits the scope of its usefulness.
2105 There are two key places where the xfarray uses it:
2110 of the xfarray into a memory buffer, and sorting the buffer.
2112 In other words, ``xfarray`` uses heapsort to constrain the nested recursion of
2116 A good pivot splits the set to sort in half, leading to the divide and conquer
2118 A poor pivot barely splits the subset at all, leading to O(n\ :sup:`2`)
2120 The xfarray sort routine tries to avoid picking a bad pivot by sampling nine
2121 records into a memory buffer and using the kernel heapsort to identify the
2122 median of the nine.
2127 of the triads, and then sort the middle value of each triad to determine the
2130 It turned out to be much more performant to read the nine elements into a
2131 memory buffer, run the kernel's in-memory heapsort on the buffer, and choose
2132 the 4th element of that buffer as the pivot.
2133 Tukey's ninthers are described in J. W. Tukey, `The ninther, a technique for
2138 The partitioning of quicksort is fairly textbook -- rearrange the record
2139 subset around the pivot, then set up the current and next stack frames to
2140 sort with the larger and the smaller halves of the pivot, respectively.
2141 This keeps the stack space requirements to log2(record count).
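As a concrete illustration of the nine-sample pivot selection described above,
the sketch below samples nine evenly spaced keys, orders them with the
kernel's heapsort-based ``sort()`` routine, and takes the middle element as
the pivot.  The ``record_key`` and ``cmp_u64`` helpers are hypothetical::

        uint64_t        samples[9];
        unsigned int    i;

        /* Sample nine evenly spaced keys from the subset being partitioned. */
        for (i = 0; i < 9; i++)
                samples[i] = record_key(lo + (i * (hi - lo)) / 8);

        /* The kernel's sort() is a heapsort; use it to order the nine samples. */
        sort(samples, ARRAY_SIZE(samples), sizeof(samples[0]), cmp_u64, NULL);

        /* The middle element is the estimated median, which becomes the pivot. */
        pivot = samples[4];
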
2143 As a final performance optimization, the hi and lo scanning phase of quicksort
2144 keeps examined xfile pages mapped in the kernel for as long as possible to
2147 accounting for the application of heapsort directly onto xfile pages.
2157 and each extended attribute needs to store both the attribute name and value.
2158 The names, keys, and values can consume a large amount of memory, so the
2164 The store function returns a magic cookie for every object that it persists.
2165 Later, callers provide this cookie to the ``xfblob_load`` function to recall the object.
2166 The ``xfblob_free`` function frees a specific blob, and the ``xfblob_truncate``
2169 The details of repairing directories and extended attributes will be discussed
2175 The proposed patchset is at the start of the
2184 The chapter about :ref:`secondary metadata<secondary_metadata>` mentioned that
2186 between a live metadata scan of the filesystem and writer threads that are
2188 Keeping the scan data up to date requires the ability to propagate
2189 metadata updates from the filesystem into the data being collected by the scan.
2191 applying them before writing the new metadata to disk, but this leads to
2192 unbounded memory consumption if the rest of the system is very busy.
2193 Another option is to skip the side-log and commit live updates from the
2194 filesystem directly into the scan data, which trades more overhead for a lower
2196 In both cases, the data structure holding the scan results must support indexed
2200 fsck employs the second strategy of committing live updates directly into
2205 mapping records: the existing rmap btree code!
2208 Recall that the :ref:`xfile <xfile>` abstraction represents memory pages as a
2209 regular file, which means that the kernel can create byte or block addressable
2211 The XFS buffer cache specializes in abstracting IO to block-oriented address
2212 spaces, which means that adaptation of the buffer cache to interface with
2213 xfiles enables reuse of the entire btree library.
2215 The next few sections describe how they actually work.
2217 The proposed patchset is the
2226 The first is to make it possible for the ``struct xfs_buftarg`` structure to
2227 host the ``struct xfs_buf`` rhashtable, because normally those are held by a
2229 The second change is to modify the buffer ``ioapply`` function to "read" cached
2230 pages from the xfile and "write" cached pages back to the xfile.
2231 Multiple access to individual buffers is controlled by the ``xfs_buf`` lock,
2232 since the xfile does not provide any locking on its own.
2233 With this adaptation in place, users of the xfile-backed buffer cache use
2234 exactly the same APIs as users of the disk-backed buffer cache.
2235 The separation between xfile and buffer cache implies higher memory usage since
2238 Today, however, it simply eliminates the need for new code.
2245 These blocks use the same header format as an on-disk btree, but the in-memory
2246 block verifiers ignore the checksums, assuming that xfile memory is no more
2250 The very first block of an xfile backing an xfbtree contains a header block.
2251 The header describes the owner, height, and the block number of the root
2254 To allocate a btree block, use ``xfile_seek_data`` to find a gap in the file.
2255 If there are no gaps, create one by extending the length of the xfile.
2256 Preallocate space for the block with ``xfile_prealloc``, and hand back the
2259 ``FALLOC_FL_PUNCH_HOLE``) to remove the memory page from the xfile.
2270 pointing to the xfile.
2272 3. Pass the buffer cache target, buffer ops, and other information to
2273 ``xfbtree_init`` to initialize the passed in ``struct xfbtree`` and write an
2274 initial root block to the xfile.
2276 the creation function.
2278 all the necessary details for callers.
2280 4. Pass the xfbtree object to the btree cursor creation function for the
2282 Following the example above, ``xfs_rmapbt_mem_cursor`` takes care of this
2285 5. Pass the btree cursor to the regular btree functions to make queries against
2286 and to update the in-memory btree.
2287 For example, a btree cursor for an rmap xfbtree can be passed to the
2289 See the :ref:`next section<xfbtree_commit>` for information on dealing with
2292 6. When finished, delete the btree cursor, destroy the xfbtree object, free the
2293 buffer target, and destroy the xfile to release all resources.
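A condensed sketch of this sequence for an in-memory rmap btree follows.
``xfbtree_init`` and ``xfs_rmapbt_mem_cursor`` are named in the steps above;
the exact signatures, the ops structure, and the teardown helper are
illustrative::

        struct xfbtree          xfbt;
        struct xfs_btree_cur    *cur;
        int                     error;

        /* Steps 1-3: set up the xfile, the buffer target, and the root block. */
        error = xfbtree_init(mp, &xfbt, btp, &xfs_rmapbt_mem_ops);
        if (error)
                return error;

        /* Steps 4-5: create a cursor and use the regular rmap btree functions. */
        cur = xfs_rmapbt_mem_cursor(pag, tp, &xfbt);
        /* ... make queries and updates through cur ... */

        /* Step 6: tear everything down once the repair is finished. */
        xfs_btree_del_cursor(cur, error);
        xfbtree_destroy(&xfbt);
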
2300 Although it is a clever hack to reuse the rmap btree code to handle the staging
2301 structure, the ephemeral nature of the in-memory btree block storage presents
2303 The XFS transaction manager must not commit buffer log items for buffers backed
2304 by an xfile because the log format does not understand updates for devices
2305 other than the data device.
2306 An ephemeral xfbtree probably will not exist by the time the AIL checkpoints
2307 log transactions back into the filesystem, and certainly won't exist during
2310 remove the buffer log items from the transaction and write the updates into the
2311 backing xfile before committing or cancelling the transaction.
2313 The ``xfbtree_trans_commit`` and ``xfbtree_trans_cancel`` functions implement
2316 1. Find each buffer log item whose buffer targets the xfile.
2318 2. Record the dirty/ordered status of the log item.
2320 3. Detach the log item from the buffer.
2322 4. Queue the buffer to a special delwri list.
2324 5. Clear the transaction dirty flag if the only dirty log items were the ones
2327 6. Submit the delwri list to commit the changes to the xfile, if the updates
2330 After removing xfile logged buffers from the transaction in this manner, the
2339 the incore records to be sorted prior to commit, but was very slow and leaked
2340 blocks if the system went down during a repair.
2341 Loading records one at a time also meant that repair could not control the
2342 loading factor of the blocks in the new btree.
2344 Fortunately, the venerable ``xfs_repair`` tool had a more efficient means for
2349 To prepare for online fsck, each of the four bulk loaders was studied, notes
2350 were taken, and the four were refactored into a single generic btree bulk
2357 The zeroth step of bulk loading is to assemble the entire record set that will
2358 be stored in the new btree, and sort the records.
2359 Next, call ``xfs_btree_bload_compute_geometry`` to compute the shape of the
2360 btree from the record set, the type of btree, and any load factor preferences.
2363 First, the geometry computation computes the minimum and maximum records that
2364 will fit in a leaf block from the size of a btree block and the size of the
2366 Roughly speaking, the maximum number of records is::
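
        maxrecs = (block_size - header_size) / record_size
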
2370 The XFS design specifies that btree blocks should be merged when possible,
2371 which means the minimum number of records is half of maxrecs::
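
        minrecs = maxrecs / 2
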
2375 The next variable to determine is the desired loading factor.
2377 Choosing minrecs is undesirable because it wastes half the block.
2381 The default loading factor was chosen to be 75% of maxrecs, which provides a
2386 If space is tight, the loading factor will be set to maxrecs to try to avoid
2391 Load factor is computed for btree node blocks using the combined size of the
2392 btree key and pointer as the record size::
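
        maxrecs = (block_size - header_size) / (key_size + ptr_size)

        minrecs = maxrecs / 2
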
2398 Once that's done, the number of leaf blocks required to store the record set
2403 The number of node blocks needed to point to the next level down in the tree
2409 The entire computation is performed recursively until the current level only
2411 The resulting geometry is as follows:
2413 - For AG-rooted btrees, this level is the root level, so the height of the new
2414 tree is ``level + 1`` and the space needed is the summation of the number of
2417 - For inode-rooted btrees where the records in the top level do not fit in the
2418 inode fork area, the height is ``level + 2``, the space needed is the
2419 summation of the number of blocks on each level, and the inode fork points to
2420 the root block.
2422 - For inode-rooted btrees where the records in the top level can be stored in
2423 the inode fork area, then the root block can be stored in the inode, the
2424 height is ``level + 1``, and the space needed is one less than the summation
2425 of the number of blocks on each level.
2426 This only becomes relevant when non-bmap btrees gain the ability to root in
2434 Once repair knows the number of blocks needed for the new btree, it allocates
2435 those blocks using the free space information.
2436 Each reserved extent is tracked separately by the btree builder state data.
2437 To improve crash resilience, the reservation code also logs an Extent Freeing
2438 Intent (EFI) item in the same transaction as each space allocation and attaches
2439 its in-memory ``struct xfs_extent_free_item`` object to the space reservation.
2440 If the system goes down, log recovery will use the unfinished EFIs to free the
2441 unused space, leaving the filesystem unchanged.
2443 Each time the btree builder claims a block for the btree from a reserved
2444 extent, it updates the in-memory reservation to reflect the claimed space.
2446 reduce the number of EFIs in play.
2448 While repair is writing these new btree blocks, the EFIs created for the space
2449 reservations pin the tail of the ondisk log.
2450 It's possible that other parts of the system will remain busy and push the head
2451 of the log towards the pinned tail.
2452 To avoid livelocking the filesystem, the EFIs must not pin the tail of the log
2454 To alleviate this problem, the dynamic relogging capability of the deferred ops
2455 mechanism is reused here to commit a transaction at the log head containing an
2456 EFD for the old EFI and a new EFI.
2457 This enables the log to release the old EFI to keep the log moving forwards.
2459 EFIs have a role to play during the commit and reaping phases; please see the
2460 next section and the section about :ref:`reaping<reaping>` for more details.
2462 Proposed patchsets are the
2465 and the
2470 Writing the New Tree
2473 This part is pretty simple -- the btree builder (``xfs_btree_bulkload``) claims
2474 a block from the reserved list, writes the new btree block header, fills the
2475 rest of the block with records, and adds the new leaf block to a list of
2483 Sibling pointers are set every time a new block is added to the level::
2490 When it finishes writing the record leaf blocks, it moves on to the node
2492 To fill a node block, it walks each block in the next level down in the tree
2493 to compute the relevant keys and write them into the parent node::
2505 When it reaches the root level, it is ready to commit the new btree!::
2522 The first step to commit the new btree is to persist the btree blocks to disk
2525 in the recent past, so the builder must use ``xfs_buf_delwri_queue_here`` to
2526 remove the (stale) buffer from the AIL list before it can write the new blocks
2531 Once the new blocks have been persisted to disk, control returns to the
2532 individual repair function that called the bulk loader.
2533 The repair function must log the location of the new root in a transaction,
2534 clean up the space reservations that were made for the new btree, and reap the
2537 1. Commit the location of the new btree root.
2541 a. Log Extent Freeing Done (EFD) items for all the space that was consumed
2542 by the btree builder. The new EFDs must point to the EFIs attached to
2543 the reservation to prevent log recovery from freeing the new blocks.
2546 extent free work item to free the unused space later in the
2549 c. The EFDs and EFIs logged in steps 2a and 2b must not overrun the
2550 reservation of the committing transaction.
2551 If the btree loading code suspects this might be about to happen, it must
2552 call ``xrep_defer_finish`` to clear out the deferred work and obtain a
2555 3. Clear out the deferred work a second time to finish the commit and clean
2556 the repair transaction.
2558 The transaction rolling in steps 2c and 3 represents a weakness in the repair
2559 algorithm, because a log flush and a crash before the end of the reap step can
2561 Online repair functions minimize the chances of this occurring by using very
2564 Repair moves on to reaping the old blocks, which will be presented in a
2567 Case Study: Rebuilding the Inode Index
2570 The high level process to rebuild the inode index btree is:
2572 1. Walk the reverse mapping records to generate ``struct xfs_inobt_rec``
2573 records from the inode chunk information and a bitmap of the old inode btree
2576 2. Append the records to an xfarray in inode order.
2578 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2579 of blocks needed for the inode btree.
2580 If the free space inode btree is enabled, call it again to estimate the
2581 geometry of the finobt.
2583 4. Allocate the number of blocks computed in the previous step.
2585 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2586 generate the internal node blocks.
2587 If the free space inode btree is enabled, call it again to load the finobt.
2589 6. Commit the location of the new btree root block(s) to the AGI.
2591 7. Reap the old btree blocks using the bitmap created in step 1.
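A condensed sketch of steps 3 through 5, assuming hypothetical
``xrep_ibt_get_records`` and ``xrep_ibt_claim_block`` callbacks and an
``xfarray_length`` helper, and glossing over the block allocation step; the
structure and function names follow the text above, though the exact
signatures may differ::

        struct xfs_btree_bload  ibt_bload = {
                .get_records    = xrep_ibt_get_records,
                .claim_block    = xrep_ibt_claim_block,
        };
        int                     error;

        /* Step 3: estimate the shape of the new btree from the record count. */
        error = xfs_btree_bload_compute_geometry(cur, &ibt_bload,
                        xfarray_length(records));
        if (error)
                return error;

        /* Step 4: reserve ibt_bload.nr_blocks blocks for the new btree here. */

        /* Step 5: write the sorted xfarray records into the new btree blocks. */
        error = xfs_btree_bload(cur, &ibt_bload, records);
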
2595 The inode btree maps inumbers to the ondisk location of the associated
2596 inode records, which means that the inode btrees can be rebuilt from the
2598 Reverse mapping records with an owner of ``XFS_RMAP_OWN_INOBT`` marks the
2599 location of the old inode btree blocks.
2600 Each reverse mapping record with an owner of ``XFS_RMAP_OWN_INODES`` marks the
2602 A cluster is the smallest number of ondisk inodes that can be allocated or
2605 For the space represented by each inode cluster, ensure that there are no
2606 records in the free space btrees nor any records in the reference count btree.
2607 If there are, the space metadata inconsistencies are reason enough to abort the
2610 ondisk inodes and to decide if the file is allocated
2612 Accumulate the results of successive inode cluster buffer reads until there is
2614 numbers in the inumber keyspace.
2615 If the chunk is sparse, the chunk record may include holes.
2617 Once the repair function accumulates one chunk's worth of data, it calls
2618 ``xfarray_append`` to add the inode btree record to the xfarray.
2619 This xfarray is walked twice during the btree creation step -- once to populate
2620 the inode btree with all inode chunk records, and a second time to populate the
2622 The number of records for the inode btree is the number of xfarray records,
2623 but the record count for the free inode btree has to be computed as inode chunk
2624 records are stored in the xfarray.
2626 The proposed patchset is the
2631 Case Study: Rebuilding the Space Reference Counts
2634 Reverse mapping records are used to rebuild the reference count information.
2637 Imagine the reverse mapping entries as rectangles representing extents of
2638 physical blocks, and that the rectangles can be laid down to allow them to
2640 From the diagram below, it is apparent that a reference count record must start
2641 or end wherever the height of the stack changes.
2642 In other words, the record emission stimulus is level-triggered::
2651 The ondisk reference count btree does not store the refcount == 0 cases because
2652 the free space btree already records which blocks are free.
2653 Extents being used to stage copy-on-write operations should be the only records
2655 Single-owner file blocks aren't recorded in either the free space or the
2658 The high level process to rebuild the reference count btree is:
2660 1. Walk the reverse mapping records to generate ``struct xfs_refcount_irec``
2662 the xfarray.
2663 Any records owned by ``XFS_RMAP_OWN_COW`` are also added to the xfarray
2665 are tracked in the refcount btree.
2670 2. Sort the records in physical extent order, putting the CoW staging extents
2671 at the end of the xfarray.
2672 This matches the sorting order of records in the refcount btree.
2674 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2675 of blocks needed for the new tree.
2677 4. Allocate the number of blocks computed in the previous step.
2679 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2680 generate the internal node blocks.
2682 6. Commit the location of new btree root block to the AGF.
2684 7. Reap the old btree blocks using the bitmap created in step 1.
2686 Details are as follows; the same algorithm is used by ``xfs_repair`` to
2689 - Until the reverse mapping btree runs out of records:
2691 - Retrieve the next record from the btree and put it in a bag.
2693 - Collect all records with the same starting block from the btree and put
2694 them in the bag.
2696 - While the bag isn't empty:
2698 - Among the mappings in the bag, compute the lowest block number where the
2700 This position will be either the starting block number of the next
2701 unprocessed reverse mapping or the next block after the shortest mapping
2702 in the bag.
2704 - Remove all mappings from the bag that end at this position.
2706 - Collect all reverse mappings that start at this position from the btree
2707 and put them in the bag.
2709 - If the size of the bag changed and is greater than one, create a new
2710 refcount record associating the block number range that we just walked with
2711 the size of the bag.
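The loop above can be sketched in pseudo-C form; every helper named below
stands in for the ``xfarray`` bag operations and the reverse mapping btree
cursor machinery described earlier, and adjacent ranges with the same
reference count would be merged into a single record in practice::

        while (have_more_rmaps(cur)) {
                /* Prime the bag with the mappings sharing the next start block. */
                bno = next_rmap_startblock(cur);
                add_rmaps_starting_at(cur, bag, bno);

                while (!bag_is_empty(bag)) {
                        refcount = bag_count(bag);

                        /* Find the next place the stack height can change. */
                        next_bno = min(next_unprocessed_rmap_start(cur),
                                       shortest_mapping_end(bag));

                        /* Emit a record for the range walked if it was shared. */
                        if (refcount > 1)
                                emit_refcount(bno, next_bno - bno, refcount);

                        /* Mappings ending here drop out of the bag... */
                        remove_mappings_ending_at(bag, next_bno);

                        /* ...and mappings starting here are added to it. */
                        add_rmaps_starting_at(cur, bag, next_bno);

                        bno = next_bno;
                }
        }
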
2713 The bag-like structure in this case is a type 2 xfarray as discussed in the
2715 Reverse mappings are added to the bag using ``xfarray_store_anywhere`` and
2719 The proposed patchset is the
2727 The high level process to rebuild a data/attr fork mapping btree is:
2729 1. Walk the reverse mapping records to generate ``struct xfs_bmbt_rec``
2730 records from the reverse mapping records for that inode and fork.
2732 Compute the bitmap of the old bmap btree blocks from the ``BMBT_BLOCK``
2735 2. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2736 of blocks needed for the new tree.
2738 3. Sort the records in file offset order.
2740 4. If the extent records would fit in the inode fork immediate area, commit the
2743 5. Allocate the number of blocks computed in the previous step.
2745 6. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2746 generate the internal node blocks.
2748 7. Commit the new btree root block to the inode fork immediate area.
2750 8. Reap the old btree blocks using the bitmap created in step 1.
2753 First, it's possible to move the fork offset to adjust the sizes of the
2754 immediate areas if the data and attr forks are not both in BMBT format.
2757 Third, the incore extent map must be reloaded carefully to avoid disturbing
2760 The proposed patchset is the
2771 suspect, there is a question of how to find and dispose of the blocks that
2772 belonged to the old structure.
2773 The laziest method of course is not to deal with them at all, but this slowly
2774 leads to service degradations as space leaks out of the filesystem.
2775 Hopefully, someone will schedule a rebuild of the free space information to
2777 Offline repair rebuilds all space metadata after recording the usage of
2778 the files and directories that it decides not to clear, hence it can build new
2779 structures in the discovered free space and avoid the question of reaping.
2781 As part of a repair, online fsck relies heavily on the reverse mapping records
2782 to find space that is owned by the corresponding rmap owner yet truly free.
2786 Permitting the block allocator to hand them out again will not push the system
2789 For space metadata, the process of finding extents to dispose of generally
2793 The space reservations used to create the new metadata can be used here if
2794 the same rmap owner code is used to denote all of the objects being rebuilt.
2796 2. Survey the reverse mapping data to create a bitmap of space owned by the
2797 same ``XFS_RMAP_OWN_*`` number for the metadata that is being preserved.
2799 3. Use the bitmap disunion operator to subtract (1) from (2).
2800 The remaining set bits represent candidate extents that could be freed.
2801 The process moves on to step 4 below.
2805 new structure attached to a temporary file and exchanging all mappings in the
2807 Afterward, the mappings in the old file fork are the candidate blocks for
2810 The process for disposing of old extents is as follows:
2812 4. For each candidate extent, count the number of reverse mapping records for
2813 the first block in that extent that do not have the same rmap owner for the
2816 - If zero, the block has a single owner and can be freed.
2818 - If not, the block is part of a crosslinked structure and must not be
2821 5. Starting with the next block in the extent, figure out how many more blocks
2822 have the same zero/nonzero other owner status as that first block.
2824 6. If the region is crosslinked, delete the reverse mapping entry for the
2825 structure being repaired and move on to the next region.
2827 7. If the region is to be freed, mark any corresponding buffers in the buffer
2830 8. Free the region and move on.
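A sketch of the per-extent decision made in steps 4 through 8; every helper
named here is a stand-in for the real reaping code::

        /* Do any other rmap owners also claim the first block of this extent? */
        if (count_other_owner_rmaps(sc, agbno, oinfo) > 0) {
                /* Crosslinked: remove only the old structure's rmap record. */
                error = remove_owner_rmap(sc, agbno, len, oinfo);
        } else {
                /* Sole owner: invalidate any buffers, then free the space. */
                invalidate_old_buffers(sc, agbno, len);
                error = free_old_extent(sc, agbno, len, oinfo);
        }
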
2833 Transactions are of finite size, so the reaping process must be careful to roll
2834 the transactions to avoid overruns.
2841 This is also a window in which a crash during the reaping process can leak
2844 minimize the chances of this occurring.
2846 The proposed patchset is the
2854 Old reference count and inode btrees are the easiest to reap because they have
2855 rmap records with special owner codes: ``XFS_RMAP_OWN_REFC`` for the refcount
2856 btree, and ``XFS_RMAP_OWN_INOBT`` for the inode and free inode btrees.
2857 Creating a list of extents to reap the old btree blocks is quite simple,
2860 1. Lock the relevant AGI/AGF header buffers to prevent allocation and frees.
2862 2. For each reverse mapping record with an rmap owner corresponding to the
2863 metadata structure being rebuilt, set the corresponding range in a bitmap.
2865 3. Walk the current data structures that have the same rmap owner.
2866 For each block visited, clear that range in the above bitmap.
2868 4. Each set bit in the bitmap represents a block that could be a block from the
2871 are the blocks that might be freeable.
2873 If it is possible to maintain the AGF lock throughout the repair (which is the
2874 common case), then step 2 can be performed at the same time as the reverse
2875 mapping record walk that creates the records for the new btree.
2877 Case Study: Rebuilding the Free Space Indices
2880 The high level process to rebuild the free space indices is:
2882 1. Walk the reverse mapping records to generate ``struct xfs_alloc_rec_incore``
2883 records from the gaps in the reverse mapping btree.
2885 2. Append the records to an xfarray.
2887 3. Use the ``xfs_btree_bload_compute_geometry`` function to compute the number
2890 4. Allocate the number of blocks computed in the previous step from the free
2893 5. Use ``xfs_btree_bload`` to write the xfarray records to btree blocks and
2894 generate the internal node blocks for the free space by length index.
2895 Call it again for the free space by block number index.
2897 6. Commit the locations of the new btree root blocks to the AGF.
2899 7. Reap the old btree blocks by looking for space that is not recorded by the
2900 reverse mapping btree, the new free space btrees, or the AGFL.
2902 Repairing the free space btrees has three key complications over a regular
2905 First, free space is not explicitly tracked in the reverse mapping records.
2906 Hence, the new free space records must be inferred from gaps in the physical
2907 space component of the keyspace of the reverse mapping btree.
2909 Second, free space repairs cannot use the common btree reservation code because
2910 new blocks are reserved out of the free space btrees.
2911 This is impossible when repairing the free space btrees themselves.
2912 However, repair holds the AGF buffer lock for the duration of the free space
2913 index reconstruction, so it can use the collected free space information to
2914 supply the blocks for the new free space btrees.
2915 It is not necessary to back each reserved extent with an EFI because the new
2916 free space btrees are constructed in what the ondisk filesystem thinks is
2918 However, if reserving blocks for the new btrees from the collected free space
2919 information changes the number of free space records, repair must re-estimate
2920 the new free space btree geometry with the new record count until the
2922 As part of committing the new btrees, repair must ensure that reverse mappings
2923 are created for the reserved blocks and that unused reserved blocks are
2924 inserted into the free space btrees.
2926 is atomic, similar to the other btree repair functions.
2928 Third, finding the blocks to reap after the repair is not overly
2930 Blocks for the free space btrees and the reverse mapping btrees are supplied by
2931 the AGFL.
2932 Blocks put onto the AGFL have reverse mapping records with the owner
2934 This ownership is retained when blocks move from the AGFL into the free space
2935 btrees or the reverse mapping btrees.
2937 creates a bitmap (``ag_owner_bitmap``) of all the space claimed by
2939 The repair context maintains a second bitmap corresponding to the rmap btree
2940 blocks and the AGFL blocks (``rmap_agfl_bitmap``).
2941 When the walk is complete, the bitmap disunion operation ``(ag_owner_bitmap &
2942 ~rmap_agfl_bitmap)`` computes the extents that are used by the old free space
2944 These blocks can then be reaped using the methods outlined above.
2946 The proposed patchset is the
2957 As mentioned in the previous section, blocks on the AGFL, the two free space
2958 btree blocks, and the reverse mapping btree blocks all have reverse mapping
2959 records with ``XFS_RMAP_OWN_AG`` as the owner.
2960 The full process of gathering reverse mapping records and building a new btree
2961 are described in the case study of
2963 discussion is that the new rmap btree will not contain any records for the old
2964 rmap btree, nor will the old btree blocks be tracked in the free space btrees.
2965 The list of candidate reaping blocks is computed by setting the bits
2966 corresponding to the gaps in the new rmap btree records, and then clearing the
2967 bits corresponding to extents in the free space btrees and the current AGFL
2969 The result ``(new_rmapbt_gaps & ~(agfl | bnobt_records))`` are reaped using the
2972 The rest of the process of rebuilding the reverse mapping btree is discussed
2975 The proposed patchset is the
2980 Case Study: Rebuilding the AGFL
2983 The allocation group free block list (AGFL) is repaired as follows:
2985 1. Create a bitmap for all the space that the reverse mapping data claims is
2988 2. Subtract the space used by the two free space btrees and the rmap btree.
2990 3. Subtract any space that the reverse mapping data claims is owned by any
2991 other owner, to avoid re-adding crosslinked blocks to the AGFL.
2993 4. Once the AGFL is full, reap any blocks leftover.
2995 5. The next operation to fix the freelist will right-size the list.
3005 careful to access the ondisk metadata *only* when the ondisk metadata is so
3006 badly damaged that the filesystem cannot load the in-memory representation.
3008 specialized resource acquisition functions that return either the in-memory
3010 update to the ondisk location.
3012 The only repairs that should be made to the ondisk inode buffers are whatever
3013 is necessary to get the in-core structure loaded.
3014 This means fixing whatever is caught by the inode cluster buffer and inode fork
3015 verifiers, and retrying the ``iget`` operation.
3016 If the second ``iget`` fails, the repair has failed.
3018 Once the in-memory representation is loaded, repair can lock the inode and can
3022 Dealing with the data and attr fork extent counts and the file block counts is
3023 more complicated, because computing the correct value requires traversing the
3024 forks, or if that fails, leaving the fields invalid and waiting for the fork
3027 The proposed patchset is the
3036 an in-memory representation, and hence are subject to the same cache coherency
3038 Somewhat confusingly, both are known as dquots in the XFS codebase.
3040 The only repairs that should be made to the ondisk quota record buffers are
3041 whatever is necessary to get the in-core structure loaded.
3042 Once the in-memory representation is loaded, the only attributes needing
3045 Quota usage counters are checked, repaired, and discussed separately in the
3048 The proposed patchset is the
3060 This information could be compiled by walking the free space and inode indexes,
3061 but this is a slow process, so XFS maintains a copy in the ondisk superblock
3062 that should reflect the ondisk metadata, at least when the filesystem has been
3066 Writer threads reserve the worst-case quantities of resources from the
3068 It is therefore only necessary to serialize on the superblock when the
3071 The lazy superblock counter feature introduced in XFS v5 took this even further
3072 by training log recovery to recompute the summary counters from the AG headers,
3073 which eliminated the need for most transactions even to touch the superblock.
3074 The only time XFS commits the summary counters is at filesystem unmount.
3075 To reduce contention even further, the incore counter is implemented as a
3077 global incore counter and can satisfy small allocations from the local batch.
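The per-CPU batching is what makes an exact reading expensive.  A brief
illustration using the kernel's generic percpu counter API, with
``m_fdblocks`` standing in for one of the summary counters in ``struct
xfs_mount``::

        /* Fast path: updates accumulate in a per-CPU batch. */
        percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);

        /* Cheap read: may be stale by up to the sum of all per-CPU batches. */
        approx = percpu_counter_read_positive(&mp->m_fdblocks);

        /* Exact read: folds in every CPU's batch, which is much more costly. */
        exact = percpu_counter_sum(&mp->m_fdblocks);
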
3079 The high-performance nature of the summary counters makes it difficult for
3081 while the system is running.
3082 Although online fsck can read the filesystem metadata to compute the correct
3083 values of the summary counters, there's no way to hold the value of a percpu
3084 counter stable, so it's quite possible that the counter will be out of date by
3085 the time the walk is complete.
3088 For repairs, the in-memory counters must be stabilized while walking the
3089 filesystem metadata to get an accurate reading and install it in the percpu
3092 To satisfy this requirement, online fsck must prevent other programs in the
3093 system from initiating new writes to the filesystem, it must disable background
3095 exit the kernel.
3096 Once that has been established, scrub can walk the AG free space indexes, the
3097 inode btrees, and the realtime bitmap to compute the correct value of all
3099 This is very similar to a filesystem freeze, though not all of the pieces are
3102 - The final freeze state is set one higher than ``SB_FREEZE_COMPLETE`` to
3103 prevent other threads from thawing the filesystem, or other scrub threads
3106 - It does not quiesce the log.
3108 With this code in place, it is now possible to pause the filesystem for just
3109 long enough to check and correct the summary counters.
3114 | The initial implementation used the actual VFS filesystem freeze |
3116 | With the filesystem frozen, it is possible to resolve the counter values |
3117 | with exact precision, but there are many problems with calling the VFS |
3120 | - Other programs can unfreeze the filesystem without our knowledge. |
3123 | - Adding an extra lock to prevent others from thawing the filesystem |
3124 | required the addition of a ``->freeze_super`` function to wrap |
3127 | the VFS ``freeze_super`` and ``thaw_super`` functions can drop the |
3128 | last reference to the VFS superblock, and any subsequent access |
3130 | This can happen if the filesystem is unmounted while the underlying |
3131 | block device has frozen the filesystem. |
3132 | This problem could be solved by grabbing extra references to the |
3133 | superblock, but it felt suboptimal given the other inadequacies of |
3136 | - The log need not be quiesced to check the summary counters, but a VFS |
3140 | - Quiescing the log means that XFS flushes the (possibly incorrect) |
3141 | counters to disk as part of cleaning the log. |
3143 | - A bug in the VFS meant that freeze could complete even when |
3144 | sync_filesystem fails to flush the filesystem and returns an error. |
3148 The proposed patchset is the
3156 Certain types of metadata can only be checked by walking every file in the
3157 entire filesystem to record observations and comparing the observations against
3161 However, it is not practical to shut down the entire filesystem to examine
3162 hundreds of billions of files because the downtime would be excessive.
3163 Therefore, online fsck must build the infrastructure to manage a live scan of
3164 all the files in the filesystem.
3167 - How does scrub manage the scan while it is collecting data?
3169 - How does the scan keep abreast of changes being made to the system by other
3177 In the original Unix filesystems of the 1970s, each directory entry contained
3183 UNIX, 6th Edition*, (Dept. of Computer Science, the University of New South
3185 `"Implementation of the File System"
3186 <https://archive.org/details/bstj57-6-1905/page/n8/mode/1up>`_, from *The UNIX
3187 Time-Sharing System*, (The Bell System Technical Journal, July 1978), pp.
3191 the space in the data section of the filesystem.
3193 though the inodes themselves are sparsely distributed within the keyspace.
3194 Scans proceed in a linear fashion across the inumber keyspace, starting from
3196 Naturally, a scan through a keyspace requires a scan cursor object to track the
3199 The first part of this scan cursor object tracks the inode that will be
3200 examined next; call this the examination cursor.
3201 Somewhat less obviously, the scan cursor object must also track which parts of
3202 the keyspace have already been visited, which is critical for deciding if a
3203 concurrent filesystem update needs to be incorporated into the scan data.
3204 Call this the visited inode cursor.
3206 Advancing the scan cursor is a multi-step process encapsulated in
3209 1. Lock the AGI buffer of the AG containing the inode pointed to by the visited
3212 advancing the cursor.
3214 2. Use the per-AG inode btree to look up the next inumber after the one that
3219 a. Move the examination cursor to the point of the inumber keyspace that
3220 corresponds to the start of the next AG.
3222 b. Adjust the visited inode cursor to indicate that it has "visited" the
3223 last possible inode in the current AG's inode keyspace.
3224 XFS inumbers are segmented, so the cursor needs to be marked as having
3225 visited the entire keyspace up to just before the start of the next AG's
3228 c. Unlock the AGI and return to step 1 if there are unexamined AGs in the
3231 d. If there are no more AGs to examine, set both cursors to the end of the
3233 The scan is now complete.
3237 a. Move the examination cursor ahead to the next inode marked as allocated
3238 by the inode btree.
3240 b. Adjust the visited inode cursor to point to the inode just prior to where
3241 the examination cursor is now.
3242 Because the scanner holds the AGI buffer lock, no inodes could have been
3243 created in the part of the inode keyspace that the visited inode cursor
3246 5. Get the incore inode for the inumber of the examination cursor.
3247 By maintaining the AGI buffer lock until this point, the scanner knows that
3248 it was safe to advance the examination cursor across the entire keyspace,
3250 the filesystem until the scan releases the incore inode.
3252 6. Drop the AGI lock and return the incore inode to the caller.
3254 Online fsck functions scan all files in the filesystem as follows:
3258 2. Advance the scan cursor (``xchk_iscan_iter``) to get the next inode.
3261 a. Lock the inode to prevent updates during the scan.
3263 b. Scan the inode.
3265 c. While still holding the inode lock, adjust the visited inode cursor
3268 d. Unlock and release the inode.
3270 8. Call ``xchk_iscan_teardown`` to complete the scan.
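Schematically, a scrub function drives this loop as shown below.
``xchk_iscan_iter``, ``xchk_iscan_mark_visited``, ``xchk_iscan_teardown``, and
``xchk_irele`` are named in this document; the return convention, the lock
mode, and the per-inode check are illustrative::

        while ((error = xchk_iscan_iter(&iscan, &ip)) == 1) {
                xfs_ilock(ip, XFS_ILOCK_EXCL);

                error = xchk_check_this_inode(sc, ip);

                /* Mark the inode visited while it is still locked. */
                xchk_iscan_mark_visited(&iscan, ip);

                xfs_iunlock(ip, XFS_ILOCK_EXCL);
                xchk_irele(sc, ip);

                if (error)
                        break;
        }

        xchk_iscan_teardown(&iscan);
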
3272 There are subtleties with the inode cache that complicate grabbing the incore
3273 inode for the caller.
3274 Obviously, it is an absolute requirement that the inode metadata be consistent
3275 enough to load it into the inode cache.
3276 Second, if the incore inode is stuck in some intermediate state, the scan
3277 coordinator must release the AGI and push the main filesystem to get the inode
3280 The proposed patches are the
3284 The first user of the new functionality is the
3293 always obtained (``xfs_iget``) outside of transaction context because the
3294 creation of the incore context for an existing file does not require metadata
3297 part of file creation must be performed in transaction context because the
3298 filesystem must ensure the atomicity of the ondisk inode btree index updates
3299 and the initialization of the actual ondisk inode.
3305 - The VFS may decide to kick off writeback as part of a ``DONTCACHE`` inode
3310 - An unlinked file may have lost its last reference, in which case the entire
3312 the ondisk metadata and freeing the inode.
3315 Inactivation has two parts -- the VFS part, which initiates writeback on all
3316 dirty file pages, and the XFS part, which cleans up XFS-specific information
3317 and frees the inode if it was unlinked.
3318 If the inode is unlinked (or unconnected after a file handle operation), the
3319 kernel drops the inode into the inactivation machinery immediately.
3337 7. Space on the data and realtime devices for the transaction.
3350 Resources are often released in the reverse order, though this is not required.
3352 an object that normally is acquired in a later stage of the locking order, and
3353 then decide to cross-reference the object with an object that is acquired
3354 earlier in the order.
3355 The next few sections detail the specific ways in which online fsck takes care
3363 This isn't much of a problem for ``iget`` since it can operate in the context
3364 of an existing transaction, as long as all of the bound resources are acquired
3365 before the inode reference in the regular filesystem.
3367 When the VFS ``iput`` function is given a linked inode with no other
3368 references, it normally puts the inode on an LRU list in the hope that it can
3369 save time if another process re-opens the file before the system runs out
3371 Filesystem callers can short-circuit the LRU process by setting a ``DONTCACHE``
3372 flag on the inode to cause the kernel to try to drop the inode into the
3375 In the past, inactivation was always done from the process that dropped the
3378 On the other hand, if there is no scrub transaction, it is desirable to drop
3380 To capture these nuances, the online fsck code has a separate ``xchk_irele``
3381 function to set or clear the ``DONTCACHE`` flag to get the required release
3395 In regular filesystem code, the VFS and XFS will acquire multiple IOLOCK locks
3396 in a well-known order: parent → child when updating the directory tree, and
3397 in numerical order of the addresses of their ``struct inode`` object otherwise.
3398 For regular files, the MMAPLOCK can be acquired after the IOLOCK to stop page
3401 the addresses of their ``struct address_space`` objects.
3402 Due to the structure of existing filesystem code, IOLOCKs and MMAPLOCKs must be
3408 scanner, the scrub process holds the IOLOCK of the file being scanned and it
3409 needs to take the IOLOCK of the file at the other end of the directory link.
3410 If the directory tree is corrupt because it contains a cycle, ``xfs_scrub``
3411 cannot use the regular inode locking functions and avoid becoming trapped in an
3415 needs to take a second lock of the same class, it uses trylock to avoid an ABBA
3417 If the trylock fails, scrub drops all inode locks and uses trylock loops to
3420 scrub avoids deadlocking the filesystem or becoming an unresponsive process.
3421 However, trylock loops mean that online fsck must be prepared to measure the
3422 resource being scrubbed before and after the lock cycle to detect changes and
3430 Consider the directory parent pointer repair code as an example.
3431 Online fsck must verify that the dotdot dirent of a directory points up to a
3432 parent directory, and that the parent directory contains exactly one dirent
3433 pointing down to the child directory.
3435 walk of every directory on the filesystem while holding the child locked, and
3436 while updates to the directory tree are being made.
3437 The coordinated inode scan provides a way to walk the filesystem without the
3439 The child directory is kept locked to prevent updates to the dotdot dirent, but
3440 if the scanner fails to lock a parent, it can drop and relock both the child
3441 and the prospective parent.
3442 If the dotdot entry changes while the directory is unlocked, then a move or
3443 rename operation must have changed the child's parentage, and the scan can
3446 The proposed patchset is the
3456 The second piece of support that online fsck functions need during a full
3457 filesystem scan is the ability to stay informed about updates being made by
3458 other threads in the filesystem, since comparisons against the past are useless
3465 In this case, the downstream consumer is always an online fsck function.
3466 Because multiple fsck functions can run in parallel, online fsck uses the Linux
3471 Because these hooks are private to the XFS module, the information passed along
3472 contains exactly what the checking function needs to update its observations.
3474 The current implementation of XFS hooks uses SRCU notifier chains to reduce the
3478 However, it may turn out that the combination of blocking chains and static
3481 The following pieces are necessary to hook a certain point in the filesystem:
3487 about the action.
3490 around the ``xfs_hooks`` and ``xfs_hook`` objects to take advantage of type
3493 - A callsite in the regular filesystem code must be chosen to call
3494 ``xfs_hooks_call`` with the action code and data structure.
3495 This place should be adjacent to (and not earlier than) the place where
3496 the filesystem update is committed to the transaction.
3497 In general, when the filesystem calls a hook chain, it should be able to
3500 However, the exact requirements are very dependent on the context of the hook
3501 caller and the callee.
3503 - The online fsck function should define a structure to hold scan data, a lock
3504 to coordinate access to the scan data, and a ``struct xfs_hook`` object.
3505 The scanner function and the regular filesystem code must acquire resources
3506 in the same order; see the next section for details.
3508 - The online fsck code must contain a C function to catch the hook action code
3510 If the object being updated has already been visited by the scan, then the
3511 hook information must be applied to the scan data.
3513 - Prior to unlocking inodes to start the scan, online fsck must call
3514 ``xfs_hooks_setup`` to initialize the ``struct xfs_hook``, and
3515 ``xfs_hooks_add`` to enable the hook.
3517 - Online fsck must call ``xfs_hooks_del`` to disable the hook once the scan is
3520 The number of hooks should be kept to a minimum to reduce complexity.
3521 Static keys are used to reduce the overhead of filesystem hooks to nearly
3529 The code paths of the online fsck scanning code and the :ref:`hooked<fshooks>`
3558 These rules must be followed to ensure correct interactions between the
3559 checking code and the code making an update to the filesystem:
3561 - Prior to invoking the notifier call chain, the filesystem function being
3562 hooked must acquire the same lock that the scrub scanning function acquires
3563 to scan the inode.
3565 - The scanning function and the scrub hook function must coordinate access to
3566 the scan data by acquiring a lock on the scan data.
3568 - Scrub hook functions must not add the live update information to the scan
3569 observations unless the inode being updated has already been scanned.
3570 The scan coordinator has a helper predicate (``xchk_iscan_want_live_update``)
3573 - Scrub hook functions must not change the caller's state, including the
3575 They must not acquire any resources that might conflict with the filesystem
3578 - The hook function can abort the inode scan to avoid breaking the other rules.
3580 The inode scan APIs are pretty simple:
3584 - ``xchk_iscan_iter`` grabs a reference to the next inode in the scan or
3588 visited in the scan.
3589 This is critical for hook functions to decide if they need to update the
3592 - ``xchk_iscan_mark_visited`` to mark an inode as having been visited in the
3595 - ``xchk_iscan_teardown`` to finish the scan
3597 This functionality is also a part of the
3607 It is useful to compare the mount time quotacheck code to the online repair
3610 it does the following:
3612 1. Make sure the ondisk dquots are in good enough shape that all the incore
3613 dquots will actually load, and zero the resource usage counters in the
3616 2. Walk every inode in the filesystem.
3617 Add each file's resource usage to the incore dquot.
3620 If the incore dquot is not being flushed, add the ondisk buffer backing the
3623 4. Write the buffer list to disk.
3626 filesystem objects until the newly collected metadata reflect all filesystem
3629 index implemented with a sparse ``xfarray``, and only writes to the real dquots
3630 once the scan is complete.
3634 1. The inodes involved are joined and locked to a transaction.
3636 2. For each dquot attached to the file:
3638 a. The dquot is locked.
3640 b. A quota reservation is added to the dquot's resource usage.
3641 The reservation is recorded in the transaction.
3643 c. The dquot is unlocked.
3645 3. Changes in actual quota usage are tracked in the transaction.
3649 a. The dquot is locked again.
3652 the dquot.
3654 c. The dquot is unlocked.
3657 The step 2 hook creates a shadow version of the transaction dquot context
3658 (``dqtrx``) that operates in a similar manner to the regular code.
3659 The step 4 hook commits the shadow ``dqtrx`` changes to the shadow dquots.
3660 Notice that both hooks are called with the inode locked, which is how the
3661 live update coordinates with the inode scanner.
3663 The quotacheck scan looks like this:
3667 2. For each inode returned by the inode scan iterator:
3669 a. Grab and lock the inode.
3672 realtime blocks) and add that to the shadow dquots for the user, group,
3673 and project ids associated with the inode.
3675 c. Unlock and release the inode.
3677 3. For each dquot in the system:
3679 a. Grab and lock the dquot.
3681 b. Check the dquot against the shadow dquots created by the scan and updated
3682 by the live hooks.
3686 If repairs are desired, the real and shadow dquots are locked and their
3687 resource counts are set to the values in the shadow dquot.
3689 The proposed patchset is the
3700 The coordinated inode scanner is used to visit all directories on the
3703 During the scanning phase, each entry in a directory generates observation
3706 1. If the entry is a dotdot (``'..'``) entry of the root directory, the
3707 directory's parent link count is bumped because the root directory's dotdot
3710 2. If the entry is a dotdot entry of a subdirectory, the parent's backref
3713 3. If the entry is neither a dot nor a dotdot entry, the target file's parent
3716 4. If the target is a subdirectory, the parent's child link count is bumped.
3718 A crucial point to understand about how the link count inode scanner interacts
3719 with the live update hooks is that the scan cursor tracks which *parent*
3721 In other words, the live updates ignore any update about ``A → B`` when A has
3724 accounted as a backref counter in the shadow data for A, since child dotdot
3725 entries affect the parent's link count.
3726 Live update hooks are carefully placed in all parts of the filesystem that
3730 For any file, the correct link count is the number of parents plus the number
3733 The backref information is used to detect inconsistencies in the number of
3734 links pointing to child subdirectories and the number of dotdot entries
3737 After the scan completes, the link count of each file can be checked by locking
3738 both the inode and the shadow data, and comparing the link counts.
3742 If repairs are desired, the inode's link count is set to the value in the
3744 If no parents are found, the file must be :ref:`reparented <orphanage>` to the
3745 orphanage to prevent the file from being lost forever.
3747 The proposed patchset is the
3757 Most repair functions follow the same pattern: lock filesystem resources,
3758 walk the surviving ondisk metadata looking for replacement metadata records,
3759 and use an :ref:`in-memory array <xfarray>` to store the gathered observations.
3760 The primary advantage of this approach is the simplicity and modularity of the
3761 repair code -- code and data are entirely contained within the scrub module,
3762 do not require hooks in the main filesystem, and are usually the most efficient
3764 A secondary advantage of this repair approach is atomicity -- once the kernel
3765 decides a structure is corrupt, no other threads can access the metadata until
3766 the kernel finishes repairing and revalidating the metadata.
3768 For repairs going on within a shard of the filesystem, these advantages
3769 outweigh the delays inherent in locking the shard while repairing parts of the
3771 Unfortunately, repairs to the reverse mapping btree cannot use the "standard"
3773 every file in the filesystem, and the filesystem cannot stop.
3776 <liveupdate>`, and an :ref:`in-memory rmap btree <xfbtree>` to complete the
3781 2. While holding the locks on the AGI and AGF buffers acquired during the
3783 staging extents, and the internal log.
3787 4. Hook into rmap updates for the AG being repaired so that the live scan data
3788 can receive updates to the rmap btree from the rest of the filesystem during
3789 the file scan.
3792 decide if the mapping matches the AG of interest.
3795 a. Create a btree cursor for the in-memory btree.
3797 b. Use the rmap code to add the record to the in-memory btree.
3799 c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3800 xfbtree changes to the xfile.
3802 6. For each live update received via the hook, decide if the owner has already
3804 If so, apply the live update into the scan data:
3806 a. Create a btree cursor for the in-memory btree.
3808 b. Replay the operation into the in-memory btree.
3810 c. Use the :ref:`special commit function <xfbtree_commit>` to write the
3811 xfbtree changes to the xfile.
3812 This is performed with an empty transaction to avoid changing the
3815 7. When the inode scan finishes, create a new scrub transaction and relock the
3818 8. Compute the new btree geometry using the number of rmap records in the
3821 9. Allocate the number of blocks computed in the previous step.
3823 10. Perform the usual btree bulk loading and commit to install the new rmap
3826 11. Reap the old rmap btree blocks as discussed in the case study about how
3829 12. Free the xfbtree now that it is no longer needed.
3831 The proposed patchset is the
3841 information for the realtime volume, and quota records.
3846 attributes) use blocks mapped in the file fork offset address space that point
3849 the file fork offset address space.
3851 Because file forks can consume as much space as the entire filesystem, repairs
3854 the XFS filesystem, writes a new structure at the correct offsets into the
3855 temporary file, and atomically exchanges all file fork mappings (and hence the
3856 fork contents) to commit the repair.
3857 Once the repair is complete, the old fork can be reaped as necessary; if the
3858 system goes down during the reap, the iunlink code will delete the blocks
3861 **Note**: All space usage and inode indices in the filesystem *must* be
3863 This dependency is the reason why online repair can only use pageable kernel
3866 Exchanging metadata file mappings with a temporary file requires the owner
3867 field of the block headers to match the file being repaired and not the
3869 The directory, extended attribute, and symbolic link functions were all
3872 There is a downside to the reaping process -- if the system crashes during the
3873 reap phase and the fork extents are crosslinked, the iunlink processing will
3874 fail because freeing space will find the extra reverse mappings and abort.
3878 They are not linked into a directory and the entire file will be reaped when
3879 the last reference to the file is lost.
3880 The key differences are that these files must have no access permission outside
3881 the kernel at all, they must be specially marked to prevent them from being
3882 opened by handle, and they must never be linked into the directory tree.
3887 | In the initial iteration of file metadata repair, the damaged metadata |
3888 | blocks would be scanned for salvageable data; the extents in the file |
3891 | This strategy did not survive the introduction of the atomic repair |
3894 | The second iteration explored building a second structure at a high |
3895 | offset in the fork from the salvage data, reaping the old extents, and |
3896 | using a ``COLLAPSE_RANGE`` operation to slide the new extents into |
3901 | - Array structures are linearly addressed, and the regular filesystem |
3902 | codebase does not have the concept of a linear offset that could be |
3903 | applied to the record offset computation to build an alternate copy. |
3905 | - Extended attributes are allowed to use the entire attr fork offset |
3909 | different part of the fork address space, the atomic repair commit |
3911 | a log assisted ``COLLAPSE_RANGE`` operation to ensure that the old |
3914 | - A crash after construction of the secondary tree but before the range |
3915 | collapse would leave unreachable blocks in the file fork. |
3922 | - Directory entry blocks and quota records record the file fork offset |
3923 | in the header area of each block. |
3931 | Were the atomic commit to use a range collapse operation, each block |
3932 | would have to be rewritten very carefully to preserve the graph |
3937 | This led to the introduction of temporary file staging.  |
3943 Online repair code should use the ``xrep_tempfile_create`` function to create a
3944 temporary file inside the filesystem.
3945 This allocates an inode, marks the in-core inode private, and attaches it to
3946 the scrub context.
3947 These files are hidden from userspace, may not be added to the directory tree,
3950 Temporary files only use two inode locks: the IOLOCK and the ILOCK.
3951 The MMAPLOCK is not needed here, because there must not be page faults from
3953 The usage patterns of these two locks are the same as for any other XFS file --
3954 access to file data are controlled via the IOLOCK, and access to file metadata
3955 are controlled via the ILOCK.
3956 Locking helpers are provided so that the temporary file and its lock state can
3957 be cleaned up by the scrub context.
3958 To comply with the nested locking strategy laid out in the :ref:`inode
3959 locking<ilocking>` section, it is recommended that scrub functions use the
3964 1. ``xrep_tempfile_copyin`` can be used to set the contents of a regular
3967 2. The regular directory, symbolic link, and extended attribute functions can
3968 be used to write to the temporary file.
3971 must be conveyed to the file being repaired, which is the topic of the next
3974 The proposed patches are in the
3983 it, it must commit the new changes into the existing file.
3984 It is not possible to swap the inumbers of two files, so instead the new
3985 metadata must replace the old.
3986 This suggests the need for the ability to swap extents, but the existing extent
3987 swapping code used by the file defragmenting tool ``xfs_fsr`` is not sufficient
3990 a. When the reverse-mapping btree is enabled, the swap code must keep the
3995 b. Reverse-mapping is critical for the operation of online fsck, so the old
4002 change in file contents, even if the operation is interrupted.
4004 d. Online repair needs to swap the contents of two files that are by definition
4006 For directory and xattr repairs, the user-visible contents might be the
4007 same, but the contents of individual blocks may be very different.
4009 e. Old blocks in the file may be cross-linked with another structure and must
4010 not reappear if the system goes down mid-repair.
4013 of log intent item to track the progress of an operation to exchange two file
4015 The new exchange operation type chains together the same transactions used by
4016 the reverse-mapping extent swap code, but records intermediate progress in the
4018 This new functionality is called the file contents exchange (xfs_exchrange)
4020 The underlying implementation exchanges file fork mappings (xfs_exchmaps).
4021 The new log item records the progress of the exchange to ensure that once an
4024 The new ``XFS_SB_FEAT_INCOMPAT_EXCHRANGE`` incompatible feature flag
4025 in the superblock protects these new log item records from being replayed on
4028 The proposed patchset is the
4036 | Starting with XFS v5, the superblock contains a |
4037 | ``sb_features_log_incompat`` field to indicate that the log contains |
4040 | In short, log incompat features protect the log contents against kernels |
4041 | that will not understand the contents. |
4042 | Unlike the other superblock feature bits, log incompat bits are |
4044 | The log cleans itself after its contents have been committed into the |
4045 | filesystem, either as part of an unmount or because the system is |
4047 | Because upper level code can be working on a transaction at the same |
4048 | time that the log cleans itself, it is necessary for upper level code to |
4049 | communicate to the log when it is going to use a log incompatible |
4052 | The log coordinates access to incompatible features through the use of |
4054 | The log cleaning code tries to take this rwsem in exclusive mode to |
4055 | clear the bit; if the lock attempt fails, the feature bit remains set. |
4056 | The code supporting a log incompat feature should create wrapper |
4057 | functions to obtain the log feature and call |
4058 | ``xfs_add_incompat_log_feature`` to set the feature bits in the primary |
4060 | The superblock update is performed transactionally, so the wrapper to |
4061 | obtain log assistance must be called just prior to the creation of the |
4062 | transaction that uses the functionality. |
4063 | For a file operation, this step must happen after taking the IOLOCK |
4064 | and the MMAPLOCK, but before allocating the transaction. |
4065 | When the transaction is complete, the ``xlog_drop_incompat_feat`` |
4066 | function is called to release the feature. |
4067 | The feature bit will not be cleared from the superblock until the log |
4071 | use log incompat features and provide convenience wrappers around the |
4079 The goal is to exchange all file fork mappings between two file fork offset
4081 There are likely to be many extent mappings in each fork, and the edges of
4082 the mappings aren't necessarily aligned.
4083 Furthermore, there may be other updates that need to happen after the exchange,
4086 This is roughly the format of the new deferred exchange-mapping work item:
4091 /* Inodes participating in the operation. */
4100 /* Set these file sizes after the operation, unless negative. */
4108 The new log intent item contains enough information to track two logical fork
4111 Each step of an exchange operation exchanges the largest file range mapping
4112 possible from one file to the other.
4113 After each step in the exchange operation, the two startoff fields are
4114 incremented and the blockcount field is decremented to reflect the progress
4116 The flags field captures behavioral parameters such as exchanging attr fork
4117 mappings instead of the data fork and other work to be done after the exchange.
4118 The two isize fields are used to exchange the file sizes at the end of the
4119 operation if the file data fork is the target of the operation.
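Pulling together the fields named in this description, the incore work item
might look roughly like the sketch below; the two inode pointers and the exact
type names are assumptions, since only the startoff, blockcount, flags, and
isize fields are described above:

.. code-block:: c

        /* Illustrative reconstruction of the exchange-mapping work item. */
        struct xfs_exchmaps_intent {
                /* Inodes participating in the operation (assumed). */
                struct xfs_inode        *xmi_ip1;
                struct xfs_inode        *xmi_ip2;

                /* File offset range information. */
                xfs_fileoff_t           xmi_startoff1;
                xfs_fileoff_t           xmi_startoff2;
                xfs_filblks_t           xmi_blockcount;

                /* Set these file sizes after the operation, unless negative. */
                xfs_fsize_t             xmi_isize1;
                xfs_fsize_t             xmi_isize2;

                /* Behavioral flags, e.g. exchange attr fork mappings. */
                uint64_t                xmi_flags;
        };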
4121 When the exchange is initiated, the sequence of operations is as follows:
4123 1. Create a deferred work item for the file mapping exchange.
4124 At the start, it should contain the entirety of the file block ranges to be
4127 2. Call ``xfs_defer_finish`` to process the exchange.
4129 This will log an extent swap intent item to the transaction for the deferred
4132 3. Until ``xmi_blockcount`` of the deferred mapping exchange work item is zero,
4134 a. Read the block maps of both file ranges starting at ``xmi_startoff1`` and
4135 ``xmi_startoff2``, respectively, and compute the longest extent that can
4137 This is the minimum of the two ``br_blockcount`` values in the mappings.
4138 Keep advancing through the file forks until at least one of the mappings
4140 Mutual holes, unwritten extents, and extent mappings to the same physical
4143 For the next few steps, this document will refer to the mapping that came
4144 from file 1 as "map1", and the mapping that came from file 2 as "map2".
4154 f. Log the block, quota, and extent count updates for both files.
4156 g. Extend the ondisk size of either file if necessary.
4159 item that was read at the start of step 3.
4161 i. Compute the amount of file range that has just been covered.
4165 j. Increase the starting offsets of ``xmi_startoff1`` and ``xmi_startoff2``
4166 by the number of blocks computed in the previous step, and decrease
4167 ``xmi_blockcount`` by the same quantity.
4168 This advances the cursor.
4170 k. Log a new mapping exchange intent log item reflecting the advanced state
4171 of the work item.
4173 l. Return the proper error code (EAGAIN) to the deferred operation manager
4175 The operation manager completes the deferred work in steps 3b-3e before
4176 moving back to the start of step 3.
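Expressed as pseudocode, one pass through step 3 looks roughly like the
function below; the helper names (``xmi_read_mappings``, ``xmi_exchange_one``)
are hypothetical stand-ins, and only the ``xmi_*`` cursor fields come from the
description above:

.. code-block:: c

        /* One advance of the mapping exchange cursor (illustrative). */
        STATIC int
        xmi_advance_one_step(
                struct xfs_trans                *tp,
                struct xfs_exchmaps_intent      *xmi)
        {
                struct xfs_bmbt_irec            map1, map2;
                xfs_filblks_t                   len;

                if (xmi->xmi_blockcount == 0)
                        return 0;       /* nothing left to exchange */

                /* 3a: find the longest exchangeable mappings at the cursor. */
                xmi_read_mappings(xmi, &map1, &map2);
                len = min_t(xfs_filblks_t, map1.br_blockcount,
                                           map2.br_blockcount);

                /*
                 * 3b-3g: unmap both ranges, remap each into the other file,
                 * and log the block, quota, and extent count updates.
                 */
                xmi_exchange_one(tp, xmi, &map1, &map2, len);

                /* 3i-3j: advance the cursor by the range just covered. */
                xmi->xmi_startoff1 += len;
                xmi->xmi_startoff2 += len;
                xmi->xmi_blockcount -= len;

                /*
                 * 3k-3l: the caller logs a new intent item; -EAGAIN tells the
                 * deferred operation manager to schedule another pass.
                 */
                return -EAGAIN;
        }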
4181 If the filesystem goes down in the middle of an operation, log recovery will
4182 find the most recent unfinished mapping exchange log intent item and restart
4185 will either see the old broken structure or the new one, and never a mishmash of
4193 First, regular files require the page cache to be flushed to disk before the
4195 Like any filesystem operation, file mapping exchanges must determine the
4197 files in the operation, and reserve that quantity of resources to avoid an
4199 The preparation step scans the ranges of both files to estimate:
4201 - Data device blocks needed to handle the repeated updates to the fork
4204 - Increase in quota usage for both files, if the two files do not share the
4206 - The number of extent mappings that will be added to each file.
4209 to different extents on the realtime volume, which could happen if the
4212 The need for precise estimation increases the run time of the exchange
4214 The filesystem must not run completely out of free space, nor can the mapping
4216 Regular users are required to abide by the quota limits, though metadata repairs
4222 Extended attributes, symbolic links, and directories can set the fork format to
4223 "local" and treat the fork as a literal area for data storage.
4226 - If both forks are in local format and the fork areas are large enough, the
4227 exchange is performed by copying the incore fork contents, logging both
4229 The atomic file mapping exchange mechanism is not necessary, since this can
4232 - If both forks map blocks, then the regular atomic file mapping exchange is
4236 The contents of the local format fork are converted to a block to perform the
4238 The conversion to block format must be done in the same transaction that
4239 logs the initial mapping exchange intent log item.
4240 The regular atomic mapping exchange is used to exchange the metadata file
4242 Special flags are set on the exchange operation so that the transaction can
4243 be rolled one more time to convert the second file's fork back to local
4244 format so that the second file will be ready to go as soon as the ILOCK is
4247 Extended attributes and directories stamp the owning inode into every block,
4248 but the buffer verifiers do not actually check the inode number!
4250 referential integrity, so prior to performing the mapping exchange, online
4251 repair builds every block in the new data structure with the owner field of the
4254 After a successful exchange operation, the repair operation must reap the old
4255 fork blocks by processing each fork mapping through the standard :ref:`file
4257 If the filesystem should go down during the reap part of the repair, the
4258 iunlink processing at the end of recovery will free both the temporary file and
4260 However, this iunlink processing omits the cross-link detection of online
4270 2. Use the staging data to write out new contents into the temporary repair
4272 The same fork must be written to as the one being repaired.
4274 3. Commit the scrub transaction, since the exchange resource estimation step
4278 the appropriate resource reservations, locks, and fill out a ``struct
4279 xfs_exchmaps_req`` with the details of the exchange operation.
4281 5. Call ``xrep_tempexch_contents`` to exchange the contents.
4283 6. Commit the transaction to complete the repair.
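Strung together, the commit sequence resembles the sketch below; only
``xrep_tempexch_contents`` and ``struct xfs_exchmaps_req`` appear in the text
above, so the allocation helper name and all argument lists are assumptions:

.. code-block:: c

        /* Illustrative commit sequence; signatures are guesses. */
        struct xfs_exchmaps_req         req = { };
        int                             error;

        /* Step 3: end the scrub transaction before estimating resources. */
        error = xfs_trans_commit(sc->tp);
        if (error)
                return error;

        /*
         * Step 4 (assumed helper): take locks, make reservations, and fill
         * out @req with the two files and the fork to be exchanged.
         */
        error = xrep_tempexch_trans_alloc(sc, XFS_DATA_FORK, &req);
        if (error)
                return error;

        /* Step 5: exchange the staged mappings into the file under repair. */
        error = xrep_tempexch_contents(sc, &req);
        if (error)
                return error;

        /* Step 6: commit to complete the repair. */
        return xfs_trans_commit(sc->tp);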
4287 Case Study: Repairing the Realtime Summary File
4290 In the "realtime" section of an XFS filesystem, free space is tracked via a
4292 Each bit in the bitmap represents one realtime extent, which is a multiple of
4293 the filesystem block size between 4KiB and 1GiB in size.
4294 The realtime summary file indexes the number of free extents of a given size to
4295 the offset of the block within the realtime free space bitmap where those free
4297 In other words, the summary file helps the allocator find free extents by
4298 length, similar to what the free space by count (cntbt) btree does for the data
4301 The summary file itself is a flat file (with no block headers or checksums!)
4303 counters to match the number of blocks in the rt bitmap.
4304 Each counter records the number of free extents that start in that bitmap block
4307 To check the summary file against the bitmap:
4309 1. Take the ILOCK of both the realtime bitmap and summary files.
4311 2. For each free space extent recorded in the bitmap:
4313 a. Compute the position in the summary file that contains a counter that
4316 b. Read the counter from the xfile.
4318 c. Increment it, and write it back to the xfile.
4320 3. Compare the contents of the xfile against the ondisk file.
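For a free extent of ``len`` realtime extents whose start falls in bitmap
block ``bbno``, the counter touched in steps 2a-2c can be located from the
log2 of the extent length and the bitmap block number. A sketch, where the
xfile helper names and signatures are assumptions and only the indexing scheme
comes from the description above:

.. code-block:: c

        /* Bump the in-memory summary counter for one observed free extent. */
        STATIC int
        xchk_rtsum_bump(
                struct xfile            *summary_xfile,
                struct xfs_mount        *mp,
                xfs_fileoff_t           bbno,   /* rtbitmap block of the start */
                xfs_rtbxlen_t           len)    /* free extent length, in rtx */
        {
                xfs_suminfo_t           value;
                unsigned int            log = xfs_highbit64(len);
                loff_t                  pos;
                int                     error;

                /* Counters are indexed by [log2 of length][bitmap block]. */
                pos = (log * mp->m_sb.sb_rbmblocks + bbno) * sizeof(value);

                error = xfile_load(summary_xfile, &value, sizeof(value), pos);
                if (error)
                        return error;
                value++;
                return xfile_store(summary_xfile, &value, sizeof(value), pos);
        }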
4322 To repair the summary file, write the xfile contents into the temporary file
4323 and use atomic mapping exchange to commit the new contents.
4324 The temporary file is then reaped.
4326 The proposed patchset is the
4335 Values are limited in size to 64KiB, but there is no limit on the number of
4337 The attribute fork is unpartitioned, which means that the root of the attribute
4341 user-provided names with the user-provided values.
4343 If the leaf information expands beyond a single block, a directory/attribute
4349 1. Walk the attr fork mappings of the file being repaired to find the attribute
4353 a. Walk the attr leaf block to find candidate keys.
4356 1. Check the name for problems, and ignore the name if there are any.
4358 2. Retrieve the value.
4359 If that succeeds, add the name and value to the staging xfarray and
4362 2. If the memory usage of the xfarray and xfblob exceed a certain amount of
4363 memory or there are no more attr fork blocks to examine, unlock the file and
4364 add the staged extended attributes to the temporary file.
4366 3. Use atomic file mapping exchange to exchange the new and old extended
4368 The old attribute blocks are now attached to the temporary file.
4370 4. Reap the temporary file.
4372 The proposed patchset is the
4382 The offline repair tool scans all inodes to find files with nonzero link count,
4385 moved to the ``/lost+found`` directory.
4388 The best that online repair can do at this time is to read directory data
4390 move orphans back into the directory tree.
4391 The salvage process is discussed in the case study at the end of this section.
4392 The :ref:`file link count fsck <nlinks>` code takes care of fixing link counts
4393 and moving orphans to the ``/lost+found`` directory.
4398 Unlike extended attributes, directory blocks are all the same size, so
4401 1. Find the parent of the directory.
4402 If the dotdot entry is readable, try to confirm that the alleged
4403 parent has a child entry pointing back to the directory being repaired.
4404 Otherwise, walk the filesystem to find it.
4406 2. Walk the first partition of the data fork of the directory to find the directory
4410 a. Walk the directory data block to find candidate entries.
4413 i. Check the name for problems, and ignore the name if there are any.
4415 ii. Retrieve the inumber and grab the inode.
4416 If that succeeds, add the name, inode number, and file type to the
4419 3. If the memory usage of the xfarray and xfblob exceed a certain amount of
4420 memory or there are no more directory data blocks to examine, unlock the
4421 directory and add the staged dirents into the temporary directory.
4422 Truncate the staging files.
4424 4. Use atomic file mapping exchange to exchange the new and old directory
4426 The old directory blocks are now attached to the temporary file.
4428 5. Reap the temporary file.
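The record stashed in step 2, and the stashing itself, might look like the
sketch below; the staging structure layout and the scan context are
illustrative, while the xfarray/xfblob append-and-remember-a-cookie pattern
follows the description above:

.. code-block:: c

        /* Illustrative staging record for one salvaged directory entry. */
        struct xrep_dirent_stash {
                xfblob_cookie           name_cookie;    /* name bytes live in the xfblob */
                xfs_ino_t               ino;            /* child inode number */
                uint8_t                 namelen;
                uint8_t                 ftype;          /* file type from the dirent */
        };

        /* Step 2b-ii: remember one candidate entry for later replay. */
        STATIC int
        xrep_dir_stash_dirent(
                struct xrep_dir_scan    *rd,            /* hypothetical scan state */
                const unsigned char     *name,
                uint8_t                 namelen,
                xfs_ino_t               ino,
                uint8_t                 ftype)
        {
                struct xrep_dirent_stash ent = {
                        .ino            = ino,
                        .namelen        = namelen,
                        .ftype          = ftype,
                };
                int                     error;

                error = xfblob_store(rd->dir_names, &ent.name_cookie, name,
                                namelen);
                if (error)
                        return error;

                return xfarray_append(rd->dir_entries, &ent);
        }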
4430 **Future Work Question**: Should repair revalidate the dentry cache when
4436 ensure that one of the following applies:
4438 1. The cached dentry reflects an ondisk dirent in the new directory.
4440 2. The cached dentry no longer has a corresponding ondisk dirent in the new
4441 directory and the dentry can be purged from the cache.
4443 3. The cached dentry no longer has an ondisk dirent but the dentry cannot be
4445 This is the problem case.
4447 Unfortunately, the current dentry cache design doesn't provide a means to walk
4451 The proposed patchset is the
4459 A parent pointer is a piece of file metadata that enables a user to locate the
4460 file's parent directory without having to traverse the directory tree from the
4462 Without them, reconstruction of directory trees is hindered in much the same
4463 way that the historic lack of reverse space mapping information once hindered
4465 The parent pointer feature, however, makes total directory reconstruction
4468 XFS parent pointers contain the information needed to identify the
4469 corresponding directory entry in the parent directory.
4471 parents in the form ``(dirent_name) → (parent_inum, parent_gen)``.
4472 The directory checking process can be strengthened to ensure that the target of
4473 each dirent also contains a parent pointer pointing back to the dirent.
4474 Likewise, each parent pointer can be checked by ensuring that the target of
4476 the parent pointer.
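Concretely, the attribute value behind the ``(dirent_name) → (parent_inum,
parent_gen)`` mapping is tiny; a sketch of the record, with field names
inferred from that description rather than quoted from the ondisk format
headers:

.. code-block:: c

        /*
         * Illustrative ondisk parent pointer value.  The dirent name is
         * stored as the attr name; the value carries the parent directory's
         * inumber and generation number.
         */
        struct xfs_parent_rec {
                __be64  p_ino;  /* parent directory inumber */
                __be32  p_gen;  /* parent directory generation */
        } __packed;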
4485 | extended attribute in the child that could be used to identify the |
4490 | 1. The XFS codebase of the late 2000s did not have the infrastructure to |
4491 | enforce strong referential integrity in the directory tree. |
4493 | followed up with the corresponding change to the reverse links. |
4500 | 3. The extended attribute did not record the name of the directory entry |
4501 | in the parent, so the SGI parent pointer implementation cannot be |
4502 | used to reconnect the directory tree. |
4506 | point before the maximum file link count is achieved. |
4508 | The original parent pointer design was too unstable for something like |
4511 | second implementation that solves all shortcomings of the first. |
4513 | manipulations of the extended attribute structures. |
4514 | This solves the referential integrity problem by making it possible to |
4515 | commit a dirent update and a parent pointer update in the same |
4517 | Chandan increased the maximum extent counts of both data and attribute |
4518 | forks, thereby ensuring that the extended attribute structure can grow |
4519 | to handle the maximum hardlink count of any file. |
4521 | For this second effort, the ondisk parent pointer format as originally |
4523 | The format was changed during development to eliminate the requirement |
4524 | of repair tools needing to ensure that the ``dirent_pos`` field
4529 | 1. The field could be designated advisory, since the other three values |
4530 | are sufficient to find the entry in the parent. |
4535 | solves the referential integrity problem but runs the risk that |
4536 | dirent creation will fail due to conflicts with the free space in the |
4539 | These conflicts could be resolved by appending the directory entry |
4540 | and amending the xattr code to support updating an xattr key and |
4541 | reindexing the dabtree, though this would have to be performed with |
4542 | the parent directory still locked. |
4544 | 3. Same as above, but remove the old parent pointer entry and add a new |
4547 | 4. Change the ondisk xattr format to |
4548 | ``(parent_inum, name) → (parent_gen)``, which would provide the attr |
4550 | update the dirent position. |
4551 | Unfortunately, this requires changes to the xattr code to support |
4554 | 5. Change the ondisk xattr format to ``(parent_inum, hash(name)) → |
4556 | If the hash is sufficiently resistant to collisions (e.g. sha256) |
4557 | then this should provide the attr name uniqueness that we require. |
4560 | 6. Change the ondisk xattr format to ``(dirent_name) → (parent_ino, |
4561 | parent_gen)``. This format doesn't require any of the complicated |
4562 | nested name hashing of the previous suggestions. However, it was |
4563 | discovered that multiple hardlinks to the same inode with the same |
4565 | the parent inumber is now xor'd into the hash index. |
4567 | In the end, it was decided that solution #6 was the most compact and the |
4578 1. Set up a temporary directory for generating the new directory structure,
4579 an xfblob for storing entry names, and an xfarray for stashing the fixed
4583 2. Set up an inode scanner and hook into the directory entry code to receive
4586 3. For each parent pointer found in each file scanned, decide if the parent
4587 pointer references the directory of interest.
4590 a. Stash the parent pointer name and an addname entry for this dirent in the
4593 b. When finished scanning that file or the kernel memory consumption exceeds
4594 a threshold, flush the stashed updates to the temporary directory.
4596 4. For each live directory update received via the hook, decide if the child
4600 a. Stash the parent pointer name and an addname or removename entry for this
4601 dirent update in the xfblob and xfarray for later.
4602 We cannot write directly to the temporary directory because hook
4604 Instead, we stash updates in the xfarray and rely on the scanner thread
4605 to apply the stashed updates to the temporary directory.
4607 5. When the scan is complete, replay any stashed entries in the xfarray.
4609 6. When the scan is complete, atomically exchange the contents of the temporary
4610 directory and the directory being repaired.
4611 The temporary directory now contains the damaged directory structure.
4613 7. Reap the temporary directory.
4615 The proposed patchset is the
4627 an xfblob for storing parent pointer names, and an xfarray for stashing the
4631 2. Set up an inode scanner and hook into the directory entry code to receive
4634 3. For each directory entry found in each directory scanned, decide if the
4635 dirent references the file of interest.
4638 a. Stash the dirent name and an addpptr entry for this parent pointer in the
4641 b. When finished scanning the directory or the kernel memory consumption
4642 exceeds a threshold, flush the stashed updates to the temporary file.
4644 4. For each live directory update received via the hook, decide if the parent
4648 a. Stash the dirent name and an addpptr or removepptr entry for this dirent
4649 update in the xfblob and xfarray for later.
4650 We cannot write parent pointers directly to the temporary file because
4652 Instead, we stash updates in the xfarray and rely on the scanner thread
4653 to apply the stashed parent pointer updates to the temporary file.
4655 5. When the scan is complete, replay any stashed entries in the xfarray.
4657 6. Copy all non-parent pointer extended attributes to the temporary file.
4659 7. When the scan is complete, atomically exchange the mappings of the attribute
4660 forks of the temporary file and the file being repaired.
4661 The temporary file now contains the damaged extended attribute structure.
4663 8. Reap the temporary file.
4665 The proposed patchset is the
4675 Parent pointer checks are therefore a second pass to be added to the existing
4678 1. After the set of surviving files has been established (phase 6),
4679 walk the surviving directories of each AG in the filesystem.
4680 This is already performed as part of the connectivity checks.
4684 a. If the name has already been stored in the xfblob, then use that cookie
4685 and skip the next step.
4687 b. Otherwise, record the name in an xfblob, and remember the xfblob cookie.
4692 2. Creating a stable sort key for the parent pointer indexes so that the
4696 name_cookie)`` tuples in a per-AG in-memory slab. The ``name_hash``
4697 referenced in this section is the regular directory entry name hash, not
4698 the specialized one used for parent pointer xattrs.
4700 3. For each AG in the filesystem,
4702 a. Sort the per-AG tuple set in order of ``child_ag_inum``, ``parent_inum``,
4705 handling the uncommon case of a directory containing multiple hardlinks
4706 to the same file where all the names hash to the same value.
4708 b. For each inode in the AG,
4710 1. Scan the inode for parent pointers.
4713 a. Validate the ondisk parent pointer.
4714 If validation fails, move on to the next parent pointer in the
4717 b. If the name has already been stored in the xfblob, then use that
4718 cookie and skip the next step.
4720 c. Record the name in a per-file xfblob, and remember the xfblob
4726 2. Sort the per-file tuples in order of ``parent_inum``, ``name_hash``,
4729 3. Position one slab cursor at the start of the inode's records in the
4731 This should be trivial since the per-AG tuples are in child inumber
4734 4. Position a second slab cursor at the start of the per-file tuple slab.
4736 5. Iterate the two cursors in lockstep, comparing the ``parent_inum``,
4737 ``name_hash``, and ``name_cookie`` fields of the records under each
4740 a. If the per-AG cursor is at a lower point in the keyspace than the
4741 per-file cursor, then the per-AG cursor points to a missing parent
4743 Add the parent pointer to the inode and advance the per-AG
4746 b. If the per-file cursor is at a lower point in the keyspace than
4747 the per-AG cursor, then the per-file cursor points to a dangling
4749 Remove the parent pointer from the inode and advance the per-file
4752 c. Otherwise, both cursors point at the same parent pointer.
4753 Update the parent_gen component if necessary.
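The lockstep comparison in step 5 is an ordinary sorted merge. In outline,
with invented types and helpers and only the comparison key (``parent_inum``,
``name_hash``, ``name_cookie``) taken from the text above:

.. code-block:: c

        /* Tuple layout used only for this sketch. */
        struct pptr_tuple {
                uint64_t        parent_inum;
                uint32_t        name_hash;
                uint64_t        name_cookie;
        };

        /* Merge the per-AG dirent tuples with the per-file parent pointers. */
        static void
        merge_pptr_tuples(struct inode_ctx *ip,
                          struct slab_cursor *ag_cur,
                          struct slab_cursor *file_cur)
        {
                struct pptr_tuple *ag_rec = slab_next(ag_cur);
                struct pptr_tuple *file_rec = slab_next(file_cur);

                while (ag_rec || file_rec) {
                        int cmp = cmp_tuples(ag_rec, file_rec); /* NULL sorts last */

                        if (cmp < 0) {
                                /* 5a: dirent seen, no parent pointer: add one. */
                                add_parent_pointer(ip, ag_rec);
                                ag_rec = slab_next(ag_cur);
                        } else if (cmp > 0) {
                                /* 5b: dangling parent pointer: remove it. */
                                remove_parent_pointer(ip, file_rec);
                                file_rec = slab_next(file_cur);
                        } else {
                                /* 5c: both agree; refresh parent_gen if needed. */
                                update_parent_gen(ip, ag_rec, file_rec);
                                ag_rec = slab_next(ag_cur);
                                file_rec = slab_next(file_cur);
                        }
                }
        }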
4758 The proposed patchset is the
4764 challenging because xfs_repair currently uses two single-pass scans of the
4769 1. The first pass of the scan zaps corrupt inodes, forks, and attributes
4773 2. The next pass records parent pointers pointing to the directories noted
4774 as being corrupt in the first pass.
4775 This second pass may have to happen after the phase 4 scan for duplicate
4778 3. The third pass resets corrupt directories to an empty shortform directory.
4779 Free space metadata has not been ensured yet, so repair cannot yet use the
4782 4. At the start of phase 6, space metadata have been rebuilt.
4783 Use the parent pointer information recorded during step 2 to reconstruct
4784 the dirents and add them to the now-empty directories.
4793 As mentioned earlier, the filesystem directory tree is supposed to be a
4796 own locks, which makes validating the tree qualities difficult.
4799 Directories typically constitute 5-10% of the files in a filesystem, which
4800 reduces the amount of work dramatically.
4802 If the directory tree could be frozen, it would be easy to discover cycles and
4804 from the root directory and marking a bitmap for each directory found.
4805 At any point in the walk, trying to set an already set bit means there is a
4807 After the scan completes, XORing the marked inode bitmap with the inode
4809 However, one of online repair's design goals is to avoid locking the entire
4811 Directory tree updates can move subtrees across the scanner wavefront on a live
4812 filesystem, so the bitmap algorithm cannot be applied.
4814 Directory parent pointers enable an incremental approach to validation of the
4816 Instead of using one thread to scan the entire filesystem, multiple threads can
4817 walk from individual subdirectories upwards towards the root.
4819 consistent, each directory entry must have a parent pointer, and the link
4821 Each scanner thread must be able to take the IOLOCK of an alleged parent
4822 directory while holding the IOLOCK of the child directory to prevent either
4823 directory from being moved within the tree.
4824 This is not possible since the VFS does not take the IOLOCK of a child
4825 subdirectory when moving that subdirectory, so instead the scanner stabilizes
4826 the parent -> child relationship by taking the ILOCKs and installing a dirent
4829 The scanning process uses a dirent hook to detect changes to the directories
4830 mentioned in the scan data.
4831 The scan works as follows:
4833 1. For each subdirectory in the filesystem,
4837 1. Create a path object for that parent pointer, and mark the
4838 subdirectory inode number in the path object's bitmap.
4840 2. Record the parent pointer name and inode number in a path structure.
4842 3. If the alleged parent is the subdirectory being scrubbed, the path is
4844 Mark the path for deletion and repeat step 1a with the next
4847 4. Try to mark the alleged parent inode number in a bitmap in the path
4849 If the bit is already set, then there is a cycle in the directory
4851 Mark the path as a cycle and repeat step 1a with the next subdirectory
4854 5. Load the alleged parent.
4855 If the alleged parent is not a linked directory, abort the scan
4856 because the parent pointer information is inconsistent.
4860 a. Record the parent pointer name and inode number in the path object
4863 b. If an ancestor has more than one parent, mark the path as corrupt.
4864 Repeat step 1a with the next subdirectory parent pointer.
4866 c. Repeat steps 1a3-1a6 for the ancestor identified in step 1a6a.
4867 This repeats until the directory tree root is reached or no parents
4870 7. If the walk terminates at the root directory, mark the path as ok.
4872 8. If the walk terminates without reaching the root, mark the path as
4875 2. If the directory entry update hook triggers, check all paths already found
4876 by the scan.
4877 If the entry matches part of a path, mark that path and the scan stale.
4878 When the scanner thread sees that the scan has been marked stale, it deletes
4881 Repairing the directory tree works as follows:
4883 1. Walk each path of the target subdirectory.
4889 c. Paths that reached the root are counted as good.
4891 2. If the subdirectory is either the root directory or has zero link count,
4892 delete all incoming directory entries in the immediate parents.
4895 3. If the subdirectory has exactly one path, set the dotdot entry to the
4898 4. If the subdirectory has at least one good path, delete all the other
4899 incoming directory entries in the immediate parents.
4901 5. If the subdirectory has no good paths and more than one suspect path, delete
4902 all the other incoming directory entries in the immediate parents.
4904 6. If the subdirectory has zero paths, attach it to the lost and found.
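Schematically, the disposition rules above collapse into a single ordered
check; every helper name below is invented for the sketch:

.. code-block:: c

        /* Disposition of one scanned subdirectory (illustrative only). */
        if (is_root || link_count == 0)
                return delete_all_incoming_dirents();            /* step 2 */
        if (nr_paths == 1)
                return set_dotdot_to_that_parent();              /* step 3 */
        if (nr_good > 0)
                return delete_all_parents_except_one_good();     /* step 4 */
        if (nr_good == 0 && nr_suspect > 1)
                return delete_all_parents_except_one_suspect();  /* step 5 */
        if (nr_paths == 0)
                return move_to_orphanage();                      /* step 6 */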
4906 The proposed patches are in the
4914 The Orphanage
4919 The root of the filesystem is a directory, and each entry in a directory points
4921 Unfortunately, a disruption in the directory graph pointers results in a
4925 Without parent pointers, the directory parent pointer online scrub code can
4927 back to the child directory and the file link count checker can detect a file
4928 that isn't pointed to by any directory in the filesystem.
4929 If such a file has a positive link count, the file is an orphan.
4933 This should reduce the incidence of files ending up in ``/lost+found``.
4935 When orphans are found, they should be reconnected to the directory tree.
4936 Offline fsck solves the problem by creating a directory ``/lost+found`` to
4937 serve as an orphanage, and linking orphan files into the orphanage by using the
4938 inumber as the name.
4939 Reparenting a file to the orphanage does not reset any of its permissions or
4942 This process is more involved in the kernel than it is in userspace.
4943 The directory and file link count repair setup functions must use the regular
4944 VFS mechanisms to create the orphanage directory with all the necessary
4948 Orphaned files are adopted by the orphanage as follows:
4950 1. Call ``xrep_orphanage_try_create`` at the start of the scrub setup function
4951 to try to ensure that the lost and found directory actually exists.
4952 This also attaches the orphanage directory to the scrub context.
4954 2. If the decision is made to reconnect a file, take the IOLOCK of both the
4955 orphanage and the file being reattached.
4956 The ``xrep_orphanage_iolock_two`` function follows the inode locking
4959 3. Use ``xrep_adoption_trans_alloc`` to reserve resources to the repair
4962 4. Call ``xrep_orphanage_compute_name`` to compute the new name in the
4965 5. If the adoption is going to happen, call ``xrep_adoption_reparent`` to
4966 reparent the orphaned file into the lost and found and invalidate the dentry
4969 6. Call ``xrep_adoption_finish`` to commit any filesystem updates, release the
4970 orphanage ILOCK, and clean the scrub transaction. Call
4971 ``xrep_adoption_commit`` to commit the updates and the scrub transaction.
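Chained together, and with error handling and exact argument lists omitted
because only the function names appear above, the adoption sequence is
roughly:

.. code-block:: c

        /* Sketch of one adoption; all signatures here are assumptions. */
        xrep_orphanage_try_create(sc);                  /* step 1, at setup */

        xrep_orphanage_iolock_two(sc);                  /* step 2 */
        xrep_adoption_trans_alloc(sc, &adopt);          /* step 3 */
        xrep_orphanage_compute_name(&adopt, &xname);    /* step 4 */
        xrep_adoption_reparent(&adopt, &xname);         /* step 5 */
        xrep_adoption_finish(&adopt);                   /* step 6 */
        xrep_adoption_commit(sc, &adopt);               /* step 6 */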
4976 The proposed patches are in the
4984 This section discusses the key algorithms and data structures of the userspace
4985 program, ``xfs_scrub``, that provide the ability to drive metadata checks and
4986 repairs in the kernel, verify file data, and look for other potential problems.
4993 Recall the :ref:`phases of fsck work<scrubphases>` outlined earlier.
4994 That structure follows naturally from the data dependencies designed into the
4998 a. Filesystem summary counts depend on consistency within the inode indices,
4999 the allocation group space btrees, and the realtime volume space
5002 b. Quota resource counts depend on consistency within the quota file data
5003 forks, inode indices, inode records, and the forks of every file on the
5006 c. The naming hierarchy depends on consistency within the directory and
5011 the file forks that map directory and extended attribute data to physical
5014 e. The file forks depend on consistency within inode records and the space
5015 metadata indices of the allocation groups and the realtime volume.
5018 f. Inode records depend on consistency within the inode metadata indices.
5020 g. Realtime space metadata depend on the inode records and data forks of the
5023 h. The allocation group metadata indices (free space, inodes, reference count,
5024 and reverse mapping btrees) depend on consistency within the AG headers and
5025 between all the AG metadata btrees.
5027 i. ``xfs_scrub`` depends on the filesystem being mounted and kernel support
5031 operations in the ``xfs_scrub`` program:
5033 - Phase 1 checks that the provided path maps to an XFS filesystem and detects
5034 the kernel's scrubbing abilities, which validates group (i).
5052 Notice that the data dependencies between groups are enforced by the structure
5053 of the program flow.
5061 if the program has been invoked manually from a command line.
5062 This requires careful scheduling to keep the threads as evenly loaded as
5065 Early iterations of the ``xfs_scrub`` inode scanner naïvely created a single
5067 Each workqueue item walked the inode btree (with ``XFS_IOC_INUMBERS``) to find
5070 The file handle was then passed to a function to generate scrub items for each
5072 This simple algorithm leads to thread balancing problems in phase 3 if the
5073 filesystem contains one AG with a few large sparse files and the rest of the
5075 The inode scan dispatch function was not sufficiently granular; it should have
5076 been dispatching at the level of individual inodes, or, to constrain memory
5081 Just like before, the first workqueue is seeded with one workqueue item per AG,
5083 The second workqueue, however, is configured with an upper bound on the number
5085 Each inode btree chunk found by the first workqueue's workers is queued to the
5089 If the second workqueue is too full, the workqueue add function blocks the
5090 first workqueue's workers until the backlog eases.
5091 This doesn't completely solve the balancing problem, but reduces it enough to
5094 The proposed patchsets are the scrub
5097 and the
5109 functioning of the inode indices to find inodes to scan.
5120 In the original design of ``xfs_scrub``, it was thought that repairs would be
5121 so infrequent that the ``struct xfs_scrub_metadata`` objects used to
5122 communicate with the kernel could also be used as the primary object to
5124 With recent increases in the number of optimizations possible for a given
5132 The :ref:`data dependencies <scrubcheck>` outlined earlier still apply, which
5133 means that ``xfs_scrub`` must try to complete the repair work scheduled by
5135 The repair process is as follows:
5137 1. Start a round of repair with a workqueue and enough workers to keep the CPUs
5138 as busy as the user desires.
5142 i. Ask the kernel to repair everything listed in the repair item for a
5145 ii. Make a note if the kernel made any progress in reducing the number
5148 iii. If the object no longer requires repairs, revalidate all metadata
5150 If the revalidation succeeds, drop the repair item.
5151 If not, requeue the item for more repairs.
5153 b. If any repairs were made, jump back to 1a to retry all the phase 2 items.
5157 i. Ask the kernel to repair everything listed in the repair item for a
5160 ii. Make a note if the kernel made any progress in reducing the number
5163 iii. If the object no longer requires repairs, revalidate all metadata
5165 If the revalidation succeeds, drop the repair item.
5166 If not, requeue the item for more repairs.
5168 d. If any repairs were made, jump back to 1c to retry all the phase 3 items.
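The retry strategy above amounts to running each phase's repair list to a
fixed point. A sketch in C; the item structure and the two wrappers around the
scrub/repair ioctl are illustrative, not the actual ``xfs_scrub`` types:

.. code-block:: c

        #include <stdbool.h>
        #include <stddef.h>

        struct repair_item {
                unsigned int    nr_problems;    /* problems still outstanding */
                bool            done;
        };

        /* Hypothetical wrappers around XFS_IOC_SCRUB_METADATA calls. */
        void ask_kernel_to_repair(struct repair_item *item);
        int revalidate_metadata(const struct repair_item *item);

        /* Re-run repairs for one phase until nothing improves. */
        static void
        repair_until_fixpoint(struct repair_item *items, size_t nr)
        {
                bool made_progress;

                do {
                        made_progress = false;

                        for (size_t i = 0; i < nr; i++) {
                                struct repair_item *item = &items[i];
                                unsigned int before = item->nr_problems;

                                if (item->done)
                                        continue;

                                ask_kernel_to_repair(item);             /* i */
                                if (item->nr_problems < before)         /* ii */
                                        made_progress = true;

                                /* iii: revalidate; drop or requeue the item. */
                                if (item->nr_problems == 0 &&
                                    revalidate_metadata(item) == 0)
                                        item->done = true;
                        }
                } while (made_progress);        /* 1b/1d: retry the phase */
        }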
5174 Complain if the repairs were not successful, since this is the last chance
5179 Corrupt file data blocks reported by phase 6 cannot be recovered by the
5182 The proposed patchsets are the
5185 refactoring of the
5191 and the
5199 If ``xfs_scrub`` succeeds in validating the filesystem metadata by the end of
5201 the filesystem.
5202 These names consist of the filesystem label, names in directory entries, and
5203 the names of extended attributes.
5204 Like most Unix filesystems, XFS imposes the sparest of constraints on the
5211 - Null bytes are not allowed in the filesystem label.
5213 Directory entries and attribute keys store the length of the name explicitly
5215 For this section, the term "naming domain" refers to any place where names are
5216 presented together -- all the names in a directory, or all the attributes of a
5219 Although the Unix naming constraints are very permissive, the reality of most
5223 with the C library because the kernel expects null-terminated names.
5224 In the common case, therefore, names found in an XFS filesystem are actually
5227 To maximize its expressiveness, the Unicode standard defines separate control
5229 systems around the world.
5230 For example, the character "Cyrillic Small Letter A" U+0430 "а" often renders
5233 The standard also permits characters to be constructed in multiple ways --
5236 For example, the character "Angstrom Sign" U+212B "Å" can also be expressed
5241 Like the standards that preceded it, Unicode also defines various control
5242 characters to alter the presentation of text.
5243 For example, the character "Right-to-Left Override" U+202E can trick some
5246 If the character "Zero Width Space" U+200B is encountered in a file name, the
5247 name will render identically to a name that does not have the zero width
5252 The kernel, in its indifference to upper level encoding schemes, permits this.
5253 Most filesystem drivers persist the byte sequence names that are given to them
5254 by the VFS.
5257 sections 4 and 5 of the
5260 When ``xfs_scrub`` detects UTF-8 encoding in use on a system, it uses the
5261 Unicode normalization form NFD in conjunction with the confusable name
5268 All of these potential issues are reported to the system administrator during
5274 The system administrator can elect to initiate a media scan of all file data
5276 This scan occurs after validation of all filesystem metadata (except for the summary
5278 The scan starts by calling ``FS_IOC_GETFSMAP`` to scan the filesystem space map
5281 they were data fork extents to reduce the command setup overhead.
5282 When the space map scan accumulates a region larger than 32MB, a media
5283 verification request is sent to the disk as a directio read of the raw block
5286 If the verification read fails, ``xfs_scrub`` retries with single-block reads
5287 to narrow down the failure to the specific region of the media, which is then recorded.
5288 When it has finished issuing verification requests, it again uses the space
5289 mapping ioctl to map the recorded media errors back to metadata structures
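The verification read itself is nothing more than a direct I/O read of the raw
block device; a sketch of the retry-and-narrow behavior described above (the
buffer must already be aligned for ``O_DIRECT``, and the callback is
illustrative):

.. code-block:: c

        #include <unistd.h>
        #include <sys/types.h>

        /*
         * Verify one accumulated region of a block device opened with
         * O_DIRECT.  On failure, retry in single filesystem blocks to
         * narrow down and record the bad region.  Illustrative sketch.
         */
        static void
        media_verify_region(int fd, void *aligned_buf, off_t start,
                            size_t len, size_t blocksize,
                            void (*record_error)(off_t pos, size_t len))
        {
                if (pread(fd, aligned_buf, len, start) == (ssize_t)len)
                        return;

                for (off_t pos = start; pos < start + (off_t)len;
                     pos += blocksize) {
                        if (pread(fd, aligned_buf, blocksize, pos) !=
                            (ssize_t)blocksize)
                                record_error(pos, blocksize);
                }
        }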
5297 It is hoped that the reader of this document has followed the designs laid out
5301 Although the scope of this work is daunting, it is hoped that this guide will
5304 Please feel free to contact the XFS mailing list with questions.
5309 As discussed earlier, a second frontend to the atomic file mapping exchange
5312 This frontend has been out for review for several years now, though the
5314 the proposal has not been pushed very hard.
5319 As mentioned earlier, XFS has long had the ability to swap extents between
5321 The earliest form of this was the fork swap mechanism, where the entire
5322 contents of data forks could be exchanged between two files by exchanging the
5325 some log support to continue rewriting the owner fields of BMBT blocks during
5327 When the reverse mapping btree was later added to XFS, the only way to maintain
5328 the consistency of the fork mappings with the reverse mapping index was to
5331 This mechanism is identical to steps 2-3 from the procedure above except for
5332 the new tracking items, because the atomic file mapping exchange mechanism is
5334 For the narrow case of file defragmentation, the file contents must be
5335 identical, so the recovery guarantees are not much of a gain.
5337 Atomic file content exchanges are much more flexible than the existing swapext
5338 implementations because they can guarantee that the caller never sees a mix of
5341 The extra flexibility enables several new use cases:
5345 Next, it opens a temporary file and calls the file clone operation to reflink
5346 the first file's contents into the temporary file.
5347 Writes to the original file should instead be written to the temporary file.
5348 Finally, the process calls the atomic file mapping exchange system call
5349 (``XFS_IOC_EXCHANGE_RANGE``) to exchange the file contents, thereby
5350 committing all of the updates to the original file, or none of them.
5354 - **Transactional file updates**: The same mechanism as above, but the caller
5355 only wants the commit to occur if the original file's contents have not
5357 To make this happen, the calling process snapshots the file modification and
5358 change timestamps of the original file before reflinking its data to the
5360 When the program is ready to commit the changes, it passes the timestamps
5361 into the kernel as arguments to the atomic file mapping exchange system call.
5362 The kernel only commits the changes if the provided timestamps match the
5367 logical sector size matching the filesystem block size to force all writes
5368 to be aligned to the filesystem block size.
5369 Stage all writes to a temporary file, and when that is complete, call the
5371 in the temporary file should be ignored.
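A hedged sketch of the first use case above, the commit-or-nothing file
update; the ``struct xfs_exchange_range`` layout, its header, and the field
assignments are assumptions about the ioctl interface rather than a definitive
reference, and error handling is omitted:

.. code-block:: c

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>           /* FICLONE */
        #include <xfs/xfs.h>            /* XFS_IOC_EXCHANGE_RANGE (assumed) */

        /* Atomically replace the contents of @path with staged updates. */
        static int
        commit_file_update(const char *path, const char *tmpdir)
        {
                int fd = open(path, O_RDWR);
                int tmpfd = open(tmpdir, O_TMPFILE | O_RDWR, 0600);
                struct xfs_exchange_range xchg = {
                        .file1_fd = tmpfd,      /* assumed field name */
                        /*
                         * Offsets, length, and flags for a whole-file
                         * exchange are assumed and omitted here.
                         */
                };

                /* Share the old contents, then stage updates in tmpfd. */
                ioctl(tmpfd, FICLONE, fd);
                /* ... pwrite() the new data into tmpfd here ... */

                /* Commit all of the updates to the original file, or none. */
                return ioctl(fd, XFS_IOC_EXCHANGE_RANGE, &xchg);
        }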
5378 As it turns out, the :ref:`refactoring <scrubrepair>` of repair items mentioned
5380 Since 2018, the cost of making a kernel call has increased considerably on some
5381 systems to mitigate the effects of speculative execution attacks.
5383 reduce the number of times an execution path crosses a security boundary.
5385 With vectorized scrub, userspace pushes to the kernel the identity of a
5387 simple representation of the data dependencies between the selected scrub
5389 The kernel executes as much of the caller's plan as it can until it hits a
5396 The relevant patchsets are the
5407 One serious shortcoming of the online fsck code is that the amount of time that
5408 it can spend in the kernel holding resource locks is basically unbounded.
5409 Userspace is allowed to send a fatal signal to the process which will cause
5411 for userspace to provide a time budget to the kernel.
5412 Given that the scrub codebase has helpers to detect fatal signals, it shouldn't
5414 operation and abort the operation if it exceeds budget.
5415 However, most repair functions have the property that once they begin to touch
5416 ondisk metadata, the operation cannot be cancelled cleanly, after which a QoS
5422 Over the years, many XFS users have requested the creation of a program to
5423 clear a portion of the physical storage underlying a filesystem so that it
5427 The first piece the ``clearspace`` program needs is the ability to read the
5429 This already exists in the form of the ``FS_IOC_GETFSMAP`` ioctl.
5430 The second piece it needs is a new fallocate mode
5431 (``FALLOC_FL_MAP_FREE_SPACE``) that allocates the free space in a region and
5433 Call this file the "space collector" file.
5434 The third piece is the ability to force an online repair.
5436 To clear all the metadata out of a portion of physical storage, clearspace
5437 uses the new fallocate map-freespace call to map any free space in that region
5438 to the space collector file.
5440 ``GETFSMAP`` and issues forced repair requests on the data structure.
5441 This often results in the metadata being rebuilt somewhere that is not being
5443 After each relocation, clearspace calls the "map free space" function again to
5444 collect any newly freed space in the region being cleared.
5446 To clear all the file data out of a portion of the physical storage, clearspace
5447 uses the FSMAP information to find relevant file data blocks.
5448 Having identified a good target, it uses the ``FICLONERANGE`` call on that part
5449 of the file to try to share the physical space with a dummy file.
5450 Cloning the extent means that the original owners cannot overwrite the
5452 Clearspace makes its own copy of the frozen extent in an area that is not being
5453 cleared, and uses ``FIDEDUPERANGE`` (or the :ref:`atomic file content exchanges
5454 <exchrange_if_unchanged>` feature) to change the target file's data extent
5455 mapping away from the area being cleared.
5456 When all other mappings have been moved, clearspace reflinks the space into the
5459 There are further optimizations that could apply to the above algorithm.
5463 the operation completes.
5466 With the refcount information exposed, clearspace can quickly find the longest,
5467 most shared data extents in the filesystem, and target them first.
5469 **Future Work Question**: How might the filesystem move inode chunks?
5472 that creates a new file with the old contents and then locklessly runs around
5473 the filesystem updating directory entries.
5474 The operation cannot complete if the filesystem goes down.
5476 hidden behind a jump label, and a log item that tracks the kernel walking the
5478 The trouble is, the kernel can't do anything about open files, since it cannot
5481 **Future Work Question**: Can static keys be used to minimize the cost of
5485 Until the first revocation, the bailout code need not be in the call path at
5488 The relevant patchsets are the
5499 Removing the end of the filesystem ought to be a simple matter of evacuating
5500 the data and metadata at the end of the filesystem, and handing the freed space
5501 to the shrink code.
5502 That requires an evacuation of the space at the end of the filesystem, which is a