Lines Matching full:we

77  * We need to make sure the buffer pointer returned is naturally aligned for the
78 * biggest basic data type we put into it. We have already accounted for this
81 * However, this padding does not get written into the log, and hence we have to
86 * We also add space for the xlog_op_header that describes this region in the
87 * log. This prepends the data region we return to the caller to copy their data
89 * is not 8 byte aligned, we have to be careful to ensure that we align the
90 * start of the buffer such that the region we return to the caller is 8 byte
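
The alignment rule above can be illustrated with a small userspace sketch. This is not the kernel code: the 12-byte header size is an assumption standing in for the op header, and the helper name is invented. The point is only that the returned data pointer is rounded up so that the region following the prepended header lands on an 8-byte boundary, and the alignment padding never becomes part of what is written.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define OP_HDR_SIZE 12  /* assumed size of the prepended header */

    /*
     * Hand back a data pointer that is 8-byte aligned and has room for a
     * header of OP_HDR_SIZE bytes immediately before it.  *used is advanced
     * to the data start so the caller can copy its payload there.
     */
    static void *prepare_region(void *buf, size_t *used, void **hdr)
    {
            uintptr_t base = (uintptr_t)buf + *used;
            uintptr_t data = (base + OP_HDR_SIZE + 7) & ~(uintptr_t)7;

            *hdr = (void *)(data - OP_HDR_SIZE);  /* header directly precedes data */
            *used = data - (uintptr_t)buf;
            return (void *)data;
    }

    int main(void)
    {
            uint64_t buf[32];  /* uint64_t storage keeps the buffer 8-byte aligned */
            size_t used = 0;
            void *hdr, *data;

            data = prepare_region(buf, &used, &hdr);
            printf("data misalignment: %lu, header sits %d bytes earlier\n",
                   (unsigned long)((uintptr_t)data & 7),
                   (int)((char *)data - (char *)hdr));
            return 0;
    }
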
171 * we have overrun available reservation space, return 0. The memory barrier
289 * path. Hence any lock will be globally hot if we take it unconditionally on
292 * As tickets are only ever moved on and off head->waiters under head->lock, we
293 * only need to take that lock if we are going to add the ticket to the queue
294 * and sleep. We can avoid taking the lock if the ticket was never added to
295 * head->waiters because the t_queue list head will be empty and we hold the
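
The lock-avoidance rule above can be approximated in a short userspace sketch. It is not the XFS code: the hand-rolled list and pthread mutex stand in for the kernel primitives, and the names are made up. The idea is simply that tickets are only ever queued or dequeued under the lock, so a ticket whose own list node is still empty knows it was never queued and can skip the globally hot lock on the common path.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct waiter {
            struct waiter *next, *prev;  /* points at itself when unqueued */
    };

    struct grant_head {
            pthread_mutex_t lock;
            struct waiter waiters;  /* list of sleeping tickets */
    };

    static void node_init(struct waiter *w)
    {
            w->next = w->prev = w;
    }

    static bool node_empty(const struct waiter *w)
    {
            return w->next == w;
    }

    /* Queueing always happens under the lock. */
    static void waiter_queue(struct grant_head *head, struct waiter *w)
    {
            pthread_mutex_lock(&head->lock);
            w->next = &head->waiters;
            w->prev = head->waiters.prev;
            head->waiters.prev->next = w;
            head->waiters.prev = w;
            pthread_mutex_unlock(&head->lock);
    }

    /*
     * Fast path: a ticket that never queued itself has an empty node and
     * can finish without touching the lock at all.
     */
    static void waiter_done(struct grant_head *head, struct waiter *w)
    {
            if (node_empty(w))
                    return;

            pthread_mutex_lock(&head->lock);
            w->prev->next = w->next;
            w->next->prev = w->prev;
            node_init(w);
            pthread_mutex_unlock(&head->lock);
    }

    int main(void)
    {
            struct grant_head head;
            struct waiter w;

            pthread_mutex_init(&head.lock, NULL);
            node_init(&head.waiters);
            node_init(&w);

            waiter_done(&head, &w);   /* never queued: no lock taken */
            waiter_queue(&head, &w);
            waiter_done(&head, &w);   /* queued: unlink under the lock */
            printf("ok\n");
            return 0;
    }
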
312 * logspace before us. Wake up the first waiters, if we do not wake in xlog_grant_head_check()
374 * This is a new transaction on the ticket, so we need to change the in xfs_log_regrant()
376 * the log. Just add one to the existing tid so that we can see chains in xfs_log_regrant()
397 * If we are failing, make sure the ticket doesn't have any current in xfs_log_regrant()
398 * reservations. We don't want to add this back when the ticket/ in xfs_log_regrant()
410 * When writes happen to the on-disk log, we don't subtract the length of the
412 * reservation, we prevent over allocation problems.
448 * If we are failing, make sure the ticket doesn't have any current in xfs_log_reserve()
449 * reservations. We don't want to add this back when the ticket/ in xfs_log_reserve()
459 * space waiters so they can process the newly set shutdown state. We really
460 * don't care what order we process callbacks here because the log is shut down
461 * and so state cannot change on disk anymore. However, we cannot wake waiters
462 * until the callbacks have been processed because we may be in unmount and
463 * we must ensure that all AIL operations the callbacks perform have completed
464 * before we tear down the AIL.
466 * We avoid processing actively referenced iclogs so that we don't run callbacks
501 * If XLOG_ICL_NEED_FUA is already set on the iclog, we need to ensure that the
504 * within the iclog. We need to ensure that the log tail does not move beyond
513 * the iclog will get zeroed on activation of the iclog after sync, so we
530 * of the tail LSN into the iclog so we guarantee that the log tail does in xlog_state_release_iclog()
531 * not move between the first time we know that the iclog needs to be in xlog_state_release_iclog()
532 * made stable and when we eventually submit it. in xlog_state_release_iclog()
613 * Now that we have set up the log and its internal geometry in xfs_log_mount()
614 * parameters, we can validate the given log space and drop a critical in xfs_log_mount()
618 * the other log geometry constraints, so we don't have to check those in xfs_log_mount()
621 * Note: For v4 filesystems, we can't just reject the mount if the in xfs_log_mount()
626 * We can, however, reject mounts for V5 format filesystems, as the in xfs_log_mount()
652 * Initialize the AIL now that we have a log. in xfs_log_mount()
684 * Now the log has been fully initialised and we know where our in xfs_log_mount()
685 * space grant counters are, we can initialise the permanent ticket in xfs_log_mount()
706 * If we finish recovery successfully, start the background log work. If we are
707 * not doing recovery, then we have a RO filesystem and we don't need to start
723 * During the second phase of log recovery, we need iget and in xfs_log_mount_finish()
726 * of inodes before we're done replaying log items on those in xfs_log_mount_finish()
728 * so that we don't leak the quota inodes if subsequent mount in xfs_log_mount_finish()
731 * We let all inodes involved in redo item processing end up on in xfs_log_mount_finish()
732 * the LRU instead of being evicted immediately so that if we do in xfs_log_mount_finish()
735 * in log recovery failure. We have to evict the unreferenced in xfs_log_mount_finish()
736 * lru inodes after clearing SB_ACTIVE because we don't in xfs_log_mount_finish()
752 * but we do it unconditionally to make sure we're always in a clean in xfs_log_mount_finish()
772 /* Make sure the log is dead if we're returning failure. */ in xfs_log_mount_finish()
807 * is done before we tear down these buffers.
825 * have been ordered and callbacks run before we are woken here, hence
851 * Write out an unmount record using the ticket provided. We have to account for
914 * At this point, we're unmounting anyway, so there's no point in in xlog_unmount_write()
947 * We just write the magic number now since that particular field isn't
966 * If we think the summary counters are bad, avoid writing the unmount in xfs_log_unmount_write()
985 * To do this, we first need to shut down the background log work so it is not
986 * trying to cover the log as we clean up. We then need to unpin all objects in
987 * the log so we can then flush them out. Once they have completed their IO and
988 * run the callbacks removing themselves from the AIL, we can cover the log.
995 * Clear log incompat features since we're quiescing the log. Report in xfs_log_quiesce()
1015 * XBF_ASYNC flag set, so we need to use a lock/unlock pair to wait for in xfs_log_quiesce()
1037 * During unmount, we need to ensure we flush all the dirty metadata objects
1038 * from the AIL so that the log is empty before we write the unmount record to
1039 * the log. Once this is done, we can tear down the AIL and the log.
1049 * cleaning will have been skipped and so we need to wait in xfs_log_unmount()
1050 * for the iclog to complete shutdown processing before we in xfs_log_unmount()
1084 * Wake up processes waiting for log space after we have moved the log tail.
1116 * Determine if we have a transaction that has gone to disk that needs to be
1119 * we start attempting to cover the log.
1121 * Only if we are then in a state where covering is needed, the caller is
1125 * If there are any items in the AIL or CIL, then we do not want to attempt to
1126 * cover the log as we may be in a situation where there isn't log space
1129 * there's no point in running a dummy transaction at this point because we
1191 * state machine if the log requires covering. Therefore, we must call in xfs_log_cover()
1192 * this function once and use the result until we've issued an sb sync. in xfs_log_cover()
1211 * we found it. in xfs_log_cover()
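
Read together, these fragments describe a simple gate: covering is only worth attempting once both the CIL and the AIL are empty, and only while the covering state machine still owes a dummy record. A hedged sketch of that decision, with invented structure and field names rather than the real kernel ones:

    #include <stdbool.h>

    struct toy_log {
            bool cil_empty;          /* nothing committed but not yet written */
            bool ail_empty;          /* nothing awaiting metadata writeback */
            int dummy_records_left;  /* covering records still to issue (0-2) */
    };

    /*
     * A dummy (covering) transaction issued while items remain in the CIL
     * or AIL could not make progress, so only report "needs covering" when
     * there is nothing left to flush.
     */
    bool log_need_covered(const struct toy_log *log)
    {
            if (!log->cil_empty || !log->ail_empty)
                    return false;
            return log->dummy_records_left > 0;
    }
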
1240 * Race to shutdown the filesystem if we see an error. in xlog_ioend_work()
1251 * Drop the lock to signal that we are done. Nothing references the in xlog_ioend_work()
1254 * unlock as we could race with it being freed. in xlog_ioend_work()
1264 * If the filesystem blocksize is too large, we may need to choose a
1297 * Clear the log incompat flags if we have the opportunity.
1299 * This only happens if we're about to log the second dummy transaction as part
1319 * Every sync period we need to unpin all items in the AIL and push them to
1320 * disk. If there is nothing dirty, then we might need to cover the log to
1338 * We cannot use an inode here for this - that will push dirty in xfs_log_worker()
1340 * will prevent log covering from making progress. Hence we in xfs_log_worker()
1441 * done this way so that we can use different sizes for machines in xlog_alloc_log()
1641 * We lock the iclogbufs here so that we can serialise against I/O in xlog_write_iclog()
1642 * completion during unmount. We might be processing a shutdown in xlog_write_iclog()
1644 * unmount thread, and hence we need to ensure that completes before in xlog_write_iclog()
1645 * tearing down the iclogbufs. Hence we need to hold the buffer lock in xlog_write_iclog()
1651 * It would seem logical to return EIO here, but we rely on in xlog_write_iclog()
1653 * doing it here. We kick off the state machine and unlock in xlog_write_iclog()
1661 * We use REQ_SYNC | REQ_IDLE here to tell the block layer there are more in xlog_write_iclog()
1676 * For external log devices, we also need to flush the data in xlog_write_iclog()
1679 * but it *must* complete before we issue the external log IO. in xlog_write_iclog()
1681 * If the flush fails, we cannot conclude that past metadata in xlog_write_iclog()
1683 * not possible, hence we must shut down with log IO error to in xlog_write_iclog()
1702 * If this log buffer would straddle the end of the log we will have in xlog_write_iclog()
1703 * to split it up into two bios, so that we can continue at the start. in xlog_write_iclog()
1727 * We need to bump cycle number for the part of the iclog that is
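
The wrap-around case reads most clearly as arithmetic. Below is a userspace sketch with invented names, working in 512-byte basic blocks rather than the kernel's bio plumbing: a write that would run past the physical end of the circular log is split into one piece that runs to the end and a second piece that continues at block zero.

    #include <stdio.h>

    struct io_piece {
            unsigned int start_block;  /* where this piece lands on disk */
            unsigned int nblocks;      /* length of this piece */
    };

    /* Returns the number of pieces (1 or 2) needed for the write. */
    static int split_log_write(unsigned int log_size, unsigned int start,
                               unsigned int count, struct io_piece piece[2])
    {
            if (start + count <= log_size) {
                    piece[0] = (struct io_piece){ start, count };
                    return 1;
            }

            /* First piece runs to the physical end of the log... */
            piece[0] = (struct io_piece){ start, log_size - start };
            /* ...and the remainder continues at the physical start. */
            piece[1] = (struct io_piece){ 0, count - piece[0].nblocks };
            return 2;
    }

    int main(void)
    {
            struct io_piece p[2];
            int n = split_log_write(1024, 1000, 40, p);

            for (int i = 0; i < n; i++)
                    printf("piece %d: start %u, len %u\n", i,
                           p[i].start_block, p[i].nblocks);
            return 0;
    }
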
1771 * fashion. Previously, we should have moved the current iclog
1775 * to save away the 1st word of each BBSIZE block into the header. We replace
1779 * we can't have part of a 512 byte block written and part not written. By
1780 * tagging each block, we will know which blocks are valid when recovering
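
The block-tagging scheme described here can be shown with a small userspace sketch (assumed sizes, not the on-disk format code): the first 32-bit word of every 512-byte block is stashed in a side array and overwritten with the record's cycle number, so recovery can tell a fully written block from a torn one, then restore the original words after reading the record back.

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define BBSIZE 512
    #define WORDS_PER_BLOCK (BBSIZE / sizeof(uint32_t))

    static void pack_cycle_data(uint32_t *data, size_t nblocks,
                                uint32_t *saved, uint32_t cycle)
    {
            for (size_t i = 0; i < nblocks; i++) {
                    saved[i] = data[i * WORDS_PER_BLOCK];  /* stash the real word */
                    data[i * WORDS_PER_BLOCK] = cycle;     /* tag block with cycle */
            }
    }

    static void unpack_cycle_data(uint32_t *data, size_t nblocks,
                                  const uint32_t *saved)
    {
            for (size_t i = 0; i < nblocks; i++)
                    data[i * WORDS_PER_BLOCK] = saved[i];  /* restore on read back */
    }

    int main(void)
    {
            uint32_t blocks[4 * WORDS_PER_BLOCK] = { 0xaaaa };
            uint32_t copy[4 * WORDS_PER_BLOCK];
            uint32_t saved[4];

            memcpy(copy, blocks, sizeof(blocks));
            pack_cycle_data(blocks, 4, saved, 7);  /* every block now starts with 7 */
            unpack_cycle_data(blocks, 4, saved);
            assert(memcmp(copy, blocks, sizeof(blocks)) == 0);
            return 0;
    }
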
1809 * If we have a ticket, account for the roundoff via the ticket in xlog_sync()
1811 * Otherwise, we have to move grant heads directly. in xlog_sync()
1834 /* Do we need to split this write into 2 parts? */ in xlog_sync()
2060 * length. We write until we cannot fit a full record into the remaining space
2061 * and then stop. We return the log vector that is to be written that cannot
2080 /* walk the logvec, copying until we run out of space in the iclog */ in xlog_write_partial()
2088 * start recovering from the next opheader it finds. Because we in xlog_write_partial()
2094 * opheader, then we need to start afresh with a new iclog. in xlog_write_partial()
2116 /* If we wrote the whole region, move to the next. */ in xlog_write_partial()
2121 * We now have a partially written iovec, but it can span in xlog_write_partial()
2122 * multiple iclogs so we loop here. First we release the iclog in xlog_write_partial()
2123 * we currently have, then we get a new iclog and add a new in xlog_write_partial()
2124 * opheader. Then we continue copying from where we were until in xlog_write_partial()
2125 * we either complete the iovec or fill the iclog. If we in xlog_write_partial()
2126 * complete the iovec, then we increment the index and go right in xlog_write_partial()
2127 * back to the top of the outer loop. If we fill the iclog, we in xlog_write_partial()
2132 * and get a new one before returning to the outer loop. We must in xlog_write_partial()
2133 * always guarantee that we exit this inner loop with at least in xlog_write_partial()
2135 * iclog, hence we cannot just terminate the loop at the end in xlog_write_partial()
2136 * of the continuation. So we loop while there is no in xlog_write_partial()
2142 * Ensure we include the continuation opheader in the in xlog_write_partial()
2143 * space we need in the new iclog by adding that size in xlog_write_partial()
2144 * to the length we require. This continuation opheader in xlog_write_partial()
2146 * consumes hasn't been accounted to the lv we are in xlog_write_partial()
2168 * continuation. Otherwise we're going around again. in xlog_write_partial()
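
The continuation loop described above boils down to "copy what fits, then start a new block, pay for a continuation header, and keep going". Below is a toy userspace sketch with made-up sizes and no real iclog or opheader structures; reserving the continuation header up front is what guarantees forward progress on every pass.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SPACE 64  /* usable bytes per "iclog" in this toy */
    #define CONT_HDR 8      /* assumed continuation opheader size */

    static void write_region(const char *data, size_t len)
    {
            size_t copied = 0;
            size_t space = BLOCK_SPACE;  /* space left in the current block */
            int block = 0;

            (void)data;  /* a real implementation would memcpy from here */
            while (copied < len) {
                    size_t chunk = len - copied;

                    if (chunk > space)
                            chunk = space;
                    printf("block %d: copy %zu bytes at offset %zu\n",
                           block, chunk, copied);
                    copied += chunk;
                    space -= chunk;

                    if (copied == len)
                            break;

                    /*
                     * The region continues in the next block: grab a new
                     * block and reserve room for the continuation header
                     * before copying any more data.
                     */
                    block++;
                    space = BLOCK_SPACE - CONT_HDR;
            }
    }

    int main(void)
    {
            char buf[200];

            memset(buf, 'x', sizeof(buf));
            write_region(buf, sizeof(buf));
            return 0;
    }
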
2204 * 2. Check whether we violate the ticket's reservation.
2211 * 3. Find out if we can fit entire region into this iclog
2231 * we don't really know exactly how much space will be used. As a result,
2232 * we don't update ic_offset until the end when we know exactly how many
2266 * If we have a context pointer, pass it the first iclog we are in xlog_write()
2285 * We have no iclog to release, so just return in xlog_write()
2298 * We've already been guaranteed that the last writes will fit inside in xlog_write()
2300 * those writes accounted to it. Hence we do not need to update the in xlog_write()
2321 * dummy transaction, we can change state into IDLE (the second time in xlog_state_activate_iclog()
2322 * around). Otherwise we should change the state into NEED a dummy. in xlog_state_activate_iclog()
2323 * We don't need to cover the dummy. in xlog_state_activate_iclog()
2330 * We have two dirty iclogs so start over. This could also be in xlog_state_activate_iclog()
2374 * We go to NEED for any non-covering writes. We go to NEED2 if we just in xlog_covered_state()
2375 * wrote the first covering record (DONE). We go to IDLE if we just in xlog_covered_state()
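
A minimal sketch of the covering-state progression quoted here, using an illustrative enum rather than the kernel definitions: any ordinary (non-covering) write restarts covering, the first covering record moves DONE to NEED2, the second moves DONE2 to IDLE, and an idle log stays idle until a normal write dirties it again.

    enum cover_state {
            COVER_IDLE,   /* log fully covered */
            COVER_NEED,   /* first covering (dummy) record still needed */
            COVER_DONE,   /* first covering record written */
            COVER_NEED2,  /* second covering record still needed */
            COVER_DONE2,  /* second covering record written */
    };

    enum cover_state next_cover_state(enum cover_state cur, int covering_write)
    {
            if (!covering_write)
                    return COVER_NEED;   /* any normal write restarts covering */

            switch (cur) {
            case COVER_IDLE:
                    return COVER_IDLE;   /* stay idle until a normal write occurs */
            case COVER_DONE:
                    return COVER_NEED2;  /* first covering record just completed */
            case COVER_DONE2:
                    return COVER_IDLE;   /* second one: the log is now covered */
            default:
                    return COVER_NEED;   /* anything else starts covering over */
            }
    }
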
2443 * Return true if we need to stop processing, false to continue to the next
2464 * Now that we have an iclog that is in the DONE_SYNC state, do in xlog_state_iodone_process_iclog()
2465 * one more check here to see if we have chased our tail around. in xlog_state_iodone_process_iclog()
2466 * If this is not the lowest lsn iclog, then we will leave it in xlog_state_iodone_process_iclog()
2474 * If there are no callbacks on this iclog, we can mark it clean in xlog_state_iodone_process_iclog()
2475 * immediately and return. Otherwise we need to run the in xlog_state_iodone_process_iclog()
2488 * in the DONE_SYNC state, we skip the rest and just try to in xlog_state_iodone_process_iclog()
2497 * we ran any callbacks, indicating that we dropped the icloglock. We don't need
2587 * If we got an error, either on the first buffer, or in the case of in xlog_state_done_syncing()
2588 * split log writes, on the second, we shut down the file system and in xlog_state_done_syncing()
2598 * iclog buffer, we wake them all, one will get to do the in xlog_state_done_syncing()
2607 * If the head of the in-core log ring is not (ACTIVE or DIRTY), then we must
2608 * sleep. We wait on the flush queue on the head iclog as that should be
2610 * we will wait here and all new writes will sleep until a sync completes.
2687 * If we are the only one writing to this iclog, sync it to in xlog_state_get_iclog_space()
2688 * disk. We need to do an atomic compare and decrement here to in xlog_state_get_iclog_space()
2701 /* Do we have enough room to write the full amount in the remainder in xlog_state_get_iclog_space()
2702 * of this iclog? Or must we continue a write on the next iclog and in xlog_state_get_iclog_space()
2703 * mark this iclog as completely taken? In the case where we switch in xlog_state_get_iclog_space()
2721 * The first cnt-1 times a ticket goes through here we don't need to move the
2743 /* just return if we still have some of the pre-reserved space */ in xfs_log_ticket_regrant()
2757 * All the information we need to make a correct determination of space left
2759 * count should have been decremented to zero. We only need to deal with the
2763 * reservation can be done before we need to ask for more space. The first
2764 * one goes to fill up the first current reservation. Once we run out of
2783 * If this is a permanent reservation ticket, we may be able to free in xfs_log_ticket_ungrant()
2853 * pmem) or fast async storage because we drop the icloglock to issue the IO.
2883 * we don't guarantee this data will be written out. A change from past
2886 * Basically, we try and perform an intelligent scan of the in-core logs.
2887 * If we determine there is no flushable data, we just return. There is no
2895 * We may sleep if:
2903 * b) when we return from flushing out this iclog, it is still
2930 * If the head is dirty or (active and empty), then we need to in xfs_log_force()
2933 * If the previous iclog is active or dirty we are done. There in xfs_log_force()
2934 * is nothing to sync out. Otherwise, we attach ourselves to the in xfs_log_force()
2940 /* We have exclusive access to this iclog. */ in xfs_log_force()
2950 * Someone else is still writing to this iclog, so we in xfs_log_force()
2952 * gets synced immediately as we may be waiting on it. in xfs_log_force()
2959 * The iclog we are about to wait on may contain the checkpoint pushed in xfs_log_force()
2961 * to disk yet. Like the ACTIVE case above, we need to make sure caches in xfs_log_force()
3017 * We sleep here if we haven't already slept (e.g. this is the in xlog_force_lsn()
3018 * first time we've looked at the correct iclog buf) and the in xlog_force_lsn()
3020 * is that if we are doing sync transactions here, by waiting in xlog_force_lsn()
3021 * for the previous I/O to complete, we can allow a few more in xlog_force_lsn()
3022 * transactions into this iclog before we close it down. in xlog_force_lsn()
3024 * Otherwise, we mark the buffer WANT_SYNC, and bump up the in xlog_force_lsn()
3025 * refcnt so we can release the log (which drops the ref count). in xlog_force_lsn()
3050 * ACTIVE case above, we need to make sure caches are flushed in xlog_force_lsn()
3059 * completes, so we don't need to manipulate caches here at all. in xlog_force_lsn()
3060 * We just need to wait for completion if necessary. in xlog_force_lsn()
3081 * a synchronous log force, we will wait on the iclog with the LSN returned by
3158 * We need to account for all the leadup data and trailer data in xlog_calc_unit_res()
3160 * And then we need to account for the worst case in terms of using in xlog_calc_unit_res()
3185 * the space used for the headers. If we use the iclog size, then we in xlog_calc_unit_res()
3197 * Fundamentally, this means we must pass the entire log vector to in xlog_calc_unit_res()
3206 /* add extra header reservations if we overrun */ in xlog_calc_unit_res()
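
The worst-case accounting sketched by these fragments amounts to: pay for the payload, pay for an opheader per region, and assume every record's worth of payload may also need its own record header. The arithmetic below is a rough, hedged sketch with assumed sizes; the real calculation accounts for more than this (for example transaction and commit record overhead and roundoff padding).

    #include <stdio.h>

    #define OPHDR_SIZE 12     /* assumed per-region opheader size */
    #define REC_HDR_SIZE 512  /* assumed log record header size */

    /*
     * Toy worst-case reservation: payload, one opheader per region, and one
     * record header for every record's worth of bytes we might emit.
     */
    static unsigned int toy_unit_res(unsigned int payload, unsigned int nregions,
                                     unsigned int bytes_per_record)
    {
            unsigned int bytes = payload + nregions * OPHDR_SIZE;
            unsigned int nrecords =
                    (bytes + bytes_per_record - 1) / bytes_per_record;

            return bytes + nrecords * REC_HDR_SIZE;
    }

    int main(void)
    {
            /* e.g. 24k of payload in 10 regions, 32k usable per record */
            printf("%u bytes reserved\n", toy_unit_res(24576, 10, 32768));
            return 0;
    }
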
3329 * 2. Make sure we have a good magic number
3330 * 3. Make sure we don't have magic numbers in the data
3445 * Return true if the shutdown cause was a log IO error and we actually shut the
3460 * being shut down. We need to do this first as shutting down the log in xlog_force_shutdown()
3464 * When we are in recovery, there are no transactions to flush, and in xlog_force_shutdown()
3465 * we don't want to touch the log because we don't want to perturb the in xlog_force_shutdown()
3466 * current head/tail for future recovery attempts. Hence we need to in xlog_force_shutdown()
3469 * If we are shutting down due to a log IO error, then we must avoid in xlog_force_shutdown()
3478 * set, then someone else is performing the shutdown and so we are done in xlog_force_shutdown()
3479 * here. This should never happen because we should only ever get called in xlog_force_shutdown()
3483 * cannot change once they hold the log->l_icloglock. Hence we need to in xlog_force_shutdown()
3484 * hold that lock here, even though we use the atomic test_and_set_bit() in xlog_force_shutdown()
3509 * We don't want anybody waiting for log reservations after this. That in xlog_force_shutdown()
3510 * means we have to wake up everybody queued up on reserveq as well as in xlog_force_shutdown()
3511 * writeq. In addition, we make sure in xlog_{re}grant_log_space that in xlog_force_shutdown()
3512 * we don't enqueue anything once the SHUTDOWN flag is set, and this in xlog_force_shutdown()
3569 * resets the in-core LSN. We can't validate in this mode, but in xfs_log_check_lsn()