Lines Matching full:we

23 * recover, so we don't allow failure here. Also, we allocate in a context that
24 * we don't want to be issuing transactions from, so we need to tell the
27 * We don't reserve any space for the ticket - we are going to steal whatever
28 * space we require from transactions as they commit. To ensure we reserve all
29 * the space required, we need to set the current reservation of the ticket to
30 * zero so that we know to steal the initial transaction overhead from the
42 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
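The fragments above describe how the CIL push ticket is set up: allocation must not fail, and the ticket starts with a zero current reservation so all required space is stolen from transactions as they commit. A minimal userspace sketch of that idea (the struct and names are illustrative, not the kernel's):

    #include <stdlib.h>

    /* Illustrative stand-in for the kernel's log ticket; not its layout. */
    struct cil_ticket {
            int unit_res;   /* size of one checkpoint unit reservation */
            int curr_res;   /* space currently held by this ticket */
    };

    /*
     * Allocate the CIL push ticket. Allocation may not fail (there is no
     * way to recover), and no space is reserved up front: curr_res starts
     * at zero so the first transaction commit knows it must also donate
     * the basic transaction overhead.
     */
    static struct cil_ticket *cil_ticket_alloc(int unit_res)
    {
            struct cil_ticket *tic = calloc(1, sizeof(*tic));

            if (!tic)
                    abort();                /* no recovery path */
            tic->unit_res = unit_res;
            tic->curr_res = 0;              /* steal from committers */
            return tic;
    }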
62 * We can't rely on just the log item being in the CIL, we have to check
80 * current sequence, we're in a new checkpoint. in xlog_item_in_current_chkpt()
140 * We're in the middle of switching cil contexts. Reset the in xlog_cil_push_pcp_aggregate()
141 * counter we use to detect when the current context is nearing in xlog_cil_push_pcp_aggregate()
151 * limit threshold so we can switch to atomic counter aggregation for accurate
167 * We can race with other cpus setting cil_pcpmask. However, we've in xlog_cil_insert_pcp_aggregate()
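These hits describe the per-cpu space accounting: cheap per-cpu counters most of the time, reset when the CIL context switches, with a fall-back to atomic counter aggregation near the space limit so the threshold detection stays accurate. A hedged C11 sketch of the batching pattern (PCP_BATCH and all names here are hypothetical):

    #include <stdatomic.h>

    enum { PCP_BATCH = 4096 };              /* hypothetical fold threshold */

    static atomic_long cil_used;            /* shared space counter */
    static _Thread_local long pcp_used;     /* cheap per-CPU delta */

    /*
     * Accumulate space usage locally and fold it into the shared counter
     * only when the local delta grows large. Near the CIL space limit
     * the caller would update cil_used directly (atomic aggregation)
     * instead, so the limit check sees an accurate total.
     */
    static void cil_account(long bytes)
    {
            pcp_used += bytes;
            if (pcp_used >= PCP_BATCH) {
                    atomic_fetch_add(&cil_used, pcp_used);
                    pcp_used = 0;
            }
    }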
197 * After the first stage of log recovery is done, we know where the head and
198 * tail of the log are. We need this log initialisation done before we can
201 * Here we allocate a log ticket to track space usage during a CIL push. This
202 * ticket is passed to xlog_write() directly so that we don't slowly leak log
233 * If we do this allocation within xlog_cil_insert_format_items(), it is done
235 * the memory allocation. This means that we have a potential deadlock situation
236 * under low memory conditions when we have lots of dirty metadata pinned in
237 * the CIL and we need a CIL commit to occur to free memory.
239 * To avoid this, we need to move the memory allocation outside the
246 * process, we cannot share the buffer between the transaction commit (which
249 * unreliable, but we most definitely do not want to be allocating and freeing
256 * the incoming modification. Then during the formatting of the item we can swap
257 * the active buffer with the new one if we can't reuse the existing buffer. We
259 * its size is right, otherwise we'll free and reallocate it at that point.
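The fragments above explain why the formatted-buffer allocation is hoisted out of xlog_cil_insert_format_items(): a shadow buffer big enough for the incoming modification is allocated up front, then swapped in during formatting only if the existing buffer can't be reused. A simplified sketch of that swap, with illustrative structs rather than the kernel's:

    #include <stddef.h>

    /* Illustrative buffers; field names are not the kernel's. */
    struct lv_buf {
            size_t size;
            char  *data;
    };

    struct log_item {
            struct lv_buf *active;  /* buffer currently owned by the CIL */
            struct lv_buf *shadow;  /* preallocated before any locks */
            struct lv_buf *stale;   /* old buffer stashed for later freeing */
    };

    /*
     * During formatting, reuse the active buffer if it is big enough;
     * otherwise swap in the shadow buffer that was sized for this
     * modification up front. The old buffer is not freed here - it is
     * stashed and freed once it is safe to do so.
     */
    static struct lv_buf *lv_pick_buf(struct log_item *lip, size_t need)
    {
            if (lip->active && lip->active->size >= need)
                    return lip->active;

            lip->stale  = lip->active;      /* free later, outside locks */
            lip->active = lip->shadow;
            lip->shadow = NULL;
            return lip->active;
    }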
292 * Ordered items need to be tracked but we do not wish to write in xlog_cil_alloc_shadow_bufs()
293 * them. We need a logvec to track the object, but we do not in xlog_cil_alloc_shadow_bufs()
303 * We 64-bit align the length of each iovec so that the start of in xlog_cil_alloc_shadow_bufs()
304 * the next one is naturally aligned. We'll need to account for in xlog_cil_alloc_shadow_bufs()
307 * We also add the xlog_op_header to each region when in xlog_cil_alloc_shadow_bufs()
309 * at this point. Hence we'll need an additional number of bytes in xlog_cil_alloc_shadow_bufs()
321 * that space to ensure we can align it appropriately and not in xlog_cil_alloc_shadow_bufs()
327 * if we have no shadow buffer, or it is too small, we need to in xlog_cil_alloc_shadow_bufs()
333 * We free and allocate here as a realloc would copy in xlog_cil_alloc_shadow_bufs()
334 * unnecessary data. We don't use kvzalloc() for the in xlog_cil_alloc_shadow_bufs()
335 * same reason - we don't need to zero the data area in in xlog_cil_alloc_shadow_bufs()
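The sizing rules quoted above are concrete: each iovec's length is padded so the next one starts 64-bit aligned, and each region carries an xlog_op_header. A small sketch of that accounting (OP_HDR_SIZE is an assumed value, not taken from the headers):

    #include <stddef.h>

    enum { OP_HDR_SIZE = 12 };      /* assumed per-region header size */

    /* Round a region up so the next iovec starts 64-bit aligned. */
    static size_t align64(size_t len)
    {
            return (len + 7) & ~(size_t)7;
    }

    /*
     * Size a shadow buffer for nvecs regions: pad every region to an
     * 8 byte boundary and add one op header per region, matching the
     * accounting the fragments above describe.
     */
    static size_t shadow_buf_size(const size_t *lens, int nvecs)
    {
            size_t bytes = 0;

            for (int i = 0; i < nvecs; i++)
                    bytes += align64(lens[i]) + OP_HDR_SIZE;
            return bytes;
    }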
387 * If there is no old LV, this is the first time we've seen the item in in xfs_cil_prepare_item()
388 * this CIL context and so we need to pin it. If we are replacing the in xfs_cil_prepare_item()
390 * buffer for later freeing. In both cases we are now switching to the in xfs_cil_prepare_item()
409 * CIL, store the sequence number on the log item so we can in xfs_cil_prepare_item()
420 * For delayed logging, we need to hold a formatted buffer containing all the
428 * guaranteed to be large enough for the current modification, but we will only
429 * use that if we can't reuse the existing lv. If we can't reuse the existing
430 * lv, then simply swap it out for the shadow lv. We don't free it - that is
433 * We don't set up region headers during this process; we simply copy the
434 * regions into the flat buffer. We can do this because we still have to do a
436 * ophdrs during the iclog write means that we can support splitting large
440 * Hence what we need to do now is rewrite the vector array to point
441 * to the copied region inside the buffer we just allocated. This allows us to
453 /* Bail out if we didn't find a log item. */ in xlog_cil_insert_format_items()
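Per the comments above, formatting copies each region into the flat buffer and rewrites the vector array to point at the copies, with no region headers written at this stage (they are added during the iclog write, which is what allows large regions to be split). A userspace sketch of that rewrite using plain struct iovec:

    #include <string.h>
    #include <sys/uio.h>

    /*
     * Copy each region into the flat buffer and rewrite the iovec array
     * to point at the copies. No region (ophdr) headers are written
     * here; they are added later when the chain is written into the
     * iclogs.
     */
    static void format_into_buf(struct iovec *vecs, int nvecs, char *buf)
    {
            char *p = buf;

            for (int i = 0; i < nvecs; i++) {
                    memcpy(p, vecs[i].iov_base, vecs[i].iov_len);
                    vecs[i].iov_base = p;   /* now points into the buffer */
                    p += (vecs[i].iov_len + 7) & ~(size_t)7;
            }
    }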
543 * as well. Remove the amount of space we added to the checkpoint ticket from
565 * We can do this safely because the context can't checkpoint until we in xlog_cil_insert_items()
566 * are done so it doesn't matter exactly how we update the CIL. in xlog_cil_insert_items()
571 * Subtract the space released by intent cancelation from the space we in xlog_cil_insert_items()
572 * consumed so that we remove it from the CIL space and add it back to in xlog_cil_insert_items()
578 * Grab the per-cpu pointer for the CIL before we start any accounting. in xlog_cil_insert_items()
579 * That ensures that we are running with pre-emption disabled and so we in xlog_cil_insert_items()
591 * We need to take the CIL checkpoint unit reservation on the first in xlog_cil_insert_items()
592 * commit into the CIL. Test the XLOG_CIL_EMPTY bit first so we don't in xlog_cil_insert_items()
593 * unnecessarily do an atomic op in the fast path here. We can clear the in xlog_cil_insert_items()
594 * XLOG_CIL_EMPTY bit as we are under the xc_ctx_lock here and that in xlog_cil_insert_items()
602 * Check if we need to steal iclog headers. atomic_read() is not a in xlog_cil_insert_items()
603 * locked atomic operation, so we can check the value before we do any in xlog_cil_insert_items()
604 * real atomic ops in the fast path. If we've already taken the CIL unit in xlog_cil_insert_items()
605 * reservation from this commit, we've already got one iclog header in xlog_cil_insert_items()
606 * space reserved so we have to account for that otherwise we risk in xlog_cil_insert_items()
609 * If the CIL is already at the hard limit, we might need more header in xlog_cil_insert_items()
611 * commit that occurs once we are over the hard limit to ensure the CIL in xlog_cil_insert_items()
614 * This can steal more than we need, but that's OK. in xlog_cil_insert_items()
645 * If we just transitioned over the soft limit, we need to in xlog_cil_insert_items()
660 * We do this here so we only need to take the CIL lock once during in xlog_cil_insert_items()
677 * If we've overrun the reservation, dump the tx details before we move in xlog_cil_insert_items()
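The XLOG_CIL_EMPTY handling quoted above is a classic check-before-atomic fast path: read the flag cheaply first, and only do the atomic clear when the bit might actually be set. A C11 sketch of the pattern (the flag name and layout are hypothetical):

    #include <stdatomic.h>
    #include <stdbool.h>

    enum { CIL_EMPTY = 1 };                 /* hypothetical flag bit */

    static atomic_uint cil_flags = CIL_EMPTY;

    /*
     * Only the first commit into an empty CIL pays for an atomic RMW
     * (and takes the checkpoint unit reservation); later commits see
     * the bit clear with a plain load and skip the atomic entirely.
     */
    static bool take_unit_reservation(void)
    {
            if (!(atomic_load(&cil_flags) & CIL_EMPTY))
                    return false;           /* common case, no atomic op */
            return atomic_fetch_and(&cil_flags, ~CIL_EMPTY) & CIL_EMPTY;
    }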
722 * not the commit record LSN. This is because we can pipeline multiple
739 * If we are called with the aborted flag set, it is because a log write during
743 * iclog write error even though we haven't started any IO yet. Hence in this
744 * case all we need to do is iop_committed processing, followed by an
771 * higher LSN than the current head. We do this before insertion of the in xlog_cil_ail_insert()
773 * space that this checkpoint has already consumed. We call in xlog_cil_ail_insert()
787 * We move the AIL head forwards to account for the space used in the in xlog_cil_ail_insert()
788 * log before we remove that space from the grant heads. This prevents a in xlog_cil_ail_insert()
820 * if we are aborting the operation, no point in inserting the in xlog_cil_ail_insert()
821 * object into the AIL as we are in a shutdown situation. in xlog_cil_ail_insert()
835 * we have the ail lock. Then unpin the item. This does in xlog_cil_ail_insert()
858 /* make sure we insert the remainder! */ in xlog_cil_ail_insert()
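The AIL insertion hits describe bulk insertion with an explicit remainder flush at the end. A sketch of that batching shape, where ail_bulk_insert() is a hypothetical helper standing in for the real bulk-insert primitive:

    enum { AIL_BULK = 32 };                 /* illustrative batch size */

    struct ail_item;                        /* opaque for this sketch */
    extern void ail_bulk_insert(struct ail_item **batch, int n);

    /*
     * Batched AIL insertion: fill a fixed-size batch, flush it when
     * full, and flush whatever is left over at the end - "make sure we
     * insert the remainder!".
     */
    static void ail_insert_all(struct ail_item **items, int n)
    {
            struct ail_item *batch[AIL_BULK];
            int b = 0;

            for (int i = 0; i < n; i++) {
                    batch[b++] = items[i];
                    if (b == AIL_BULK) {
                            ail_bulk_insert(batch, b);
                            b = 0;
                    }
            }
            if (b)
                    ail_bulk_insert(batch, b);
    }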
882 * Mark all items committed and clear busy extents. We free the log vector
883 * chains in a separate pass so that we unpin the log items as quickly as
894 * If the I/O failed, we're aborting the commit and already shutdown. in xlog_cil_committed()
895 * Wake any commit waiters before aborting the log items so we don't in xlog_cil_committed()
943 * Record the LSN of the iclog we were just granted space to start writing into.
960 * The LSN we need to pass to the log items on transaction in xlog_cil_set_ctx_write_state()
962 * the commit lsn. If we use the commit record lsn then we can in xlog_cil_set_ctx_write_state()
971 * Make sure the metadata we are about to overwrite in the log in xlog_cil_set_ctx_write_state()
982 * Take a reference to the iclog for the context so that we still hold in xlog_cil_set_ctx_write_state()
990 * iclog for an entire commit record, so we can attach the context in xlog_cil_set_ctx_write_state()
991 * callbacks now. This needs to be done before we make the commit_lsn in xlog_cil_set_ctx_write_state()
1001 * Now we can record the commit LSN and wake anyone waiting for this in xlog_cil_set_ctx_write_state()
1035 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_order_write()
1142 * Build a checkpoint transaction header to begin the journal transaction. We
1146 * This is the only place we write a transaction header, so we also build the
1148 * transaction header. We keep the start record in its own log vector rather
1200 * CIL item reordering compare function. We want to sort in ascending ID order,
1201 * but we want to leave items with the same ID in the order they were added to
1202 * the list. This is important for operations like reflink where we log 4 order
1203 * dependent intents in a single transaction when we overwrite an existing
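The reordering rule above only works if equal-ID items keep their insertion order, i.e. the sort must be stable (the kernel's list_sort() is). A sketch of a compatible comparison callback, with an illustrative item struct:

    struct cil_item {
            unsigned long id;       /* ordering key */
    };

    /*
     * Comparison callback for a stable sort: ascending by ID, returning
     * 0 for equal IDs so same-ID items keep the order they were added
     * to the CIL - which is what keeps order-dependent intents correct.
     */
    static int cil_item_cmp(const struct cil_item *a,
                            const struct cil_item *b)
    {
            if (a->id < b->id)
                    return -1;
            if (a->id > b->id)
                    return 1;
            return 0;       /* stable sort preserves insertion order */
    }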
1221 * the CIL. We don't need the CIL lock here because it's only needed on the
1224 * If a log item is marked with a whiteout, we do not need to write it to the
1225 * journal and so we just move it to the whiteout list for the caller to
1251 /* we don't write ordered log vectors */ in xlog_cil_build_lv_chain()
1279 * If the current sequence is the same as xc_push_seq we need to do a flush. If
1281 * flushed and we don't need to do anything - the caller will wait for it to
1285 * Hence we can allow log forces to run racily and not issue pushes for the
1286 * same sequence twice. If we get a race between multiple pushes for the same
1291 * allocation context. However, we do not want to block on memory reclaim
1293 * by memory reclaim itself. Hence we really need to run under full GFP_NOFS
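One way to get the "full GFP_NOFS" behaviour described above is the kernel's task-flag API, memalloc_nofs_save()/memalloc_nofs_restore(); whether this exact call site uses it is an assumption. A kernel-context sketch (not compilable in userspace):

    #include <linux/sched/mm.h>

    static void cil_push_body(void)         /* hypothetical wrapper */
    {
            unsigned int nofs_flags = memalloc_nofs_save();

            /* ... allocations here cannot recurse into the fs ... */

            memalloc_nofs_restore(nofs_flags);
    }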
1328 * As we are about to switch to a new, empty CIL context, we no longer in xlog_cil_push_work()
1341 * Check if we've anything to push. If there is nothing, then we don't in xlog_cil_push_work()
1342 * move on to a new sequence number and so we have to be able to push in xlog_cil_push_work()
1359 * We are now going to push this context, so add it to the committing in xlog_cil_push_work()
1360 * list before we do anything else. This ensures that anyone waiting on in xlog_cil_push_work()
1369 * waiting on. If the CIL is not empty, we get put on the committing in xlog_cil_push_work()
1371 * an empty CIL and an unchanged sequence number means we jumped out in xlog_cil_push_work()
1388 * Switch the contexts so we can drop the context lock and move out in xlog_cil_push_work()
1389 * of a shared context. We can't just go straight to the commit record, in xlog_cil_push_work()
1390 * though - we need to synchronise with previous and future commits so in xlog_cil_push_work()
1392 * that we process items during log IO completion in the correct order. in xlog_cil_push_work()
1394 * For example, if we get an EFI in one checkpoint and the EFD in the in xlog_cil_push_work()
1395 * next (e.g. due to log forces), we do not want the checkpoint with in xlog_cil_push_work()
1397 * we must strictly order the commit records of the checkpoints so in xlog_cil_push_work()
1402 * Hence we need to add this context to the committing context list so in xlog_cil_push_work()
1408 * committing list. This also ensures that we can do unlocked checks in xlog_cil_push_work()
1418 * Sort the log vector chain before we add the transaction headers. in xlog_cil_push_work()
1419 * This ensures we always have the transaction headers at the start in xlog_cil_push_work()
1426 * begin the transaction. We need to account for the space used by the in xlog_cil_push_work()
1428 * Add the lvhdr to the head of the lv chain we pass to xlog_write() so in xlog_cil_push_work()
1450 * Grab the ticket from the ctx so we can ungrant it after releasing the in xlog_cil_push_work()
1451 * commit_iclog. The ctx may be freed by the time we return from in xlog_cil_push_work()
1453 * callback run) so we can't reference the ctx after the call to in xlog_cil_push_work()
1460 * to complete before we submit the commit_iclog. We can't use state in xlog_cil_push_work()
1464 * In the latter case, if it's a future iclog and we wait on it, then we in xlog_cil_push_work()
1466 * wakeup until this commit_iclog is written to disk. Hence we use the in xlog_cil_push_work()
1467 * iclog header lsn and compare it to the commit lsn to determine if we in xlog_cil_push_work()
1478 * iclogs older than ic_prev. Hence we only need to wait in xlog_cil_push_work()
1486 * We need to issue a pre-flush so that the ordering for this in xlog_cil_push_work()
1543 * We need to push CIL every so often so we don't cache more than we can fit in
1557 * The CIL won't be empty because we are called while holding the in xlog_cil_push_background()
1558 * context lock so whatever we added to the CIL will still be there. in xlog_cil_push_background()
1563 * We are done if: in xlog_cil_push_background()
1564 * - we haven't used up all the space available yet; or in xlog_cil_push_background()
1565 * - we've already queued up a push; and in xlog_cil_push_background()
1566 * - we're not over the hard limit; and in xlog_cil_push_background()
1569 * If so, we don't need to take the push lock as there's nothing to do. in xlog_cil_push_background()
1586 * Drop the context lock now, we can't hold that if we need to sleep in xlog_cil_push_background()
1587 * because we are over the blocking threshold. The push_lock is still in xlog_cil_push_background()
1594 * If we are well over the space limit, throttle the work that is being in xlog_cil_push_background()
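The background-push conditions listed above reduce to a small decision helper. A hedged sketch with illustrative threshold names (none of these are the kernel's identifiers):

    #include <stdbool.h>

    /*
     * Background push is skipped while space remains, or while a push
     * is already queued and we are still under the hard limit. Callers
     * over a separate blocking threshold would then drop their locks
     * and throttle, as the fragments above describe.
     */
    static bool need_bg_push(long used, long space_limit, long hard_limit,
                             bool push_queued)
    {
            if (used < space_limit)
                    return false;   /* space still available */
            if (push_queued && used < hard_limit)
                    return false;   /* push already queued, not critical */
            return true;
    }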
1619 * If the caller is performing a synchronous force, we will flush the workqueue
1624 * If the caller is performing an async push, we need to ensure that the
1625 * checkpoint is fully flushed out of the iclogs when we finish the push. If we
1629 * mechanism. Hence in this case we need to pass a flag to the push work to
1652 * If this is an async flush request, we always need to set the in xlog_cil_push_now()
1661 * If the CIL is empty or we've already pushed the sequence then in xlog_cil_push_now()
1662 * there's no more work that we need to do. in xlog_cil_push_now()
1691 * committed in the current (same) CIL checkpoint, we don't need to write either
1693 * journalled atomically within this checkpoint. As we cannot remove items from
1729 * To do this, we need to format the item, pin it in memory if required and
1730 * account for the space used by the transaction. Once we have done that we
1732 * transaction to the checkpoint context so we carry the busy extents through
1751 * Do all necessary memory allocation before we lock the CIL. in xlog_cil_commit()
1776 * This needs to be done before we drop the CIL context lock because we in xlog_cil_commit()
1778 * to disk. If we don't, then the CIL checkpoint can race with us and in xlog_cil_commit()
1779 * we can run checkpoint completion before we've updated and unlocked in xlog_cil_commit()
1821 * We only need to push if we haven't already pushed the sequence number given.
1822 * Hence the only time we will trigger a push here is if the push sequence is
1825 * We return the current commit lsn to allow the callers to determine if a
1844 * check to see if we need to force out the current context. in xlog_cil_force_seq()
1852 * See if we can find a previous sequence still committing. in xlog_cil_force_seq()
1853 * We need to wait for all previous sequence commits to complete in xlog_cil_force_seq()
1860 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_force_seq()
1885 * Hence by the time we have got here, our sequence may not have been in xlog_cil_force_seq()
1891 * Hence if we don't find the context in the committing list and the in xlog_cil_force_seq()
1895 * it means we haven't yet started the push, because if it had started in xlog_cil_force_seq()
1896 * we would have found the context on the committing list. in xlog_cil_force_seq()
1908 * We detected a shutdown in progress. We need to trigger the log force in xlog_cil_force_seq()
1910 * we are already in a shutdown state. Hence we can't return in xlog_cil_force_seq()
1912 * LSN is already stable), so we return a zero LSN instead. in xlog_cil_force_seq()