Lines Matching full:buffers

84 * Returns whether the folio has dirty or writeback buffers. If all the buffers
86 * any of the buffers are locked, it is assumed they are locked for IO.
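
The hits at lines 84-86 quote the comment above the helper that reports a folio's dirty/writeback state from its attached buffers. A minimal sketch of that kind of check, walking the circular ->b_this_page ring; myfs_check_dirty_writeback() is a hypothetical name, not the kernel helper itself:

#include <linux/buffer_head.h>

/* Report whether any buffer attached to @folio is dirty or locked
 * (a locked buffer is assumed to be under I/O). */
static void myfs_check_dirty_writeback(struct folio *folio,
                                       bool *dirty, bool *writeback)
{
        struct buffer_head *head, *bh;

        *dirty = false;
        *writeback = false;

        head = folio_buffers(folio);
        if (!head)
                return;

        bh = head;
        do {
                if (buffer_locked(bh))
                        *writeback = true;
                if (buffer_dirty(bh))
                        *dirty = true;
                bh = bh->b_this_page;
        } while (bh != head);
}
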
181 * But it's the page lock which protects the buffers. To get around this,
223 /* we might be here because some of the buffers on this page are in __find_get_block_slow()
226 * elsewhere, don't buffer_error if we had some unmapped buffers in __find_get_block_slow()
418 * If a page's buffers are under async read-in (end_buffer_async_read
420 * control could lock one of the buffers after it has completed
421 * but while some of the other buffers have not completed. This
426 * The page comes unlocked when it has no locked buffer_async buffers
430 * the buffers.
467 * management of a list of dependent buffers at ->i_mapping->i_private_list.
469 * Locking is a little subtle: try_to_free_buffers() will remove buffers
472 * at the time, not against the S_ISREG file which depends on those buffers.
474 * which backs the buffers. Which is different from the address_space
475 * against which the buffers are listed. So for a particular address_space,
480 * Which introduces a requirement: all buffers on an address_space's
483 * address_spaces which do not place buffers at ->i_private_list via these
494 * mark_buffer_dirty_fsync() to clearly define why those buffers are being
501 * that buffers are taken *off* the old inode's list when they are freed
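
Lines 467-501 describe the ->i_private_list mechanism: metadata buffers that a regular file's fsync() must write, even though the buffers themselves belong to the blockdev mapping. A hedged sketch of the producer side, queueing a buffer on the file's associated list with mark_buffer_dirty_inode(); the myfs_* name and block layout are hypothetical:

#include <linux/buffer_head.h>
#include <linux/fs.h>

static int myfs_update_meta(struct inode *inode, sector_t blocknr,
                            unsigned int offset, u32 value)
{
        struct buffer_head *bh = sb_bread(inode->i_sb, blocknr);

        if (!bh)
                return -EIO;
        *(__le32 *)(bh->b_data + offset) = cpu_to_le32(value);
        /* Dirty the buffer and queue it on inode->i_mapping->i_private_list
         * so a later fsync()/sync_mapping_buffers() writes and waits on it. */
        mark_buffer_dirty_inode(bh, inode);
        brelse(bh);
        return 0;
}
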
528 * as you dirty the buffers, and then use osync_inode_buffers to wait for
529 * completion. Any other dirty buffers which are not yet queued for
558 * sync_mapping_buffers - write out & wait upon a mapping's "associated" buffers
559 * @mapping: the mapping which wants those buffers written
561 * Starts I/O against the buffers at mapping->i_private_list, and waits upon
565 * @mapping is a file or directory which needs those buffers to be written for
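
Lines 558-565 are the sync_mapping_buffers() kerneldoc, the consumer side of the associated-buffers list. A sketch of an fsync built on it, assuming the usual data-then-metadata ordering; myfs_fsync() is hypothetical:

#include <linux/buffer_head.h>
#include <linux/fs.h>

static int myfs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
{
        struct inode *inode = file->f_mapping->host;
        int err, err2;

        /* Flush and wait on the file's data pages. */
        err = file_write_and_wait_range(file, start, end);

        /* Write out and wait on buffers previously queued against this
         * mapping with mark_buffer_dirty_inode(). */
        err2 = sync_mapping_buffers(inode->i_mapping);
        if (!err)
                err = err2;
        return err;
}
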
590 * filesystems which track all non-inode metadata in the buffers list
615 /* check and advance again to catch errors after syncing out buffers */ in generic_buffers_fsync_noflush()
633 * filesystems which track all non-inode metadata in the buffers list
698 * If the folio has buffers, the uptodate buffers are set dirty, to
699 * preserve dirty-state coherency between the folio and the buffers.
700 * Buffers added to a dirty folio are created dirty.
702 * The buffers are dirtied before the folio is dirtied. There's a small
705 * dirty before the buffers, writeback could clear the folio dirty flag,
706 * see a bunch of clean buffers and we'd end up with dirty buffers/clean
710 * using the folio's buffer list. This also prevents clean buffers
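
Lines 698-710 quote the block_dirty_folio() kerneldoc. Buffer_head-based filesystems typically plug this helper (together with the related ones matched at lines 1601 and 2316-2319) straight into their address_space_operations; a minimal, hypothetical example:

#include <linux/buffer_head.h>
#include <linux/fs.h>

static const struct address_space_operations myfs_aops = {
        .dirty_folio            = block_dirty_folio,
        .invalidate_folio       = block_invalidate_folio,
        .is_partially_uptodate  = block_is_partially_uptodate,
};
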
756 * Write out and wait upon a list of buffers.
759 * initially dirty buffers get waited on, but that any subsequently
760 * dirtied buffers don't. After all, we don't want fsync to last
763 * Do this in two main stages: first we copy dirty buffers to a
771 * the osync code to catch these locked, dirty buffers without requeuing
772 * any newly dirty buffers for write.
853 * Invalidate any and all dirty buffers on a given inode. We are
855 * done a sync(). Just drop the buffers from the inode list.
858 * assumes that all the buffers are against the blockdev. Not true
877 * Remove any clean buffers from the inode's buffer list. This is called
878 * when we're trying to free the inode itself. Those buffers can pin it.
880 * Returns true if all buffers were removed.
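
Lines 853-880 cover dropping an inode's associated buffers when the inode itself goes away. A sketch of an ->evict_inode that calls invalidate_inode_buffers() in the usual order, loosely following ext2-style code; myfs_evict_inode() is hypothetical:

#include <linux/buffer_head.h>
#include <linux/fs.h>
#include <linux/mm.h>

static void myfs_evict_inode(struct inode *inode)
{
        truncate_inode_pages_final(&inode->i_data);
        /* ... write back or delete on-disk inode state as needed ... */
        invalidate_inode_buffers(inode);  /* drop the associated buffers */
        clear_inode(inode);
}
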
906 * Create the appropriate buffers when given a folio for data area and
908 * follow the buffers created. Return NULL if unable to create more
909 * buffers.
995 * Initialise the state of a blockdev folio's buffers.
1057 * writeback, or buffers may be cleaned. This should not in grow_dev_folio()
1058 * happen very often; maybe we have old buffers attached to in grow_dev_folio()
1073 * Link the folio to the buffers and initialise them. Take the in grow_dev_folio()
1088 * Create buffers for the specified block device block's folio. If
1089 * that folio was dirty, the buffers are set dirty also. Returns false
1108 /* Create a folio with the proper size buffers */ in grow_buffers()
1141 * The relationship between dirty buffers and dirty pages:
1143 * Whenever a page has any dirty buffers, the page's dirty bit is set, and
1146 * At all times, the dirtiness of the buffers represents the dirtiness of
1147 * subsections of the page. If the page has buffers, the page dirty bit is
1150 * When a page is set dirty in its entirety, all its buffers are marked dirty
1151 * (if the page has buffers).
1154 * buffers are not.
1156 * Also. When blockdev buffers are explicitly read with bread(), they
1158 * uptodate - even if all of its buffers are uptodate. A subsequent
1160 * buffers, will set the folio uptodate and will perform no I/O.
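
Lines 1141-1160 explain how buffer dirtiness and uptodateness relate to the folio's own flags, including the bread() case. A small sketch of that classic pattern: read a block with sb_bread(), modify it, and mark it dirty; the block/offset layout is hypothetical:

#include <linux/buffer_head.h>

static int myfs_bump_counter(struct super_block *sb, sector_t block,
                             unsigned int offset)
{
        struct buffer_head *bh = sb_bread(sb, block);
        int err;

        if (!bh)
                return -EIO;
        le32_add_cpu((__le32 *)(bh->b_data + offset), 1);
        mark_buffer_dirty(bh);          /* dirties the buffer and its folio */
        err = sync_dirty_buffer(bh);    /* optional: write it out synchronously */
        brelse(bh);
        return err;
}
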
1281 * The bhs[] array is sorted - newest buffer is at bhs[0]. Buffers have their
1601 * block_invalidate_folio() does not have to release all buffers, but it must
1645 * We release buffers only if the entire folio is being invalidated. in block_invalidate_folio()
1657 * We attach and possibly dirty the buffers atomically wrt
1695 * clean_bdev_aliases: clean a range of buffers in block device
1696 * @bdev: Block device to clean buffers in
1710 * writeout I/O going on against recently-freed buffers. We don't wait on that
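
Lines 1695-1710 are the clean_bdev_aliases() kerneldoc. A hedged sketch of the typical call site: after allocating on-disk blocks for file data, drop any stale buffer_heads the block device's own mapping still holds for those blocks (clean_bdev_bh_alias() does the same for a single buffer); myfs_map_new_blocks() is hypothetical:

#include <linux/buffer_head.h>
#include <linux/blkdev.h>

static int myfs_map_new_blocks(struct super_block *sb, sector_t first_block,
                               sector_t nr_blocks)
{
        /* ... allocate [first_block, first_block + nr_blocks) on disk ... */

        /* Unmap/clean any aliases of these now-reused blocks that are still
         * attached to the block device's mapping, so stale blockdev-side
         * writeback cannot overwrite the new contents. */
        clean_bdev_aliases(sb->s_bdev, first_block, nr_blocks);
        return 0;
}
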
1736 * to pin buffers here since we can afford to sleep and in clean_bdev_aliases()
1797 * While block_write_full_folio is writing back the dirty buffers under
1798 * the page lock, whoever dirtied the buffers may decide to clean them
1828 * here, and the (potentially unmapped) buffers may become dirty at in __block_write_full_folio()
1832 * Buffers outside i_size may be dirtied by block_dirty_folio; in __block_write_full_folio()
1843 * Get all the dirty buffers mapped to disk addresses and in __block_write_full_folio()
1849 * mapped buffers outside i_size will occur, because in __block_write_full_folio()
1900 * The folio and its buffers are protected by the writeback flag, in __block_write_full_folio()
1921 * The folio was marked dirty, but the buffers were in __block_write_full_folio()
1942 /* Recovery: lock and submit the mapped buffers */ in __block_write_full_folio()
1976 * If a folio has any new buffers, zero them out here, and mark them uptodate
2204 * If this is a partial write which happened to make all buffers in __block_commit_write()
2251 * The buffers that were written will now be uptodate, so in block_write_end()
2316 * block_is_partially_uptodate checks whether buffers within a folio are
2319 * Returns true if all buffers which correspond to the specified part
2427 * All buffers are uptodate or get_block() returned an in block_read_full_folio()
2434 /* Stage two: lock the buffers */ in block_read_full_folio()
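
The hits between lines 1797 and 2434 all concern the generic read and write paths (block_read_full_folio(), block_write_full_folio() and friends), which drive a filesystem-supplied get_block_t callback to map buffers to disk blocks. A toy get_block with a purely hypothetical linear layout:

#include <linux/buffer_head.h>
#include <linux/fs.h>

static int myfs_get_block(struct inode *inode, sector_t iblock,
                          struct buffer_head *bh_result, int create)
{
        const sector_t data_start = 128;        /* hypothetical fixed data area */

        /* A real filesystem would look up (and, if @create, allocate) the
         * on-disk block here, and could set_buffer_new() for fresh blocks. */
        map_bh(bh_result, inode->i_sb, data_start + iblock);
        return 0;
}
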
2913 * try_to_free_buffers - Release buffers attached to this folio.
2916 * If any buffers are in use (dirty, under writeback, elevated refcount),
2917 * no buffers will be freed.
2919 * If the folio is dirty but all the buffers are clean then we need to
2921 * may be against a block device, and a later reattachment of buffers
2922 * to a dirty folio will set *all* buffers dirty. Which would corrupt
2925 * The same applies to regular filesystem folios: if all the buffers are
2934 * Return: true if all buffers attached to this folio were freed.
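
Lines 2913-2934 quote the try_to_free_buffers() kerneldoc. A sketch of a ->release_folio built on it; myfs_release_folio() is hypothetical, and filesystems with no private state can often leave ->release_folio unset so the VM falls back to try_to_free_buffers() itself:

#include <linux/buffer_head.h>
#include <linux/fs.h>

static bool myfs_release_folio(struct folio *folio, gfp_t gfp)
{
        /* Refuse while writeback still holds the buffers. */
        if (folio_test_writeback(folio))
                return false;
        return try_to_free_buffers(folio);
}
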
2955 * If the filesystem writes its buffers by hand (eg ext3) in try_to_free_buffers()
2956 * then we can have clean buffers against a dirty folio. We in try_to_free_buffers()
2961 * the folio's buffers clean. We discover that here and clean in try_to_free_buffers()
3104 * __bh_read_batch - Submit read for a batch of unlocked buffers
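
Line 3104 is the __bh_read_batch() kerneldoc. A sketch of the batched-read pattern, assuming the bh_read_batch() wrapper around it plus the usual wait_on_buffer()/buffer_uptodate() checks; myfs_read_blocks() and the block range are hypothetical:

#include <linux/buffer_head.h>

static int myfs_read_blocks(struct super_block *sb, sector_t start, int nr)
{
        struct buffer_head *bhs[8];
        int i, err = 0;

        if (nr <= 0 || nr > 8)
                return -EINVAL;

        for (i = 0; i < nr; i++) {
                bhs[i] = sb_getblk(sb, start + i);
                if (!bhs[i]) {
                        err = -ENOMEM;
                        nr = i;
                        goto out;
                }
        }

        /* Submit reads for any buffers that are not already uptodate. */
        bh_read_batch(nr, bhs);

        for (i = 0; i < nr; i++) {
                wait_on_buffer(bhs[i]);
                if (!buffer_uptodate(bhs[i]))
                        err = -EIO;
        }
out:
        for (i = 0; i < nr; i++)
                brelse(bhs[i]);
        return err;
}
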