Lines Matching full:table

222  *                              table of any level.
389 * struct pvr_page_table_l2_entry_raw - A single entry in a level 2 page table.
397 * .. flat-table::
402 * - **Level 1 Page Table Base Address:** Bits 39..12 of the L1
403 * page table base address, which is 4KiB aligned.
416 * table. If the valid bit is not set, then an attempted use of
433 * page table.
434 * @entry: Target raw level 2 page table entry.
435 * @child_table_dma_addr: DMA address of the level 1 page table to be
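The entry layout described above (a valid bit plus bits 39..12 of a 4KiB-aligned level 1 table base address) can be illustrated with a small encoder. This is a sketch only: the entry width, field positions and macro names below are assumptions for illustration, not the driver's actual definitions.

/*
 * Illustrative only: encode a raw L2 ("page catalogue") entry as described
 * above, i.e. a valid bit plus bits 39..12 of a 4KiB-aligned L1 table
 * address. The in-entry bit positions, macro names and the 32-bit entry
 * width are assumptions for this sketch.
 */
#include <linux/bits.h>
#include <linux/types.h>

#define EX_L2_ENTRY_VALID	BIT(0)
#define EX_L2_ENTRY_ADDR_SHIFT	4	/* assumed position of the address field */

static u32 ex_l2_entry_encode(dma_addr_t l1_base_addr)
{
	/* @l1_base_addr is 4KiB aligned, so bits 11..0 are zero; keep 39..12. */
	u32 addr_39_12 = (l1_base_addr >> 12) & GENMASK(27, 0);

	return (addr_39_12 << EX_L2_ENTRY_ADDR_SHIFT) | EX_L2_ENTRY_VALID;
}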
460 * struct pvr_page_table_l1_entry_raw - A single entry in a level 1 page table.
468 * .. flat-table::
481 * - **Level 0 Page Table Base Address:** The way this value is
483 * table below (e.g. bits 11..5 for page size 4KiB) should be
486 * This table shows the bits used in an L1 page table entry to
487 * represent the Physical Table Base Address for a given Page Size.
488 * Since each L1 page table entry covers 2MiB of address space, the
491 * .. flat-table::
497 * - L0 page table base address bits
498 * - Number of L0 page table entries
499 * - Size of L0 page table
538 * - **Valid:** Indicates that the entry contains a valid L0 page table.
556 * page table.
557 * @entry: Target raw level 1 page table entry.
558 * @child_table_dma_addr: DMA address of the level 0 page table to be
576 * page table address is aligned to the size of the in pvr_page_table_l1_entry_raw_set()
577 * largest (a 4KB) table currently supported. in pvr_page_table_l1_entry_raw_set()
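A quick worked example shows why the 4KiB-page row of the table above ends up 4KiB aligned. It assumes 8-byte raw L0 entries, which is an assumption not stated in these fragments:

/*
 * Worked example (sketch): sizing an L0 table for 4KiB pages.
 * Assumes 8-byte raw L0 entries.
 */
#include <linux/sizes.h>
#include <linux/types.h>

#define EX_L1_ENTRY_SPAN	SZ_2M		/* each L1 entry covers 2MiB */
#define EX_PAGE_SIZE		SZ_4K
#define EX_L0_ENTRY_SIZE	sizeof(u64)	/* assumed entry width */

/* 2MiB / 4KiB = 512 L0 entries per table. */
#define EX_L0_NUM_ENTRIES	(EX_L1_ENTRY_SPAN / EX_PAGE_SIZE)

/* 512 * 8 B = 4KiB, so the L0 table base is 4KiB aligned and address
 * bits 11..0 (hence also 11..5) of the base-address field are zero. */
#define EX_L0_TABLE_SIZE	(EX_L0_NUM_ENTRIES * EX_L0_ENTRY_SIZE)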
589 * struct pvr_page_table_l0_entry_raw - A single entry in a level 0 page table.
597 * .. flat-table::
619 * on the page size. Bits not specified in the table below (e.g. bits
622 * This table shows the bits used in an L0 page table entry to represent
624 * associated L1 page table entry).
626 * .. flat-table::
661 * - **PM Src:** Set on Parameter Manager (PM) allocated page table
692 * level 0 page table.
716 * page table.
717 * @entry: Target raw level 0 page table entry.
747 * table entry.
756 * entries in the table under &struct pvr_page_table_l0_entry_raw.
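For comparison with the L2 case, a raw L0 entry combines a physical page address with per-page flags such as the "PM Src" bit mentioned above. The sketch below is purely illustrative; the 64-bit width and every field position are assumptions:

/*
 * Illustrative only: a hypothetical encoding of a raw 64-bit L0 entry for
 * 4KiB pages, combining a valid bit, a "PM Src" flag and the physical page
 * address. Field positions are assumed.
 */
#include <linux/bits.h>
#include <linux/types.h>

#define EX_L0_ENTRY_VALID	BIT_ULL(0)
#define EX_L0_ENTRY_PM_SRC	BIT_ULL(1)	/* assumed bit position */
#define EX_L0_ENTRY_ADDR_MASK	GENMASK_ULL(39, 12)

static u64 ex_l0_entry_encode(dma_addr_t page_addr, bool pm_src)
{
	u64 val = page_addr & EX_L0_ENTRY_ADDR_MASK;	/* 4KiB page frame */

	if (pm_src)
		val |= EX_L0_ENTRY_PM_SRC;

	return val | EX_L0_ENTRY_VALID;
}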
778 * struct pvr_page_table_l2_raw - The raw data of a level 2 page table.
784 /** @entries: The raw values of this table. */
791 * struct pvr_page_table_l1_raw - The raw data of a level 1 page table.
797 /** @entries: The raw values of this table. */
804 * struct pvr_page_table_l0_raw - The raw data of a level 0 page table.
812 * specified in the associated level 1 page table entry. Since the device
819 /** @entries: The raw values of this table. */
837 * struct pvr_page_table_l2 - A wrapped level 2 page table.
839 * To access the raw part of this table, use pvr_page_table_l2_get_raw().
843 * A level 2 page table forms the root of the page table tree structure, so
848 * @entries: The children of this node in the page table tree
857 * equivalent of this table. **For internal use only.**
863 * in this table. This value is essentially a refcount - the table is
871 * pvr_page_table_l2_init() - Initialize a level 2 page table.
872 * @table: Target level 2 page table.
875 * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
880 * * Any error encountered while initializing &table->backing_page using
884 pvr_page_table_l2_init(struct pvr_page_table_l2 *table, in pvr_page_table_l2_init() argument
887 return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev); in pvr_page_table_l2_init()
891 * pvr_page_table_l2_fini() - Teardown a level 2 page table.
892 * @table: Target level 2 page table.
894 * It is an error to attempt to use @table after calling this function.
897 pvr_page_table_l2_fini(struct pvr_page_table_l2 *table) in pvr_page_table_l2_fini() argument
899 pvr_mmu_backing_page_fini(&table->backing_page); in pvr_page_table_l2_fini()
903 * pvr_page_table_l2_sync() - Flush a level 2 page table from the CPU to the
905 * @table: Target level 2 page table.
909 * you're sure you have no more changes to make to** @table **in the immediate
912 * If child level 1 page tables of @table also need to be flushed, this should
916 pvr_page_table_l2_sync(struct pvr_page_table_l2 *table) in pvr_page_table_l2_sync() argument
918 pvr_mmu_backing_page_sync(&table->backing_page, PVR_MMU_SYNC_LEVEL_2_FLAGS); in pvr_page_table_l2_sync()
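Taken together, the init/fini/sync functions above imply a simple lifecycle for a level 2 table. The sketch below is a usage illustration built only from those documented signatures; how the driver really allocates and owns its root table may differ.

/*
 * Usage sketch based on the documented init/fini/sync signatures above.
 */
#include <linux/errno.h>
#include <linux/slab.h>

static int ex_l2_lifecycle(struct pvr_device *pvr_dev)
{
	struct pvr_page_table_l2 *l2;
	int err;

	/* pvr_page_table_l2_init() expects a zeroed structure. */
	l2 = kzalloc(sizeof(*l2), GFP_KERNEL);
	if (!l2)
		return -ENOMEM;

	err = pvr_page_table_l2_init(l2, pvr_dev);
	if (err) {
		kfree(l2);
		return err;
	}

	/* ... write entries, flushing any child L1 tables first ... */
	pvr_page_table_l2_sync(l2);	/* then flush the L2 table itself */

	pvr_page_table_l2_fini(l2);	/* @l2 must not be used after this */
	kfree(l2);

	return 0;
}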
923 * page table.
924 * @table: Target level 2 page table.
926 * Essentially returns the CPU address of the raw equivalent of @table, cast to
932 * The raw equivalent of @table.
935 pvr_page_table_l2_get_raw(struct pvr_page_table_l2 *table) in pvr_page_table_l2_get_raw() argument
937 return table->backing_page.host_ptr; in pvr_page_table_l2_get_raw()
942 * of a mirror level 2 page table.
943 * @table: Target level 2 page table.
947 * table, since the returned "entry" is not guaranteed to be valid. The caller
952 * ensure @idx refers to a valid index within @table before dereferencing the
956 * A pointer to the requested raw level 2 page table entry.
959 pvr_page_table_l2_get_entry_raw(struct pvr_page_table_l2 *table, u16 idx) in pvr_page_table_l2_get_entry_raw() argument
961 return &pvr_page_table_l2_get_raw(table)->entries[idx]; in pvr_page_table_l2_get_entry_raw()
965 * pvr_page_table_l2_entry_is_valid() - Check if a level 2 page table entry is
967 * @table: Target level 2 page table.
971 * ensure @idx refers to a valid index within @table before calling this
975 pvr_page_table_l2_entry_is_valid(struct pvr_page_table_l2 *table, u16 idx) in pvr_page_table_l2_entry_is_valid() argument
978 *pvr_page_table_l2_get_entry_raw(table, idx); in pvr_page_table_l2_entry_is_valid()
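Because pvr_page_table_l2_get_entry_raw() and pvr_page_table_l2_entry_is_valid() leave bounds checking to the caller, a typical call site validates the index first. A minimal sketch, with the entry count standing in as an assumed constant:

/*
 * Usage sketch: the helpers above do no bounds checking, so the caller
 * validates @idx first. EX_L2_NUM_ENTRIES stands in for whatever entry
 * count the hardware headers actually define.
 */
#define EX_L2_NUM_ENTRIES	1024	/* assumed for this sketch */

static bool ex_l2_entry_present(struct pvr_page_table_l2 *table, u16 idx)
{
	if (idx >= EX_L2_NUM_ENTRIES)
		return false;

	return pvr_page_table_l2_entry_is_valid(table, idx);
}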
984 * struct pvr_page_table_l1 - A wrapped level 1 page table.
986 * To access the raw part of this table, use pvr_page_table_l1_get_raw().
992 * @entries: The children of this node in the page table tree
1001 * equivalent of this table. **For internal use only.**
1007 * @parent: The parent of this node in the page table tree structure.
1009 * This is also a mirror table.
1011 * Only valid when the L1 page table is active. When the L1 page table
1018 * @next_free: Pointer to the next L1 page table to take/free.
1021 * when preallocating tables and when the page table has been
1028 * @parent_idx: The index of the entry in the parent table (see
1029 * @parent) which corresponds to this table.
1035 * in this table. This value is essentially a refcount - the table is
1043 * pvr_page_table_l1_init() - Initialize a level 1 page table.
1044 * @table: Target level 1 page table.
1047 * When this function returns successfully, @table is still not considered
1048 * valid. It must be inserted into the page table tree structure with
1051 * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
1056 * * Any error encountered while initializing &table->backing_page using
1060 pvr_page_table_l1_init(struct pvr_page_table_l1 *table, in pvr_page_table_l1_init() argument
1063 table->parent_idx = PVR_IDX_INVALID; in pvr_page_table_l1_init()
1065 return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev); in pvr_page_table_l1_init()
1069 * pvr_page_table_l1_free() - Teardown a level 1 page table.
1070 * @table: Target level 1 page table.
1072 * It is an error to attempt to use @table after calling this function, even
1077 pvr_page_table_l1_free(struct pvr_page_table_l1 *table) in pvr_page_table_l1_free() argument
1079 pvr_mmu_backing_page_fini(&table->backing_page); in pvr_page_table_l1_free()
1080 kfree(table); in pvr_page_table_l1_free()
1084 * pvr_page_table_l1_sync() - Flush a level 1 page table from the CPU to the
1086 * @table: Target level 1 page table.
1090 * you're sure you have no more changes to make to** @table **in the immediate
1093 * If child level 0 page tables of @table also need to be flushed, this should
1097 pvr_page_table_l1_sync(struct pvr_page_table_l1 *table) in pvr_page_table_l1_sync() argument
1099 pvr_mmu_backing_page_sync(&table->backing_page, PVR_MMU_SYNC_LEVEL_1_FLAGS); in pvr_page_table_l1_sync()
1104 * page table.
1105 * @table: Target level 1 page table.
1107 * Essentially returns the CPU address of the raw equivalent of @table, cast to
1113 * The raw equivalent of @table.
1116 pvr_page_table_l1_get_raw(struct pvr_page_table_l1 *table) in pvr_page_table_l1_get_raw() argument
1118 return table->backing_page.host_ptr; in pvr_page_table_l1_get_raw()
1123 * of a mirror level 1 page table.
1124 * @table: Target level 1 page table.
1128 * table, since the returned "entry" is not guaranteed to be valid. The caller
1133 * ensure @idx refers to a valid index within @table before dereferencing the
1137 * A pointer to the requested raw level 1 page table entry.
1140 pvr_page_table_l1_get_entry_raw(struct pvr_page_table_l1 *table, u16 idx) in pvr_page_table_l1_get_entry_raw() argument
1142 return &pvr_page_table_l1_get_raw(table)->entries[idx]; in pvr_page_table_l1_get_entry_raw()
1146 * pvr_page_table_l1_entry_is_valid() - Check if a level 1 page table entry is
1148 * @table: Target level 1 page table.
1152 * ensure @idx refers to a valid index within @table before calling this
1156 pvr_page_table_l1_entry_is_valid(struct pvr_page_table_l1 *table, u16 idx) in pvr_page_table_l1_entry_is_valid() argument
1159 *pvr_page_table_l1_get_entry_raw(table, idx); in pvr_page_table_l1_entry_is_valid()
1165 * struct pvr_page_table_l0 - A wrapped level 0 page table.
1167 * To access the raw part of this table, use pvr_page_table_l0_get_raw().
1177 * equivalent of this table. **For internal use only.**
1183 * @parent: The parent of this node in the page table tree structure.
1185 * This is also a mirror table.
1187 * Only valid when the L0 page table is active. When the L0 page table
1194 * @next_free: Pointer to the next L0 page table to take/free.
1197 * when preallocating tables and when the page table has been
1204 * @parent_idx: The index of the entry in the parent table (see
1205 * @parent) which corresponds to this table.
1211 * in this table. This value is essentially a refcount - the table is
1219 * pvr_page_table_l0_init() - Initialize a level 0 page table.
1220 * @table: Target level 0 page table.
1223 * When this function returns successfully, @table is still not considered
1224 * valid. It must be inserted into the page table tree structure with
1227 * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
1232 * * Any error encountered while initializing &table->backing_page using
1236 pvr_page_table_l0_init(struct pvr_page_table_l0 *table, in pvr_page_table_l0_init() argument
1239 table->parent_idx = PVR_IDX_INVALID; in pvr_page_table_l0_init()
1241 return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev); in pvr_page_table_l0_init()
1245 * pvr_page_table_l0_free() - Teardown a level 0 page table.
1246 * @table: Target level 0 page table.
1248 * It is an error to attempt to use @table after calling this function, even
1253 pvr_page_table_l0_free(struct pvr_page_table_l0 *table) in pvr_page_table_l0_free() argument
1255 pvr_mmu_backing_page_fini(&table->backing_page); in pvr_page_table_l0_free()
1256 kfree(table); in pvr_page_table_l0_free()
1260 * pvr_page_table_l0_sync() - Flush a level 0 page table from the CPU to the
1262 * @table: Target level 0 page table.
1266 * you're sure you have no more changes to make to** @table **in the immediate
1269 * If child pages of @table also need to be flushed, this should be done first
1274 pvr_page_table_l0_sync(struct pvr_page_table_l0 *table) in pvr_page_table_l0_sync() argument
1276 pvr_mmu_backing_page_sync(&table->backing_page, PVR_MMU_SYNC_LEVEL_0_FLAGS); in pvr_page_table_l0_sync()
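The sync helpers for all three levels share one ordering rule: children are flushed before their parent. A sketch of flushing one branch of the tree in that order, using the documented single-argument sync functions:

/*
 * Sketch: flush leaf tables before their parents, matching the ordering
 * requirement documented above, so the device never follows a pointer to
 * a table whose contents it cannot yet see.
 */
static void ex_sync_branch(struct pvr_page_table_l2 *l2,
			   struct pvr_page_table_l1 *l1,
			   struct pvr_page_table_l0 *l0)
{
	pvr_page_table_l0_sync(l0);
	pvr_page_table_l1_sync(l1);
	pvr_page_table_l2_sync(l2);
}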
1281 * page table.
1282 * @table: Target level 0 page table.
1284 * Essentially returns the CPU address of the raw equivalent of @table, cast to
1290 * The raw equivalent of @table.
1293 pvr_page_table_l0_get_raw(struct pvr_page_table_l0 *table) in pvr_page_table_l0_get_raw() argument
1295 return table->backing_page.host_ptr; in pvr_page_table_l0_get_raw()
1300 * of a mirror level 0 page table.
1301 * @table: Target level 0 page table.
1305 * table, since the returned "entry" is not guaranteed to be valid. The caller
1310 * ensure @idx refers to a valid index within @table before dereferencing the
1315 * A pointer to the requested raw level 0 page table entry.
1318 pvr_page_table_l0_get_entry_raw(struct pvr_page_table_l0 *table, u16 idx) in pvr_page_table_l0_get_entry_raw() argument
1320 return &pvr_page_table_l0_get_raw(table)->entries[idx]; in pvr_page_table_l0_get_entry_raw()
1324 * pvr_page_table_l0_entry_is_valid() - Check if a level 0 page table entry is
1326 * @table: Target level 0 page table.
1330 * ensure @idx refers to a valid index within @table before calling this
1334 pvr_page_table_l0_entry_is_valid(struct pvr_page_table_l0 *table, u16 idx) in pvr_page_table_l0_entry_is_valid() argument
1337 *pvr_page_table_l0_get_entry_raw(table, idx); in pvr_page_table_l0_entry_is_valid()
1350 /** @page_table_l2: The MMU table root. */
1356 * by the page table structure.
1362 * @l1_table: A cached handle to the level 1 page table the
1368 * @l0_table: A cached handle to the level 0 page table the
1374 * @l2_idx: Index into the level 2 page table the context is
1380 * @l1_idx: Index into the level 1 page table the context is
1386 * @l0_idx: Index into the level 0 page table the context is
1403 * @sgt: Scatter gather table containing pages pinned for use by
1413 * @l1_prealloc_tables: Preallocated l1 page table objects
1420 * @l0_prealloc_tables: Preallocated l0 page table objects
1430 * @l1_free_tables: Collects page table objects freed by unmap
1436 * @l0_free_tables: Collects page table objects freed by unmap
1444 * page table structure.
1449 * @sync_level_required: The maximum level of the page table tree
1461 * table into a level 2 page table.
1463 * table into.
1464 * @child_table: Target level 1 page table to be referenced by the new entry.
1493 * pvr_page_table_l2_remove() - Remove a level 1 page table from a level 2 page
1494 * table.
1524 * table into a level 1 page table.
1526 * table into.
1527 * @child_table: L0 page table to insert.
1554 * pvr_page_table_l1_remove() - Remove a level 0 page table from a level 1 page
1555 * table.
1558 * If this function results in the L1 table becoming empty, it will be removed
1559 * from its parent level 2 page table and destroyed.
1583 /* Clear the parent L2 page table entry. */ in pvr_page_table_l1_remove()
1591 * into a level 0 page table.
1619 * table.
1622 * If this function results in the L0 table becoming empty, it will be removed
1623 * from its parent L1 page table and destroyed.
1643 /* Clear the parent L1 page table entry. */ in pvr_page_table_l0_remove()
1650 * DOC: Page table index utilities
1654 * pvr_page_table_l2_idx() - Calculate the level 2 page table index for a
1663 * The index into a level 2 page table corresponding to @device_addr.
1673 * pvr_page_table_l1_idx() - Calculate the level 1 page table index for a
1682 * The index into a level 1 page table corresponding to @device_addr.
1692 * pvr_page_table_l0_idx() - Calculate the level 0 page table index for a
1701 * The index into a level 0 page table corresponding to @device_addr.
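The index helpers above split a device-virtual address into per-level table indices. The sketch below shows the usual shift-and-mask form; only the 2MiB span of an L1 entry comes from the documentation, while the remaining shifts and widths are assumptions for a 4KiB page size:

#include <linux/types.h>

/* Sketch only: shifts and masks are illustrative, not the real constants. */
static u16 ex_l2_idx(u64 device_addr)
{
	return (device_addr >> 30) & 0x3ff;	/* assumed: 1GiB per L2 entry */
}

static u16 ex_l1_idx(u64 device_addr)
{
	return (device_addr >> 21) & 0x1ff;	/* 2MiB per L1 entry */
}

static u16 ex_l0_idx(u64 device_addr)
{
	return (device_addr >> 12) & 0x1ff;	/* assumed: 4KiB pages */
}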
1711 * DOC: High-level page table operations
1716 * necessary) a level 1 page table from the specified level 2 page table entry.
1719 * when empty page table entries are encountered during traversal.
1725 * * -%ENXIO if a level 1 page table would have been inserted.
1728 * * Any error encountered while inserting the level 1 page table.
1736 struct pvr_page_table_l1 *table; in pvr_page_table_l1_get_or_insert() local
1748 /* Take a prealloced table. */ in pvr_page_table_l1_get_or_insert()
1749 table = op_ctx->map.l1_prealloc_tables; in pvr_page_table_l1_get_or_insert()
1750 if (!table) in pvr_page_table_l1_get_or_insert()
1754 op_ctx->map.l1_prealloc_tables = table->next_free; in pvr_page_table_l1_get_or_insert()
1755 table->next_free = NULL; in pvr_page_table_l1_get_or_insert()
1757 /* Ensure new table is fully written out before adding to L2 page table. */ in pvr_page_table_l1_get_or_insert()
1760 pvr_page_table_l2_insert(op_ctx, table); in pvr_page_table_l1_get_or_insert()
1767 * necessary) a level 0 page table from the specified level 1 page table entry.
1770 * when empty page table entries are encountered during traversal.
1776 * * -%ENXIO if a level 0 page table would have been inserted.
1779 * * Any error encountered while inserting the level 0 page table.
1785 struct pvr_page_table_l0 *table; in pvr_page_table_l0_get_or_insert() local
1797 /* Take a prealloced table. */ in pvr_page_table_l0_get_or_insert()
1798 table = op_ctx->map.l0_prealloc_tables; in pvr_page_table_l0_get_or_insert()
1799 if (!table) in pvr_page_table_l0_get_or_insert()
1803 op_ctx->map.l0_prealloc_tables = table->next_free; in pvr_page_table_l0_get_or_insert()
1804 table->next_free = NULL; in pvr_page_table_l0_get_or_insert()
1806 /* Ensure new table is fully written out before adding to L1 page table. */ in pvr_page_table_l0_get_or_insert()
1809 pvr_page_table_l1_insert(op_ctx, table); in pvr_page_table_l0_get_or_insert()
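The -ENXIO return value lets a caller probe for an existing table without creating one. A usage sketch, assuming an (op_ctx, should_create) parameter pair as suggested by the surrounding documentation:

#include <linux/errno.h>

/* Sketch: probe for an existing L0 table without creating one. The
 * (op_ctx, should_create) parameter pair is an assumption. */
static bool ex_l0_table_is_present(struct pvr_mmu_op_context *op_ctx)
{
	int err = pvr_page_table_l0_get_or_insert(op_ctx, false);

	if (err == -ENXIO)
		return false;	/* empty entry encountered, nothing mapped */

	return err == 0;
}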
1852 * page table structure behind a VM context.
1865 * * Newly created page table object on success, or
1874 struct pvr_page_table_l1 *table = in pvr_page_table_l1_alloc() local
1875 kzalloc(sizeof(*table), GFP_KERNEL); in pvr_page_table_l1_alloc()
1877 if (!table) in pvr_page_table_l1_alloc()
1880 err = pvr_page_table_l1_init(table, ctx->pvr_dev); in pvr_page_table_l1_alloc()
1882 kfree(table); in pvr_page_table_l1_alloc()
1886 return table; in pvr_page_table_l1_alloc()
1894 * * Newly created page table object on success, or
1903 struct pvr_page_table_l0 *table = in pvr_page_table_l0_alloc() local
1904 kzalloc(sizeof(*table), GFP_KERNEL); in pvr_page_table_l0_alloc()
1906 if (!table) in pvr_page_table_l0_alloc()
1909 err = pvr_page_table_l0_init(table, ctx->pvr_dev); in pvr_page_table_l0_alloc()
1911 kfree(table); in pvr_page_table_l0_alloc()
1915 return table; in pvr_page_table_l0_alloc()
1922 * @level: Maximum page table level for which a sync is required.
1936 * @level: Maximum page table level to sync.
1948 * We sync the page table levels in ascending order (starting from the in pvr_mmu_op_context_sync_manual()
1976 * @level: Requested page table level to sync up to (inclusive).
2007 * pvr_mmu_op_context_sync() - Trigger a sync of every page table referenced by
2025 * the page table tree structure needed to reference the physical page
2029 * empty page table entries are encountered during traversal.
2030 * @load_level_required: Maximum page table level to load.
2036 * Since there is only one root page table, it is technically incorrect to call
2065 /* Get or create L1 page table. */ in pvr_mmu_op_context_load_tables()
2070 * If @should_create is %false and no L1 page table was in pvr_mmu_op_context_load_tables()
2083 /* Get or create L0 page table. */ in pvr_mmu_op_context_load_tables()
2088 * If @should_create is %false and no L0 page table was in pvr_mmu_op_context_load_tables()
2098 * At this point, an L1 page table could have been in pvr_mmu_op_context_load_tables()
2100 * at inserting an L0 page table. In this instance, we in pvr_mmu_op_context_load_tables()
2101 * must remove the empty L1 page table ourselves as in pvr_mmu_op_context_load_tables()
2116 * A sync is only needed if table objects were inserted. This can be in pvr_mmu_op_context_load_tables()
2135 * empty page table entries are encountered during traversal.
2165 * empty page table entries are encountered during traversal.
2168 * the state of the table references in @op_ctx is valid on return. If -%ENXIO
2169 * is returned, at least one of the table references is invalid. It should be
2173 * to be valid, since it represents the root of the page table tree structure.
2177 * * -%EPERM if the operation would wrap at the top of the page table
2179 * * -%ENXIO if @should_create is %false and a page table of any level would
2207 * zero here. However, that would wrap the top layer of the page table in pvr_mmu_op_context_next_page()
2212 "%s(%p) attempted to loop the top of the page table hierarchy", in pvr_mmu_op_context_next_page()
2232 * a level 0 page table.
2236 * @flags: Page options saved on the level 0 page table entry for reading by
2262 * parent level 0 page table.
2274 /* Clear the parent L0 page table entry. */ in pvr_page_destroy()
2333 * @sgt: Scatter gather table containing pages pinned for use by this context.
2362 * The number of page table objects we need to prealloc is in pvr_mmu_op_context_create()
2365 * identical to that for the index into a table for a device in pvr_mmu_op_context_create()
2376 * Alloc and push page table entries until we have enough of in pvr_mmu_op_context_create()
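The preallocation lists consumed by the "take a prealloced table" code above are filled here by pushing freshly allocated tables onto a singly linked chain through @next_free. A minimal sketch of that push step (the helper name is hypothetical):

/* Hypothetical helper: push a freshly allocated L1 table onto the op
 * context's preallocation list via the @next_free link. */
static void ex_push_l1_prealloc(struct pvr_mmu_op_context *op_ctx,
				struct pvr_page_table_l1 *table)
{
	table->next_free = op_ctx->map.l1_prealloc_tables;
	op_ctx->map.l1_prealloc_tables = table;
}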
2435 * advance beforehand. If the L0 page table reference in in pvr_mmu_op_context_unmap_curr_page()
2445 * If the page table tree structure at @op_ctx.curr_page is in pvr_mmu_op_context_unmap_curr_page()
2489 * pvr_mmu_map_sgl() - Map part of a scatter-gather table entry to
2493 * @sgl: Target scatter-gather table entry.
2598 /* Map scatter gather table */ in pvr_mmu_map()
2617 * Flag the L0 page table as requiring a flush when the MMU op in pvr_mmu_map()
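pvr_mmu_map() walks the scatter-gather table and maps each DMA segment through pvr_mmu_map_sgl(). A sketch of that walk using the standard scatterlist iterator; the pvr_mmu_map_sgl() argument list shown here is an assumption:

#include <linux/scatterlist.h>

/*
 * Sketch: iterate every DMA-mapped segment of @sgt and hand it to the
 * per-segment mapper. The pvr_mmu_map_sgl() signature is assumed.
 */
static int ex_map_sgt(struct pvr_mmu_op_context *op_ctx, struct sg_table *sgt,
		      u64 flags)
{
	struct scatterlist *sgl;
	unsigned int i;
	int err;

	for_each_sgtable_dma_sg(sgt, sgl, i) {
		err = pvr_mmu_map_sgl(op_ctx, sgl, flags);	/* assumed args */
		if (err)
			return err;
	}

	return 0;
}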