/linux-6.12.1/arch/s390/kernel/ |
D | debug.c |
    34  #define ALL_AREAS 0 /* copy all debug areas */
    35  #define NO_AREAS 1 /* copy no debug areas */
    174 * - Debug areas are implemented as a three-dimensional array:
    175 *   areas[areanumber][pagenumber][pageoffset]
    180 debug_entry_t ***areas; in debug_areas_alloc() local
    183 areas = kmalloc_array(nr_areas, sizeof(debug_entry_t **), GFP_KERNEL); in debug_areas_alloc()
    184 if (!areas) in debug_areas_alloc()
    188 areas[i] = kmalloc_array(pages_per_area, in debug_areas_alloc()
    191 if (!areas[i]) in debug_areas_alloc()
    194 areas[i][j] = kzalloc(PAGE_SIZE, GFP_KERNEL); in debug_areas_alloc()
    [all …]
|
/linux-6.12.1/Documentation/userspace-api/media/v4l/ |
D | metafmt-vsp1-hgt.rst |
    25 between 1 and 16 depending on the Hue areas configuration. Finding the
    34 how the HGT Hue areas are configured. There are 6 user configurable Hue
    35 Areas which can be configured to cover overlapping Hue values:
    58 When two consecutive areas don't overlap (n+1L is equal to nU) the boundary
    63 with a hue value included in the overlapping region between two areas (between
    64 n+1L and nU excluded) are attributed to both areas and given a weight for each
    65 of these areas proportional to their position along the diagonal lines
|
/linux-6.12.1/Documentation/arch/s390/ |
D | s390dbf.rst |
    28 debug log for the caller. For each debug log there exists a number of debug areas
    30 pages in memory. In the debug areas there are stored debug entries (log records)
    44 The debug areas themselves are also ordered in the form of a ring buffer.
    140 /* register 4 debug areas with one page each and 4 byte data field */
    173 /* register 4 debug areas with one page each and data field for */
    228 Flushing debug areas
    230 Debug areas can be flushed by piping the number of the desired
    231 area (0...n) to the debugfs file "flush". When using "-" all debug areas
    240 2. Flush all debug areas::
    244 Changing the size of debug areas
    [all …]
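As a rough illustration of the registration step quoted above ("register 4 debug areas with one page each"), a minimal, hypothetical kernel-module sketch using the s390 debug feature API could look like the following; the "mydrv" name, level, and entry size are placeholders, and flushing at runtime is done through the debugfs "flush" file as described in the excerpt:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/module.h>
    #include <asm/debug.h>

    static debug_info_t *mydrv_dbf;	/* hypothetical driver debug log */

    static int __init mydrv_init(void)
    {
    	/* 4 debug areas with one page each, 16 bytes of data per entry */
    	mydrv_dbf = debug_register("mydrv", 1, 4, 16);
    	if (!mydrv_dbf)
    		return -ENOMEM;

    	/* expose the entries as hex/ascii under debugfs and set the level */
    	debug_register_view(mydrv_dbf, &debug_hex_ascii_view);
    	debug_set_level(mydrv_dbf, 3);

    	/* write one text entry into the currently active debug area */
    	debug_text_event(mydrv_dbf, 1, "init done");
    	return 0;
    }

    static void __exit mydrv_exit(void)
    {
    	debug_unregister(mydrv_dbf);
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);
    MODULE_LICENSE("GPL");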
|
/linux-6.12.1/Documentation/core-api/ |
D | swiotlb.rst |
    162 Slots are also grouped into "areas", with the constraint that a slot set exists
    164 manipulate the slots in that area. The division into areas avoids contending
    166 VM. The number of areas defaults to the number of CPUs in the system for
    169 number of areas can also be set via the "swiotlb=" kernel boot parameter.
    172 does not have enough free space, areas associated with other CPUs are tried
    175 overall. But an allocation request does not fail unless all areas do not have
    178 IO_TLB_SIZE, IO_TLB_SEGSIZE, and the number of areas must all be powers of 2 as
    180 number of areas is rounded up to a power of 2 if necessary to meet this
    217 The number of areas in a dynamic pool may be different from the number of areas
    219 the number of areas will likely be smaller. For example, with a new pool size
    [all …]
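To make the sizing rule quoted above concrete, the sketch below (not the kernel's actual helper; the function name is made up) rounds a requested area count up to a power of two and then caps it so that each area can still hold at least one full slot set of IO_TLB_SEGSIZE slots. It assumes nslots is itself a power-of-2 multiple of IO_TLB_SEGSIZE, which keeps the capped result a power of 2:

    #include <linux/log2.h>
    #include <linux/swiotlb.h>

    /* Illustrative only: mirrors the constraints described in swiotlb.rst. */
    static unsigned int example_nareas(unsigned int nareas, unsigned long nslots)
    {
    	/* The number of areas must be a power of 2. */
    	if (!is_power_of_2(nareas))
    		nareas = roundup_pow_of_two(nareas);

    	/*
    	 * Each area needs at least IO_TLB_SEGSIZE slots, otherwise a
    	 * slot set would have to span two areas.
    	 */
    	if (nslots < (unsigned long)nareas * IO_TLB_SEGSIZE)
    		nareas = nslots / IO_TLB_SEGSIZE;

    	return nareas ? nareas : 1;
    }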
|
/linux-6.12.1/kernel/dma/ |
D | swiotlb.c |
    125 * otherwise a segment may span two or more areas. It conflicts with free
    145 * swiotlb_adjust_nareas() - adjust the number of areas and slots
    146 * @nareas: Desired number of areas. Zero is treated as 1.
    148 * Adjust the default number of areas in a memory pool.
    168 * limit_nareas() - get the maximum number of areas for a given memory pool size
    169 * @nareas: Desired number of areas.
    172 * Limit the number of areas to the maximum possible number of areas in
    175 * Return: Maximum possible number of areas.
    283 spin_lock_init(&mem->areas[i].lock); in swiotlb_init_io_tlb_pool()
    284 mem->areas[i].index = 0; in swiotlb_init_io_tlb_pool()
    [all …]
|
/linux-6.12.1/arch/s390/include/asm/ |
D | debug.h |
    20 #define DEBUG_FLUSH_ALL -1 /* parameter to flush all areas */
    54 debug_entry_t ***areas; member
    408 * Define static areas for early trace data. During boot debug_register_static()
    409 * will replace these with dynamically allocated areas to allow custom page and
    424 static debug_entry_t **VNAME(var, areas)[EARLY_AREAS] __initdata = { \
    440 .areas = VNAME(var, areas), \
    451 #define __REGISTER_STATIC_DEBUG_INFO(var, name, pages, areas, view) \ argument
    454 debug_register_static(&var, (pages), (areas)); \
    466 * @nr_areas: Number of debug areas
    476 * Note: Tracing will start with a fixed number of initial pages and areas.
|
/linux-6.12.1/include/linux/ |
D | memblock.h |
    165 * for_each_physmem_range - iterate through physmem areas not included in type.
    178 * __for_each_mem_range - iterate through memblock areas from type_a and not
    198 * __for_each_mem_range_rev - reverse iterate through memblock areas from
    219 * for_each_mem_range - iterate through memory areas.
    230 * for_each_mem_range_rev - reverse iterate through memblock areas from
    242 * for_each_reserved_mem_range - iterate over all reserved memblock areas
    247 * Walks over reserved areas of memblock. Available as soon as memblock
    305 * free memblock areas from a given point
    311 * Walks over free (memory && !reserved) areas of memblock in a specific
    322 * for_each_free_mem_range - iterate through free memblock areas
    [all …]
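The iterators listed above are meant for early (__init) boot code. A minimal, hypothetical sketch of walking memblock areas with two of them (the helper name is illustrative) might look like this:

    #include <linux/init.h>
    #include <linux/memblock.h>
    #include <linux/numa.h>
    #include <linux/printk.h>

    /* Hypothetical early-boot helper: print all memory and all free memblock areas. */
    static void __init dump_memblock_areas(void)
    {
    	phys_addr_t start, end;
    	u64 i;

    	/* Every registered memory area, reserved or not. */
    	for_each_mem_range(i, &start, &end)
    		pr_info("memory area: [%pa-%pa)\n", &start, &end);

    	/* Only areas that are memory && !reserved. */
    	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
    		pr_info("free area:   [%pa-%pa)\n", &start, &end);
    }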
|
D | swiotlb.h |
    62 * @nareas: Number of areas in the pool.
    64 * @areas: Array of memory area descriptors.
    78 struct io_tlb_area *areas; member
    101 * across all areas. Used only for calculating used_hiwater in
    106 * are currently used across all areas.
|
/linux-6.12.1/drivers/staging/media/atomisp/pci/isp/kernels/xnr/xnr_3.0/ |
D | ia_css_xnr3_param.h | 40 * for dark areas, and a scaled diff towards the value for bright areas. */ 51 * for dark areas, and a scaled diff towards the value for bright areas. */
|
D | ia_css_xnr3_types.h | 45 * the three YUV planes: one for dark areas and one for bright areas. All 62 * each of the two chroma planes: one for dark areas and one for bright areas.
|
/linux-6.12.1/mm/damon/tests/ |
D | vaddr-kunit.h |
    44 * discontiguous regions which cover every mapped area. However, the three
    45 * regions should not include the two biggest unmapped areas in the original
    46 * mapping, because the two biggest areas are normally the areas between 1)
    48 * Because these two unmapped areas are very huge but obviously never accessed,
    53 * unmapped areas. After that, based on the information, it constructs the
    61 * and end with 305. The process also has three unmapped areas, 25-200,
    63 * unmapped areas, and thus it should be converted to three regions of 10-25,
|
/linux-6.12.1/drivers/iommu/iommufd/ |
D | io_pagetable.h |
    18 * Each io_pagetable is composed of intervals of areas which cover regions of
    19 * the iova that are backed by something. iova not covered by areas is not
    164 * Iterate over a contiguous list of areas that span the iova,last_iova range.
    166 * contiguous areas existed.
    179 * This holds a pinned page list for multiple areas of IO address space. The
|
/linux-6.12.1/Documentation/admin-guide/mm/ |
D | ksm.rst |
    17 The KSM daemon ksmd periodically scans those areas of user memory
    33 KSM only operates on those areas of address space which an application
    62 Like other madvise calls, they are intended for use on mapped areas of
    64 includes unmapped gaps (though working on the intervening mapped areas),
    68 restricting its use to areas likely to benefit. KSM's scans may use a lot
    115 leave mergeable areas registered for next run.
    210 how many times all mergeable areas have been scanned
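An application opts areas in to KSM scanning with madvise(2); a minimal userspace sketch (the buffer size and fill pattern are arbitrary) could look like this:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
    	size_t len = 64 * 4096;

    	/* The area must be mapped before it can be registered with KSM. */
    	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
    			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    	if (buf == MAP_FAILED) {
    		perror("mmap");
    		return 1;
    	}
    	memset(buf, 0x5a, len);	/* identical pages become merge candidates */

    	/* Mark the area mergeable; fails (e.g. EINVAL) if KSM is not built in. */
    	if (madvise(buf, len, MADV_MERGEABLE) != 0) {
    		perror("madvise(MADV_MERGEABLE)");
    		return 1;
    	}

    	pause();	/* keep the mapping alive so ksmd can scan it */
    	return 0;
    }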
|
D | cma_debugfs.rst | 6 different CMA areas and to test allocation/release in each of the areas.
|
/linux-6.12.1/include/trace/events/ |
D | vmalloc.h | 54 * purge_vmap_area_lazy - called when vmap areas were lazily freed 57 * @npurged: number of purged vmap areas 94 * outstanding areas and a maximum allowed threshold before
|
/linux-6.12.1/include/pcmcia/ |
D | ss.h | 231 * - pccard_static_ops iomem and ioport areas are assigned statically 232 * - pccard_iodyn_ops iomem areas are assigned statically, ioport 233 * areas dynamically 236 * - pccard_nonstatic_ops iomem and ioport areas are assigned dynamically.
|
/linux-6.12.1/Documentation/ABI/testing/ |
D | sysfs-devices-platform-docg3 | 7 keylocked. Each docg3 chip (or floor) has 2 protection areas, 24 (0 or 1). Each docg3 chip (or floor) has 2 protection areas,
|
/linux-6.12.1/Documentation/security/ |
D | self-protection.rst |
    35 areas of the kernel that can be used to redirect execution. This ranges
    37 APIs hard to use incorrectly, minimizing the areas of writable kernel
    50 Any areas of the kernel with executable memory must not be writable.
    73 Vast areas of kernel memory contain function pointers that are looked
    177 other memory areas.
    251 initializations. If the base address of these areas is not the same
|
/linux-6.12.1/arch/sh/boards/mach-sdk7786/ |
D | fpga.c | 17 * The FPGA can be mapped in any of the generally available areas, 30 * Iterate over all of the areas where the FPGA could be mapped. in sdk7786_fpga_probe()
|
/linux-6.12.1/Documentation/virt/hyperv/ |
D | overview.rst | 71 allocated memory. Many shared areas are kept to 1 page so that a 72 single GPA is sufficient. Larger shared areas require a list of 187 main areas: 193 3. individual device driver areas such as drivers/scsi, drivers/net,
|
/linux-6.12.1/drivers/s390/net/ |
D | ctcm_dbug.c | 46 /* register the areas */ in ctcm_register_dbf_views() 49 ctcm_dbf[x].areas, in ctcm_register_dbf_views()
|
/linux-6.12.1/arch/x86/kernel/cpu/sgx/ |
D | sgx.h | 52 * The firmware can define multiple chunks of EPC to the different areas of the 53 * physical memory, e.g. for memory areas of each node. This structure is
|
/linux-6.12.1/arch/x86/kernel/ |
D | setup_percpu.c | 143 * percpu areas are aligned to PMD. This, in the future, in setup_per_cpu_areas() 167 /* alrighty, percpu areas up and running */ in setup_per_cpu_areas() 176 * initial arrays to the per cpu data areas. These in setup_per_cpu_areas()
|
/linux-6.12.1/Documentation/userspace-api/accelerators/ |
D | ocxl.rst | 66 work with, the size of its MMIO areas, ... 73 OpenCAPI defines two MMIO areas for each AFU: 158 MMIO areas, the AFU version, and the PASID for the current context.
|
/linux-6.12.1/arch/x86/virt/vmx/tdx/ |
D | tdx.c |
    336 * number of reserved areas. in tdmr_size_single()
    569 * PAMTs. This helps minimize the PAMT's use of reserved areas in tdmr_set_up_pamt()
    731 pr_warn("initialization failed: TDMR [0x%llx, 0x%llx): reserved areas exhausted.\n", in tdmr_add_rsvd_area()
    738 * optimize or reduce the number of reserved areas which are in tdmr_add_rsvd_area()
    739 * consumed by contiguous reserved areas, for instance. in tdmr_add_rsvd_area()
    750 * Go through @tmb_list to find holes between memory areas. If any of
    783 * Skip over memory areas that in tdmr_populate_rsvd_holes()
    855 /* Compare function called by sort() for TDMR reserved areas */
    866 /* Reserved areas cannot overlap. The caller must guarantee. */ in rsvd_area_cmp_func()
    872 * Populate reserved areas for the given @tdmr, including memory holes
    [all …]
|