/linux-6.12.1/arch/arm64/mm/ |
D | cache.S |
     20  * Ensure that the I and D caches are coherent within specified region.
     48  * Ensure that the I and D caches are coherent within specified region.
     64  * Ensure that the I and D caches are coherent within specified region.
     87  * Ensure that the I cache is invalid within specified region.
    105  * Ensure that any D-cache lines for the interval [start, end)
    120  * Ensure that any D-cache lines for the interval [start, end)
    138  * Ensure that any D-cache lines for the interval [start, end)
    169  * Ensure that any D-cache lines for the interval [start, end)
    184  * Ensure that any D-cache lines for the interval [start, end)
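
These assembly routines all follow the same arm64 shape: walk the virtual range [start, end) one cache line at a time, then complete with a barrier. From C they are normally reached through generic helpers such as flush_icache_range(). A minimal sketch of the caller-side pattern; the publish_code() wrapper and its buffer are hypothetical, not kernel code.

#include <asm/cacheflush.h>
#include <linux/string.h>

/*
 * Hypothetical helper: copy freshly generated instructions into 'dst'
 * and make the I and D caches coherent for that region so the CPU
 * fetches the new code rather than stale lines.
 */
static void publish_code(void *dst, const void *insns, size_t len)
{
	memcpy(dst, insns, len);			/* new code lands in the D-cache */
	flush_icache_range((unsigned long)dst,
			   (unsigned long)dst + len);	/* clean D, invalidate I over [start, end) */
}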
|
/linux-6.12.1/mm/kmsan/ |
D | kmsan_test.c |
    164  /* Test case: ensure that kmalloc() returns uninitialized memory. */
    177  * Test case: ensure that kmalloc'ed memory becomes initialized after memset().
    191  /* Test case: ensure that kzalloc() returns initialized memory. */
    203  /* Test case: ensure that local variables are uninitialized by default. */
    214  /* Test case: ensure that local variables with initializers are initialized. */
    271  * Test case: ensure that uninitialized values are tracked through function
    295  * Test case: ensure kmsan_check_memory() reports an error when checking
    344  * Test case: ensure that memset() can initialize a buffer allocated via
    364  /* Test case: ensure that use-after-free reporting works. */
    382  * Test case: ensure that uninitialized values are propagated through per-CPU
    [all …]
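
The behaviour these test cases describe can be reproduced with the kmsan_check_memory() API from <linux/kmsan-checks.h>, which the hits above already mention. The sketch below is not the actual test code, only an illustration of the kmalloc()/memset() property being verified; the buffer size is arbitrary.

#include <linux/kmsan-checks.h>
#include <linux/slab.h>
#include <linux/string.h>

static void kmsan_smoke_check(void)
{
	char *buf = kmalloc(64, GFP_KERNEL);

	if (!buf)
		return;

	/* kmalloc() memory is uninitialized: under KMSAN this check should report an error. */
	kmsan_check_memory(buf, 64);

	memset(buf, 0, 64);
	/* Now fully initialized: the same check should stay silent. */
	kmsan_check_memory(buf, 64);

	kfree(buf);
}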
|
/linux-6.12.1/arch/arm/include/asm/ |
D | cacheflush.h |
     71  * Ensure coherency between the Icache and the Dcache in the
     79  * Ensure coherency between the Icache and the Dcache in the
     87  * Ensure that the data held in page is written back.
    136  * Their sole purpose is to ensure that data held in the cache
    155  * Their sole purpose is to ensure that data held in the cache
    264  * flush_icache_user_range is used when we want to ensure that the
    271  * Perform necessary cache operations to ensure that data previously
    277  * Perform necessary cache operations to ensure that the TLB will
    328  * data, we need to do a full cache flush to ensure that writebacks
    356  * to always ensure proper cache maintenance to update main memory right
    [all …]
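
Hit 87 ("Ensure that the data held in page is written back") documents flush_dcache_page(), the call a filesystem or driver makes after writing into a page that user space may also have mapped. A small illustrative sketch of that pattern; the helper name and copy source are invented.

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * After the kernel writes into a page that may also be mapped into
 * user space, flush_dcache_page() ensures the data held in the page
 * is written back and visible through the user-space alias.
 */
static void fill_page_and_publish(struct page *page, const void *src, size_t len)
{
	void *kaddr = kmap_local_page(page);

	memcpy(kaddr, src, len);
	kunmap_local(kaddr);

	flush_dcache_page(page);
}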
|
D | fncpy.h |
     19  * the alignment of functions must be preserved when copying. To ensure this,
     23  * function to be copied is defined, and ensure that your allocator for the
     66  * Ensure alignment of source and destination addresses, \
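
These comments describe a calling convention rather than an algorithm: the destination buffer must come from an allocator that preserves (at least 8-byte) function alignment, and the copy must go through the fncpy() macro so the returned pointer keeps the ARM/Thumb encoding bit. A hedged usage sketch; the SRAM routine, its size symbol and the wrapper are invented for illustration.

#include <asm/fncpy.h>
#include <asm/cacheflush.h>

/* Hypothetical routine (and its size) that must run from SRAM. */
extern void my_sram_suspend(void);
extern const unsigned int my_sram_suspend_sz;

static void (*sram_suspend_fn)(void);

/* 'sram_base' must be suitably aligned (FNCPY_ALIGN, i.e. 8 bytes). */
static void copy_suspend_to_sram(void *sram_base)
{
	/*
	 * fncpy() checks the alignment of source and destination, copies
	 * the function body, flushes the caches, and returns a callable
	 * pointer to the copy with the Thumb bit preserved.
	 */
	sram_suspend_fn = fncpy(sram_base, &my_sram_suspend, my_sram_suspend_sz);
}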
|
/linux-6.12.1/tools/testing/selftests/powerpc/papr_sysparm/ |
D | papr_sysparm.c |
     57  // Ensure expected error in get_bad_parameter()
     61  // Ensure the buffer is unchanged in get_bad_parameter()
     80  // Ensure expected error in check_efault_common()
    111  // Ensure expected error in set_hmc0()
    133  // Ensure expected error in set_with_ro_fd()
    176  .description = "ensure EPERM on attempt to update HMC0",
|
/linux-6.12.1/drivers/crypto/intel/keembay/ |
D | ocs-aes.c |
    358  /* Ensure DMA error interrupts are enabled */ in aes_irq_enable()
    379  /* Ensure AES interrupts are disabled */ in aes_irq_enable()
    564  /* Ensure interrupts are disabled and pending interrupts cleared. */ in ocs_aes_init()
    608  /* Ensure cipher, mode and instruction are valid. */ in ocs_aes_validate_inputs()
    642  /* Ensure input length is multiple of block size */ in ocs_aes_validate_inputs()
    646  /* Ensure source and destination linked lists are created */ in ocs_aes_validate_inputs()
    654  /* Ensure input length is multiple of block size */ in ocs_aes_validate_inputs()
    658  /* Ensure source and destination linked lists are created */ in ocs_aes_validate_inputs()
    663  /* Ensure IV is present and block size in length */ in ocs_aes_validate_inputs()
    670  /* Ensure input length of 1 byte or greater */ in ocs_aes_validate_inputs()
    [all …]
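
The ocs_aes_validate_inputs() hits show the fail-fast precondition style used before programming the engine. The sketch below is a generic stand-in, not the driver's code: the request struct, sizes and helper name are invented to illustrate the same checks (length is a multiple of the block size, the DMA linked lists exist, the IV is present and one block long).

#include <linux/errno.h>
#include <linux/types.h>

#define DEMO_AES_BLOCK_SIZE	16
#define DEMO_AES_IV_SIZE	16

/* Illustrative stand-in for the driver's request descriptor. */
struct demo_aes_req {
	u32 src_len;
	const void *src_ll;	/* source DMA linked list */
	const void *dst_ll;	/* destination DMA linked list */
	const u8 *iv;
	u32 iv_size;
};

static int demo_aes_validate_inputs(const struct demo_aes_req *req)
{
	/* Ensure input length is a multiple of the block size. */
	if (!req->src_len || req->src_len % DEMO_AES_BLOCK_SIZE)
		return -EINVAL;

	/* Ensure source and destination linked lists were created. */
	if (!req->src_ll || !req->dst_ll)
		return -EINVAL;

	/* Ensure the IV is present and exactly one block in length. */
	if (!req->iv || req->iv_size != DEMO_AES_IV_SIZE)
		return -EINVAL;

	return 0;
}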
|
/linux-6.12.1/include/linux/ |
D | balloon_compaction.h |
     18  * ensure following these simple rules:
     31  * the aforementioned balloon page corner case, as well as to ensure the simple
     88  * Caller must ensure the page is locked and the spin_lock protecting balloon
    105  * Caller must ensure the page is locked and the spin_lock protecting balloon
    162  * Caller must ensure the page is private and protect the list.
    174  * Caller must ensure the page is private and protect the list.
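
The "Caller must ensure ..." comments spell out a locking contract: the page is locked and the balloon's pages_lock spinlock is held whenever the page list is modified. A minimal sketch of a balloon driver honouring that contract around balloon_page_insert(); allocation, page locking and accounting are left out.

#include <linux/balloon_compaction.h>
#include <linux/spinlock.h>

/*
 * Add a locked page to the balloon. Per the header's contract, the
 * page list is only touched while pages_lock is held; the caller has
 * already locked the page and handles accounting elsewhere.
 */
static void demo_balloon_add_page(struct balloon_dev_info *b_dev_info,
				  struct page *page)
{
	unsigned long flags;

	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
	balloon_page_insert(b_dev_info, page);
	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
}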
|
/linux-6.12.1/fs/nfs/ |
D | io.c |
     30  * Declare that a buffered read operation is about to start, and ensure
     33  * and holds a shared lock on inode->i_rwsem to ensure that the flag
     74  * Declare that a buffered read operation is about to start, and ensure
    110  * Declare that a direct I/O operation is about to start, and ensure
    113  * and holds a shared lock on inode->i_rwsem to ensure that the flag
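
These comments describe NFS's start/end helpers that keep buffered and direct I/O from running concurrently: a mode flag guarded by inode->i_rwsem, taken shared in the common case and exclusive only to switch modes. The sketch below is a simplified model of that pattern with hypothetical names, not the actual fs/nfs/io.c functions.

#include <linux/fs.h>
#include <linux/rwsem.h>
#include <linux/bitops.h>

/*
 * Buffered readers take i_rwsem shared and require the "direct I/O in
 * progress" bit to be clear; flipping the bit needs the exclusive lock,
 * so the two modes never overlap. In both paths the function returns
 * with i_rwsem held shared; the caller does the read, then up_read().
 */
static void demo_start_buffered_read(struct inode *inode,
				     unsigned long odirect_bit,
				     unsigned long *state)
{
	down_read(&inode->i_rwsem);
	if (!test_bit(odirect_bit, state))
		return;		/* common case: no direct I/O in flight */

	/* Direct I/O was active: retake exclusively and switch modes. */
	up_read(&inode->i_rwsem);
	down_write(&inode->i_rwsem);
	clear_bit(odirect_bit, state);
	downgrade_write(&inode->i_rwsem);
}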
|
/linux-6.12.1/drivers/iio/accel/ |
D | mma9551_core.c |
    211  * Locking is not handled inside the function. Callers should ensure they
    236  * Locking is not handled inside the function. Callers should ensure they
    261  * Locking is not handled inside the function. Callers should ensure they
    286  * Locking is not handled inside the function. Callers should ensure they
    320  * Locking is not handled inside the function. Callers should ensure they
    347  * Locking is not handled inside the function. Callers should ensure they
    380  * Locking is not handled inside the function. Callers should ensure they
    419  * Locking is not handled inside the function. Callers should ensure they
    458  * Locking is not handled inside the function. Callers should ensure they
    493  * Locking is not handled inside the function. Callers should ensure they
    [all …]
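
Every exported accessor repeats the same note: locking is the caller's job. A sketch of that calling convention, assuming a driver-private mutex and the mma9551_read_config_byte() helper declared in mma9551_core.h (check its exact signature against the header); the app-id and register values are whatever the caller needs.

#include <linux/i2c.h>
#include <linux/mutex.h>
#include "mma9551_core.h"	/* mma9551_read_config_byte(), assumed signature */

struct demo_mma9551_data {		/* illustrative driver state */
	struct i2c_client *client;
	struct mutex mutex;		/* serialises chip access */
};

static int demo_read_config(struct demo_mma9551_data *data,
			    u8 app_id, u16 reg, u8 *val)
{
	int ret;

	/* The core helpers do no locking themselves; the caller must. */
	mutex_lock(&data->mutex);
	ret = mma9551_read_config_byte(data->client, app_id, reg, val);
	mutex_unlock(&data->mutex);

	return ret;
}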
|
/linux-6.12.1/fs/ceph/ |
D | io.c |
     38  * Declare that a buffered read operation is about to start, and ensure
     41  * and holds a shared lock on inode->i_rwsem to ensure that the flag
     83  * Declare that a buffered write operation is about to start, and ensure
    124  * Declare that a direct I/O operation is about to start, and ensure
    127  * and holds a shared lock on inode->i_rwsem to ensure that the flag
|
/linux-6.12.1/tools/testing/selftests/user_events/ |
D | abi_test.c |
    243  /* Ensure kernel clears bit after disable */ in TEST_F()
    249  /* Ensure doesn't change after unreg */ in TEST_F()
    261  /* Ensure it exists after close and disable */ in TEST_F()
    264  /* Ensure we can delete it */ in TEST_F()
    271  /* Ensure it does not exist after invalid flags */ in TEST_F()
    338  /* Ensure bit 1 and 2 are tied together, should not delete yet */ in TEST_F()
    347  /* Ensure COW pages get updated after fork */ in TEST_F()
    373  /* Ensure child doesn't disable parent */ in TEST_F()
|
D | user_events_selftests.h |
     28  /* Ensure tracefs is installed */ in tracefs_enabled()
     36  /* Ensure mounted tracefs */ in tracefs_enabled()
     78  /* Ensure user_events is installed */ in user_events_enabled()
|
/linux-6.12.1/fs/netfs/ |
D | locking.c |
     44  * Declare that a buffered read operation is about to start, and ensure
     47  * and holds a shared lock on inode->i_rwsem to ensure that the flag
     98  * Declare that a buffered read operation is about to start, and ensure
    155  * Declare that a direct I/O operation is about to start, and ensure
    158  * and holds a shared lock on inode->i_rwsem to ensure that the flag
|
/linux-6.12.1/rust/kernel/sync/ |
D | lock.rs |
     23  /// - Implementers must ensure that only one thread/CPU may access the protected data once the lock
     25  /// - Implementers must also ensure that [`relock`] uses the same locking method as the original
     57  /// Callers must ensure that [`Backend::init`] has been previously called.
     72  /// Callers must ensure that `guard_state` comes from a previous call to [`Backend::lock`] (or
     75  // SAFETY: The safety requirements ensure that the lock is initialised. in relock()
    189  /// The caller must ensure that it owns the lock.
|
/linux-6.12.1/arch/arm64/kvm/hyp/nvhe/ |
D | tlb.c |
     34  * - ensure that the page table updates are visible to all in enter_vmid_context()
     83  * TLB fill. For guests, we ensure that the S1 MMU is in enter_vmid_context()
    135  /* Ensure write of the old VMID */ in exit_vmid_context()
    165  * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa()
    195  * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa_nsh()
|
/linux-6.12.1/arch/um/kernel/skas/ |
D | mmu.c |
     19  /* Ensure the stub_data struct covers the allocated area */
     44  * Ensure the new MM is clean and nothing unwanted is mapped. in init_new_context()
     46  * TODO: We should clear the memory up to STUB_START to ensure there is in init_new_context()
|
/linux-6.12.1/tools/testing/selftests/ftrace/test.d/00basic/ |
D | snapshot.tc |
     15  echo "Ensure keep tracing off"
     24  echo "Ensure keep tracing on"
|
/linux-6.12.1/arch/arm/mm/ |
D | cache-v4.S |
     71  * Ensure coherency between the Icache and the Dcache in the
     85  * Ensure coherency between the Icache and the Dcache in the
    100  * Ensure no D cache aliasing occurs, either with itself or
|
D | dma.h |
     13  * Their sole purpose is to ensure that data held in the cache
     24  * Their sole purpose is to ensure that data held in the cache
|
/linux-6.12.1/rust/kernel/ |
D | page.rs |
    162  /// * Callers must ensure that `dst` is valid for writing `len` bytes.
    163  /// * Callers must ensure that this call does not race with a write to the same page that
    184  /// * Callers must ensure that `src` is valid for reading `len` bytes.
    185  /// * Callers must ensure that this call does not race with a read or write to the same page
    205  /// Callers must ensure that this call does not race with a read or write to the same page that
    228  /// Callers must ensure that this call does not race with a read or write to the same page that
|
D | types.rs |
     82  // SAFETY: The safety requirements for this function ensure that the object is still alive, in borrow()
     84  // The safety requirements of `from_foreign` also ensure that the object remains alive for in borrow()
     90  // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous in from_foreign()
    105  // SAFETY: The safety requirements for this function ensure that the object is still alive, in borrow()
    107  // The safety requirements of `from_foreign` also ensure that the object remains alive for in borrow()
    116  // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous in from_foreign()
    328  /// Implementers must ensure that increments to the reference count keep the object alive in memory
    331  /// Implementers must also ensure that all instances are reference-counted. (Otherwise they
    344  /// Callers must ensure that there was a previous matching increment to the reference count,
    387  /// Callers must ensure that the reference count was incremented at least once, and that they
|
/linux-6.12.1/tools/testing/selftests/powerpc/papr_vpd/ |
D | papr_vpd.c |
     57  /* Ensure EOF */ in dev_papr_vpd_get_handle_all()
    289  /* Ensure EOF */ in papr_vpd_system_loc_code()
    312  .description = "ensure EINVAL on unterminated location code",
    316  .description = "ensure EFAULT on bad handle addr",
    332  .description = "ensure re-read yields same results"
|
/linux-6.12.1/tools/testing/selftests/livepatch/test_modules/ |
D | Makefile |
     16  # Ensure that KDIR exists, otherwise skip the compilation
     22  # Ensure that KDIR exists, otherwise skip the clean target
|
/linux-6.12.1/Documentation/networking/device_drivers/cellular/qualcomm/ |
D | rmnet.rst |
     49  ensure 4 byte alignment.
     75  ensure 4 byte alignment.
     99  ensure 4 byte alignment.
    129  ensure 4 byte alignment.
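
Each of these hits documents MAP framing padding the packet so the next header starts on a 4-byte boundary. The pad length is simply the distance to the next multiple of 4, as the small sketch below shows with the kernel's ALIGN() helper; the function name is illustrative.

#include <linux/align.h>

/*
 * MAP packets are padded so the following header is 4-byte aligned:
 * for a payload of 'len' bytes, 0..3 bytes of padding are appended.
 */
static inline unsigned int demo_map_pad_len(unsigned int len)
{
	return ALIGN(len, 4) - len;
}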
|
/linux-6.12.1/drivers/gpu/drm/msm/disp/dpu1/ |
D | dpu_hw_interrupts.c |
    293  /* ensure register writes go through */ in dpu_core_irq()
    320  * under irq_lock and it's the caller's responsibility to ensure that's in dpu_hw_intr_enable_irq_locked()
    344  /* ensure register write goes through */ in dpu_hw_intr_enable_irq_locked()
    376  * under irq_lock and it's the caller's responsibility to ensure that's in dpu_hw_intr_disable_irq_locked()
    396  /* ensure register write goes through */ in dpu_hw_intr_disable_irq_locked()
    423  /* ensure register writes go through */ in dpu_clear_irqs()
    441  /* ensure register writes go through */ in dpu_disable_all_irqs()
    471  /* ensure register writes go through */ in dpu_core_irq_read()
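
The repeated "ensure register writes go through" comments annotate the common MSM display idiom of pairing relaxed MMIO writes with an explicit barrier before returning or dropping the lock. A generic sketch of that idiom; the register offsets and function are invented and not the DPU driver's actual code.

#include <linux/io.h>
#include <linux/types.h>

#define DEMO_INTR_CLEAR	0x000c	/* illustrative register offsets */
#define DEMO_INTR_EN	0x0010

static void demo_enable_irq(void __iomem *base, u32 mask)
{
	/* Clear any stale status, then unmask the interrupt. */
	writel_relaxed(mask, base + DEMO_INTR_CLEAR);
	writel_relaxed(mask, base + DEMO_INTR_EN);

	/* Order the relaxed register writes before whatever follows. */
	wmb();
}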
|