/linux-6.12.1/include/linux/ |
D | nvme-fc-driver.h | 24 * struct nvmefc_ls_req - Request structure passed from the transport 25 * to the LLDD to perform a NVME-FC LS request and obtain 30 * Used by the nvmet-fc transport (controller) to send 33 * Values set by the requestor prior to calling the LLDD ls_req entrypoint: 40 * @timeout: Maximum amount of time, in seconds, to wait for the LS response. 43 * @private: pointer to memory allocated alongside the ls request structure 44 * that is specifically for the LLDD to use while processing the 45 * request. The length of the buffer corresponds to the 46 * lsrqst_priv_sz value specified in the xxx_template supplied 47 * by the LLDD. [all …]
|
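A minimal LLDD-side sketch of how the per-request private area described above is typically used. Only members named in the kerneldoc excerpt (private, timeout, lsrqst_priv_sz) are relied on; my_lldd_ls_state and my_lldd_start_ls are hypothetical names, not taken from the header.

    /* hypothetical per-request LLDD scratch state */
    #include <linux/nvme-fc-driver.h>
    #include <linux/printk.h>

    struct my_lldd_ls_state {
        int retries;
    };

    /* In the LLDD's xxx_template the private area would be sized with
     * something like:  .lsrqst_priv_sz = sizeof(struct my_lldd_ls_state),
     */

    static void my_lldd_start_ls(struct nvmefc_ls_req *lsreq)
    {
        struct my_lldd_ls_state *st = lsreq->private;  /* documented @private */

        st->retries = 0;
        /* arm a timer from the documented @timeout (seconds) */
        pr_debug("starting LS request, timeout %u s\n", lsreq->timeout);
    }
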
/linux-6.12.1/Documentation/scsi/ |
D | st.rst | 4 The SCSI Tape Driver 7 This file contains brief information about the SCSI tape driver. 8 The driver is currently maintained by Kai Mäkisara (email 17 The driver is generic, i.e., it does not contain any code tailored 18 to any specific tape drive. The tape parameters can be specified with 19 one of the following three methods: 21 1. Each user can specify the tape parameters he/she wants to use 24 in a multiuser environment the next user finds the tape parameters in 25 state the previous user left them. 27 2. The system manager (root) can define default values for some tape [all …]
|
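A small userspace sketch of method 1 (per-user ioctls): switching the drive to variable-length blocks with MTSETBLK. /dev/nst0 is only an example node; the ioctl and operation codes come from <sys/mtio.h>.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mtio.h>

    int main(void)
    {
        int fd = open("/dev/nst0", O_RDWR);   /* example no-rewind tape node */
        if (fd < 0) { perror("open"); return 1; }

        /* mt_count == 0 selects variable-length block mode */
        struct mtop op = { .mt_op = MTSETBLK, .mt_count = 0 };
        if (ioctl(fd, MTIOCTOP, &op) < 0)
            perror("MTIOCTOP/MTSETBLK");

        close(fd);
        return 0;
    }
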
/linux-6.12.1/Documentation/admin-guide/pm/ |
D | cpuidle.rst | 19 Modern processors are generally able to enter states in which the execution of 21 memory or executed. Those states are the *idle* states of the processor. 23 Since part of the processor hardware is not used in idle states, entering them 24 generally allows power drawn by the processor to be reduced and, in consequence, 28 the idle states of processors for this purpose. 33 CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that 34 is the part of the kernel responsible for the distribution of computational 35 work in the system). In its view, CPUs are *logical* units. That is, they need 42 First, if the whole processor can only follow one sequence of instructions (one 43 program) at a time, it is a CPU. In that case, if the hardware is asked to [all …]
|
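As a quick way to see which idle states the kernel exposes, a sketch that walks the cpuidle sysfs attributes for CPU 0 (assuming the usual /sys/devices/system/cpu layout; error handling kept minimal).

    #include <stdio.h>

    int main(void)
    {
        for (int i = 0; ; i++) {
            char path[128], name[64];
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", i);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                      /* no more idle states */
            if (fgets(name, sizeof(name), f))
                printf("state%d: %s", i, name);
            fclose(f);
        }
        return 0;
    }
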
D | cpufreq.rst | 15 The Concept of CPU Performance Scaling 18 The majority of modern processors are capable of operating in a number of 21 the higher the clock frequency and the higher the voltage, the more instructions 22 can be retired by the CPU over a unit of time, but also the higher the clock 23 frequency and the higher the voltage, the more energy is consumed over a unit of 24 time (or the more power is drawn) by the CPU in the given P-state. Therefore 25 there is a natural tradeoff between the CPU capacity (the number of instructions 26 that can be executed over a unit of time) and the power drawn by the CPU. 28 In some situations it is desirable or even necessary to run the program as fast 29 as possible and then there is no reason to use any P-states different from the [all …]
|
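A companion sketch for the P-state side: read the current scaling governor and frequency for CPU 0 from the cpufreq sysfs attributes (frequency reported in kHz).

    #include <stdio.h>

    static void show(const char *attr)
    {
        char path[128], buf[64];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cpufreq/%s", attr);
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof(buf), f))
            printf("%s: %s", attr, buf);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show("scaling_governor");
        show("scaling_cur_freq");   /* kHz */
        return 0;
    }
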
/linux-6.12.1/Documentation/admin-guide/device-mapper/ |
D | vdo-design.rst | 7 The dm-vdo (virtual data optimizer) target provides inline deduplication, 13 Permabit was acquired by Red Hat. This document describes the design of 14 dm-vdo. For usage, see vdo.rst in the same directory as this file. 16 Because deduplication rates fall drastically as the block size increases, a 25 The design of dm-vdo is based on the idea that deduplication is a two-part 26 problem. The first is to recognize duplicate data. The second is to avoid 30 maps from logical block addresses to the actual storage location of the 36 Due to the complexity of data optimization, the number of metadata 47 thread will access the portion of the data structure in that zone. 49 request object (the "data_vio") which will be added to a work queue when [all …]
|
D | dm-integrity.rst | 5 The dm-integrity target emulates a block device that has additional 9 writing the sector and the integrity tag must be atomic - i.e. in case of 12 To guarantee write atomicity, the dm-integrity target uses a journal: it 13 writes sector data and integrity tags into a journal, commits the journal 14 and then copies the data and integrity tags to their respective location. 16 The dm-integrity target can be used with the dm-crypt target - in this 17 situation the dm-crypt target creates the integrity data and passes them 18 to the dm-integrity target via bio_integrity_payload attached to the bio. 19 In this mode, the dm-crypt and dm-integrity targets provide authenticated 20 disk encryption - if the attacker modifies the encrypted device, an I/O [all …]
|
/linux-6.12.1/Documentation/crypto/ |
D | userspace-if.rst | 7 The concepts of the kernel crypto API visible to kernel space are fully 8 applicable to the user space interface as well. Therefore, the kernel 9 crypto API high level discussion for the in-kernel use cases applies 12 The major difference, however, is that user space can only act as a 16 The following covers the user space interface exported by the kernel 19 applications that require cryptographic services from the kernel. 21 Some details of the in-kernel kernel crypto API aspects do not apply to 22 user space, however. This includes the difference between synchronous 23 and asynchronous invocations. The user space API call is fully 31 The kernel crypto API is accessible from user space. Currently, the [all …]
|
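A compact example of that user space interface via AF_ALG: hash the string "abc" with SHA-256. It assumes a kernel built with CONFIG_CRYPTO_USER_API_HASH.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
        struct sockaddr_alg sa = {
            .salg_family = AF_ALG,
            .salg_type   = "hash",
            .salg_name   = "sha256",
        };
        unsigned char digest[32];

        int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
        if (tfm < 0 || bind(tfm, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("AF_ALG");
            return 1;
        }
        int op = accept(tfm, NULL, 0);      /* one operation instance */
        write(op, "abc", 3);
        read(op, digest, sizeof(digest));

        for (int i = 0; i < 32; i++)
            printf("%02x", digest[i]);
        putchar('\n');
        close(op);
        close(tfm);
        return 0;
    }
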
/linux-6.12.1/Documentation/input/ |
D | multi-touch-protocol.rst | 13 In order to utilize the full power of the new multi-touch and multi-user 15 objects in direct contact with the device surface, is needed. This 16 document describes the multi-touch (MT) protocol which allows kernel 19 The protocol is divided into two types, depending on the capabilities of the 20 hardware. For devices handling anonymous contacts (type A), the protocol 21 describes how to send the raw data for all contacts to the receiver. For 22 devices capable of tracking identifiable contacts (type B), the protocol 33 events. Only the ABS_MT events are recognized as part of a contact 35 applications, the MT protocol can be implemented on top of the ST protocol 39 input_mt_sync() at the end of each packet. This generates a SYN_MT_REPORT [all …]
|
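A kernel-driver sketch of a type A report for two anonymous contacts, following the input_mt_sync()/SYN_MT_REPORT sequence described above; "dev" stands for the driver's struct input_dev and the coordinates are made up.

    #include <linux/input.h>

    static void report_two_contacts(struct input_dev *dev)
    {
        /* first contact */
        input_report_abs(dev, ABS_MT_POSITION_X, 100);
        input_report_abs(dev, ABS_MT_POSITION_Y, 200);
        input_mt_sync(dev);                 /* emits SYN_MT_REPORT */

        /* second contact */
        input_report_abs(dev, ABS_MT_POSITION_X, 300);
        input_report_abs(dev, ABS_MT_POSITION_Y, 400);
        input_mt_sync(dev);

        input_sync(dev);                    /* end of packet (SYN_REPORT) */
    }
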
/linux-6.12.1/Documentation/filesystems/xfs/ |
D | xfs-delayed-logging-design.rst | 10 This document describes the design and algorithms that the XFS journalling 11 subsystem is based on, 12 so that readers may familiarize 13 themselves with the general concepts of how transaction processing in XFS works. 19 the basic concepts covered, the design of the delayed logging mechanism is 26 XFS uses Write Ahead Logging for ensuring changes to the filesystem metadata 27 are atomic and recoverable. For reasons of space and time efficiency, the 29 physical logging mechanisms to provide the necessary recovery guarantees the 32 Some objects, such as inodes and dquots, are logged in logical format where the 33 details logged are made up of the changes to in-core structures rather than [all …]
|
D | xfs-online-fsck-design.rst | 15 does in the kernel. 21 This document captures the design of the online filesystem check feature for 23 The purpose of this document is threefold: 25 - To help kernel distributors understand exactly what the XFS online fsck 28 - To help people reading the code to familiarize themselves with the relevant 29 concepts and design points before they start digging into the code. 31 - To help developers maintaining the system by capturing the reasons 34 As the online fsck code is merged, the links in this document to topic branches 37 This document is licensed under the terms of the GNU Public License, v2. 38 The primary author is Darrick J. Wong. [all …]
|
/linux-6.12.1/Documentation/power/ |
D | pci.rst | 7 An overview of concepts and the Linux kernel's interfaces related to PCI power 11 This document only covers the aspects of power management specific to PCI 12 devices. For general description of the kernel's interfaces related to device 31 devices into states in which they draw less power (low-power states) at the 35 completely inactive. However, when it is necessary to use the device once 36 again, it has to be put back into the "fully functional" state (full-power 37 state). This may happen when there are some data for the device to handle or 38 as a result of an external event requiring the device to be active, which may 39 be signaled by the device itself. 41 PCI devices may be put into low-power states in two ways, by using the device [all …]
|
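On the administrative side, runtime power management for a PCI device can be enabled from user space through its sysfs power/control attribute; a short sketch, with 0000:00:1f.6 as a placeholder device address.

    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/bus/pci/devices/0000:00:1f.6/power/control";
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return 1; }
        fputs("auto", f);   /* "on" would forbid runtime suspend */
        fclose(f);
        return 0;
    }
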
/linux-6.12.1/Documentation/locking/ |
D | rt-mutex-design.rst | 7 Licensed under the GNU Free Documentation License, Version 1.2 10 This document tries to describe the design of the rtmutex.c implementation. 11 It doesn't describe the reasons why rtmutex.c exists. For that please see 13 that happen without this code, but that is in the concept to understand 14 what the code actually is doing. 16 The goal of this document is to help others understand the priority 17 inheritance (PI) algorithm that is used, as well as reasons for the 18 decisions that were made to implement PI in the manner that was done. 26 most of the time it can't be helped. Anytime a high priority process wants 28 the high priority process must wait until the lower priority process is done [all …]
|
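For a user-space illustration of the same priority-inheritance idea (not the kernel rtmutex API itself), a pthread mutex can be created with the PTHREAD_PRIO_INHERIT protocol; such locks are backed by PI futexes, so a low-priority holder is boosted while a high-priority task waits on it.

    #include <pthread.h>

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);
        /* critical section: a blocked higher-priority thread lends its
         * priority to this holder until the unlock below */
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }
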
/linux-6.12.1/LICENSES/preferred/ |
D | LGPL-2.1 | 7 To use this license in source code, put one of the following SPDX 8 tag/value pairs into a comment according to the placement 9 guidelines in the licensing rules documentation. 26 [This is the first released version of the Lesser GPL. It also counts as 27 the successor of the GNU Library Public License, version 2, hence the 32 The licenses for most software are designed to take away your freedom to 33 share and change it. By contrast, the GNU General Public Licenses are 35 make sure the software is free for all its users. 37 This license, the Lesser General Public License, applies to some specially 38 designated software packages--typically libraries--of the Free Software [all …]
|
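For reference, the corresponding SPDX tag in a C source file looks like this; the identifier variants are the ones this license file declares valid, and headers use /* */ comment style per the kernel's licensing rules.

    // SPDX-License-Identifier: LGPL-2.1-or-later
    /* or, for the no-later-versions variant:  LGPL-2.1-only */
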
/linux-6.12.1/Documentation/networking/ |
D | ppp_generic.rst | 12 The generic PPP driver in linux-2.4 provides an implementation of the 15 * the network interface unit (ppp0 etc.) 16 * the interface to the networking code 19 * the interface to pppd, via a /dev/ppp character device 25 For sending and receiving PPP frames, the generic PPP driver calls on 26 the services of PPP ``channels``. A PPP channel encapsulates a 29 has a very simple interface with the generic PPP code: it merely has 37 be linked to each ppp network interface unit. The generic layer is 45 See include/linux/ppp_channel.h for the declaration of the types and 46 functions used to communicate between the generic PPP layer and PPP [all …]
|
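A hedged userspace sketch of the /dev/ppp interface mentioned above: creating a new PPP interface unit the way pppd does. PPPIOCNEWUNIT is taken from <linux/ppp-ioctl.h>; CAP_NET_ADMIN is required, and the unit goes away again when the file descriptor is closed.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ppp-ioctl.h>

    int main(void)
    {
        int unit = -1;                      /* -1 = let the kernel pick */
        int fd = open("/dev/ppp", O_RDWR);
        if (fd < 0) { perror("/dev/ppp"); return 1; }

        if (ioctl(fd, PPPIOCNEWUNIT, &unit) < 0) {
            perror("PPPIOCNEWUNIT");
            return 1;
        }
        printf("created ppp%d\n", unit);
        close(fd);                          /* interface is torn down here */
        return 0;
    }
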
/linux-6.12.1/Documentation/mm/ |
D | hugetlbfs_reserv.rst | 10 in a task's address space at page fault time if the VMA indicates huge pages 11 are to be used. If no huge page exists at page fault time, the task is sent 14 of huge pages at mmap() time. The idea is that if there were not enough 15 huge pages to cover the mapping, the mmap() would fail. This was first 16 done with a simple check in the code at mmap() time to determine if there 17 were enough free huge pages to cover the mapping. Like most things in the 18 kernel, the code has evolved over time. However, the basic idea was to 20 available for page faults in that mapping. The description below attempts to 21 describe how huge page reserve processing is done in the v4.10 kernel. 30 The Data Structures [all …]
|
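The reserve-at-mmap() behaviour can be seen directly from user space: with MAP_HUGETLB, mmap() itself fails when the huge page pool cannot cover the mapping, instead of the task being killed at fault time. A sketch assuming 2 MiB huge pages configured via /proc/sys/vm/nr_hugepages:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 8 * 2 * 1024 * 1024UL;     /* eight 2 MiB huge pages */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");        /* reservation failed */
            return 1;
        }
        memset(p, 0, len);      /* page faults are backed by the reserve */
        munmap(p, len);
        return 0;
    }
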
/linux-6.12.1/Documentation/admin-guide/ |
D | spkguide.txt | 2 The Speakup User's Guide 11 Copyright (c) 2009, 2010 the Speakup Team 14 under the terms of the GNU Free Documentation License, Version 1.2 or 15 any later version published by the Free Software Foundation; with no 17 copy of the license is included in the section entitled "GNU Free 22 The purpose of this document is to familiarize users with the user 24 for installing or obtaining Speakup, visit the web site at 25 http://linux-speakup.org/. Speakup is a set of patches to the standard 27 a part of a monolithic kernel. These details are beyond the scope of 28 this manual, but the user may need to be aware of the module [all …]
|
/linux-6.12.1/Documentation/driver-api/ |
D | ipmi.rst | 2 The Linux IPMI Driver 7 The Intelligent Platform Management Interface, or IPMI, is a 9 It provides for dynamic discovery of sensors in the system and the 10 ability to monitor the sensors and be informed when the sensor's 17 management software that can use the IPMI system. 19 This document describes how to use the IPMI driver for Linux. If you 20 are not familiar with IPMI itself, see the web site at 27 The Linux IPMI driver is modular, which means you have to pick several 29 these are available in the 'Character Devices' menu then the IPMI 35 The message handler does not provide any user-level interfaces. [all …]
|
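A userspace sketch of the message-handler interface: send a Get Device ID request (NetFn 0x06, command 0x01) to the BMC through /dev/ipmi0. The response would be collected with IPMICTL_RECEIVE_MSG, omitted here for brevity.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ipmi.h>

    int main(void)
    {
        struct ipmi_system_interface_addr bmc = {
            .addr_type = IPMI_SYSTEM_INTERFACE_ADDR_TYPE,
            .channel   = IPMI_BMC_CHANNEL,
        };
        struct ipmi_req req = {
            .addr     = (unsigned char *)&bmc,
            .addr_len = sizeof(bmc),
            .msgid    = 1,
            .msg      = { .netfn = 0x06, .cmd = 0x01 },   /* Get Device ID */
        };

        int fd = open("/dev/ipmi0", O_RDWR);
        if (fd < 0) { perror("/dev/ipmi0"); return 1; }
        if (ioctl(fd, IPMICTL_SEND_COMMAND, &req) < 0)
            perror("IPMICTL_SEND_COMMAND");
        close(fd);
        return 0;
    }
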
/linux-6.12.1/Documentation/core-api/ |
D | debug-objects.rst | 2 The object-lifetime debugging infrastructure 10 debugobjects is a generic infrastructure to track the life time of 11 kernel objects and validate the operations on those. 13 debugobjects is useful to check for the following error patterns: 21 debugobjects is not changing the data structure of the real object so it 28 A kernel subsystem needs to provide a data structure which describes the 29 object type and add calls into the debug code at appropriate places. The 30 data structure to describe the object type needs at minimum the name of 31 the object type. Optional functions can and should be provided to fixup 32 detected problems so the kernel can continue to work and the debug [all …]
|
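A kernel-side sketch of that minimum: a descriptor carrying only a name, with init/activate calls placed in a hypothetical subsystem's own lifetime hooks (my_obj and the helpers are made-up names).

    #include <linux/debugobjects.h>

    struct my_obj { int state; };           /* hypothetical tracked object */

    static const struct debug_obj_descr my_obj_debug_descr = {
        .name = "my_obj",
        /* optional: .fixup_init, .fixup_activate, ... to repair bad states */
    };

    static void my_obj_init(struct my_obj *obj)
    {
        debug_object_init(obj, &my_obj_debug_descr);
        obj->state = 0;
    }

    static void my_obj_start(struct my_obj *obj)
    {
        debug_object_activate(obj, &my_obj_debug_descr);
        /* ... queue the object for work ... */
    }
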
/linux-6.12.1/Documentation/virt/hyperv/ |
D | vpci.rst | 7 that are mapped directly into the VM's physical address space. 8 Guest device drivers can interact directly with the hardware 9 without intermediation by the host hypervisor. This approach 10 provides higher bandwidth access to the device with lower 11 latency, compared with devices that are virtualized by the 12 hypervisor. The device should appear to the guest just as it 14 to the Linux device drivers for the device. 24 and produces the same benefits by allowing a guest device 25 driver to interact directly with the hardware. See Hyper-V 36 it is operating, so the Linux device driver for the device can [all …]
|
/linux-6.12.1/Documentation/userspace-api/media/v4l/ |
D | dev-decoder.rst | 9 A stateful video decoder takes complete chunks of the bytestream (e.g. Annex-B 11 display order. The decoder is expected not to require any additional information 12 from the client to process these buffers. 14 Performing software parsing, processing etc. of the stream in the driver in 16 operations are needed, use of the Stateless Video Decoder Interface (in 22 1. The general V4L2 API rules apply if not specified in this document 25 2. The meaning of words "must", "may", "should", etc. is as per `RFC 36 depending on decoder capabilities and following the general V4L2 guidelines. 41 7. Given an ``OUTPUT`` buffer A, then A' represents a buffer on the ``CAPTURE`` 50 the destination buffer queue; for decoders, the queue of buffers containing [all …]
|
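A sketch of one early step of the stateful decoder flow: telling the OUTPUT (bytestream) queue that it will carry H.264 Annex-B chunks. /dev/video0 is a placeholder for the decoder node; the remaining steps are only noted in a comment.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* placeholder decoder node */
        if (fd < 0) { perror("decoder node"); return 1; }

        struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE };
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
        fmt.fmt.pix_mp.num_planes = 1;
        fmt.fmt.pix_mp.plane_fmt[0].sizeimage = 1024 * 1024;  /* max chunk size */

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
            perror("VIDIOC_S_FMT(OUTPUT)");
        /* next steps: G_FMT/S_FMT on CAPTURE, REQBUFS, STREAMON, ... */
        close(fd);
        return 0;
    }
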
D | dev-subdev.rst | 9 The complex nature of V4L2 devices, where hardware is often made of 11 controlled way, leads to complex V4L2 drivers. The drivers usually 12 reflect the hardware model in software, and model the different hardware 15 V4L2 sub-devices are usually kernel-only objects. If the V4L2 driver 16 implements the media device API, they will automatically inherit from 17 media entities. Applications will be able to enumerate the sub-devices 18 and discover the hardware topology using the media entities, pads and 22 make them directly configurable by applications. When both the 23 sub-device driver and the V4L2 device driver support this, sub-devices 32 - inspect and modify internal data routing between pads of the same entity [all …]
|
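Where a driver does expose sub-devices to applications, they appear as /dev/v4l-subdevN nodes; a sketch that reads the active format on pad 0 of such a node (the node name is a placeholder):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/v4l2-subdev.h>

    int main(void)
    {
        int fd = open("/dev/v4l-subdev0", O_RDWR);
        if (fd < 0) { perror("subdev"); return 1; }

        struct v4l2_subdev_format f = {
            .pad   = 0,
            .which = V4L2_SUBDEV_FORMAT_ACTIVE,
        };
        if (ioctl(fd, VIDIOC_SUBDEV_G_FMT, &f) < 0)
            perror("VIDIOC_SUBDEV_G_FMT");
        else
            printf("%ux%u, mbus code 0x%04x\n",
                   f.format.width, f.format.height, f.format.code);
        close(fd);
        return 0;
    }
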
/linux-6.12.1/tools/perf/pmu-events/arch/x86/silvermont/ |
D | pipeline.json | 3 "BriefDescription": "Counts the number of branch instructions retired...", 8 …the number of any branch instructions retired. Branch prediction predicts the branch target and e… 12 "BriefDescription": "Counts the number of taken branch instructions retired", 17 …the number of all taken branch instructions retired. Branch prediction predicts the branch target… 22 "BriefDescription": "Counts the number of near CALL branch instructions retired", 27 …the number of near CALL branch instructions retired. Branch prediction predicts the branch target… 32 "BriefDescription": "Counts the number of far branch instructions retired", 37 …the number of far branch instructions retired. Branch prediction predicts the branch target and e… 42 "BriefDescription": "Counts the number of near indirect CALL branch instructions retired", 47 …the number of near indirect CALL branch instructions retired. Branch prediction predicts the bran… [all …]
|
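These JSON entries describe the hardware events perf uses; from user space the same kind of counter can be read through perf_event_open(). A sketch that counts retired branch instructions around a small loop using the generic PERF_COUNT_HW_BRANCH_INSTRUCTIONS event, which on Intel parts is generally backed by a BR_INST_RETIRED-style event like those listed here:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile long sink = 0;
        for (int i = 0; i < 1000000; i++)   /* the loop branch is counted */
            sink += i;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        long long count;
        read(fd, &count, sizeof(count));
        printf("branch instructions retired: %lld\n", count);
        close(fd);
        return 0;
    }
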
/linux-6.12.1/Documentation/admin-guide/mm/ |
D | numa_memory_policy.rst | 8 In the Linux kernel, "memory policy" determines from which node the kernel will 11 The current memory policy support was added to Linux 2.6 around May 2004. This 12 document attempts to describe the concepts and APIs of the 2.6 memory policy 17 which is an administrative mechanism for restricting the nodes from which 20 both cpusets and policies are applied to a task, the restrictions of the cpuset 30 The Linux kernel supports _scopes_ of memory policy, described here from 34 this policy is "hard coded" into the kernel. It is the policy 36 one of the more specific policy scopes discussed below. When 37 the system is "up and running", the system default policy will 39 up, the system default policy will be set to interleave [all …]
|
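A userspace sketch of setting a task-scope policy: interleave this task's future allocations across nodes 0 and 1 with set_mempolicy(), using libnuma's <numaif.h> wrapper (link with -lnuma; assumes a NUMA machine that actually has those nodes).

    #include <stdio.h>
    #include <numaif.h>

    int main(void)
    {
        unsigned long nodemask = (1UL << 0) | (1UL << 1);   /* nodes 0 and 1 */

        /* maxnode is the number of bits in the mask; a full word is enough */
        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                          8 * sizeof(nodemask)) != 0) {
            perror("set_mempolicy");
            return 1;
        }
        /* subsequent anonymous page allocations for this task now
         * interleave across nodes 0 and 1 */
        return 0;
    }
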
D | userfaultfd.rst | 8 Userfaults allow the implementation of on-demand paging from userland 10 memory page faults, something otherwise only the kernel code could do. 13 of the ``PROT_NONE+SIGSEGV`` trick. 19 regions of virtual memory with it. Then, any page faults which occur within the 20 region(s) result in a message being delivered to the userfaultfd, notifying 21 userspace of the fault. 23 The ``userfaultfd`` (aside from registering and unregistering virtual 26 1) ``read/POLLIN`` protocol to notify a userland thread of the faults 29 2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions 30 registered in the ``userfaultfd`` that allows userland to efficiently [all …]
|
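A condensed sketch of the two mechanisms listed above: the UFFDIO_* setup ioctls and the read() notification protocol. One thread touches a registered page, the main thread reads the fault message and resolves it with UFFDIO_COPY. Error handling is trimmed, build with -pthread, and recent kernels may require vm.unprivileged_userfaultfd=1 or suitable capabilities.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <linux/userfaultfd.h>

    static long page_size;

    /* worker: the first read of the registered page triggers a userfault */
    static void *toucher(void *area)
    {
        return (void *)(long)*(volatile char *)area;
    }

    int main(void)
    {
        page_size = sysconf(_SC_PAGESIZE);

        int uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
        if (uffd < 0) { perror("userfaultfd"); return 1; }

        struct uffdio_api api = { .api = UFFD_API };
        ioctl(uffd, UFFDIO_API, &api);              /* mandatory handshake */

        void *area = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)area, .len = page_size },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        /* a page of content to resolve the fault with */
        void *src = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(src, 'x', page_size);

        pthread_t t;
        pthread_create(&t, NULL, toucher, area);

        /* 1) read/POLLIN protocol: wait for the fault notification */
        struct uffd_msg msg;
        read(uffd, &msg, sizeof(msg));
        if (msg.event == UFFD_EVENT_PAGEFAULT) {
            /* 2) UFFDIO_* ioctl: resolve it by copying in a page */
            struct uffdio_copy copy = {
                .dst = msg.arg.pagefault.address &
                       ~(unsigned long)(page_size - 1),
                .src = (unsigned long)src,
                .len = page_size,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);
        }
        pthread_join(t, NULL);
        printf("faulting page now reads '%c'\n", *(char *)area);
        return 0;
    }
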
/linux-6.12.1/Documentation/timers/ |
D | highres.rst | 5 Further information can be found in the paper of the OLS 2006 talk "hrtimers 6 and beyond". The paper is part of the OLS 2006 Proceedings Volume 1, which can 7 be found on the OLS website: 10 The slides to this talk are available from: 13 The slides contain five figures (pages 2, 15, 18, 20, 22), which illustrate the 14 changes in the time(r) related Linux subsystems. Figure #1 (p. 2) shows the 15 design of the Linux time(r) system before hrtimers and other building blocks 18 Note: the paper and the slides are talking about "clock event source", while we 19 switched to the name "clock event devices" in the meantime. 21 The design contains the following basic building blocks: [all …]
|
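A kernel-side sketch of the hrtimer API these building blocks implement: a one-shot, relative CLOCK_MONOTONIC timer (a real module would also call hrtimer_cancel() on teardown).

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>
    #include <linux/printk.h>

    static struct hrtimer demo_timer;

    static enum hrtimer_restart demo_timer_fn(struct hrtimer *t)
    {
        pr_info("hrtimer fired\n");
        return HRTIMER_NORESTART;           /* one-shot */
    }

    static void demo_start(void)
    {
        hrtimer_init(&demo_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        demo_timer.function = demo_timer_fn;
        hrtimer_start(&demo_timer, ms_to_ktime(10), HRTIMER_MODE_REL);
    }
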