============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
to the API (and actual examples), see Documentation/core-api/dma-api-howto.rst.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms), you
should only use the API described in Part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

	void *
	dma_alloc_coherent(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

	void
	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
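
As an illustration, here is a minimal allocate/use/free sequence.  This
is only a sketch; the device pointer, buffer size and error handling
will depend on your driver::

	void *cpu_addr;
	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* program dma_handle into the device; access the buffer via cpu_addr */
	...

	dma_free_coherent(dev, SZ_4K, cpu_addr, dma_handle);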


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


::

	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t boundary);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for boundary; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

	void *
	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
		        dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

	void *
	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		       dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values:  an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

	void
	dma_pool_free(struct dma_pool *pool, void *vaddr,
		      dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

	void
	dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
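
Putting the pool calls together, a typical lifecycle looks like this
sketch (the pool name, descriptor size and alignment are made up)::

	struct dma_pool *pool;
	void *vaddr;
	dma_addr_t dma;

	/* 64-byte descriptors, 16-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
	if (!pool)
		return -ENOMEM;

	vaddr = dma_pool_zalloc(pool, GFP_KERNEL, &dma);
	if (vaddr) {
		/* hand dma to the device, fill the descriptor via vaddr */
		...
		dma_pool_free(pool, vaddr, dma);
	}

	dma_pool_destroy(pool);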


Part Ic - DMA addressing limitations
------------------------------------

::

	int
	dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.
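
A common pattern is to try a wide mask first and fall back to a
narrower one; for example (assuming a device that prefers 64-bit
addressing but can cope with 32-bit)::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return -EIO;	/* neither mask is usable */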

::

	int
	dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	u64
	dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
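
As a sketch, a driver with a compact 32-bit descriptor format might do
the following (the small_descriptors policy flag is hypothetical)::

	u64 required = dma_get_required_mask(dev);
	bool small_descriptors = required <= DMA_BIT_MASK(32);

	if (dma_set_mask(dev, required))
		return -EIO;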

::

	size_t
	dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

::

	size_t
	dma_opt_mapping_size(struct device *dev);

Returns the maximum optimal size of a mapping for the device.

Mapping larger buffers may take much longer in certain scenarios. In
addition, for high-rate short-lived streaming mappings, the upfront time
spent on the mapping may account for an appreciable part of the total
request lifetime. As such, if splitting larger requests incurs no
significant performance penalty, then device drivers are advised to
limit the total length of DMA streaming mappings to the returned value.
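
For instance, a driver might cap its per-request transfer length
accordingly (len is a hypothetical request length)::

	size_t max_len = dma_opt_mapping_size(dev);
	size_t chunk;

	/* split the request so no single mapping exceeds max_len */
	chunk = min_t(size_t, len, max_len);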

::

	bool
	dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership.  Returns %false if those calls can be skipped.

::

	unsigned long
	dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any DMA address
segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

	dma_addr_t
	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known
======================= =============================================

.. note::

	Not all memory regions in a machine can be mapped by this API.
	Further, contiguous kernel virtual space may not be contiguous as
	physical memory.  Since this API does not provide any scatter/gather
	capability, it will fail if the user tries to map a non-physically
	contiguous piece of memory.  For this reason, memory to be mapped by
	this API should be obtained from sources which guarantee it to be
	physically contiguous (like kmalloc).

	Further, the DMA address of the memory must be within the
	dma_mask of the device (the dma_mask is a bit mask of the
	addressable region for the device, i.e., if the DMA address of
	the memory ANDed with the dma_mask is still equal to the DMA
	address, then the device can perform DMA to the memory).  To
	ensure that the memory allocated by kmalloc is within the dma_mask,
	the driver may specify various platform-dependent flags to restrict
	the DMA address range of the allocation (e.g., on x86, GFP_DMA
	guarantees to be within the first 16MB of available DMA addresses,
	as required by ISA devices).

	Note also that the above constraints on physical contiguity and
	dma_mask may not apply if the platform has an IOMMU (a device which
	maps an I/O DMA address to a physical memory address).  However, to be
	portable, device driver writers may *not* assume that such an IOMMU
	exists.

.. warning::

	Memory coherency operates at a granularity called the cache
	line width.  In order for memory mapped by this API to operate
	correctly, the mapped region must begin exactly on a cache line
	boundary and end exactly on one (to prevent two separately mapped
	regions from sharing a single cache line).  Since the cache line size
	may not be known at compile time, the API will not enforce this
	requirement.  Therefore, it is recommended that driver writers who
	don't take special care to determine the cache line size at run time
	only map virtual regions that begin and end on page boundaries (which
	are guaranteed also to be cache line boundaries).

	DMA_TO_DEVICE synchronisation must be done after the last modification
	of the memory region by the software and before it is handed off to
	the device.  Once this primitive is used, memory covered by this
	primitive should be treated as read-only by the device.  If the device
	may write to it at any point, it should be DMA_BIDIRECTIONAL (see
	below).

	DMA_FROM_DEVICE synchronisation must be done before the driver
	accesses data that may be changed by the device.  This memory should
	be treated as read-only by the driver.  If the driver needs to write
	to it at any point, it should be DMA_BIDIRECTIONAL (see below).

	DMA_BIDIRECTIONAL requires special handling: it means that the driver
	isn't sure if the memory was modified before being handed off to the
	device and also isn't sure if the device will also modify it.  Thus,
	you must always sync bidirectional memory twice: once before the
	memory is handed off to the device (to make sure all memory changes
	are flushed from the processor) and once before the data may be
	accessed after being used by the device (to make sure any processor
	cache lines are updated with data that the device may have changed).

::

	void
	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
			 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in to (and returned by) the mapping API.

::

	dma_addr_t
	dma_map_page(struct device *dev, struct page *page,
		     unsigned long offset, size_t size,
		     enum dma_data_direction direction)

	void
	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
		       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

	dma_addr_t
	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
			 enum dma_data_direction dir, unsigned long attrs)

	void
	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
			   enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

	int
	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
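
A typical map/check/unmap sequence looks like this sketch (buf and len
are hypothetical)::

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle)) {
		/* back off, defer, or fail the request */
		return -ENOMEM;
	}

	/* ... hand dma_handle to the device and start the transfer ... */

	dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);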

::

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

	void
	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
				size_t size,
				enum dma_data_direction direction)

	void
	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
				   size_t size,
				   enum dma_data_direction direction)

	void
	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			    int nents,
			    enum dma_data_direction direction)

	void
	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
			       int nents,
			       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the sg mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().
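
For a mapping that is reused across transfers, the handoff looks like
this sketch (dma_handle and len as used with dma_map_single())::

	/* buffer was mapped once with DMA_FROM_DEVICE */
	dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);
	/* ... the CPU may now read the data the device wrote ... */

	dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);
	/* ... the device may write into the buffer again ... */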

::

	dma_addr_t
	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
			     enum dma_data_direction dir,
			     unsigned long attrs)

	void
	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs)

	int
	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			 int nents, enum dma_data_direction dir,
			 unsigned long attrs)

	void
	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
			   int nents, enum dma_data_direction dir,
			   unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in
Documentation/core-api/dma-attributes.rst.

If dma_attrs is 0, the semantics of each of these functions
are identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

	#include <linux/dma-mapping.h>
	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
	 * documented in Documentation/core-api/dma-attributes.rst */
	...

		unsigned long attrs = 0;

		attrs |= DMA_ATTR_FOO;
		....
		n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
		....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

	void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
				     int nents, enum dma_data_direction dir,
				     unsigned long attrs)
	{
		....
		if (attrs & DMA_ATTR_FOO)
			/* twizzle the frobnozzle */
		....
	}


Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow the caller to allocate pages that are guaranteed to be DMA
addressable by the passed in device, but which need explicit management of
memory ownership for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

	struct page *
	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
			enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the first struct page for the region, or NULL if the
allocation failed. The resulting struct page can be used for everything a
struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the
device; see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.
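
A minimal sketch of the allocate/sync/free cycle (the size is
arbitrary; error handling is elided)::

	struct page *page;
	dma_addr_t dma_handle;

	page = dma_alloc_pages(dev, SZ_64K, &dma_handle, DMA_FROM_DEVICE,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* ... let the device write into the buffer ... */
	dma_sync_single_for_cpu(dev, dma_handle, SZ_64K, DMA_FROM_DEVICE);
	/* ... the CPU may now read via page_address(page) ... */

	dma_free_pages(dev, SZ_64K, page, dma_handle, DMA_FROM_DEVICE);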

::

	void
	dma_free_pages(struct device *dev, size_t size, struct page *page,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_pages().  page must be the pointer returned by dma_alloc_pages().

::

	int
	dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
		       size_t size, struct page *page)

Map an allocation returned from dma_alloc_pages() into a user address space.
dev and size must be the same as those passed into dma_alloc_pages().
page must be the pointer returned by dma_alloc_pages().

::

	void *
	dma_alloc_noncoherent(struct device *dev, size_t size,
			dma_addr_t *dma_handle, enum dma_data_direction dir,
			gfp_t gfp)

This routine is a convenient wrapper around dma_alloc_pages() that returns the
kernel virtual address for the allocated memory instead of the page structure.

::

	void
	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

::

	struct sg_table *
	dma_alloc_noncontiguous(struct device *dev, size_t size,
				enum dma_data_direction dir, gfp_t gfp,
				unsigned long attrs);

This routine allocates <size> bytes of non-coherent and possibly non-contiguous
memory.  It returns a pointer to a struct sg_table that describes the allocated
and DMA mapped memory, or NULL if the allocation failed. The resulting memory
can be used for everything a struct page mapped into a scatterlist is suitable
for.

The returned sg_table is guaranteed to have one single DMA mapped segment as
indicated by sgt->nents, but it might have multiple CPU side segments as
indicated by sgt->orig_nents.

The dir parameter specifies whether data is read and/or written by the
device; see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

The attrs argument must be either 0 or DMA_ATTR_ALLOC_SINGLE_PAGES.

Before giving the memory to the device, dma_sync_sgtable_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_noncontiguous(struct device *dev, size_t size,
			       struct sg_table *sgt,
			       enum dma_data_direction dir)

Free memory previously allocated using dma_alloc_noncontiguous().  dev, size,
and dir must all be the same as those passed into dma_alloc_noncontiguous().
sgt must be the pointer returned by dma_alloc_noncontiguous().

::

	void *
	dma_vmap_noncontiguous(struct device *dev, size_t size,
		struct sg_table *sgt)

Return a contiguous kernel mapping for an allocation returned from
dma_alloc_noncontiguous().  dev and size must be the same as those passed into
dma_alloc_noncontiguous().  sgt must be the pointer returned by
dma_alloc_noncontiguous().

Once a non-contiguous allocation is mapped using this function, the
flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used
to manage the coherency between the kernel mapping, the device and user space
mappings (if any).

::

	void
	dma_vunmap_noncontiguous(struct device *dev, void *vaddr)

Unmap a kernel mapping returned by dma_vmap_noncontiguous().  dev must be the
same as the one passed into dma_alloc_noncontiguous().  vaddr must be the
pointer returned by dma_vmap_noncontiguous().


::

	int
	dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
			       size_t size, struct sg_table *sgt)

Map an allocation returned from dma_alloc_noncontiguous() into a user address
space.  dev and size must be the same as those passed into
dma_alloc_noncontiguous().  sgt must be the pointer returned by
dma_alloc_noncontiguous().
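
Putting these calls together, a sketch of a non-contiguous allocation
used through a kernel mapping (size arbitrary, error handling elided)::

	struct sg_table *sgt;
	void *vaddr;

	sgt = dma_alloc_noncontiguous(dev, SZ_1M, DMA_BIDIRECTIONAL,
				      GFP_KERNEL, 0);
	if (!sgt)
		return -ENOMEM;

	vaddr = dma_vmap_noncontiguous(dev, SZ_1M, sgt);

	dma_sync_sgtable_for_device(dev, sgt, DMA_BIDIRECTIONAL);
	/* ... device uses the single segment at sg_dma_address(sgt->sgl) ... */
	dma_sync_sgtable_for_cpu(dev, sgt, DMA_BIDIRECTIONAL);

	dma_vunmap_noncontiguous(dev, vaddr);
	dma_free_noncontiguous(dev, SZ_1M, sgt, DMA_BIDIRECTIONAL);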

::

	int
	dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

	This API may return a number *larger* than the actual cache
	line, but it will guarantee that one or more cache lines fit exactly
	into the width returned by this call.  It will also always be a power
	of two for easy alignment.
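
For example, a driver could round its buffer sizes up to this value so
that partial syncs never touch a cache line shared with other data
(len is hypothetical)::

	size_t aligned_len = ALIGN(len, dma_get_cache_alignment());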


Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints. DMA addresses must be
released with the corresponding function with the same size for example. With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints. In the worst case such a violation can
result in data corruption, up to and including destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this code
detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this::

	WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
		check_unmap+0x203/0x490()
	Hardware name:
	forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
		function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
	Modules linked in: nfsd exportfs bridge stp llc r8169
	Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
	Call Trace:
	<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
	[<ffffffff80647b70>] _spin_unlock+0x10/0x30
	[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
	[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
	[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
	[<ffffffff80252f96>] queue_work+0x56/0x60
	[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
	[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
	[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
	[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
	[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
	[<ffffffff803c7ea3>] check_unmap+0x203/0x490
	[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
	[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
	[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
	[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
	[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
	[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
	[<ffffffff8020c093>] ret_from_intr+0x0/0xa
	<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default, only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below for
details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors		This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

dma-api/disabled		This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

dma-api/dump			This read-only file contains current DMA
				mappings.

dma-api/error_count		This file is read-only and shows the total
				number of errors found.

dma-api/num_errors		The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

dma-api/min_free_entries	This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will attempt to increase
				nr_total_entries to compensate.

dma-api/num_free_entries	The current number of free dma_debug_entries
				in the allocator.

dma-api/nr_total_entries	The total number of dma_debug_entries in the
				allocator, both free and used.

dma-api/driver_filter		You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand. 65536
entries are preallocated at boot - if this is too low for you boot with
'dma_debug_entries=<your_desired_number>' to override the default. Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested. The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated. This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.

::

	void
	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

This dma-debug interface helps debug drivers that fail to check for DMA
mapping errors on addresses returned by dma_map_single() and dma_map_page().
It clears a flag set by debug_dma_map_page() to indicate that
dma_mapping_error() has been called by the driver. When the driver does the
unmap, debug_dma_unmap() checks the flag and, if it is still set, prints a
warning message that includes the call trace leading up to the unmap. This
interface can be called from dma_mapping_error() routines to enable DMA
mapping error check debugging.