
.. SPDX-License-Identifier: GPL-2.0
Intel Image Processing Unit 3 (IPU3) Imaging Unit (ImgU) driver
===============================================================

This file documents the Intel IPU3 (3rd generation Image Processing Unit)

ImgU). The CIO2 driver is available as drivers/media/pci/intel/ipu3/ipu3-cio2*

Both of the drivers implement V4L2, Media Controller and V4L2 sub-device
MIPI CSI-2 interfaces through V4L2 sub-device sensor drivers.

interface to the user space. There is a video node for each CSI-2 receiver,

The CIO2 contains four independent capture channels, each with its own MIPI
CSI-2 receiver and DMA engine. Each channel is modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node and has two pads:

.. flat-table::
    :header-rows: 1

    * - Pad
      - Direction
      - Purpose

    * - 0
      - sink
      - MIPI CSI-2 input, connected to the sensor subdev

    * - 1
      - source
      - Raw video capture, connected to the V4L2 video interface

------------------------------------

Image processing using IPU3 ImgU requires tools such as raw2pnm [#f1]_, and

- The IPU3 CSI2 receiver outputs the captured frames from the sensor in packed

- Multiple video nodes have to be operated simultaneously.

Let us take the example of the ov5670 sensor connected to CSI2 port 0, for a
2592x1944 image capture.

Using the media controller APIs, the ov5670 sensor is configured to send

.. code-block:: none

    # and that ov5670 sensor is connected to i2c bus 10 with address 0x36
    export SDEV=$(media-ctl -d $MDEV -e "ov5670 10-0036")

    # Establish the link for the media devices using media-ctl [#f3]_
    media-ctl -d $MDEV -l "ov5670:0 -> ipu3-csi2 0:0[1]"

    media-ctl -d $MDEV -V "ov5670:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:1 [fmt:SGRBG10/2592x1944]"

Once the media pipeline is configured, the desired sensor-specific settings

.. code-block:: none

    yavta -w 0x009e0903 444 $SDEV
    yavta -w 0x009e0913 1024 $SDEV
    yavta -w 0x009e0911 2046 $SDEV
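
The numeric control IDs passed to yavta above follow the standard V4L2 control
ID layout: a control-class prefix in the upper bits plus a per-control offset.
As a quick sanity check, the sketch below (the helper name is made up for this
example, mirroring the kernel's V4L2_CTRL_ID2WHICH() macro) shows that all
three IDs belong to the image source control class:

```python
# Decode the V4L2 control IDs used in the yavta commands above.
# A control ID = control-class prefix (upper bits) + per-control offset.
V4L2_CTRL_CLASS_IMAGE_SOURCE = 0x009e0000  # from linux/videodev2.h

def ctrl_class(ctrl_id: int) -> int:
    """Extract the control class, like the kernel's V4L2_CTRL_ID2WHICH()."""
    return ctrl_id & 0x0fff0000

# All three sensor controls set above live in the image source class.
for cid in (0x009e0903, 0x009e0913, 0x009e0911):
    assert ctrl_class(cid) == V4L2_CTRL_CLASS_IMAGE_SOURCE
```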

Once the desired sensor settings are applied, frame captures can be done as
below.

.. code-block:: none

    yavta --data-prefix -u -c10 -n5 -I -s2592x1944 --file=/tmp/frame-#.bin \
          -f IPU3_SGRBG10 $(media-ctl -d $MDEV -e "ipu3-cio2 0")

The captured frames are available as /tmp/frame-#.bin files.
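
The captured files hold the IPU3-specific packed raw Bayer layout. As an
illustration only, the sketch below packs and unpacks the layout described in
:ref:`v4l2-pix-fmt-ipu3-sbggr10` (25 10-bit samples packed LSB-first into each
32-byte block); the helper names are made up for this example.

```python
# Illustrative pack/unpack of the IPU3 10-bit packed raw Bayer layout:
# every 32-byte block carries 25 pixels, 10 bits each, LSB-first.

def unpack_block(block: bytes) -> list:
    """Unpack one 32-byte block into 25 10-bit pixel values."""
    assert len(block) == 32
    word = int.from_bytes(block, "little")
    return [(word >> (10 * i)) & 0x3FF for i in range(25)]

def pack_block(pixels) -> bytes:
    """Pack 25 10-bit pixel values into one 32-byte block."""
    assert len(pixels) == 25
    word = 0
    for i, p in enumerate(pixels):
        word |= (p & 0x3FF) << (10 * i)
    return word.to_bytes(32, "little")

# Round-trip check with an arbitrary ramp of 10-bit values.
samples = [(i * 41) % 1024 for i in range(25)]
assert unpack_block(pack_block(samples)) == samples
```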

The ImgU contains two independent pipes, each modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node.

.. flat-table::
    :header-rows: 1

    * - Pad
      - Direction
      - Purpose

    * - 0
      - sink
      - Input raw video stream

    * - 1
      - sink
      - Processing parameters

    * - 2
      - source
      - Output processed video stream

    * - 3
      - source
      - Output viewfinder video stream

    * - 4
      - source
      - 3A statistics

----------------

With ImgU, once the input video node ("ipu3-imgu 0/1":0, in
<entity>:<pad-number> format) is queued with a buffer (in packed raw Bayer

video nodes should be enabled for IPU3 to start image processing.

----------------------------------------

:ref:`v4l2-pix-fmt-ipu3-sbggr10`.

Only the multi-planar API is supported. More details can be found at
:ref:`planar-apis`.

---------------------

to configure how the ImgU algorithms process the image.

:ref:`v4l2-meta-fmt-params`.

------------------------

------------------------------------------

in time-sharing with a single input frame of data. Each pipe can run in one of
two modes - "VIDEO" or "STILL". "VIDEO" mode is commonly used for video frame
capture,

drivers/staging/media/ipu3/include/uapi/intel-ipu3.h) to query and set the

enabled and buffers need to be queued, the statistics and the view-finder queues

Processing the image in raw Bayer format
----------------------------------------

Configuring ImgU V4L2 subdev for image processing

Let us take the "ipu3-imgu 0" subdev as an example.

.. code-block:: none

    media-ctl -d $MDEV -r
    media-ctl -d $MDEV -l '"ipu3-imgu 0 input":0 -> "ipu3-imgu 0":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":2 -> "ipu3-imgu 0 output":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":3 -> "ipu3-imgu 0 viewfinder":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":4 -> "ipu3-imgu 0 3a stat":0[1]'

.. code-block:: none

    yavta -w "0x009819A1 1" /dev/v4l-subdev7

There is also a block which can change the frame resolution - the YUV Scaler; it is

processed image output to the DDR memory.

.. kernel-figure:: ipu3_rcb.svg
    :alt: ipu3 resolution blocks image

The Input Feeder gets the Bayer frame data from the sensor; it can enable cropping

The Bayer Down Scaler is capable of performing image scaling in the Bayer domain; the

and image filtering. It needs some extra filter and envelope padding pixels to

The YUV Scaler is similar to the BDS, but it mainly performs image down scaling in

intermediate resolutions can be generated by a specific tool -

https://github.com/intel/intel-ipu3-pipecfg

https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/master

Under baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/files/gcss
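
As a rough illustration of the arithmetic behind these resolution
configurations, the sketch below derives the per-axis pixels consumed between
the example 2592x1944 capture and a 2560x1920 main output. This is illustrative
only: `padding_per_axis` is a made-up helper, it assumes no Bayer downscaling,
and real BDS factors are constrained by the hardware.

```python
# Illustrative only: with a Bayer Down Scaler factor of 1, the difference
# between the input frame and the processed output is consumed as
# filter/envelope padding around the active image.

def padding_per_axis(in_w, in_h, out_w, out_h, bds_factor=1.0):
    # Resolution after Bayer downscaling (identity when bds_factor is 1).
    bds_w, bds_h = int(in_w / bds_factor), int(in_h / bds_factor)
    return bds_w - out_w, bds_h - out_h

# The example pipeline: 2592x1944 capture, 2560x1920 main output.
print(padding_per_axis(2592, 1944, 2560, 1920))  # -> (32, 24)
```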

The following steps prepare the ImgU pipeline for the image processing.

For an image captured with 2592x1944 [#f4]_ resolution, with desired output

the desired results for the main output image and the viewfinder output, in NV12

.. code-block:: none

    v4l2n --pipe=4 --load=/tmp/frame-#.bin --open=/dev/video4 \
      --fmt=type:VIDEO_OUTPUT_MPLANE,width=2592,height=1944,pixelformat=0x47337069 \
      --reqbufs=type:VIDEO_OUTPUT_MPLANE,count:1 --pipe=1 \
      --output=/tmp/frames.out --open=/dev/video5 \
      --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
      --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=2 \
      --output=/tmp/frames.vf --open=/dev/video6 \
      --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
      --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=3 --open=/dev/video7 \
      --output=/tmp/frames.3A --fmt=type:META_CAPTURE,? \
      --reqbufs=count:1,type:META_CAPTURE --pipe=1,2,3,4 --stream=5

.. code-block:: none

    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.out -f NV12 /dev/video5 & \
    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.vf -f NV12 /dev/video6 & \
    yavta --data-prefix -Bmeta-capture -c10 -n5 -I \
          --file=frame-#.3a /dev/video7 & \
    yavta --data-prefix -Boutput-mplane -c10 -n5 -I -s2592x1944 \
          --file=/tmp/frame-in.cio2 -f IPU3_SGRBG10 /dev/video4

Converting the raw Bayer image into YUV domain
----------------------------------------------

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.out /tmp/frames.out.ppm

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.vf /tmp/frames.vf.ppm
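
The NV12 layout that raw2pnm consumes here is a full-resolution Y (luma) plane
followed by an interleaved CbCr plane at half resolution in both axes. The
sketch below illustrates that layout; the helper names are made up for this
example.

```python
# Illustrative NV12 frame parser: a width*height Y (luma) plane followed by
# an interleaved CbCr plane at half resolution in both axes.

def nv12_plane_sizes(width, height):
    """Return (luma_bytes, chroma_bytes) for an NV12 frame."""
    return width * height, (width // 2) * (height // 2) * 2

def split_nv12(frame: bytes, width: int, height: int):
    """Split a raw NV12 frame into its Y plane and interleaved CbCr plane."""
    y_size, uv_size = nv12_plane_sizes(width, height)
    assert len(frame) == y_size + uv_size
    return frame[:y_size], frame[y_size:]

# A 4x2 toy frame: 8 luma bytes plus 2x1 interleaved CbCr pairs (4 bytes).
frame = bytes(range(8)) + bytes([128, 64, 130, 60])
y, uv = split_nv12(frame, 4, 2)
assert len(y) == 8 and len(uv) == 4
```

For the 2560x1920 outputs above this gives a 4915200-byte luma plane and a
2457600-byte chroma plane per frame.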

https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/

The IPU3 pipeline has a number of image processing stages, each of which takes a

.. kernel-render:: DOT

    { rank=same; a -> b -> c -> d -> e -> f -> g -> h -> i }
    { rank=same; j -> k -> l -> m -> n -> o -> p -> q -> s -> t}

    a -> j [style=invis, weight=10]
    i -> j
    q -> r

Optical Black Correction Optical Black Correction block subtracts a pre-defined

image quality.

address non-linear sensor effects. The lookup table

non-uniformity of the pixel response due to optical

BNR Bayer noise reduction block removes image noise by

DM Demosaicing converts raw sensor data in Bayer format

Color Correction Color Correction algorithm transforms sensor-specific color

basic non-linear tone mapping correction that is

noise reduction algorithm used to improve image

captured image. Two related structs are being defined,

Image enhancement filter directed

Y-tone mapping

.. [#f5] drivers/staging/media/ipu3/include/uapi/intel-ipu3.h

.. [#f3] http://git.ideasonboard.org/?p=media-ctl.git;a=summary