Imaging solutions with Free Software & Open Hardware

01/11/17 [elphel-web-393][master] by Oleg Dzhimiev: more links

Elphel GIT logs - Wed, 01/11/2017 - 19:32
Oleg Dzhimiev committed changes to the Elphel git project :
more links

Tmp manual

Wiki Recent Changes - Wed, 01/11/2017 - 12:37

Record:

← Older revision | Revision as of 19:37, 11 January 2017

Line 139, a link to the extraction guide was added; the list now reads:

** recording to a partition with a file system - up to 80MB/s
** (default) faster recording to a partition without a file system (raw partition), avoiding OS calls - up to 220MB/s
* To extract data from a raw partition use '''dd''' or [https://github.com/Elphel/elphel-tools-x393 these scripts] to get the data and split it into images. Follow [[Extracting_images_from_raw_partition | this link]] for details.
* Can record to an mmc partition or usb.
* <b><font size='3' color='red'>[[Using_camogm_with_Elphel393_camera|More info]]</font></b>

Mikhail

Camogmgui

Wiki Recent Changes - Wed, 01/11/2017 - 12:34

Web-based Graphical User Interface for camogm:

← Older revision | Revision as of 19:34, 11 January 2017

Line 1, a link to the extraction guide was added; the section now reads:

==Web-based Graphical User Interface for camogm==
The web interface for ''camogm'' is intended for recording video or images to internal or external storage right from your browser. Elphel393 series cameras support recording in two modes: normal recording to a file system and fast recording to a raw disk or disk partition without any file system on it. The process of extracting images from a raw partition is described on [[Extracting_images_from_raw_partition | this page]].
==Prerequisites==

Mikhail

Extracting images from raw partition

Wiki Recent Changes - Wed, 01/11/2017 - 12:30

Created page with "As it was mentioned in camogmgui, ''camogm'' can save images to a raw partition or disk in fast recording mode and you need to do one additional step to extract t..."

New page

As mentioned in [[Camogmgui | camogmgui]], ''camogm'' can save images to a raw partition or disk in fast recording mode, and one additional step is needed to extract these images from such a partition. This short note describes how to get images from a raw partition.

1. Download [https://github.com/Elphel/elphel-tools-x393 these scripts] to your PC.

2. Connect the camera to the eSATA port, power it on and wait until it has booted.

3. Copy your SSH key to the camera. This step can be skipped if it was done before.
$ ssh-copy-id root@192.168.0.9

4. Run the ''int_ssd_download.py'' script to dump the camera's raw partition to the PC.
$ ./int_ssd_download.py -c root@192.168.0.9 -n 1 -bs 100 -bc 1 .
root@192.168.0.9: connection ok
root@192.168.0.9: raw partition name: Crucial_CT250MX200SSD6_1531103B6ABA-part2
umounting /dev/sda1
root@192.168.0.9: Enabled connection: internal SSD <-> PC
Connect camera (eSATA) to PC (eSATA/SATA). Press Enter to continue...
[sudo] password for mk:
Getting raw partition data from /dev/sdb2
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.501566 s, 209 MB/s
Waiting for disks to show up:
[]
root@192.168.0.9: Enabled connection: internal SSD <-> Camera
Done
Here, the ''-c'' parameter specifies which camera to use; the script commands this camera to switch the internal disk to eSATA and to find the raw partition. The ''-n'', ''-bs'' and ''-bc'' parameters specify the number of chunks to download, the block size in MB and the number of blocks in each chunk, respectively. The script creates a subdirectory, whose name is composed of the disk manufacturer, model and partition number, and downloads the disk dump into this subdirectory.
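For reference, the download step is essentially a block copy, like ''dd''. A minimal Python sketch of the same read loop (hypothetical, not the actual ''int_ssd_download.py''; the device name and sizes follow the example above):

#!/usr/bin/env python3
# Sketch of the dump step: read N chunks of BC blocks x BS bytes each
# from the raw partition, equivalent to dd if=/dev/sdb2 bs=100M count=1.
import sys

DEV = "/dev/sdb2"        # raw partition as seen on the PC
BS = 100 * 1024 * 1024   # block size: 100 MB (-bs 100)
BC = 1                   # blocks per chunk (-bc 1)
N = 1                    # number of chunks (-n 1)

with open(DEV, "rb") as src:
    for chunk in range(N):
        with open("file_%d.img" % chunk, "wb") as dst:
            for _ in range(BC):
                data = src.read(BS)
                if not data:
                    sys.exit("reached the end of the device")
                dst.write(data)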
$ ls -l
total 84
drwxrwxr-x 2 mk mk 4096 Jan 11 10:53 Crucial_CT250MX200SSD6_1531103B6ABA-part2
-rw-rw-r-- 1 mk mk 4308 Jan 9 18:38 extract_images.php
-rwxrwxr-x 1 mk mk 3177 Jan 9 18:38 ext_ssd_download.py
-rwxrwxr-x 1 mk mk 4726 Jan 10 17:12 int_ssd_download.py
-rw-rw-r-- 1 mk mk 35141 Jan 9 18:38 LICENSE
-rw-rw-r-- 1 mk mk 620 Jan 9 18:38 README.md
-rw-rw-r-- 1 mk mk 6110 Jan 9 18:38 x393.py
-rw-rw-r-- 1 mk mk 10016 Jan 9 18:39 x393.pyc

5. Use the ''extract_images.php'' script to extract images from the disk dump.
$ ./extract_images.php Crucial_CT250MX200SSD6_1531103B6ABA-part2
Splitting Crucial_CT250MX200SSD6_1531103B6ABA-part2/file_0.img into jp4s
All images will be placed in the ''result'' subdirectory.
$ ls -l Crucial_CT250MX200SSD6_1531103B6ABA-part2/
total 102404
-rw-r--r-- 1 root root 104857600 Jan 11 10:53 file_0.img
drwxrwxr-x 2 mk mk 4096 Jan 11 12:22 result

Mikhail
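For illustration, here is a minimal Python sketch of what the extraction amounts to (hypothetical, not the actual ''extract_images.php''; it assumes the frames are stored back to back in the dump as plain JPEG/JP4 files, so it splits on the JPEG start-of-image and end-of-image markers):

#!/usr/bin/env python3
# Hypothetical splitter: scan a raw dump for JPEG SOI (FF D8) / EOI
# (FF D9) markers and write each frame into the result subdirectory.
import os, sys

SOI, EOI = b"\xff\xd8", b"\xff\xd9"

def split(dump, outdir="result"):
    os.makedirs(outdir, exist_ok=True)
    with open(dump, "rb") as f:
        data = f.read()          # the 100 MB example dump fits in RAM
    start = count = 0
    while True:
        s = data.find(SOI, start)
        e = data.find(EOI, s + 2) if s >= 0 else -1
        if s < 0 or e < 0:
            break
        with open(os.path.join(outdir, "img_%06d.jp4" % count), "wb") as out:
            out.write(data[s:e + 2])
        count += 1
        start = e + 2
    return count

if __name__ == "__main__":
    print("%d images extracted" % split(sys.argv[1]))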

01/10/17 [imagej-elphel][dct] by AndreyFilippov: combining with vignetting correction

Elphel GIT logs - Tue, 01/10/2017 - 23:19
AndreyFilippov committed changes to the Elphel git project :
combining with vignetting correction

01/10/17 [imagej-elphel][master] by AndreyFilippov: combining with vignetting correction

Elphel GIT logs - Tue, 01/10/2017 - 23:19
AndreyFilippov committed changes to the Elphel git project :
combining with vignetting correction

01/10/17 [meta-elphel393][master] by Oleg Dzhimiev: +parted

Elphel GIT logs - Tue, 01/10/2017 - 19:00
Oleg Dzhimiev committed changes to the Elphel git project :
+parted

Eyesis4Pi samples

Wiki Recent Changes - Mon, 01/09/2017 - 18:47

← Older revision | Revision as of 01:47, 10 January 2017 (one intermediate revision not shown)

The first note, which previously only pointed to the [http://blog.elphel.com/2011/06/eyesis-outdoor-panorama-sets-and-the-viewereditor/ Elphel blog], was replaced with:

* Source files - equirectangular projection (14268x7135).
* Demos:
{| class='wikitable'
! Name
! Source
! Description
|-
| Elphel's WebGL Panorama Viewer/Editor
| style='text-align:center' | [https://sourceforge.net/p/elphel/webgl_panorama_editor/ci/master/tree/ sf.net/elphel]
| WebGL, Open Street Map, [http://blog.elphel.com/2011/06/eyesis-outdoor-panorama-sets-and-the-viewereditor/ '''More information''']
|-
| three.js
| style='text-align:center' | [https://threejs.org/examples/?q=panoram#webgl_panorama_equirectangular three.js]
| WebGL
|-
| aframe.js
| style='text-align:center' | [https://aframe.io/examples/showcase/sky/ aframe.js]
| WebVR, Mobile, based on three.js
|}

One of the "Open in WebGL Viewer" panorama blocks (the one starting from result_1342928119_838636.jpeg) was commented out with <!-- -->.

Oleg

01/09/17 [webgl_panorama_editor][master] by Oleg Dzhimiev: 1. updated maps 2. handle missing db

Elphel GIT logs - Mon, 01/09/2017 - 14:58
Oleg Dzhimiev committed changes to the Elphel git project :
1. updated maps 2. handle missing db

01/09/17 [x393][master] by AndreyFilippov: Merge remote-tracking branch 'origin/framepars'

Elphel GIT logs - Mon, 01/09/2017 - 10:41
AndreyFilippov committed changes to the Elphel git project :
Merge remote-tracking branch 'origin/framepars'

01/09/17 [x393][adding-sensors] by AndreyFilippov: Merge remote-tracking branch 'origin/framepars'

Elphel GIT logs - Mon, 01/09/2017 - 10:41
AndreyFilippov committed changes to the Elphel git project :
Merge remote-tracking branch 'origin/framepars'

01/09/17 [x393][dct] by AndreyFilippov: Merge remote-tracking branch 'origin/master' into framepars

Elphel GIT logs - Mon, 01/09/2017 - 10:38
AndreyFilippov committed changes to the Elphel git project :
Merge remote-tracking branch 'origin/master' into framepars

01/09/17 [x393][adding-sensors] by AndreyFilippov: Merge remote-tracking branch 'origin/master' into framepars

Elphel GIT logs - Mon, 01/09/2017 - 10:38
AndreyFilippov committed changes to the Elphel git project :
Merge remote-tracking branch 'origin/master' into framepars

01/09/17 [x393][dct] by AndreyFilippov: correcting histograms to system memory transfer

Elphel GIT logs - Mon, 01/09/2017 - 10:34
AndreyFilippov committed changes to the Elphel git project :
correcting histograms to system memory transfer

01/09/17 [x393][adding-sensors] by AndreyFilippov: correcting histograms to system memory transfer

Elphel GIT logs - Mon, 01/09/2017 - 10:34
AndreyFilippov committed changes to the Elphel git project :
correcting histograms to system memory transfer

Lens aberration correction with the lapped MDCT

Elphel Development Blog - Sat, 01/07/2017 - 18:19

Modern small-pixel image sensors exceed the resolution of the lenses, so it is the optics of the camera, not the raw sensor “megapixels”, that defines how sharp the images are, especially in the off-center areas. Multi-sensor camera systems that depend on tiled images do not have any center areas, so the overall system resolution may be as low as that of its worst part.

Fig. 1. Lateral chromatic aberration and Bayer mosaic: a) monochrome (green) PSF, b) composite color PSF, c) Bayer mosaic of the sensor, d) distorted mosaic for the chromatic aberration of b).

De-mosaic processing and chromatic aberrations

Our current cameras' role is to preserve the raw sensor data while providing some moderate compression; all the image correction is applied during post-processing. Handling the lens aberrations has to be done before color conversion (or de-mosaicing). When converting Bayer data to color images, most cameras start by calculating the “missing” colors in the RG/GB pattern using 3×3 or 5×5 kernels; this procedure relies on the specific arrangement of the color filters.

Each of the red and blue pixels has 4 green ones at the same distance (pixel pitch) and 4 of the opposite color (R for B and B for R) at equidistant diagonal locations. Fig.1 shows how lateral chromatic aberration disturbs these relations.

Fig.1a is the point-spread function (PSF) of the green channel of the sensor. The PSF is measured on a grid twice finer than the pixel pitch, so the lens is not that bad – the horizontal distance between the 2 greens in Fig.1c corresponds to 4 pixels of Fig.1a. It is also clearly visible that the PSF is elongated: the radial resolution in this part of the image is better than the tangential one (the lens center is to the lower left).

Fig.1b shows the superposition of the 3 color channels: the blue center is shifted up and to the right by approximately 2 PSF pixels (so one actual pixel period of the sensor), and the red one half a pixel to the left and down from the green center. So the light of a point source (a star) centered on some green pixel will not just spread uniformly to the two “R”s and two “B”s shown connected with lines in Fig.1c, but to other pixels, and in a different order. Fig.1d illustrates the effective positions of the sensor pixels that match the lens aberration.

Aberrations correction at post-processing stage

When we perform off-line image correction we start by separating each color channel and re-sampling it at twice the pixel pitch frequency (adding a zero sample between each pair of measured ones). This oversampling makes it possible to shift the image by a fraction of a pixel while preserving resolution and without introducing phase errors – errors that may be visually acceptable but hurt when relying on sub-pixel resolution during correlation of images.
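As a minimal illustration (a numpy sketch, not the production code), zero-insertion re-sampling of one separated color plane onto the half-pitch grid:

import numpy as np

# Re-sample one color plane at twice the pixel pitch frequency by
# inserting a zero sample between each pair of measured ones (in both
# directions); 'plane' stands in for one separated color channel.
plane = np.arange(16.0).reshape(4, 4)
up = np.zeros((2 * plane.shape[0], 2 * plane.shape[1]))
up[::2, ::2] = plane   # measured samples; the inserted ones stay zero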

Next, the full image is converted into overlapping square tiles, each transformed to the frequency domain using a 2-d DFT and then multiplied by the inverted PSF kernels – individual for each color channel and each part of the whole image (the calibration procedure provides a 2-d array of PSF kernels). Such multiplication in the frequency domain is equivalent to the (much more computationally expensive) image convolution – or deconvolution, as the desired result is to reduce the effect of the convolution of the ideal image with the PSF of the actual lens. This is possible because of the famous convolution-multiplication property of the Fourier transform and its discrete versions.
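In symbols, with $\mathcal{F}$ the discrete Fourier transform, $x$ the ideal tile, $h$ the measured PSF and $d = x * h$ the distorted tile:

$$\mathcal{F}\{d\} = \mathcal{F}\{x * h\} = \mathcal{F}\{x\} \cdot \mathcal{F}\{h\} \quad\Longrightarrow\quad \hat{x} = \mathcal{F}^{-1}\!\left\{\mathcal{F}\{d\} \cdot H_{inv}\right\},$$

where $H_{inv}$ is the inverted PSF kernel prepared by the calibration procedure. It cannot be exactly $1/\mathcal{F}\{h\}$ everywhere – some conditioning is assumed here, since (as noted below) amplification of the sensor noise limits the deconvolution.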

After each color channel tile is corrected and the phases of the color components match (lateral chromatic aberration is compensated), the data may be subjected to non-linear processing that relies on properties of the images (like detection of lines and edges) to combine the color channels, trying to achieve the highest spatial resolution without introducing color artifacts. Our current software performs this while the data is in the frequency domain, before the inverse Fourier transform and the merging of the lapped tiles into the restored image.

Fig.2. Histogram of difference between original image and after direct and inverse MDCT (with 8×8 pixels DCT-IV)

MDCT of an image – there and back again

It would be very appealing to use a DCT-based MDCT instead of the DFT for aberration correction. With just an 8×8 point DCT-IV it may be possible to calculate a direct 16×16 → 8×8 MDCT and an 8×8 → 16×16 IMDCT providing perfect reconstruction of the image. An 8×8 pixel DCT should be able to handle convolution kernels with an 8 pixel radius – the same would require a 16×16 pixel DFT. I knew there would be a challenge in handling non-symmetrical kernels, but first I tried a 2-d MDCT to convert a camera image and reconstruct it back. I was not able to find an efficient Java implementation of the DCT-IV, so I had to write some code following the algorithms presented in [1].
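In one common convention (the 1-d case; the 2-d version applies it separably to rows and columns), the MDCT of a 2N-sample window and its inverse are

$$X_k = \sum_{n=0}^{2N-1} w_n x_n \cos\!\left[\frac{\pi}{N}\left(n+\frac{1}{2}+\frac{N}{2}\right)\left(k+\frac{1}{2}\right)\right], \qquad y_n = \frac{2}{N}\, w_n \sum_{k=0}^{N-1} X_k \cos\!\left[\frac{\pi}{N}\left(n+\frac{1}{2}+\frac{N}{2}\right)\left(k+\frac{1}{2}\right)\right],$$

with the sine window $w_n = \sin\!\left[\frac{\pi}{2N}\left(n+\frac{1}{2}\right)\right]$ satisfying the Princen-Bradley condition $w_n^2 + w_{n+N}^2 = 1$; overlap-adding the $y_n$ of adjacent half-overlapping windows then reconstructs $x_n$ exactly.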

That worked nicely – a histogram of the difference between the original image (pixel values in the range of 0 to 255) and the restored one, IMDCT(MDCT(original)), demonstrated negligible error. Of course I had to discard the 8 pixel border of the image added by replication before the procedure – unlike all the internal pixels, these border pixels do not belong to 4 overlapping tiles and so cannot be reconstructed.
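The round trip is easy to check numerically; here is a 1-d Python sketch with numpy following the formulas above (not the Java code used for the actual test):

import numpy as np

N = 8                                         # coefficients per frame
n = np.arange(2 * N)                          # 2N-sample windows
w = np.sin(np.pi / (2 * N) * (n + 0.5))       # sine (Princen-Bradley) window
C = np.cos(np.pi / N * np.outer(np.arange(N) + 0.5, n + 0.5 + N / 2))

mdct = lambda x: C @ (w * x)                  # 2N samples -> N coefficients
imdct = lambda X: w * (2.0 / N) * (C.T @ X)   # N coefficients -> 2N samples

x = np.random.rand(10 * N)                    # test signal
y = np.zeros_like(x)
for i in range(0, len(x) - N, N):             # hop = N, i.e. 50% overlap
    y[i:i + 2 * N] += imdct(mdct(x[i:i + 2 * N]))
print(np.abs(y[N:-N] - x[N:-N]).max())        # ~1e-15: negligible error

The first and last N samples are not reconstructed – like the 8 pixel border above, they are covered by a single window instead of two.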

When this is done in the camera FPGA the error will be higher – the DCT implementation there uses integer DSPs, not capable of the double precision calculations of the Java code. But for the small 8×8 transforms it should be rather easy to manage the calculation precision to the required level.

Convolution with MDCT

It was also easy to implement a low-pass symmetrical filter by multiplying the 8×8 pixel MDCT output tiles by a DCT-III transform of the desired convolution kernel. To convolve f ☼ g you need to multiply DCT_IV(f) by DCT_III(g) in the transform domain [2], but that does not mean that the DCT-III also has to be implemented in the FPGA – the deconvolution kernels can be prepared during off-line calibration and provided to the camera in the required form.
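In the notation of [2], with $\circledast$ denoting symmetric convolution, the relation used here is

$$\mathrm{DCT_{IV}}(f \circledast g)_k = \mathrm{DCT_{IV}}(f)_k \cdot \mathrm{DCT_{III}}(g)_k,$$

so the camera only performs the per-tile element-wise multiplication; the $\mathrm{DCT_{III}}(g)$ factors are computed once, off-line.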

But not much more can be done for convolution with asymmetric kernels – they require either an additional DST (so both DCT and DST) of the image and/or padding the data with extra zeros [3], [4] – all of which reduces the advantage of the DCT over the DFT. Asymmetric kernels are required for the lens aberration corrections, and Fig.1 shows two cases not easily suitable for the MDCT:

  • lateral chromatic aberrations (or just a shift in the image domain) – Fig.1b, and
  • “diagonal” kernels (Fig.1a) – not an even function of either the vertical or the horizontal axis.

Fig.3. Convolution kernel factorization: a) required asymmetrical and shifted kernel, b) 4-point direct convolution with (sparse) Bayer color channel data, c) symmetric convolution kernel for MDCT, d) symmetric kernel – DCT-III of c) to multiply DCT-IV kernels of the image.

Symmetric kernels are like what you can do with a twice folded piece of paper, cut to some shape and unfolded, with folds oriented strictly vertically and horizontally.

Factorization of the convolution

Another way to handle convolution with non-symmetrical kernels is to split it in two: first convolve directly with a small asymmetrical kernel, then use the MDCT with a symmetrical one. The input to the combined convolution is split Bayer data, so each color channel receives a sparse sequence: the green channel has only half of its elements non-zero, and red and blue only 1/4. On the half-pixel grid (used to handle fractional-pixel shifts) there are four times as many grid points but the relative amount of non-zero pixels is four times smaller, so the total number of multiplications is the same as for the whole-pixel grid.

The goal of such factorization is to minimize the number of non-zero elements in the asymmetrical kernel, imposing no restrictions on the symmetrical one. The factorization does not have to be absolutely precise – the effect of deconvolution is limited by several factors, the most important being the amplification of the sensor noise (such as shot noise). The required number of non-zero pixels may vary with the type of distortion; for the lens we experimented with (Sunex DSL227 fisheye) just 4 pixels were sufficient to achieve a 2-4% error for each of the kernel tiles. Four-pixel kernels amount to 1 multiplication for each of the red and blue pixels and 2 multiplications per green one. As the kernels are calculated during the camera's off-line calibration, it should be possible to simultaneously generate scheduling of the DSP and buffer memories to further reduce the required run-time FPGA resources.

Fig.3 illustrates how the deconvolution kernel for the aberration correction is split in two for the consecutive convolutions. Fig.3a shows the required deconvolution kernel determined during the existing calibration procedure. This kernel is far off-center even for the green channel – it appeared near the edge of the fisheye lens field of view, as the current lens model is based on a radial polynomial and is not efficient for fisheye (f-theta) lenses, so the aberration correction by deconvolution had to absorb that extra shift. As the asymmetric convolution kernel has a fixed number of non-zero elements, the computational complexity does not depend on the maximal kernel dimensions. Fig.3b shows the determined asymmetric convolution kernel of 4 pixels, and Fig.3c the kernel for symmetric convolution with the MDCT; the unique 8×8 pixel part of it (inside the red square) is replicated to the other 3 quadrants by mirroring along row 0 and column 0, because of the whole-pixel even symmetry – the right boundary condition for DCT-III. Fig.3d contains the result of the DCT-III applied to the data shown in Fig.3c.

Fig.4. Symmetric convolution kernel tiles in the MDCT domain. The full image (click to open) has the peripheral kernels replicated, as there was no calibration data outside of the fisheye lens field of view.

There should be more efficient ways to find optimal combinations of the two kernels; currently I used a combination of the Levenberg-Marquardt algorithm (LMA), which minimizes the approximation error (the root mean square of the differences between the given kernel and the convolution of the two calculated ones), with adding/replacing pixels in the asymmetrical kernel and sorting the variants for the best LMA fit. Experimental code for the kernel calculation (FactorConvKernel.java) is in the same GitHub repository.
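A sketch of the fit criterion in Python (the function names and the symmetrization helper are mine, not from FactorConvKernel.java); the LMA adjusts the free elements of both factors to minimize this RMS:

import numpy as np
from scipy.signal import convolve2d

def symmetrize(k):
    """Force whole-pixel even symmetry about the center element."""
    return (k + k[::-1, :] + k[:, ::-1] + k[::-1, ::-1]) / 4.0

def factorization_rms(target, asym, sym):
    """RMS difference between the target deconvolution kernel and the
    convolution of the sparse asymmetric factor with the symmetric one;
    all three are odd-sized 2-d arrays centered on the middle pixel."""
    approx = convolve2d(asym, symmetrize(sym), mode="same")
    return np.sqrt(np.mean((approx - target) ** 2))

For the Sunex DSL227 kernels quoted above, 4 non-zero pixels in the asymmetric factor were enough to bring this error down to 2-4%.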

Each kernel tile is processed independently of its neighbors, so while the aberration deconvolution kernels change smoothly between adjacent tiles, the individual asymmetrical kernels (for direct convolution with the Bayer signal data) and symmetrical ones (for convolution by multiplication in the MDCT space) may change dramatically (see Fig.4). But when the direct convolution is applied to the source pixels that contribute to a 16×16 pixel MDCT overlapping tile before the window multiplication, the result (after IMDCT) depends on the convolution of the two kernels, not on the individual ones.

Deconvolving the test image

The next step was to apply the convolution to the test image and see whether there were any visible blocking (or other) artifacts and whether the image sharpness was improved. Only a single (green) channel was tested, as there is no DCT-based color conversion code in this program yet. The program was tested with the whole-pixel grid (not half-pixel), so some reduction of sharpness caused by the fractional pixel shift was expected. For the “before/after” aberration correction comparison I used two pairs: one with the raw Bayer data (half of the pixels are black in a checkerboard pattern) and the other with the Bayer pattern after a 0.4 pix low-pass filter to reduce the checkerboard pattern. Without this filtering the image would be either twice darker or (as in these pictures) saturated at lower levels (checkerboard 0/255 alternating pixels result in an average gray level of only half of the full range).

Fig.5. Alternating images of a segment (green channel only): low-pass filter of the Bayer mosaic before and after deconvolution. Click image to show comparison with raw Bayer component.
Raw Bayer
Bayer data, low pass filter, sigma = 0.4 pix
Deconvolved

Fig.5 shows an animated GIF of a fraction of the whole image; clicking the image shows a comparison with the raw Bayer component (with the limited gray level), and the caption links to the full size images for these 3 modes.

No de-noise code is used, so the amplification of the pixel shot noise is clearly visible, especially on uniform surfaces, but the aliasing cancellation remained functional even with convolution kernels changing as abruptly as those shown in Fig.4.

Conclusions

Algorithms suitable for FPGA implementation were tested with the simulation code. Processing images subject to the typical optical aberrations of the DSL227 fisheye lens does not add significantly to the computational complexity compared to a pure symmetric convolution using the lapped MDCT based on the 8×8 pixel two-dimensional DCT-IV.

This solution can be used as the first stage of real-time image correction and rectification, capable of sub-pixel resolution, in multiple application areas such as 3-d reconstruction and autonomous navigation.

References

[1] Plonka, Gerlind, and Manfred Tasche. “Fast and numerically stable algorithms for discrete cosine transforms.” Linear Algebra and its Applications 394 (2005): 309-345.
[2] Martucci, Stephen A. “Symmetric convolution and the discrete sine and cosine transforms.” IEEE Transactions on Signal Processing 42.5 (1994): 1038-1051.
[3] Suresh, K., and T. V. Sreenivas. “Linear filtering in DCT IV/DST IV and MDCT/MDST domain.” Signal Processing 89.6 (2009): 1081-1089.
[4] Reju, Vaninirappuputhenpurayil Gopalan, Soo Ngee Koh, and Ing Yann Soon. “Convolution using discrete sine and cosine transforms.” IEEE Signal Processing Letters 14.7 (2007): 445.
[5] Malvar, Henrique S. “Extended lapped transforms: Properties, applications, and fast algorithms.” IEEE Transactions on Signal Processing 40.11 (1992): 2703-2714.

01/06/17 [x393][master] by AndreyFilippov: merged with framepars

Elphel GIT logs - Fri, 01/06/2017 - 22:11
AndreyFilippov committed changes to the Elphel git project :
merged with framepars
