Tiff file format for pre-processed quad-stereo sets
Revision as of 21:48, 11 July 2018 (Oleg), section: TIFF stacks for ML

==Image sets==
Example tree of a single set:
 <font size='2'>'''1527256903_350165/'''
 ├── 1527256903_350165.kml
 ├── '''jp4''' (source files directory)
 │   ├── <font color='RoyalBlue'>1527256903_350165_0.jp4</font>
 │   ├── <font color='RoyalBlue'>...</font>
 │   ├── <font color='RoyalBlue'>1527256903_350165_3.jp4</font>
 │   ├── <font color='RoyalBlue'>1527256903_350165_4.jp4</font>
 │   ├── <font color='RoyalBlue'>...</font>
 │   └── <font color='RoyalBlue'>1527256903_350165_7.jp4</font>
 ├── rating.txt
 ├── thumb.jpeg
 └── '''v03''' (model version)
     ├── <font color='Indigo'>1527256903_350165-00-D0.0.jpeg</font>
     ├── <font color='Indigo'>...</font>
     ├── <font color='Indigo'>1527256903_350165-07-D0.0.jpeg</font>
     ├── <font color='OrangeRed'>1527256903_350165.corr-xml</font>
     ├── <font color='Maroon'>1527256903_350165-DSI_COMBO.tiff</font>
     ├── <font color='DarkSlateGray'>1527256903_350165-EXTRINSICS.corr-xml</font>
     ├── <font color='DarkGoldenrod'>1527256903_350165-img1-texture.png</font>
     ├── <font color='DarkGoldenrod'>...</font>
     ├── <font color='DarkGoldenrod'>1527256903_350165-img2001-texture.png</font>
     ├── <font color='DarkGoldenrod'>1527256903_350165-img_infinity-texture.png</font>
     ├── <font color='DarkGoldenrod'>1527256903_350165.mtl</font>
     ├── <font color='DarkGoldenrod'>1527256903_350165.obj</font>
     ├── <font color='DarkGoldenrod'>1527256903_350165.x3d</font>
     └── '''ml''' (directory with processed data files for ML)
         ├── <font color='ForestGreen'>1527256903_350165-ML_DATA-08B-O-FZ0.05-OFFS-2.00000.tiff</font>
         ├── <font color='ForestGreen'>1527256903_350165-ML_DATA-08B-O-FZ0.05-OFFS-1.00000.tiff</font>
         ├── <font color='ForestGreen'>1527256903_350165-ML_DATA-08B-O-FZ0.05-OFFS0.00000.tiff</font>
         ├── <font color='ForestGreen'>1527256903_350165-ML_DATA-08B-O-FZ0.05-OFFS1.00000.tiff</font>
         └── <font color='ForestGreen'>1527256903_350165-ML_DATA-08B-O-FZ0.05-OFFS2.00000.tiff</font></font>
where:
* <font color='RoyalBlue'>'''*.jp4'''</font> - source files; 0..3 - quad stereo camera #1, 4..7 - quad stereo camera #2, and so on if there are more cameras in the system
* <font color='Indigo'>'''*-D0.0.jpeg'''</font> - disparity = 0, images undistorted to a distortion polynomial common for all images
* <font color='OrangeRed'>'''*.corr-xml'''</font> - ImageJ plugin's settings file?
* <font color='Maroon'>'''*-DSI_COMBO.tiff'''</font> - Disparity Space Image - tiff stack
* <font color='DarkSlateGray'>'''*-EXTRINSICS.corr-xml'''</font> - extrinsic parameters of the multicamera system
* <font color='DarkGoldenrod'>'''*.x3d, *.png'''</font> - X3D format model with textures; the textures are shared with the OBJ format model
* <font color='DarkGoldenrod'>'''*.obj, *.mtl, *.png'''</font> - OBJ format model with textures
* <font color='ForestGreen'>'''*.tiff'''</font> - TIFF stacks of pre-processed images for ML
* *.kml, rating.txt, thumb.jpeg - files related to the online viewer only

==<font color='ForestGreen'>TIFF stacks for ML</font>==
* What is in each stack is described in the [https://community.elphel.com/files/presentations/Elphel_TP-CNN_slides.pdf presentation for CVPR2018, pp.19-21]
* 5 layers in the stack:
** '''diagm-pair'''
** '''diago-pair'''
** '''hor-pairs'''
** '''vert-pairs'''
** '''other''' - encoded values: estimated disparity, residual disparity and confidence for the residual disparity
* The source files are processed using a plugin for ImageJ; the output file for each set is a tiff stack
* There are a few ways to view the stack:

====ImageJ====
ImageJ has native support for stacks. Each stack has a name label stored (along with related xml info) in the ImageJ tiff tags. To read the tiff tags in ImageJ, go to '''Image > Show Info...'''

====Python====
Use '''imagej_tiff.py''' from [https://git.elphel.com/Elphel/python3-imagej-tiff python3-imagej-tiff] to:
* get tiff tag values (Pillow)
* parse the Properties xml data stored in the tiff tags used by ImageJ
* get tile dimensions from the Properties
* read layers as numpy arrays for further computations or plotting
Example:
 <font size='1' style='line-height:0.5;'>'''~$ python3 imagej_tiff.py 1527256903_350165-ML_DATA-08B-O-FZ0.05-OFFS0.00000.tiff'''
 time: 1531344391.7055812
 time: 1531344392.5336654
 TIFF stack labels: ['diagm-pair', 'diago-pair', 'hor-pairs', 'vert-pairs', 'other']
 <?xml version="1.0" ?>
 <properties>
   <ML_OTHER_TARGET>0</ML_OTHER_TARGET>
   <tileWidth>9</tileWidth>
   <disparityRadiusMain>257.22231560274076</disparityRadiusMain>
   <comment_ML_OTHER_GTRUTH_STRENGTH>Offset of the ground truth strength in the "other" layer tile</comment_ML_OTHER_GTRUTH_STRENGTH>
   <data_min>-0.16894744988183344</data_min>
   <comment_intercameraBaseline>Horizontal distance between the main and the auxiliary camera centers (mm). Disparity is specified for the main camera</comment_intercameraBaseline>
   <ML_OTHER_GTRUTH>2</ML_OTHER_GTRUTH>
   <data_max>0.6260986600450271</data_max>
   <disparityRadiusAux>151.5308819757923</disparityRadiusAux>
   <comment_disparityRadiusAux>Side of the square where 4 main camera subcameras are located (mm). Disparity is specified for the main camera</comment_disparityRadiusAux>
   <comment_disparityRadiusMain>Side of the square where 4 main camera subcameras are located (mm)</comment_disparityRadiusMain>
   <comment_dispOffset>Tile target disparity minum ground truth disparity</comment_dispOffset>
   <comment_tileWidth>Square tile size for each 2d correlation, always odd</comment_tileWidth>
   <comment_data_min>Defined only for 8bpp mode - value, corresponding to -127 (-128 is NaN)</comment_data_min>
   <comment_data_max>Defined only for 8bpp mode - value, corresponding to +127 (-128 is NaN)</comment_data_max>
   <comment_ML_OTHER_TARGET>Offset of the target disparity in the "other" layer tile</comment_ML_OTHER_TARGET>
   <VERSION>1.0</VERSION>
   <dispOffset>0.0</dispOffset>
   <comment_ML_OTHER_GTRUTH>Offset of the ground truth disparity in the "other" layer tile</comment_ML_OTHER_GTRUTH>
   <ML_OTHER_GTRUTH_STRENGTH>4</ML_OTHER_GTRUTH_STRENGTH>
   <intercameraBaseline>1256.0</intercameraBaseline>
 </properties>
 Tiles shape: 9x9
 Data min: -0.16894744988183344
 Data max: 0.6260986600450271
 (2178, 2916, 5)
 Stack of images shape: (242, 324, 9, 9, 4)
 time: 1531344392.7290232
 Stack of values shape: (242, 324, 3)
 time: 1531344393.5556033</font>

Upon opening the tiff, the image shape is '''(height, width, layer)'''; for further processing it needs to be reshaped to '''(height_in_tiles, width_in_tiles, tile_height, tile_width, layer)''', for example:
 <font size='2'>tiff stack shape: (2178, 2916, 5)
 image data: (242, 324, 9, 9, 4)
 values data: (242, 324, 3)</font>
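The reshape described above can be sketched with numpy. This is a minimal sketch, not the plugin's code: a zero-filled array stands in for a real ML_DATA tiff, and the indexing of the flattened '''other''' tile by the '''ML_OTHER_*''' offsets from the Properties XML is an assumption of this sketch.

```python
import numpy as np

# Stand-in for a real ML_DATA tiff: height 2178 = 242*9, width 2916 = 324*9,
# 5 layers (diagm-pair, diago-pair, hor-pairs, vert-pairs, other).
tile = 9
stack = np.zeros((2178, 2916, 5), dtype=np.float32)
h_tiles, w_tiles = stack.shape[0] // tile, stack.shape[1] // tile  # 242, 324

# (height, width, layer) ->
# (height_in_tiles, width_in_tiles, tile_height, tile_width, layer)
tiled = stack.reshape(h_tiles, tile, w_tiles, tile, -1).transpose(0, 2, 1, 3, 4)

images = tiled[..., :4]  # the 4 correlation-pair layers

# ASSUMPTION: the 'other' layer carries a few scalars per 9x9 tile, indexed by
# the Properties offsets (ML_OTHER_TARGET=0, ML_OTHER_GTRUTH=2,
# ML_OTHER_GTRUTH_STRENGTH=4) into the flattened tile.
other = tiled[..., 4].reshape(h_tiles, w_tiles, tile * tile)
values = other[..., [0, 2, 4]]

print(images.shape, values.shape)
```

With all edits applied this reproduces the shapes printed by imagej_tiff.py: images '''(242, 324, 9, 9, 4)''' and values '''(242, 324, 3)'''.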
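The Properties XML that the ImageJ plugin stores in the tiff tags (shown in the python3 session above) can be parsed with the standard library. A sketch under stated assumptions: the XML fragment below copies values from the example session, and the 8bpp decoding function assumes a linear mapping as described in the comment_data_min/comment_data_max comments (-127 maps to data_min, +127 to data_max, -128 is NaN).

```python
import math
import xml.etree.ElementTree as ET

# Fragment of the Properties XML from the example session on this page.
PROPS = """<?xml version="1.0" ?>
<properties>
    <tileWidth>9</tileWidth>
    <data_min>-0.16894744988183344</data_min>
    <data_max>0.6260986600450271</data_max>
    <intercameraBaseline>1256.0</intercameraBaseline>
</properties>"""

props = {e.tag: e.text for e in ET.fromstring(PROPS)}
tile_width = int(props["tileWidth"])
data_min = float(props["data_min"])
data_max = float(props["data_max"])

def decode_8bpp(code: int) -> float:
    """ASSUMED linear 8bpp decoding: -127 -> data_min, +127 -> data_max,
    -128 -> NaN, per the comment_data_min/comment_data_max descriptions."""
    if code == -128:
        return math.nan
    return data_min + (code + 127) * (data_max - data_min) / 254.0

print(tile_width, decode_8bpp(-127), decode_8bpp(127))
```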
Tensorflow with gpu
Setup (some details) - Revision as of 17:07, 6 July 2018 (Oleg): added the CUPTI path to LD_LIBRARY_PATH:
 <font size='2'># Export paths
 <b>~$ export PATH=/usr/local/cuda-9.2/bin${PATH:+:${PATH}}
 ~$ export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
 ~$ export LD_LIBRARY_PATH=/usr/local/cuda-9.2/extras/CUPTI/lib64/${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}</b></font>
* Install TensorFlow (build from sources for cuda 9.2)

Poky manual
Setup - Revision as of 18:25, 26 June 2018 (Oleg): corrected the Jethro version and the clone command:
* Poky 2.0 Jethro
 <font size='2'>git clone -b jethro https://git.elphel.com/Elphel/elphel393.git # or git clone git@git.elphel.com:Elphel/elphel393.git # if ssh public key is uploaded
 # then follow the same steps as for Rocko</font>

FPGA bitstream versions
Draft notes - Revision as of 17:19, 26 June 2018 (Oleg): added draft notes. SATA controller subsystem source code is maintained in the [https://git.elphel.com/Elphel/x393_sata x393_sata] repository.
==Draft notes==
====Bitstream type?====
* In fpga-elphel/x393, x393_global.tcl:
** checks if HISPI is enabled and makes some kind of changes
* status_read.v
** writes the fpga bitstream type

Driverless mode 393
pre - Revision as of 00:18, 26 June 2018 (Oleg):
===pre (editing files on the boot SD card)===
* Remove the 10389 board if installed (otherwise its eeprom will have to be rewritten)
* To load the bitstream for MT9F002, appropriate changes need to be made to:
** the device tree: <font size=2>set sensors to '''mt9f002''' in the ''elphel393-detect_sensors,sensors'' entry</font>
** /etc/elphel393/default_10389.xml for a setup without 10389, or the 10389's eeprom otherwise - write '''MT9F002''' as the application. The application name must already be added to '''/usr/local/autocampars.php'''
* Disable the driver:
 <font size=2>root@elphel393~# touch /etc/elphel393/disable_driver</font>
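The <code>${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}</code> expansion used in the Tensorflow with gpu export lines above adds the ":" separator only when the variable is already set, so successive exports accumulate paths instead of leaving a dangling colon. A quick sketch (paths as in that entry, without the trailing slash):

```shell
# ${VAR:+word} expands to word only when VAR is set and non-empty,
# so the first assignment gets no trailing ":" and the second prepends.
unset LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
LD_LIBRARY_PATH=/usr/local/cuda-9.2/extras/CUPTI/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
echo "$LD_LIBRARY_PATH"
# /usr/local/cuda-9.2/extras/CUPTI/lib64:/usr/local/cuda-9.2/lib64
```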
CVPR2018
Revision as of 06:08, 20 June 2018 (Olga): added an introduction line:

This page contains links relevant to Elphel presentation at CVPR 2018 Expo.

==Presentation==
* [https://community.elphel.com/files/presentations/Elphel_TP-CNN_slides.pdf High Resolution Wide FoV CNN System for Target Classification, Ranging and Tracking (pdf)]
CVPR2018
Revision as of 14:40, 19 June 2018 (Andrey.filippov): added a "Research and Development Focus" section:

==3D models demo and image sets==
* [https://community.elphel.com/3d+biquad/?lat=40.74257500&lng=-112.06741333&zoom=10&rating=5 3D BiQuad]

==Research and Development Focus==
===Passive 3D Reconstruction and Long Ranging===
* [https://blog.elphel.com/category/3d/ 3D Reconstruction and Ranging]
===FPGA-RTL-ASIC: Efficient Implementation of Frequency Domain Processing===
* [https://blog.elphel.com/category/rtl/ RTL]
===Calibration for Aberration Correction===
* [https://blog.elphel.com/category/calibration/ Calibration]

==Development blog==
CVPR2018
Revision as of 05:09, 19 June 2018 (Oleg): replaced the "Links" section with the page structure (Presentation, 3D models demo and image sets, Development blog, Products):

==Development blog==
* [https://blog.elphel.com/2018/05/capturing-aircraft-position-with-the-long-range-quad-stereo-camera/ <font size='2'>'''2018/05/06'''</font> Capturing Aircraft Position with the Long Range Quad Stereo Camera]
* [https://blog.elphel.com/2018/03/dual-quad-camera-rig-for-capturing-image-sets/ <font size='2'>'''2018/03/20'''</font> Dual Quad-Camera Rig for Capturing Image Sets]
* [https://blog.elphel.com/2018/02/high-resolution-multi-vew-stereo-tile-processor-and-convolutional-neural-network/ <font size='2'>'''2018/02/05'''</font> High Resolution Multi-View Stereo: Tile Processor and Convolutional Neural Network]
* [https://blog.elphel.com/2018/01/complex-lapped-transform-bayer/ <font size='2'>'''2018/01/08'''</font> Efficient Complex Lapped Transform Implementation for the Space-Variant Frequency Domain Calculations of the Bayer Mosaic Color Images]

==Products==
* [https://www3.elphel.com/products Our multiple and single camera systems products]
Tensorflow with gpu
Testing setup - Revision as of 16:28, 14 June 2018 (Oleg): added a link to an installation guide and highlighted the commands typed in the test session:

==Setup (guide)==
* Just follow [http://www.python36.com/install-tensorflow141-gpu/ '''this guide''']

* Unsupported card '''GeForce GT 610'''
 <font size='2'>'''~$ python3'''
 Python 3.5.2 (default, Nov 23 2017, 16:37:01)
 [GCC 5.4.0 20160609] on linux
 Type "help", "copyright", "credits" or "license" for more information.
 '''>>> import tensorflow as tf'''
 '''>>> hello = tf.constant('Hello, World!')'''
 '''>>> sess = tf.Session()'''
 2018-04-26 13:00:19.050625: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
 2018-04-26 13:00:19.181581: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
 ...
 '''>>> print(sess.run(hello))'''
 b'Hello, World!'</font>