Imaging solutions with Free Software & Open Hardware

Track the most recent changes to the wiki in this feed.

Quad stereo tensorflow eclipse

Fri, 12/27/2019 - 12:04

‎ImageJ plugin

← Older revision | Revision as of 19:04, 27 December 2019 (2 intermediate revisions by the same user not shown)

Resulting ==ImageJ plugin== section after this revision:

* Install Eclipse
* Clone and Import [https://git.elphel.com/Elphel/imagej-elphel imagej-elphel]
<font color='red'>'''NOTE: if the project is updated/pulled outside Eclipse - it might need a manual refresh'''</font>
* The TF version is pulled from pom.xml
* The trained TF model for EO sensors is auto-downloaded - [https://community.elphel.com/files/quad-stereo/ml/trained_model_v1.0.zip trained_model_v1.0.zip]
* Get some image samples and provide their paths
* Before running the plugin (Eyesis_Correction), copy the ImageJ options to /home/user/.imagejs/Eyesis_Correction.xml (see the sketch below):
<font size='1'>
 <?xml version="1.0" encoding="UTF-8"?>
 <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
 <properties>
    <comment>last updated Thu Sep 08 14:09:47 MDT 2042</comment>
    <entry key="ADVANCED_MODE">True</entry>
    <entry key="DCT_MODE">True</entry>
    <entry key="MODE_3D">False</entry>
    <entry key="GPU_MODE">True</entry>
    <entry key="LWIR_MODE">True</entry>
 </properties>
</font>

Oleg
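A minimal sketch of putting that options file in place (the ~/.imagejs path follows the note above; that the XML is saved locally as Eyesis_Correction.xml is an assumption):

 # hedged sketch: create the ImageJ options directory and copy the settings listed above
 mkdir -p ~/.imagejs
 cp Eyesis_Correction.xml ~/.imagejs/Eyesis_Correction.xml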

Quad stereo tensorflow eclipse

Fri, 12/27/2019 - 11:57

Created page with "==ImageJ plugin== * Install Eclipse * Clone and Import [https://git.elphel.com/Elphel/imagej-elphel imagej-elphel] <font color='tomato'>'''Note: if the project is updated/pull..."

New page

==ImageJ plugin==
* Install Eclipse
* Clone and Import [https://git.elphel.com/Elphel/imagej-elphel imagej-elphel]
<font color='tomato'>'''Note: if the project is updated/pulled outside Eclipse - a manual refresh is needed'''</font>

Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 17:39

‎Notes

← Older revision | Revision as of 00:39, 24 December 2019 (2 intermediate revisions by the same user not shown)

Changes in this revision:
* Renamed the section ==Walkthrough for CUDA 10.2 (Dec 2019)== to ==Setup walkthrough for CUDA 10.2 (Dec 2019)==.
* Clarified the '''nvidia-container-runtime''' note: nvidia-docker v1 & v2 should have already registered it.
* In the tensorboard instructions, added a reminder to set a correct --logdir before running:
 <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]
 # From the running container's command line (since it was run with 'bash' in the step above).
 # set a correct --logdir
 root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15
 # Then open a browser:
 '''http://localhost:6006'''</font>

Oleg
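For reference, a hedged sketch of the usual tensorboard invocation against a directory of event files rather than a single file (the /app/logs path is hypothetical - use whatever directory the training script writes to):

 # hedged sketch: point --logdir at the directory containing the TF event files
 tensorboard --bind_all --logdir=/app/logs   # drop --bind_all for TF 1.15
 # then open http://localhost:6006 in a browser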

Tensorflow with gpu

Mon, 12/23/2019 - 17:39

‎Notes

← Older revision | Revision as of 00:39, 24 December 2019 (35 intermediate revisions by the same user not shown)

Resulting section after this revision (added after the existing "# Test 3: Run a local script (and include a local dir) in container:" notes and the https://www.tensorflow.org/install/docker link):

==Setup walkthrough for CUDA 10.2 (Dec 2019)==

===Install CUDA===
* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].
** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01
** Will have to reboot

===Docker===
====Instructions====
'''https://www.tensorflow.org/install/docker'''

Quote:
 Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).

====Docker images====
Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/
{| class='wikitable'
!TF version
!Python major version
!GPU support
!NAME:TAG for Docker command
|-
|align='center'|1.15
|align='center'|3
|align='center'|yes
|<font color='darkgreen'>'''tensorflow/tensorflow:1.15.0-gpu-py3'''</font>
|-
|align='center'|2.0.0+
|align='center'|3
|align='center'|yes
|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''</font>
|-
|align='center'|2.0.0+
|align='center'|2
|align='center'|yes
|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu'''</font>
|}

====nvidia-docker====
Somehow it was already installed.

* Check the NVIDIA docker version:
 ~$ nvidia-docker version

* The docs are clear that Docker 19.03+ should use nvidia-docker2; older Docker versions should use nvidia-docker v1.
* It's not immediately clear about the '''nvidia-container-runtime''': nvidia-docker v1 & v2 should have already registered it.

====Notes====
* A local directory can be mounted in 'binding' mode - i.e., files updated locally are also updated inside the docker container:
 <font size='2'># this will bind-mount the directory '''target''' located in '''$(pwd)''' (the directory the command is run from)
 # to '''/app''' in the docker container

 ~$ '''docker run \'''
    '''-it \'''
    '''--rm \'''
    '''--name devtest \'''
    '''-p 0.0.0.0:6006:6006 \'''
    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''
    '''--gpus all \'''
    <font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''</font> '''\'''
    '''bash'''</font>

* How to run tensorboard from the container:
 <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]
 # From the running container's command line (since it was run with 'bash' in the step above).
 # set a correct --logdir
 root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15
 # Then open a browser:
 '''http://localhost:6006'''</font>

Oleg
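As a quick sanity check that the GPU is actually visible inside a container, something like the following should work (a hedged sketch using the image named above; tf.test.is_gpu_available() is the TF 1.x/2.0-era call and is deprecated in later releases):

 # hedged sketch: run a throwaway container and ask TensorFlow whether it sees a GPU
 docker run --gpus all --rm tensorflow/tensorflow:latest-gpu-py3 \
     python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"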

Tensorflow with gpu

Mon, 12/23/2019 - 17:02

‎nvidia-docker

← Older revision | Revision as of 00:02, 24 December 2019 (34 intermediate revisions by the same user not shown)

Intermediate state of the ==Walkthrough for CUDA 10.2 (Dec 2019)== section; the final content is in the 00:39, 24 December 2019 revision above.

Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 15:49

‎Docker images

← Older revision | Revision as of 22:49, 23 December 2019 (33 intermediate revisions by the same user not shown)

Intermediate state of the ==Walkthrough for CUDA 10.2 (Dec 2019)== section; the final content is in the 00:39, 24 December 2019 revision above.

Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 14:56

‎Notes

← Older revision | Revision as of 21:56, 23 December 2019 (31 intermediate revisions by the same user not shown)

Intermediate state of the ==Walkthrough for CUDA 10.2 (Dec 2019)== section; the final content is in the 00:39, 24 December 2019 revision above.

Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 14:20

‎Notes

← Older revision | Revision as of 21:20, 23 December 2019 (27 intermediate revisions by the same user not shown)

Intermediate state of the ==Walkthrough for CUDA 10.2 (Dec 2019)== section; the final content is in the 00:39, 24 December 2019 revision above.

Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 13:17

‎Walkthrough for CUDA 10.2 (Dec 2019)

← Older revision | Revision as of 20:17, 23 December 2019 (10 intermediate revisions by the same user not shown)

Intermediate state of the ==Walkthrough for CUDA 10.2 (Dec 2019)== section (Docker instructions, nvidia-docker check, and the first bind-mount example); the final content is in the 00:39, 24 December 2019 revision above.

Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 12:15

‎Walkthrough for CUDA 10.2 (Dec 2019)

← Older revision | Revision as of 19:15, 23 December 2019 (3 intermediate revisions by the same user not shown)

Added:

==Walkthrough for CUDA 10.2 (Dec 2019)==

===Install CUDA===
* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].
** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01
** Reboot

Oleg

Publications

Thu, 12/05/2019 - 15:01

← Older revision | Revision as of 22:01, 5 December 2019

Added a row to the publications table:

|-
|2019
|[https://www.researchgate.net/profile/Cui_Xiangbin/publication/335406867_The_conditions_of_the_formation_and_existence_of_Blue_Ice_Areas_in_the_ice_flow_transition_region_from_the_Antarctic_Ice_Sheet_to_the_Amery_Ice_Shelf_in_the_Larsemann_Hills_area/links/5d675ab5299bf11adf29bb92/The-conditions-of-the-formation-and-existence-of-Blue-Ice-Areas-in-the-ice-flow-transition-region-from-the-Antarctic-Ice-Sheet-to-the-Amery-Ice-Shelf-in-the-Larsemann-Hills-area.pdf Markov, Aleksey and Polyakov, Sergey and Sun, Bo and Lukin, Valeriy and Popov, Sergey and Yang, Huigen and Zhang, Tijun and Cui, Xiangbin and Guo, Jingxue and Cui, Penghui and others "Polar Science"]
|Used Elphel camera in experimental setup

Andrey.filippov

Publications

Tue, 11/26/2019 - 18:12

← Older revision | Revision as of 01:12, 27 November 2019 (One intermediate revision by the same user not shown)

Added rows to the publications table:

|-
|2019
|[https://documat.unirioja.es/descarga/articulo/6802185.pdf Campbell, Andrew, Alan Both, and Qian Chayn Sun. "Detecting and mapping traffic signs from Google Street View images using deep learning and GIS."]
|Elphel cameras for GSV referenced
|-
|2019
|[https://documat.unirioja.es/descarga/articulo/6802185.pdf Díaz, Hernán Porras, Duvan Yahir Sanabria Echeverry, and Johan Alexander Ortiz Ferreira. "Tendencia mundial en tecnologías de sistemas de mapeo móvil implementadas con láser."]
|Elphel Eyesis referenced

Andrey.filippov

Publications

Tue, 11/19/2019 - 14:34

← Older revision | Revision as of 21:34, 19 November 2019 (2 intermediate revisions by the same user not shown)

Added a row to the publications table:

|-
|2019
|[https://arxiv.org/pdf/1911.06975.pdf Filippov, Andrey and Dzhimiev, Oleg "Long Range 3D with Quadocular Thermal (LWIR) Camera" arXiv preprint arXiv:1911.06975 (2019).]
|Elphel publication

Changed the author format in the 2018 entry:

|[https://arxiv.org/pdf/1811.08032 Filippov, Andrey, and Dzhimiev, Oleg. "See far with TPNET: a Tile Processor and a CNN Symbiosis." arXiv preprint arXiv:1811.08032 (2018).]

Andrey.filippov

Poky migration from rocko to warrior

Thu, 10/03/2019 - 17:49

‎[SOLVED] Note 14: fixdep: Permission denied

← Older revision | Revision as of 23:49, 3 October 2019 (6 intermediate revisions by the same user not shown)

Added:

==<font color='green'>'''[SOLVED]'''</font> Note 14: fixdep: Permission denied==
* Description:
 - We've had this error for a while, probably since kernel 4.0
 - It usually happened when running do_compile_kernelmodules
 - EXTRA_OEMAKE = "-s -w '''-B''' KCFLAGS='-v'"
 - That '''-B''' forces all targets to be rebuilt, and we also have '''-j8''' (in the PARALLEL_MAKE variable) for the parallel build
  - so during the parallel build fixdep gets rebuilt several times, and at some point
    one of the targets (e.g. sortextable or kallsyms) calls it while fixdep is being compiled and overwritten for another target (probably)
  - the exec rights are correct after the fact

* Solution:
  Removed '''-B''' (see the recipe sketch below). fixdep then gets built only once and the problem is gone.

* Note:
  ~$ make -h
    ...
    -B, --always-make          Unconditionally make all targets.
    ...

Oleg
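For completeness, a sketch of the corresponding change in the kernel recipe (the exact recipe file is not named here and is an assumption; the variable itself is quoted in the note above). PARALLEL_MAKE (-j8) can stay as is; only the unconditional-rebuild flag was the problem:

 # hedged sketch: drop -B from EXTRA_OEMAKE in the kernel recipe / .bbappend
 # before:  EXTRA_OEMAKE = "-s -w -B KCFLAGS='-v'"
 # after:
 EXTRA_OEMAKE = "-s -w KCFLAGS='-v'"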

Updating nand flash driver from linux kernel 4.14 to 4.19

Thu, 09/26/2019 - 14:10

‎Updating to 4.19

← Older revision | Revision as of 20:10, 26 September 2019 (4 intermediate revisions by the same user not shown)

Removed from the chip description:
 * Subpage (or partial) writes are not supported.

Updated ==Updating to 4.19== notes:

* A lot has changed since 4.14
** new ''->exec_op'' hook vs old ''->cmdfunc''
** Xilinx tries to keep up with their driver - they might have tested this on [https://lwn.net/Articles/756725/ Micron MT29F2G08ABAEAWP (On-die capable) and AMD/Spansion S34ML01G1]
* OTP functions are not supported yet and need a minor change to make them more universal
 That minor change is to set ops.mode to MTD_OPS_RAW instead of normal. Hooks for normal ops
 can enable/disable ECC features, which will reset OTP mode (which is also a feature set)

* Old LOCK/UNLOCK functions from the pre-4.14 kernel need to be updated to the new codebase
 Those op_parser patterns.
 Also, the UNLOCK command has 2 parts: 0x23 BOUND_LO, 0x24 BOUND_HI - so there has to be some sort of delay before 0x24, or the op should be split into 2.
 The datasheet does not provide any timing info

* Set ''nand-ecc-mode = "on-die";'' in the device tree (see the check below).
 In 4.14 it used to be detected as "on-die", but not anymore.
* Add NAND_NO_SUBPAGE_WRITE to chip->options for MT29F8G08ADBDAH4
 Otherwise the driver goes for subpage program support. '''Need to test this.'''

===Driver code=== (renamed from ===Results===)
See the code here:
* [https://git.elphel.com/Elphel/linux-elphel/blob/warrior/src/arch/arm/boot/dts/elphel393-zynq-base.dtsi#L299 device tree]
  Those ''arm,nand-cycle-t0-6'' are obsolete and can be removed from the DT.
* [https://git.elphel.com/Elphel/linux-elphel/tree/d4f217cf68d8c3c9a84f680dd014b1a0de0571c8/src/drivers/mtd/nand/raw driver code]
  Didn't make a patch - the changes are for the specific MT29F8G08ADBDAH4. Although they might work for other chips as well, we have only tested the one we use.

Oleg
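A hedged sketch of verifying on the running target which ECC mode the driver actually picked (the mtd0 index is an assumption for this system):

 # hedged sketch: check driver probe messages and the exported ECC parameters
 dmesg | grep -i nand
 cat /sys/class/mtd/mtd0/ecc_strength /sys/class/mtd/mtd0/ecc_step_size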

Updating nand flash driver from linux kernel 4.14 to 4.19

Thu, 09/26/2019 - 12:47

‎Updating to 4.19

← Older revision | Revision as of 18:47, 26 September 2019 (One intermediate revision by the same user not shown)

Intermediate state of the same ==Updating to 4.19== notes; the final wording is in the 20:10, 26 September 2019 revision above.

Oleg

Updating nand flash driver from linux kernel 4.14 to 4.19

Thu, 09/26/2019 - 12:28

Created page with "==About== Below are the changes had to be made to update our nand flash driver code from kernel 4.14 to 4.19. The kernel versions mentioned are the rebases from the mainline k..."

New page

==About==
Below are the changes that had to be made to update our NAND flash driver code from kernel 4.14 to 4.19.
The kernel versions mentioned are rebases of the mainline kernel in the linux-xlnx repo with Xilinx's code:
[https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v4.14 xlnx_rebase_v4.14]
[https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v4.19 xlnx_rebase_v4.19]
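To fetch those exact trees, something like the following should work (branch names are taken from the links above):

 # fetch the Xilinx rebased kernel trees referenced above
 git clone https://github.com/Xilinx/linux-xlnx.git
 cd linux-xlnx
 git checkout xlnx_rebase_v4.19   # or xlnx_rebase_v4.14 for the older base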

==NAND FLASH chip==
Micron MT29F8G08ADBDAH4:
* ON-DIE ECC feature is supported - they call it internal ECC in the datasheet:
Internal ECC enables 5-bit detection and 4-bit error correction per 512 bytes
* Subpage (or partial) writes are not supported.

==Linux Kernel 4.14 and older==
Up to and including 4.14, our driver code additions worked unchanged. We basically needed two things:
* LOCK/UNLOCK functions (which were removed in 4.14). The code existed up to the 4.13 release, so it was just taken from there.
* A function to work with the chip's (MT29F8G08ADBDAH4) OTP area. The original code was taken from [http://webcache.googleusercontent.com/search?q=cache:-k_EdkxxQCUJ:lists.infradead.org/pipermail/linux-mtd/2013-March/045994.html+&cd=2&hl=en&ct=clnk&gl=us&client=ubuntu here] (see the OTP sketch below).
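For context, a hedged sketch of reading the chip's user OTP area from userspace with the stock mtd-utils tools once the driver supports it (the mtd0 device node is an assumption):

 # hedged sketch: inspect and dump the user OTP area via mtd-utils
 flash_otp_info -u /dev/mtd0    # list OTP regions
 flash_otp_dump -u /dev/mtd0    # hex dump of the user OTP area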

==Updating to 4.19==
===Notes===
* A lot has changed since 4.14
** new ''->exec_op'' hook vs old ''->cmdfunc''
* OTP functions are not supported yet and need a minor change to make them more universal
* Old LOCK/UNLOCK functions need to be updated to the new codebase
* Set ''nand-ecc-mode = "on-die";'' in the device tree.
* Add NAND_NO_SUBPAGE_WRITE to chip->options for MT29F8G08ADBDAH4
===Results===
See the code here:
* [https://git.elphel.com/Elphel/linux-elphel/blob/warrior/src/arch/arm/boot/dts/elphel393-zynq-base.dtsi#L299 device tree]
Those ''arm,nand-cycle-t0-6'' can be removed
* [https://git.elphel.com/Elphel/linux-elphel/tree/d4f217cf68d8c3c9a84f680dd014b1a0de0571c8/src/drivers/mtd/nand/raw driver code]
Didn't make a patch - the changes are for the specific MT29F8G08ADBDAH4. Although they might work for other chips as well, we have only tested the one we use.

Oleg

Poky migration from rocko to warrior

Tue, 09/10/2019 - 13:06

‎[SOLVED] Note 3: Entropy device hwrng

← Older revision | Revision as of 19:06, 10 September 2019 (One intermediate revision by the same user not shown)

In ==[SOLVED] Note 3: Entropy device hwrng==, changed:
 ** That lag at boot is really annoying - 5-10 seconds?!!
to:
 ** That lag at boot is really annoying - 5 secs?!!

Added to ==<font color='green'>'''[SOLVED]'''</font> Note 4: PHP causing 'unsupported FP instruction in kernel mode'==:

* More notes on debugging
 - CONFIG_DEBUG_STACK_USAGE=y
  and it reports how many bytes are left in the stack for various processes. For that particular process (php) the "bytes left" were '''4''' on successful boots and
  ~'''1028''' after a huge variable (of 1024 bytes) was moved to the heap.
 - Also there's a warning in Eclipse about the "frame size" being larger than 1024

Oleg
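A small sketch of how that stack-usage report can be pulled from a running system (assuming CONFIG_DEBUG_STACK_USAGE=y in the kernel config and sysrq available; with that option the task dump includes the free stack per process):

 # hedged sketch: trigger a task dump and look at the php entry
 echo 1 > /proc/sys/kernel/sysrq     # make sure sysrq is enabled
 echo t > /proc/sysrq-trigger        # dump task states (includes free stack) to the kernel log
 dmesg | grep -i php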

