Imaging solutions with Free Software & Open Hardware

Subscribe to Wiki Recent Changes feed
Track the most recent changes to the wiki in this feed. (MediaWiki 1.28.0)

Quad stereo tensorflow eclipse

Fri, 12/27/2019 - 14:55

← Older revision | Revision as of 21:55, 27 December 2019 (10 intermediate revisions by the same user not shown)

* Install Eclipse
* Clone and Import [https://git.elphel.com/Elphel/imagej-elphel imagej-elphel]
<font color='red'>'''NOTE: if the project is updated/pulled outside Eclipse, it might need a manual refresh'''</font>
* The TF version is pulled from pom.xml
* The trained TF model for EO sensors is auto-downloaded - [https://community.elphel.com/files/quad-stereo/ml/trained_model_v1.0.zip trained_model_v1.0.zip]
* Get some image samples, provide paths
* Before running the plugin (Eyesis_Correction), copy the ImageJ options to /home/user/.imagejs/Eyesis_Correction.xml:
<font size='1'>
 <?xml version="1.0" encoding="UTF-8"?>
 <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
 <properties>
    <comment>last updated Thu Sep 08 14:09:47 MDT 2042</comment>
    <entry key="ADVANCED_MODE">True</entry>
    <entry key="DCT_MODE">True</entry>
    <entry key="MODE_3D">False</entry>
    <entry key="GPU_MODE">True</entry>
    <entry key="LWIR_MODE">True</entry>
 </properties>
</font>
* Updated pom.xml to TF 1.15 - the package exists
* Install all 3 cuDNN packages - runtime, dev and docs. Used the docs to verify the installation - built mnistCUDNN:
 https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html
* The TF 1.15 Maven package appears to have been built for the CUDA 10.0 driver, so it complains when 10.2 is installed:
 <font size='1' color='red'>2019-12-27 13:05:15.754656: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory
 2019-12-27 13:05:15.754756: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory
 2019-12-27 13:05:15.754860: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory
 2019-12-27 13:05:15.754970: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory
 2019-12-27 13:05:15.755075: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory
 2019-12-27 13:05:15.755178: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory
 2019-12-27 13:05:15.762197: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
 2019-12-27 13:05:15.762227: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
 Skipping registering GPU devices...</font>

* TF 1.15 with CUDA 10.0 requires GPU compute capability >= 6.0; the GeForce GTX 750 Ti is 5.0:
 <font size='1' color='red'>2019-12-27 14:22:17.475717: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /home/oleg/GIT/imagej-elphel/target/classes/trained_model
 2019-12-27 14:22:17.477009: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
 2019-12-27 14:22:17.503393: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3392030000 Hz
 2019-12-27 14:22:17.504196: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f610dba1a20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
 2019-12-27 14:22:17.504235: I tensorflow/compiler/xla/service/service.cc:176]  StreamExecutor device (0): Host, Default Version
 2019-12-27 14:22:17.505378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
 2019-12-27 14:22:17.517647: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
 2019-12-27 14:22:17.518168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
 name: GeForce GTX 750 Ti major: 5 minor: 0 memoryClockRate(GHz): 1.1105
 pciBusID: 0000:01:00.0
 2019-12-27 14:22:17.518385: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
 2019-12-27 14:22:17.519624: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
 2019-12-27 14:22:17.675574: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
 2019-12-27 14:22:17.716621: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
 2019-12-27 14:22:18.160070: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
 2019-12-27 14:22:18.439862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
 2019-12-27 14:22:18.443378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
 2019-12-27 14:22:18.443483: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
 2019-12-27 14:22:18.444034: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
 2019-12-27 14:22:18.444510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Ignoring visible gpu device (device: 0, name: GeForce GTX 750 Ti, pci bus id: 0000:01:00.0, compute capability: 5.0) with
 '''Cuda compute capability 5.0. The minimum required Cuda capability is 6.0.'''
 2019-12-27 14:22:18.498425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
 2019-12-27 14:22:18.498455: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
 2019-12-27 14:22:18.498463: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:  N
 2019-12-27 14:22:18.504855: I tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
 2019-12-27 14:22:18.528948: I tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /home/oleg/GIT/imagej-elphel/target/classes/trained_model
 2019-12-27 14:22:18.581034: I tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 1105321 microseconds.</font>

* So, <font color='darkgreen'>'''TF 1.15 + CUDA 10.0 might work with a GeForce GTX 1080 Ti (compute capability 6.1)'''</font>

Oleg
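The conclusion above reduces to a simple capability check. A minimal sketch (the helper name and the GPU table are ours, not part of the plugin; the capability values are taken from the log and NVIDIA's published specs):

```python
# Per the gpu_device.cc message in the log above, this TF 1.15 GPU build
# ignores devices below CUDA compute capability 6.0.
MIN_COMPUTE_CAPABILITY = 6.0

# Capabilities for the two cards discussed above (from the log / NVIDIA specs).
KNOWN_GPUS = {
    "GeForce GTX 750 Ti": 5.0,
    "GeForce GTX 1080 Ti": 6.1,
}

def gpu_usable(compute_capability: float) -> bool:
    """True if TF would register the device instead of ignoring it."""
    return compute_capability >= MIN_COMPUTE_CAPABILITY
```

So the GTX 750 Ti fails the check, matching the "Ignoring visible gpu device" log line, while the GTX 1080 Ti passes.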

Quad stereo tensorflow eclipse

Fri, 12/27/2019 - 11:57

Created page with "==ImageJ plugin== * Install Eclipse * Clone and Import [https://git.elphel.com/Elphel/imagej-elphel imagej-elphel] <font color='tomato'>'''Note: if the project is updated/pull..."

New page

==ImageJ plugin==
* Install Eclipse
* Clone and Import [https://git.elphel.com/Elphel/imagej-elphel imagej-elphel]
<font color='tomato'>'''Note: if the project is updated/pulled outside Eclipse - need manual refresh'''</font> Oleg
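Later revisions of this page (above) add a /home/user/.imagejs/Eyesis_Correction.xml options file in Java-properties XML format. A hedged sketch of generating such a file (the function name and comment text are ours; the entry keys are from the wiki text):

```python
# Sketch, not part of imagej-elphel: build a Java-properties style XML
# options file like the Eyesis_Correction.xml shown above. For a file that
# Java's Properties.loadFromXML() accepts, prepend the XML declaration and
# the properties.dtd DOCTYPE line quoted in the wiki text.
import xml.etree.ElementTree as ET

def build_options(entries: dict) -> str:
    props = ET.Element("properties")
    ET.SubElement(props, "comment").text = "generated options"
    for key, value in entries.items():
        ET.SubElement(props, "entry", {"key": key}).text = value
    return ET.tostring(props, encoding="unicode")

options_xml = build_options({
    "ADVANCED_MODE": "True",
    "DCT_MODE": "True",
    "MODE_3D": "False",
    "GPU_MODE": "True",
    "LWIR_MODE": "True",
})
```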

Tensorflow with gpu

Mon, 12/23/2019 - 17:39

‎Notes

← Older revision | Revision as of 00:39, 24 December 2019 (2 intermediate revisions by the same user not shown)

Changes in this revision:
* Section renamed: ==Walkthrough for CUDA 10.2 (Dec 2019)== → ==Setup walkthrough for CUDA 10.2 (Dec 2019)==
* Clarified the '''nvidia-container-runtime''' note: "nvidia-docker v1 & v2 already register it" → "nvidia-docker v1 & v2 should have already registered it"
* In the tensorboard instructions, added a reminder to set a correct --logdir:
 <font size='2'># From the running container's command line (since it was run with 'bash' in the step above).
 # set a correct --logdir
 root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15
 # Then open a browser:
 '''http://localhost:6006'''</font>

Oleg
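The --bind_all caveat above depends only on the TF major version. A small sketch (the function name is ours, not part of any tool):

```python
# Sketch: build the tensorboard command line for a given TF version.
# Per the note above, TF 1.15's tensorboard does not take --bind_all.
def tensorboard_cmd(tf_version: str, logdir: str) -> str:
    major = int(tf_version.split(".")[0])
    cmd = ["tensorboard"]
    if major >= 2:
        cmd.append("--bind_all")  # remove --bind_all for TF 1.15
    cmd.append("--logdir=" + logdir)
    return " ".join(cmd)
```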

Tensorflow with gpu

Mon, 12/23/2019 - 17:39

‎Notes

← Older revision | Revision as of 00:39, 24 December 2019 (35 intermediate revisions by the same user not shown)

The following section was added:

==Setup walkthrough for CUDA 10.2 (Dec 2019)==

===Install CUDA===
* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].
** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01
** Will have to reboot

===Docker===
====Instructions====
'''https://www.tensorflow.org/install/docker'''

Quote:
 Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).

====Docker images====
Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/
{| class='wikitable'
!TF version
!Python major version
!GPU support
!NAME:TAG for Docker command
|-
|align='center'|1.15
|align='center'|3
|align='center'|yes
|<font color='darkgreen'>'''tensorflow/tensorflow:1.15.0-gpu-py3'''</font>
|-
|align='center'|2.0.0+
|align='center'|3
|align='center'|yes
|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''</font>
|-
|align='center'|2.0.0+
|align='center'|2
|align='center'|yes
|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu'''</font>
|}

====nvidia-docker====
Somehow it was already installed.

* Check the NVIDIA docker version:
 ~$ nvidia-docker version

* The docs are clear that Docker 19.03+ should use nvidia-docker2; older Docker versions should use nvidia-docker v1.
* It's not immediately clear about the '''nvidia-container-runtime'''; nvidia-docker v1 & v2 should have already registered it.

====Notes====
* A local directory can be mounted in 'binding' mode - i.e., files updated locally are updated in the Docker container as well:
 <font size='2'># this bind-mounts the directory '''target''' located in '''$(pwd)''' (the dir the command is run from)
 # to '''/app''' in the Docker container
 
 ~$ '''docker run \'''
    '''-it \'''
    '''--rm \'''
    '''--name devtest \'''
    '''-p 0.0.0.0:6006:6006 \'''
    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''
    '''--gpus all \'''
    <font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''</font>''' \'''
    '''bash'''</font>

* How to run tensorboard from the container:
 <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]
 # From the running container's command line (since it was run with 'bash' in the step above).
 # set a correct --logdir
 root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15
 # Then open a browser:
 '''http://localhost:6006'''</font>

Oleg
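The image table above maps directly to a lookup. A sketch (the dict and helper name are ours; the tags are from the table):

```python
# Sketch: pick the Docker NAME:TAG for a GPU-enabled TensorFlow image,
# keyed by the (TF version, Python major version) pairs from the table above.
IMAGE_TAGS = {
    ("1.15", 3): "tensorflow/tensorflow:1.15.0-gpu-py3",
    ("2.0.0+", 3): "tensorflow/tensorflow:latest-gpu-py3",
    ("2.0.0+", 2): "tensorflow/tensorflow:latest-gpu",
}

def gpu_image(tf_version: str, py_major: int) -> str:
    return IMAGE_TAGS[(tf_version, py_major)]
```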

Tensorflow with gpu

Mon, 12/23/2019 - 17:39

‎Notes

← Older revision Revision as of 00:39, 24 December 2019 (36 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in contatiner:   # Test 3: Run a local script (and include a local dir) in contatiner:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Setup walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Will have to reboot  +  +===Docker===  +====Instructions====  +'''https://www.tensorflow.org/install/docker'''  +  +Quote:  + Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).  +  +====Docker images====  +Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/:  +{| class='wikitable'  +!TF version  +!Python major version  +!GPU support  +!NAME:TAG for Docker command  +|-  +|align='center'|1.15  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:1.15.0-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|2  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu'''  +|}  +  +====nvidia-docker====  +Somehow it was already installed.  +  +* Check NVIDIA docker version  + ~$ nvidia-docker version  +  +* In the docs it's clear that Docker version 19.03+ should use nvidia-docker2. For Docker of older versions - nvidia-docker v1 should be used.  +* It's not immediately clear about the '''nvidia-container-runtime'''. 
nvidia-docker v1 & v2 should have already registered it.  +  +====Notes====  +* Can mount a local directory in a 'binding' mode - i.e., update files locally so they are updated in the docker container as well:  + <font size='2'># this will bind-mount directory '''target''' located in '''$(pwd)''', which is a dir the command is run from  + # to '''/app''' in the docker container  +  + ~$ '''docker run \'''  +    '''-it \'''  +    '''--rm \'''  +    '''--name devtest \'''  +    '''-p 0.0.0.0:6006:6006 \'''  +    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''  +    '''--gpus all \'''  +    <font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3</font> \'''  +    '''bash'''</font>  +  +* How to run tensorboard from the container:  + <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]  + # From the running container's command line (since it was run with 'bash' in the step above).  + # set a correct --logdir  + root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15  + # Then open a browser:  + '''http://localhost:6006'''</font> Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 17:02

‎nvidia-docker

← Older revision Revision as of 00:02, 24 December 2019 (34 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in contatiner:   # Test 3: Run a local script (and include a local dir) in contatiner:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Will have to reboot  +  +===Docker===  +====Instructions====  +'''https://www.tensorflow.org/install/docker'''  +  +Quote:  + Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).  +  +====Docker images====  +Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/:  +{| class='wikitable'  +!TF version  +!Python major version  +!GPU support  +!NAME:TAG for Docker command  +|-  +|align='center'|1.15  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:1.15.0-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|2  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu'''  +|}  +  +====nvidia-docker====  +Somehow it was already installed.  +  +* Check NVIDIA docker version  + ~$ nvidia-docker version  +  +* In the docs it's clear that Docker version 19.03+ should use nvidia-docker2. For Docker of older versions - nvidia-docker v1 should be used.  +* It's not immediately clear about the '''nvidia-container-runtime'''. 
nvidia-docker v1 & v2 should have already registered it.  +  +====Notes====  +* Can mount a local directory in a 'binding' mode - i.e., update files locally so they are updated in the docker container as well:  + <font size='2'># this will bind-mount directory '''target''' located in '''$(pwd)''', which is a dir the command is run from  + # to '''/app''' in the docker container  +  + ~$ '''docker run \'''  +    '''-it \'''  +    '''--rm \'''  +    '''--name devtest \'''  +    '''-p 0.0.0.0:6006:6006 \'''  +    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''  +    '''--gpus all \'''  +    <font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3</font> \'''  +    '''bash'''</font>  +  +* How to run tensorboard from the container:  + <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]  + # From the running container's command line (since it was run with 'bash' in the step above):  + root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15  + # Then open a browser:  + '''http://localhost:6006'''</font> Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 15:49

‎Docker images

← Older revision Revision as of 22:49, 23 December 2019 (33 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in contatiner:   # Test 3: Run a local script (and include a local dir) in contatiner:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Will have to reboot  +  +===Docker===  +====Instructions====  +'''https://www.tensorflow.org/install/docker'''  +  +Quote:  + Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).  +  +====Docker images====  +Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/:  +{| class='wikitable'  +!TF version  +!Python major version  +!GPU support  +!NAME:TAG for Docker command  +|-  +|align='center'|1.15  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:1.15.0-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|2  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu'''  +|}  +  +====nvidia-docker====  +Somehow it was already installed.  +  +* Check NVIDIA docker version  + ~$ nvidia-docker version  +  +* In the docs it's clear that Docker version 19.03+ should use nvidia-docker2. For Docker of older versions - nvidia-docker v1 should be used.  +* It's not immediately clear about the '''nvidia-container-runtime'''. 
nvidia-docker v1 & v2 already register it.  +  +====Notes====  +* Can mount a local directory in a 'binding' mode - i.e., update files locally so they are updated in the docker container as well:  + <font size='2'># this will bind-mount directory '''target''' located in '''$(pwd)''', which is a dir the command is run from  + # to '''/app''' in the docker container  +  + ~$ '''docker run \'''  +    '''-it \'''  +    '''--rm \'''  +    '''--name devtest \'''  +    '''-p 0.0.0.0:6006:6006 \'''  +    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''  +    '''--gpus all \'''  +    <font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3</font> \'''  +    '''bash'''</font>  +  +* How to run tensorboard from the container:  + <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]  + # From the running container's command line (since it was run with 'bash' in the step above):  + root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  # remove --bind_all for TF 1.15  + # Then open a browser:  + '''http://localhost:6006'''</font> Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 14:56

‎Notes

← Older revision Revision as of 21:56, 23 December 2019 (31 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in container:   # Test 3: Run a local script (and include a local dir) in container:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Will have to reboot  +  +===Docker===  +====Instructions====  +'''https://www.tensorflow.org/install/docker'''  +  +Quote:  + Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).  +  +====Docker images====  +Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/:  +{| class='wikitable'  +!TF version  +!Python major version  +!GPU support  +!TAG for Docker command  +|-  +|align='center'|1.15  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:1.15.0-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|3  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3'''  +|-  +|align='center'|2.0.0+  +|align='center'|2  +|align='center'|yes  +|<font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu'''  +|}  +  +====nvidia-docker====  +Somehow it was already installed.  +  +* Check NVIDIA docker version  + ~$ nvidia-docker version  +  +* In the docs it's clear that Docker 19.03+ should use nvidia-docker2; older Docker versions should use nvidia-docker v1.  +* The role of the '''nvidia-container-runtime''' is not immediately clear. nvidia-docker v1 & v2 already register it.  
+  +====Notes====  +* Can mount a local directory in 'bind' mode - i.e., files updated locally are updated in the docker container as well:  + <font size='2'># this will bind-mount directory '''target''' located in '''$(pwd)''', which is the directory the command is run from  + # to '''/app''' in the docker container  +  + ~$ '''docker run \'''  +    '''-it \'''  +    '''--rm \'''  +    '''--name devtest \'''  +    '''-p 0.0.0.0:6006:6006 \'''  +    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''  +    '''--gpus all \'''  +    <font color='darkgreen'>'''tensorflow/tensorflow:latest-gpu-py3</font> \'''  +    '''bash'''</font>  +  +* How to run tensorboard from the container:  + <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]  + # From the running container's command line (since it was run with 'bash' in the step above):  + root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  + # Then open a browser:  + '''http://localhost:6006'''</font> Oleg
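The nvidia-docker note above (Docker 19.03+ uses the built-in `--gpus` support registered by nvidia-docker2, while older Docker needs the nvidia-docker v1 wrapper) can be sketched as a simple version check. This is a sketch only: the `gpu_run_prefix` helper and the fallback version string are hypothetical, not part of the wiki setup.

```shell
# Sketch: pick the GPU run command based on the Docker server version.
# gpu_run_prefix is a hypothetical helper, not part of the wiki setup.
gpu_run_prefix() {
  ver=$1
  major=${ver%%.*}      # e.g. 19 from 19.03.5
  rest=${ver#*.}
  minor=${rest%%.*}     # e.g. 03 from 19.03.5
  if [ "$major" -gt 19 ] || { [ "$major" -eq 19 ] && [ "$minor" -ge 3 ]; }; then
    echo "docker run --gpus all"   # Docker 19.03+: native GPU flag
  else
    echo "nvidia-docker run"       # older Docker: nvidia-docker v1 wrapper
  fi
}

# Query the real server version if the docker CLI is available,
# otherwise fall back to a placeholder so the sketch still runs:
gpu_run_prefix "$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 19.03.0)"
```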

Tensorflow with gpu

Mon, 12/23/2019 - 14:20

‎Notes

← Older revision Revision as of 21:20, 23 December 2019 (27 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in container:   # Test 3: Run a local script (and include a local dir) in container:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Will have to reboot  +  +===Docker===  +====Instructions====  +'''https://www.tensorflow.org/install/docker'''  +  +Quote:  + Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).  +  +====Docker images====  +Where to browse: https://hub.docker.com/r/tensorflow/tensorflow/  +* tag for python2 + gpu: '''tensorflow/tensorflow:latest-gpu'''  +* tag for python3 + gpu: '''tensorflow/tensorflow:latest-gpu-py3'''  +  +====nvidia-docker====  +Somehow it was already installed.  +  +* Check NVIDIA docker version  + ~$ nvidia-docker version  +  +* In the docs it's clear that Docker 19.03+ should use nvidia-docker2; older Docker versions should use nvidia-docker v1.  +* The role of the '''nvidia-container-runtime''' is not immediately clear. nvidia-docker v1 & v2 already register it.  
+  +====Notes====  +* Can mount a local directory in 'bind' mode - i.e., files updated locally are updated in the docker container as well:  + <font size='2'># this will bind-mount directory '''target''' located in '''$(pwd)''', which is the directory the command is run from  + # to '''/app''' in the docker container  +  + ~$ '''docker run \'''  +    '''-it \'''  +    '''--rm \'''  +    '''--name devtest \'''  +    '''-p 0.0.0.0:6006:6006 \'''  +    '''--mount type=bind,source="$(pwd)"/target,target=/app \'''  +    '''--gpus all \'''  +    '''tensorflow/tensorflow:latest-gpu-py3 \'''  +    '''bash'''</font>  +  +* How to run tensorboard from the container:  + <font size='2'># from [https://briancaffey.github.io/2017/11/20/using-tensorflow-and-tensor-board-with-docker.html here]  + # From the running container's command line (since it was run with 'bash' in the step above):  + root@e9efee9e3fd3:/# '''tensorboard --bind_all --logdir=/app/log.txt'''  + # Then open a browser:  + '''http://localhost:6006'''</font> Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 13:17

‎Walkthrough for CUDA 10.2 (Dec 2019)

← Older revision Revision as of 20:17, 23 December 2019 (10 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in container:   # Test 3: Run a local script (and include a local dir) in container:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Reboot  +  +===Docker===  +  +====Instructions====  +https://www.tensorflow.org/install/docker  +  +====nvidia-docker====  +* Check NVIDIA docker version  + ~$ nvidia-docker version  +  +In the docs it's clear that Docker 19.03+ should use nvidia-docker2; older Docker versions should use nvidia-docker v1.  +  +It's not immediately clear what the '''nvidia-container-runtime''' is for. Is it automatically installed with the nvidia-docker packages?  +  +====Notes====  +* Can mount a local directory in 'bind' mode - i.e., files updated locally are updated in the docker container as well:  + <font size='3'># this will bind-mount directory '''target''' located in '''$(pwd)''', which is the directory the command is run from  + # to '''/app''' in the docker container  + docker run \  +    --gpus all \  +    --name sometest \  +    --mount type=bind,source="$(pwd)"/target,target=/app \  +    -it \  +    tensorflow/tensorflow:latest-gpu \  +    bash</font> Oleg

Tensorflow with gpu

Mon, 12/23/2019 - 12:15

‎Walkthrough for CUDA 10.2 (Dec 2019)

← Older revision Revision as of 19:15, 23 December 2019 (3 intermediate revisions by the same user not shown)Line 175: Line 175:    # Test 3: Run a local script (and include a local dir) in container:   # Test 3: Run a local script (and include a local dir) in container:    https://www.tensorflow.org/install/docker   https://www.tensorflow.org/install/docker  +  +  +==Walkthrough for CUDA 10.2 (Dec 2019)==  +  +===Install CUDA===  +* In this [https://www.tensorflow.org/install/gpu guide] there's a [https://developer.nvidia.com/cuda-toolkit-archive link to the CUDA toolkit].  +** The toolkit (CUDA Toolkit 10.2) also updated the system driver to 440.33.01  +** Reboot Oleg

Publications

Thu, 12/05/2019 - 15:01

← Older revision Revision as of 22:01, 5 December 2019 Line 9: Line 9:  |[https://arxiv.org/pdf/1911.06975.pdf Filippov, Andrey and Dzhimiev, Oleg "Long Range 3D with Quadocular Thermal (LWIR) Camera" arXiv preprint arXiv:1911.06975 (2019).] |[https://arxiv.org/pdf/1911.06975.pdf Filippov, Andrey and Dzhimiev, Oleg "Long Range 3D with Quadocular Thermal (LWIR) Camera" arXiv preprint arXiv:1911.06975 (2019).]  |Elphel publication |Elphel publication  +|-  +|2019  +|[https://www.researchgate.net/profile/Cui_Xiangbin/publication/335406867_The_conditions_of_the_formation_and_existence_of_Blue_Ice_Areas_in_the_ice_flow_transition_region_from_the_Antarctic_Ice_Sheet_to_the_Amery_Ice_Shelf_in_the_Larsemann_Hills_area/links/5d675ab5299bf11adf29bb92/The-conditions-of-the-formation-and-existence-of-Blue-Ice-Areas-in-the-ice-flow-transition-region-from-the-Antarctic-Ice-Sheet-to-the-Amery-Ice-Shelf-in-the-Larsemann-Hills-area.pdf Markov, Aleksey and Polyakov, Sergey and Sun, Bo and Lukin, Valeriy and Popov, Sergey and Yang, Huigen and Zhang, Tijun and Cui, Xiangbin and Guo, Jingxue and Cui, Penghui and others "Polar Science"]  +|Used Elphel camera in experimental setup  +  |- |-  |2019 |2019  |[https://documat.unirioja.es/descarga/articulo/6802185.pdf Campbell, Andrew, Alan Both, and Qian Chayn Sun. "Detecting and mapping traffic signs from Google Street View images using deep learning and GIS."] |[https://documat.unirioja.es/descarga/articulo/6802185.pdf Campbell, Andrew, Alan Both, and Qian Chayn Sun. "Detecting and mapping traffic signs from Google Street View images using deep learning and GIS."]  |Elphel cameras for GSV referenced   |Elphel cameras for GSV referenced    +  |- |-  |2019 |2019 Andrey.filippov

Publications

Tue, 11/26/2019 - 18:12

← Older revision Revision as of 01:12, 27 November 2019 (One intermediate revision by the same user not shown)Line 9: Line 9:  |[https://arxiv.org/pdf/1911.06975.pdf Filippov, Andrey and Dzhimiev, Oleg "Long Range 3D with Quadocular Thermal (LWIR) Camera" arXiv preprint arXiv:1911.06975 (2019).] |[https://arxiv.org/pdf/1911.06975.pdf Filippov, Andrey and Dzhimiev, Oleg "Long Range 3D with Quadocular Thermal (LWIR) Camera" arXiv preprint arXiv:1911.06975 (2019).]  |Elphel publication |Elphel publication  +|-  +|2019  +|[https://documat.unirioja.es/descarga/articulo/6802185.pdf Campbell, Andrew, Alan Both, and Qian Chayn Sun. "Detecting and mapping traffic signs from Google Street View images using deep learning and GIS."]  +|Elphel cameras for GSV referenced  +|-  +|2019  +|[https://documat.unirioja.es/descarga/articulo/6802185.pdf Díaz, Hernán Porras, Duvan Yahir Sanabria Echeverry, and Johan Alexander Ortiz Ferreira. "Tendencia mundial en tecnologías de sistemas de mapeo móvil implementadas con láser."]  +|Elphel Eyesis referenced  |- |-  |2019 |2019 Andrey.filippov

Publications

Tue, 11/19/2019 - 14:34

← Older revision Revision as of 21:34, 19 November 2019 (2 intermediate revisions by the same user not shown)Line 5: Line 5:  ! Citation/link ! Citation/link  ! Comments ! Comments  +|-  +|2019  +|[https://arxiv.org/pdf/1911.06975.pdf Filippov, Andrey and Dzhimiev, Oleg "Long Range 3D with Quadocular Thermal (LWIR) Camera" arXiv preprint arXiv:1911.06975 (2019).]  +|Elphel publication  |- |-  |2019 |2019 Line 19: Line 23:  |- |-  |2018 |2018 −|[https://arxiv.org/pdf/1811.08032 Filippov, Andrey, and Oleg Dzhimiev. "See far with TPNET: a Tile Processor and a CNN Symbiosis." arXiv preprint arXiv:1811.08032 (2018).]+|[https://arxiv.org/pdf/1811.08032 Filippov, Andrey, and Dzhimiev, Oleg. "See far with TPNET: a Tile Processor and a CNN Symbiosis." arXiv preprint arXiv:1811.08032 (2018).]  |Elphel publication |Elphel publication  |- |- Andrey.filippov

Poky migration from rocko to warrior

Thu, 10/03/2019 - 17:49

‎[SOLVED] Note 14: fixdep: Permission denied

← Older revision Revision as of 23:49, 3 October 2019 (6 intermediate revisions by the same user not shown)Line 260: Line 260:    ...   ...    ---[ end Kernel panic - not syncing: Fatal exception in interrupt   ---[ end Kernel panic - not syncing: Fatal exception in interrupt  +  +==<font color='green'>'''[SOLVED]'''</font> Note 14: fixdep: Permission denied==  +* Description:  + - We've had this error for a while, probably since kernel 4.0  + - usually happened when running do_compile_kernelmodules  + - EXTRA_OEMAKE = "-s -w '''-B''' KCFLAGS='-v'"  + - That '''-B''' forces a rebuild of all targets, and we also have '''-j8''' (in the PARALLEL_MAKE variable) for the parallel build  +  - so during the parallel build fixdep gets rebuilt several times, and at some point  +    one of the targets (e.g. sortextable or kallsyms) probably calls it while fixdep is being recompiled and overwritten for another target  +  - the exec rights are correct after the fact  +  +* Solution:  +  Removed '''-B'''. It makes fixdep build only once, and the problem is gone.  +  +* Note:  +  ~$ make -h  +    ...  +    -B, --always-make          Unconditionally make all targets.  +    ... Oleg
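To see why dropping '''-B''' helps, here is a toy sketch (the Makefile and paths are illustrative, not from the Poky build) of what `--always-make` does: without it an up-to-date target is skipped, while with it every target is rebuilt unconditionally - which, under `-j8`, is what let fixdep be overwritten while another target was calling it.

```shell
# Toy demonstration of make -B (GNU make assumed; the Makefile is illustrative).
demo=$(mktemp -d)
cd "$demo"
printf 'out: src\n\tcp src out\n' > Makefile
touch src

make        # first run: executes "cp src out"
make        # second run: nothing to do - "make: 'out' is up to date."
make -B     # rebuilds "out" unconditionally, even though nothing changed
```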

Updating nand flash driver from linux kernel 4.14 to 4.19

Thu, 09/26/2019 - 14:10

‎Updating to 4.19

← Older revision Revision as of 20:10, 26 September 2019 (4 intermediate revisions by the same user not shown)Line 9: Line 9:  * ON-DIE ECC feature supported - they call it internal ECC in the datasheet: * ON-DIE ECC feature supported - they call it internal ECC in the datasheet:    Internal ECC enables 5-bit detection and 4-bit error correction in 512 bytes   Internal ECC enables 5-bit detection and 4-bit error correction in 512 bytes −* Subpage (or partial) writes are not supported.      ==Linux Kernel 4.14 and older== ==Linux Kernel 4.14 and older== Line 20: Line 19:  * A lot has changed since 4.14 * A lot has changed since 4.14  ** new ''->exec_op'' hook vs old ''->cmdfunc'' ** new ''->exec_op'' hook vs old ''->cmdfunc''  +** Xilinx tries to keep up with their driver - they might have tested this on [https://lwn.net/Articles/756725/ Micron MT29F2G08ABAEAWP (On-die capable) and AMD/Spansion S34ML01G1]  * OTP functions are not supported yet, and need a minor change to make them more universal * OTP functions are not supported yet, and need a minor change to make them more universal −* Old LOCK/UNLOCK function need to be updated to the new codebase+ That minor change is to set ops.mode to MTD_OPS_RAW instead of normal. Hooks for normal ops −* Set ''nand-ecc-mode = "on-die";'' in the device tree.+ can enable/disable ECC features, which will reset OTP mode (which is also a feature set)  +   +* Old LOCK/UNLOCK functions from pre-4.14 kernels need to be updated to the new codebase  + This means updating the op_parser patterns.  + Also, the UNLOCK command has 2 parts: 0x23 BOUND_LO, 0x24 BOUND_HI - so before 0x24 there has to be some sort of delay, or the op should be split in 2.  + The datasheet does not provide any timing info  +   +* Set ''nand-ecc-mode = "on-die";'' in the device tree.  + In 4.14 it used to be detected as "on-die", but not anymore.  
* Add NAND_NO_SUBPAGE_WRITE to chip->options for MT29F8G08ADBDAH4 * Add NAND_NO_SUBPAGE_WRITE to chip->options for MT29F8G08ADBDAH4 −===Results===+ Otherwise the driver goes for subpage program support. '''Need to test this.'''  +   +===Driver code===  See the code here: See the code here:  * [https://git.elphel.com/Elphel/linux-elphel/blob/warrior/src/arch/arm/boot/dts/elphel393-zynq-base.dtsi#L299 device tree] * [https://git.elphel.com/Elphel/linux-elphel/blob/warrior/src/arch/arm/boot/dts/elphel393-zynq-base.dtsi#L299 device tree] −  Those ''arm,nand-cycle-t0-6'' can be removed+  Those ''arm,nand-cycle-t0-6'' are obsolete and can be removed from the DT.  * [https://git.elphel.com/Elphel/linux-elphel/tree/d4f217cf68d8c3c9a84f680dd014b1a0de0571c8/src/drivers/mtd/nand/raw driver code] * [https://git.elphel.com/Elphel/linux-elphel/tree/d4f217cf68d8c3c9a84f680dd014b1a0de0571c8/src/drivers/mtd/nand/raw driver code] −  Didn't make a patch - the changes are for a specific MT29F8G08ADBDAH4. Although they might work for others as well we have only tested the chip we use.+  Didn't make a patch - the changes are specific to the MT29F8G08ADBDAH4. Although they might work for other chips as well, we have only tested the one we use. Oleg
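For reference, the ''nand-ecc-mode = "on-die";'' step looks like the device-tree fragment below. This is a sketch only - the node names here are hypothetical; the actual nodes are in the linked elphel393-zynq-base.dtsi.

```dts
/* Sketch only: node names (nand-controller, nand@0) are hypothetical -
   see the linked elphel393-zynq-base.dtsi for the actual nodes. */
nand-controller {
    nand@0 {
        nand-ecc-mode = "on-die";   /* no longer auto-detected after 4.14 */
    };
};
```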

Updating nand flash driver from linux kernel 4.14 to 4.19

Thu, 09/26/2019 - 12:47

‎Updating to 4.19

← Older revision Revision as of 18:47, 26 September 2019 (One intermediate revision by the same user not shown)Line 20: Line 20:  * A lot has changed since 4.14 * A lot has changed since 4.14  ** new ''->exec_op'' hook vs old ''->cmdfunc'' ** new ''->exec_op'' hook vs old ''->cmdfunc''  +** Xilinx tries to keep up with their driver  * OTP functions are not supported yet, and need a minor change to make them more universal * OTP functions are not supported yet, and need a minor change to make them more universal −* Old LOCK/UNLOCK function need to be updated to the new codebase+ That minor change is to set ops.mode to MTD_OPS_RAW instead of normal. Hooks for normal ops −* Set ''nand-ecc-mode = "on-die";'' in the device tree.+ can enable/disable ECC features, which will reset OTP mode (which is also a feature set)  +   +* Old LOCK/UNLOCK functions from pre-4.14 kernels need to be updated to the new codebase  + This means updating the op_parser patterns.  + Also, the UNLOCK command has 2 parts: 0x23 BOUND_LO, 0x24 BOUND_HI - so before 0x24 there has to be some sort of delay, or the op should be split in 2.  + The datasheet does not provide any timing info  +   +* Set ''nand-ecc-mode = "on-die";'' in the device tree.  + In 4.14 it used to be detected as "on-die", but not anymore.  * Add NAND_NO_SUBPAGE_WRITE to chip->options for MT29F8G08ADBDAH4 * Add NAND_NO_SUBPAGE_WRITE to chip->options for MT29F8G08ADBDAH4 −===Results===+ Otherwise the driver goes for subpage program support  +   +===Driver code===  See the code here: See the code here:  * [https://git.elphel.com/Elphel/linux-elphel/blob/warrior/src/arch/arm/boot/dts/elphel393-zynq-base.dtsi#L299 device tree] * [https://git.elphel.com/Elphel/linux-elphel/blob/warrior/src/arch/arm/boot/dts/elphel393-zynq-base.dtsi#L299 device tree] −  Those ''arm,nand-cycle-t0-6'' can be removed+  Those ''arm,nand-cycle-t0-6'' are obsolete and can be removed from the DT.  
* [https://git.elphel.com/Elphel/linux-elphel/tree/d4f217cf68d8c3c9a84f680dd014b1a0de0571c8/src/drivers/mtd/nand/raw driver code] * [https://git.elphel.com/Elphel/linux-elphel/tree/d4f217cf68d8c3c9a84f680dd014b1a0de0571c8/src/drivers/mtd/nand/raw driver code] −  Didn't make a patch - the changes are for a specific MT29F8G08ADBDAH4. Although they might work for others as well we have only tested the chip we use.+  Didn't make a patch - the changes are specific to the MT29F8G08ADBDAH4. Although they might work for other chips as well, we have only tested the one we use. Oleg
