I tried `opencv-python-headless` instead of `opencv-python`, but now I'm getting a TensorFlow error:

2021-12-07 11:16:32.167 An update to the [server] config option section was detected. To have these changes be reflected, please restart streamlit.

2021-12-07 11:16:32.365736: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/appuser/venv/lib/python3.7/site-packages/cv2/../../lib64:

2021-12-07 11:16:32.365819: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

2021-12-07 11:16:34.718615: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/appuser/venv/lib/python3.7/site-packages/cv2/../../lib64:

2021-12-07 11:16:34.718691: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)

2021-12-07 11:16:34.718718: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (localhost): /proc/driver/nvidia/version does not exist

2021-12-07 11:16:34.718996: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA

To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

I want to install these dependencies:

import cv2
import numpy as np
import streamlit as st
import tensorflow as tf
from run import tiff_call
from skimage import io as io_
from tensorflow.keras import backend as K

There is some problem with OpenCV. I tried the suggested workaround of using opencv-python-headless instead, but I'm still facing the problem.

In the requirements.txt file I have specified:

tensorflow==2.7.0
streamlit==0.82.0
numpy==1.19.5
scikit-image==0.19.0
opencv-python-headless==4.5.4.60

Please suggest how I can resolve this issue.

Hi @Hrushi, welcome to the Streamlit community!

I suspect you need to switch your dependency to tensorflow-cpu, since no GPU or graphics drivers are present on Streamlit Cloud.
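Concretely, your requirements.txt would look something like this (same pinned versions as in your post, just swapping the TensorFlow package):

tensorflow-cpu==2.7.0
streamlit==0.82.0
numpy==1.19.5
scikit-image==0.19.0
opencv-python-headless==4.5.4.60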

Best,
Randy

I tried it with tensorflow-cpu, but now it gives this error:

[manager] Python dependencies were installed from /app/brain-tumor-detection/requirements.txt using pip.

  Stopping...

2021-12-08 06:29:27.571 An update to the [server] config option section was detected. To have these changes be reflected, please restart streamlit.

2021-12-08 06:29:30.375884: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA

To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

[client] Preparing system...

[client] Spinning up manager process...

I waited for a long time, but it still didn't give any updates. I believe the problem is not with TensorFlow but with OpenCV. Please check it and let me know.

Hi @Hrushi :wave:

Those messages in your log are benign warnings. All they’re saying is that TensorFlow is unable to find certain libraries and drivers that are necessary for TensorFlow to run on a GPU.

Ignore above cudart dlerror if you do not have a GPU set up on your machine.

You can safely ignore these messages. Both TensorFlow and OpenCV have been successfully installed. I verified this by forking your repo. There's at least one error about an incorrect file path passed to load_and_prep_image(), but it's unrelated to this TensorFlow/OpenCV non-issue. :grinning_face_with_smiling_eyes:
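As an aside, if you'd rather keep those GPU-related warnings out of your logs, one common (entirely optional) approach is to set the TF_CPP_MIN_LOG_LEVEL environment variable before importing TensorFlow. A minimal sketch:

import os

# "2" hides TensorFlow's INFO and WARNING messages (including the missing-CUDA-library
# warnings above); "3" would also hide errors. Must be set before TensorFlow is imported.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import tensorflow as tf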

Best,
Snehan


Streamlit doesn't support .tiff files, so I explicitly convert the .tiff files to .jpg (or another suitable format). I take these .jpg files and then find the original .tiff file in the dataset directory, since the file name stays intact.
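Roughly, the lookup works like this (a simplified sketch; the helper name is just illustrative and the actual directory layout in my app.py differs a bit):

import os

DATASET_DIR = "lgg-mri-segmentation/kaggle_3m"  # folder holding the original .tiff scans

def find_original_tiff(uploaded_jpg_name):
    # The converted .jpg keeps the same base name as the original .tif,
    # so strip the extension and build the path back into the dataset directory.
    stem, _ = os.path.splitext(uploaded_jpg_name)
    return os.path.join(DATASET_DIR, stem + ".tif")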

This approach works well on my local machine. The error to highlight here, I guess, is:

2021-12-08 06:41:43.044801: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/appuser/venv/lib/python3.7/site-packages/cv2/../../lib64:

2021-12-08 06:41:43.044858: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

2021-12-08 06:41:45.585831: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/appuser/venv/lib/python3.7/site-packages/cv2/../../lib64:

The cv2 module doesn't load properly.

To reiterate, the lines you’ve shared are benign warnings that can be ignored. I was able to deploy a forked copy of your app. It worked as expected when I uploaded images whose file names corresponded to the .tiff files in lgg-mri-segmentation/kaggle_3m/.

Specifically, I uploaded Brain-Tumor-Detection/TCGA_CS_5393_19990606_11.jpg (from Hrushi11/Brain-Tumor-Detection on GitHub) and received the expected output.

The only time I received an error was when I uploaded an image (404.jpg) whose file name did not exist in lgg-mri-segmentation/kaggle_3m/. Here's the related traceback:

2021-12-08 08:13:24.679 Uncaught app exception
Traceback (most recent call last):
  File "/home/appuser/venv/lib/python3.7/site-packages/streamlit/script_runner.py", line 338, in _run_script
    exec(code, module.__dict__)
  File "/app/brain-tumor-detection/app.py", line 74, in <module>
    file_Uploader()
  File "/app/brain-tumor-detection/app.py", line 64, in file_Uploader
    img = load_and_prep_image(path)
  File "/app/brain-tumor-detection/app.py", line 45, in load_and_prep_image
    img = io_.imread(image)
  File "/home/appuser/venv/lib/python3.7/site-packages/skimage/io/_io.py", line 53, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "/home/appuser/venv/lib/python3.7/site-packages/skimage/io/manage_plugins.py", line 207, in call_plugin
    return func(*args, **kwargs)
  File "/home/appuser/venv/lib/python3.7/site-packages/skimage/io/_plugins/tifffile_plugin.py", line 30, in imread
    return tifffile_imread(fname, **kwargs)
  File "/home/appuser/venv/lib/python3.7/site-packages/tifffile/tifffile.py", line 891, in imread
    with TiffFile(files, **kwargs_file) as tif:
  File "/home/appuser/venv/lib/python3.7/site-packages/tifffile/tifffile.py", line 3131, in __init__
    fh = FileHandle(arg, mode=mode, name=name, offset=offset, size=size)
  File "/home/appuser/venv/lib/python3.7/site-packages/tifffile/tifffile.py", line 10447, in __init__
    self.open()
  File "/home/appuser/venv/lib/python3.7/site-packages/tifffile/tifffile.py", line 10460, in open
    self._fh = open(self._file, self._mode)
FileNotFoundError: [Errno 2] No such file or directory: '/app/brain-tumor-detection/lgg-mri-segmentation/kaggle_3m/404.tif/404.tif'

Both TensorFlow and OpenCV load as expected. The above error is related to the code logic, not specific libraries.
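If you want the app to fail more gracefully here, one option (a rough sketch; the helper name is just illustrative and the exact path-building logic in your app.py will differ) is to check that the resolved .tif path exists before calling load_and_prep_image(), and show a friendly message otherwise:

import os
import streamlit as st

def resolve_tiff_path(uploaded_name, dataset_dir="lgg-mri-segmentation/kaggle_3m"):
    # Map the uploaded .jpg name back to the original .tif, returning None if it's missing.
    stem, _ = os.path.splitext(uploaded_name)
    candidate = os.path.join(dataset_dir, stem + ".tif")
    return candidate if os.path.exists(candidate) else None

Then, before calling load_and_prep_image(path), bail out with st.error("No matching .tif file found for this image.") whenever the resolved path is None.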

Best,
Snehan


Thanks a lot for your quick support. It was indeed a logic error, and sorry for bothering you over such a trivial issue.

Best,
Hrushikesh Kachgunde

