NotImplementedError: cannot instantiate 'WindowsPath' on your system

I am getting this error when deploying the application. I don't know why this is happening.

  1. Share the link to the public deployed app.
    https://cardamage-assessment.streamlit.app/
  2. Share the link to your app’s public GitHub repository (including a requirements file).
    GitHub - koushik395/Vehicle-damage-deployment
  3. Share the full text of the error message (not a screenshot).
2024-02-11 11:51:19.871 Uncaught app exception
Traceback (most recent call last):
  File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "/mount/src/vehicle-damage-deployment/streamlit_app.py", line 108, in <module>
    main()
  File "/mount/src/vehicle-damage-deployment/streamlit_app.py", line 79, in main
    model = load_yolo_model()
  File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 212, in wrapper
    return cached_func(*args, **kwargs)
  File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 241, in __call__
    return self._get_or_create_cached_value(args, kwargs)
  File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 268, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
  File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 324, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
  File "/mount/src/vehicle-damage-deployment/streamlit_app.py", line 29, in load_yolo_model
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
  File "/home/adminuser/venv/lib/python3.9/site-packages/torch/hub.py", line 563, in load
    repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, trust_repo, "load",
  File "/home/adminuser/venv/lib/python3.9/site-packages/torch/hub.py", line 220, in _get_cache_or_reload
    _check_repo_is_trusted(repo_owner, repo_name, owner_name_branch, trust_repo=trust_repo, calling_fn=calling_fn)
  File "/home/adminuser/venv/lib/python3.9/site-packages/torch/hub.py", line 276, in _check_repo_is_trusted
    Path(filepath).touch()
  File "/usr/local/lib/python3.9/pathlib.py", line 1084, in __new__
 raise NotImplementedError("cannot instantiate %r on your system"
NotImplementedError: cannot instantiate 'WindowsPath' on your system
  1. Share the Streamlit and Python versions.
    streamlit-1.31.0

Hi @koushik. Is it working locally?


I have no idea what exactly is going on under the hood. Maybe the model was trained under Windows and the paths ended up hard-coded in the model.
Streamlit Community Cloud is a Debian-based Linux environment. Here is a possible untested hack, but it might have unwanted side effects:

import pathlib
import platform

import streamlit as st

system = platform.system()
st.write(system)  # just for debugging
if system == 'Linux':
    # The checkpoint pickles WindowsPath objects, which cannot be
    # instantiated on Linux, so alias WindowsPath to PosixPath.
    pathlib.WindowsPath = pathlib.PosixPath
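To see what this monkey-patch actually changes, here is a minimal standalone sketch (assuming the root cause is that the checkpoint was pickled with WindowsPath objects, which cannot be instantiated on Linux):

```python
import pathlib
import platform

# On a non-Windows system, pathlib.WindowsPath('...') raises
# NotImplementedError. Aliasing it to PosixPath makes such
# instantiations succeed, which is what unpickling a checkpoint
# saved on Windows needs.
if platform.system() != 'Windows':
    pathlib.WindowsPath = pathlib.PosixPath

p = pathlib.WindowsPath('weights/best.pt')
print(p.name)  # → best.pt
```

Note that this rebinding is process-wide: any later code that genuinely needs Windows path semantics is also affected, which is why it is a workaround rather than a proper fix.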

I trained my model on Google Colab. I tried your solution; it works locally, but it does not work on the Streamlit deployment. The error continues to occur.


Yes, it is working locally…


It's not working; the error still continues.


Bro, try the following requirements.txt once and reboot the app:

torch
tensorflow
streamlit
ultralytics
opencv-contrib-python-headless

Also, I have a doubt: why are you using the following code in your streamlit_app.py, in the rectangle-colored box shown below? :point_down:


The code in the rectangle is there because without it I get the PosixPath error when running locally.


I think I already have the dependencies you mentioned in requirements.txt. I have added opencv-contrib-python-headless, but it still gives me the error.


OK, but what is the use of that? Is it helpful for loading the model? If not, remove those lines and reboot the app once!


It helps me locally. I removed those lines, but the error still continues.


I think the error is related to loading the YOLO model from torch.hub, as the traceback shows… I don't know why that is.


Now I am getting this kind of error after rebooting:

Traceback (most recent call last):
  File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "/mount/src/vehicle-damage-deployment/streamlit_app.py", line 107, in <module>
    main()
  File "/mount/src/vehicle-damage-deployment/streamlit_app.py", line 79, in main
    model = load_yolo_model()
            ^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 212, in wrapper
    return cached_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 241, in __call__
    return self._get_or_create_cached_value(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 268, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/caching/cache_utils.py", line 324, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mount/src/vehicle-damage-deployment/streamlit_app.py", line 32, in load_yolo_model
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.11/site-packages/torch/hub.py", line 566, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.11/site-packages/torch/hub.py", line 595, in _load_local
    model = entry(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/appuser/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 88, in custom
    return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/appuser/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py", line 34, in _create
    from models.common import AutoShape, DetectMultiBackend
  File "/home/appuser/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 16, in <module>
    import cv2
  File "/home/adminuser/venv/lib/python3.11/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/home/adminuser/venv/lib/python3.11/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
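The ImportError for libGL.so.1 usually means an OpenCV build that depends on system GL libraries was installed (YOLOv5's own requirements pull in opencv-python). An untested workaround on Streamlit Community Cloud is to install the missing system library via a packages.txt file in the repo root:

```text
libgl1
```

Alternatively, ensuring that only a headless OpenCV build (e.g. opencv-python-headless) ends up installed avoids the GL dependency altogether.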

Then it's an error related to model loading. I think you need to use load_weights() for an .hdf5 model. Here is a Stack Overflow reference; read the second answer in that post. It will help you.


No… check out the answers in the link you provided… we need to load our model using load_model.
By the way, the error is showing up when loading the YOLOv5 model.


Thanks @koushik for clearing up the confusion. But I have seen the Ultralytics documentation; there they mention code like the following:

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # local model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo

Have you seen that they didn't mention model.eval()?

Also, they mention the following information before the code above:

(This example loads a custom 20-class VOC-trained
YOLOv5s model ‘best.pt’ with PyTorch Hub.)


I have seen a few people using model.eval() in their projects, so I used it.
Should I remove it, or should I use the second way of loading the model?
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo


I am also loading the model by pointing it to the best.pt file.


Hi @koushik. Try removing model.eval() once!
If you want to use the second form, make sure the first argument points to a local clone of the original yolov5 repository, followed by 'custom', followed by the path to your best.pt model.


Can you tell me about the first argument?
I am confused about what it should be.
