I have been trying to deploy my app using different methods but am running into various related errors:
Deploying using an environment.yml file gives me the following errors: "Installer returned a non-zero exit code"; "Error during processing dependencies"; "Streamlit server consistently failed status checks".
When I was previously deploying with a requirements.txt file, the deploy succeeded, but I was getting strange errors when trying to use the functionality. For instance, the Keras models would not load and I would get the following error: "You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in tf.saved_model.LoadOptions to the io device such as '/job:localhost'."
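In case it helps, this is roughly the kind of load call that error message seems to be asking for; a minimal sketch, where the model path is a placeholder for wherever the SavedModel actually lives in the repo:

```python
import os
import tensorflow as tf

# Placeholder path; point this at the SavedModel directory in your repo.
MODEL_PATH = "models/my_model"

# The error suggests forcing model-file I/O onto the local host:
load_options = tf.saved_model.LoadOptions(
    experimental_io_device="/job:localhost"
)

if os.path.isdir(MODEL_PATH):
    # load_model accepts the LoadOptions via the `options` argument
    model = tf.keras.models.load_model(MODEL_PATH, options=load_options)
```

I have not verified this fixes the deployed app, but it is the documented way to pass that option through the Keras loading path.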
I suspect that the ML models you have locally may have been created with a different (e.g. much older or incompatible) version of the respective ML library.
Hi Franky - there you go. Interesting hypothesis - very possible, as I wrote some of the code a couple of months ago, so any library updates since then would certainly not be reflected. However, I re-saved the models a few days ago after rerunning the Python script, so I can only think that the environment.yml file in my repo lists some unsupported libraries. Thanks
I did the same and ran into the same issues you described; see my pull request.
I think the models were trained with older versions of the ML frameworks.
I got some models working by downgrading some of the ML libraries, see my pull request.
I am not an ML expert and this ecosystem is changing fast, but if you have the ability to retrain your models, I would do that with the newer ML libraries.
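For what it's worth, "downgrading" in practice just means pinning exact versions in requirements.txt. The version numbers below are purely illustrative; they would need to match whatever versions the models were actually saved with:

```
# requirements.txt - pin exact versions (numbers here are examples only)
tensorflow==2.8.0
keras==2.8.0
streamlit==1.9.0
```

Pinning everything also makes the cloud build reproducible, which helps when debugging installer failures.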
I would set up a clean local dev environment, either with Docker or with a Python virtual environment. Otherwise you will end up with a mess.
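A clean-environment workflow might look like this (the environment name .venv is arbitrary):

```shell
# Create and activate a fresh virtual environment
python3 -m venv .venv
. .venv/bin/activate

# Should list an almost-empty environment (just pip and friends)
python -m pip list

# Then install only the pinned dependencies into it:
# pip install -r requirements.txt
```

Working this way means the versions that run locally are exactly the ones the cloud installer will see.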