I’m building an app that loads a model saved in TensorFlow’s SavedModel format. The app renders correctly and loads the model when I run it locally. Because the model is larger than 1 GB, I pushed it to GitHub using Git LFS. The app deployed successfully, but the moment I try to load the model in the deployed app, it crashes instantly. The deployment logs from after the crash are shown below. Can anyone suggest what I can do to avoid this issue?
2023-07-04 17:04:10.968606: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-07-04 17:04:11.007242: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-07-04 17:04:11.008726: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-04 17:04:11.978552: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[17:06:41] ❗️ Streamlit server consistently failed status checks