ONNX deployment issue

My app works fine in my local environment, but when deployed it throws this error:

Traceback (most recent call last):
  File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script
    exec(code, module.__dict__)
  File "/app/facerecapp/main.py", line 53, in <module>
    val1= np.expand_dims(extract_features(np_image1),axis=0)
  File "/app/facerecapp/main.py", line 17, in extract_features
    ort_sess = ort.InferenceSession(modelpath, None)
  File "/home/appuser/venv/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/appuser/venv/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 384, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from r100_glint360k.onnx failed:Protobuf parsing failed.

I can’t understand the issue here! Link to the GitHub repo for the project: github_repo . The app simply takes two images, runs feature extraction with a model (saved in ONNX format), and returns a similarity score. Any suggestions about what is going on? Thank you!!

Welcome to the forum @abhatta1234! :raised_hands:

It looks like you’re getting an error from the ONNX Runtime library. The error message says that protobuf parsing failed, which suggests something is wrong with the ONNX model file you’re trying to load.

Protobuf is the serialization format that ONNX uses to store models, so it looks like the model file itself may be malformed or incomplete in your case.

I’m not super familiar with this, but one thing you can try is the onnx.checker.check_model() function to validate the model file and see whether it reports any errors.

I hope this helps! Let me know if you have any other questions.

Charly

Thank you so much for your answer! It seems like there was some problem with the .onnx model upload.
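In case it helps others hitting this: one thing worth checking (just a guess at what happened in my case) is whether the file that actually lands in the deployed repo is a Git LFS pointer rather than the real weights. A pointer is a ~130-byte text file, and loading it produces exactly this INVALID_PROTOBUF error. A quick sketch:

```python
import os

def looks_like_lfs_pointer(path):
    """Heuristic: real .onnx models are many MB; Git LFS pointer files are
    a few lines of text starting with a 'version https://git-lfs...' line."""
    if os.path.getsize(path) < 1024:
        with open(path, "rb") as f:
            head = f.read(100)
        return head.startswith(b"version https://git-lfs")
    return False

if os.path.exists("r100_glint360k.onnx"):
    print(looks_like_lfs_pointer("r100_glint360k.onnx"))
```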

How do you suggest uploading a large pretrained model to GitHub for deployment? I tried Git LFS and that gave me some trouble with ONNX. I have now also tried uploading the model to Google Cloud, but I couldn’t find any resource on loading the .onnx model from Google Cloud blobs at inference time.
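One workaround I’m experimenting with is keeping the model out of the repo entirely and downloading it from a public URL (a public GCS object, a GitHub release asset, etc.) at startup, caching it to disk so it’s only fetched once. A rough sketch, where MODEL_URL is a placeholder for your own bucket:

```python
import os
import urllib.request

MODEL_URL = "https://storage.googleapis.com/your-bucket/r100_glint360k.onnx"  # placeholder
MODEL_PATH = "r100_glint360k.onnx"

def fetch_model(url=MODEL_URL, path=MODEL_PATH):
    """Download the model once; reuse the cached copy on later runs."""
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    return path

# Then load it as usual:
# import onnxruntime as ort
# ort_sess = ort.InferenceSession(fetch_model(), None)
```

In a Streamlit app you’d probably also want to wrap the download in one of Streamlit’s caching decorators so reruns don’t re-check the disk every time.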

Thank you!

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.