Error in loading the saved optimizer state

Hi,
I’m getting this error message.
“ValueError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you’re on Streamlit Cloud, click on ‘Manage app’ in the lower right of your app).”

URL for Streamlit app: https://glaucocare.streamlit.app/

GitHub repo URL: https://github.com/ShyamaleeT/glaucocare

“WARNING:absl:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer. 2024-03-23 02:25:35.418 Uncaught app exception”

Hi @Thisara_Shyamalee,

Thanks for sharing this question!

Can you share the full logs here?

Hi @tonykip,
Here it is.

Link for Full Log: https://drive.google.com/file/d/14hQwFtHc3RTJdDYtT5PB2rNpg7k23hwO/view?usp=sharing

Thank you for the extra info. The error suggests there is a problem with the optimizer's configuration stored inside model1.h5. You could wrap the model loading code in try-except blocks to get a clearer error message, and also use caching to manage disk resources, like so:

import os
import subprocess

import streamlit as st
import tensorflow as tf

def download_model(url, file_path):
    # Download the model file if it doesn't already exist locally
    target_dir = os.path.dirname(file_path)
    if target_dir:
        os.makedirs(target_dir, exist_ok=True)
    if not os.path.isfile(file_path):
        subprocess.run(["curl", "--output", file_path, url], check=True)

@st.cache_resource
def load_model(model_path):
    # Attempt to load the model with error handling; cached so it only loads once per session
    try:
        model = tf.keras.models.load_model(model_path, compile=False)
        return model
    except Exception as e:
        print(f"Error loading model {model_path}: {e}")
        return None

# Download and load the models
download_model("https://media.githubusercontent.com/media/ShyamaleeT/glaucocare/main/sep_5.h5", "model.h5")
download_model("https://media.githubusercontent.com/media/ShyamaleeT/glaucocare/main/models/OD_Segmentation.h5", "models/model1.h5")
download_model("https://media.githubusercontent.com/media/ShyamaleeT/glaucocare/main/models/OC_Segmentation.h5", "models/model2.h5")

model = load_model("model.h5")
model1 = load_model("models/model1.h5")
model2 = load_model("models/model2.h5")
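
As a side note, loading with compile=False skips restoring the optimizer, loss, and metrics, which is fine for inference-only use. If the "Error in loading the saved optimizer state" warning is coming from another load_model call that still compiles the model, one option is to re-save the .h5 file without the optimizer state. A minimal sketch (a one-off script, not part of your app code, assuming the file loads in your environment):

import tensorflow as tf

# Re-save the model without its optimizer state so later load_model()
# calls no longer warn about a missing or incompatible optimizer.
original = tf.keras.models.load_model("model.h5", compile=False)
original.save("model.h5", include_optimizer=False)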

Hello @tonykip
Thank you for the reply. I added the above code exactly as it is. However, I am still encountering an error. Could you please take another look?

Link for the full Log: logs-shyamaleet-glaucocare-main-glaucocare.py-2024-04-07T06_21_38.445Z.txt - Google Drive

Thanks for the logs. The inputs argument of the Model constructor is receiving a nested list: model.inputs is already a list of tensors, and wrapping it in another list produces a list within a list. Pass the tensor or the list of tensors directly instead. Try changing this line:

heatmap_model = Model([model.inputs], [conv_layer.output, model.output])

to this:

heatmap_model = Model(model.inputs, [conv_layer.output, model.output])

If model.inputs is already a list of tensors, you don’t need to wrap it in another list. Let me know if this resolves the last error in your logs.
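
If you want to confirm this, a quick check (a hypothetical snippet, not taken from your repo) is to load the model and inspect model.inputs directly:

import tensorflow as tf

model = tf.keras.models.load_model("model.h5", compile=False)

# model.inputs is already a Python list of KerasTensors, so wrapping it in
# another list would produce a nested list like [[<KerasTensor ...>]].
print(type(model.inputs))   # <class 'list'>
print(model.inputs)         # e.g. [<KerasTensor: shape=(None, ...), dtype=float32>]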

Hi @tonykip,

I appreciate your help, but unfortunately that change doesn't resolve the last error in the logs. I'm also still getting the same warnings:
WARNING:absl:Compiled the loaded model, but the compiled metrics have yet to be built. model.compile_metrics will be empty until you train or evaluate the model.
WARNING:absl:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.

Link for the log: logs-shyamaleet-glaucocare-main-glaucocare.py-2024-04-15T05_52_53.391Z.txt - Google Drive