Deep Learning model in Cache explodes

Hello everyone! I love Streamlit because it makes it simple to create a good web app for data science. I recently found that I can use @st.cache() to avoid reloading my DL model each time I upload an image to be classified, but I haven't been able to make it work. It's important to mention that if I don't use @st.cache() everything works perfectly, but then the app takes too long to load on each run.

I have seen some related posts that suggest different solutions, but none of them has worked for me.

I would appreciate any help.

Streamlit version: 0.74.1
Tensorflow version: 1.15.0


import numpy as np
import streamlit as st
import tensorflow as tf

@st.cache
def load_model():
    # Load the trained Keras model and keep a handle to the TF1 default graph,
    # since the prediction may run outside the graph the model was loaded in.
    model = tf.keras.models.load_model('DenseNet-SparseConcat.h5')
    graph = tf.get_default_graph()
    return model, graph

def processed_image(image_data):
    image_array = np.asarray(image_data)
    image_expand = np.expand_dims(image_array, axis=0)  # add a batch dimension
    image_norm = image_expand / 255                     # scale pixels to [0, 1]
    # standardize to zero mean and unit variance
    image_processed = (image_norm - np.mean(image_norm)) / np.std(image_norm)
    return image_processed

def predictions(image_processed, model, graph):
    # TF1: run inference inside the graph the model was loaded in
    with graph.as_default():
        preds = model.predict(image_processed)  # avoid shadowing the function name
    return preds

model, graph = load_model()
img_processed = processed_image(image)  # `image` is the uploaded image from earlier in the script
img_predictions = predictions(img_processed, model, graph)
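As a side note on the preprocessing: `processed_image` scales the pixels to [0, 1] and then standardizes them to zero mean and unit variance. The arithmetic can be checked in plain Python without NumPy (note that `np.std` defaults to the population standard deviation, which `statistics.pstdev` matches):

```python
import statistics

# A handful of example pixel values standing in for image data.
pixels = [0, 64, 128, 255]
scaled = [p / 255 for p in pixels]          # scale to [0, 1]

mean = statistics.fmean(scaled)
std = statistics.pstdev(scaled)             # population std, like np.std
standardized = [(s - mean) / std for s in scaled]

# After standardizing, the values have zero mean and unit variance.
assert abs(statistics.fmean(standardized)) < 1e-9
assert abs(statistics.pstdev(standardized) - 1.0) < 1e-9
```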

The error shown:

FailedPreconditionError: Error while reading resource variable conv5_block12_2_conv/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/conv5_block12_2_conv/kernel/class tensorflow::Var does not exist. [[{{node conv5_block12_2_conv/Conv2D/ReadVariableOp}}]]

Hi @ian_Perrilliat, welcome to the Streamlit community!

Is your model publicly available, so that we can try to debug why caching doesn’t seem to work?


Thank you for the response @randyzwitch. Do you mean the architecture code or the .h5 file?

Just the model file.
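For readers who land here with the same FailedPreconditionError: the workaround usually suggested in related threads is `@st.cache(allow_output_mutation=True)`, so that Streamlit does not try to hash the TensorFlow model on every rerun. The underlying idea of `st.cache` is plain memoisation: run the expensive load once, then reuse the same object across reruns. A minimal plain-Python sketch of that idea, with `lru_cache` standing in for `st.cache` and a dict standing in for the Keras model (both are stand-ins, not the real API):

```python
from functools import lru_cache

LOAD_CALLS = 0  # counts how many times the expensive load actually runs

@lru_cache(maxsize=1)
def expensive_load(path):
    """Stand-in for tf.keras.models.load_model: pretend this is slow."""
    global LOAD_CALLS
    LOAD_CALLS += 1
    return {"path": path, "weights": [0.0] * 4}  # dummy "model" object

m1 = expensive_load("DenseNet-SparseConcat.h5")
m2 = expensive_load("DenseNet-SparseConcat.h5")

assert m1 is m2         # the very same cached object is returned
assert LOAD_CALLS == 1  # the load ran only once despite two calls
```

The catch with a real Keras model is that, unlike this dict, it is not safely hashable, which is exactly why the cached-value check trips and why `allow_output_mutation=True` is the commonly recommended escape hatch.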