St.cache doesn't work on cloud but works locally

Hi everyone! My Streamlit app worked both locally and on Streamlit Cloud. But after I added st.cache, the app keeps working locally but no longer works on the cloud. On the cloud it starts running the cached function, but once the cached function finishes it doesn’t go any further and gives the error ‘Oh no. Error running app. Streamlit server consistently failed status checks’ without much explanation.
Do you know what the problem could be and how I can fix it?
This is my app https://spikescape-cache.streamlitapp.com/

I thought there could be a problem with the memory limit on the cloud when I use st.cache. But exactly the same program without st.cache works fine.

Could you try switching from st.cache to st.experimental_memo?
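For example, just swapping the decorator on whatever function you are caching. A sketch only; load_data is a placeholder name, not from your app:

import streamlit as st

# Sketch: replace @st.cache with @st.experimental_memo; the function body stays the same.
@st.experimental_memo(show_spinner=False)
def load_data(some_argument):
    ...  # keep the same body you had under st.cache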

Hi @blackary, thank you for your reply! I tried st.experimental_memo, but it didn’t help. It behaved the same way as st.cache: it ran the cached function, but then crashed with the same error ‘Streamlit server consistently failed status checks’ afterwards.

OK, thought it was worth a shot, as that resolves a number of problems for users. Can you please share some minimal reproducible code which shows this error?

Sorry for the late reply.
I made a repo that reproduces the error when running the function pdb_files_loader(pdb_ids) with caching. When I run it without caching, it works, though.
https://stepdasha-spike-app-streamlittest-8swm47.streamlit.app/

@stepdasha Can you please share the source code for the minimal reproducible example?

@blackary

import biotite.database.rcsb as rcsb
import datetime
import streamlit as st
import os
import biotite.structure.io as strucio
import biotite.structure as struc


def get_spike_ids(uniprot_id="P0DTC2", min_weight=400, max_resolution=4.0):
    query_by_uniprot_id = rcsb.FieldQuery(
        "rcsb_polymer_entity_container_identifiers.reference_sequence_identifiers.database_accession",
        exact_match=uniprot_id,
    )
    today = datetime.datetime.now()

    query_by_resolution = rcsb.FieldQuery(
        "rcsb_entry_info.resolution_combined", less_or_equal=max_resolution
    )

    query_by_polymer_weight = rcsb.FieldQuery(
        "rcsb_entry_info.molecular_weight", greater=min_weight
    )

    query_by_method = rcsb.FieldQuery(
        "exptl.method", exact_match="ELECTRON MICROSCOPY"
    )

    query = rcsb.CompositeQuery(
        [
            query_by_uniprot_id,
            query_by_resolution,
            # query_by_polymer_count,
            query_by_method,
            query_by_polymer_weight,
        ],
        "and",
    )
    pdb_ids = rcsb.search(query)

    # remove the post-fusion structure 6XRA

    pdb_ids.remove('6XRA')
    # print(f"Number of spike structures on  {today.year}-{today.month}-{today.day} with "
    #      f"resolution less than or equal to {max_resolution} with mass more than or equal to {min_weight}: {len(pdb_ids)}")
    # print("Selected PDB IDs:\n", *pdb_ids)
    st.write(f"Number of spike structures on  {today.year}-{today.month}-{today.day} with "
             f"resolution less than or equal to {max_resolution}A with mass more than or equal to {min_weight}kDa: {len(pdb_ids)}")
    return (pdb_ids)


@st.cache(suppress_st_warning=True, show_spinner=False)
# @st.experimental_memo(suppress_st_warning=True, show_spinner=False)
def pdb_files_loader(pdb_ids):
    if not os.path.exists('PDB'):
        os.mkdir('PDB')

    len_pdbid = len(pdb_ids)
    my_bar = st.progress(0)

    proteins = {}
    for count, i in enumerate(pdb_ids):
        # print('object begin', cmd.get_object_list('(all)'))
        # cmd.delete("*")
        my_bar.progress((count + 1) / len_pdbid)

        i = i.lower()
        # download structure
        try:
            file = rcsb.fetch(i, "pdb", target_path="PDB/")
            # print('pdb fetched ')
        except Exception:
            file = rcsb.fetch(i, "cif", target_path="PDB/")
            # print('cif fetched ')

        # load the structure into memory
        proteins[str(i)] = strucio.load_structure(file)
        # proteins.append(strucio.load_structure(file))
    return proteins


pdb_ids = get_spike_ids(uniprot_id="P0DTC2", min_weight=400, max_resolution=4.0)
proteins = pdb_files_loader(pdb_ids)
protein = proteins['6xm0']

I have checked our logs, and confirmed that the app has been getting killed periodically because it goes over the 3GB memory limit.

In general that means you will either need to refactor the app to use less memory, or move it to your own hosted server that has more memory.

I wonder if it would be possible to save some of the data in an external database, so that you don’t have to keep it all in memory, and then query that database in your app.
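As a rough, untested sketch of the “use less memory” direction (load_one_structure and the selectbox below are my invention, not part of your app): fetch and parse only the structure the user actually selects, and keep at most a few of them cached at once.

import os
import biotite.database.rcsb as rcsb
import biotite.structure.io as strucio
import streamlit as st

# Rough sketch, not tested: cache a handful of structures at a time instead of
# loading every PDB entry into memory up front.
@st.experimental_memo(max_entries=5, show_spinner=False)
def load_one_structure(pdb_id):
    os.makedirs("PDB", exist_ok=True)
    pdb_id = pdb_id.lower()
    try:
        file = rcsb.fetch(pdb_id, "pdb", target_path="PDB/")
    except Exception:
        file = rcsb.fetch(pdb_id, "cif", target_path="PDB/")
    return strucio.load_structure(file)

# pdb_ids would come from get_spike_ids() as in your app; two IDs here just for illustration.
pdb_ids = ["6XM0", "6VSB"]
selected_id = st.selectbox("Choose a PDB ID", pdb_ids)
protein = load_one_structure(selected_id)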

@blackary thank you very much! I will think about how to use a database. May I ask, do you by any chance know why the app doesn’t get killed when I remove the @st.cache line but still load the same amount of data?

@stepdasha My guess is that without the cache, the objects get loaded temporarily in memory, but then disappear when you’re no longer using them in your code. When you use the cache, they stick around forever (or as long as you set the ttl to be), using more and more memory until it hits the limit of the node it’s running on.
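If you do want to keep some caching, you can also bound how long entries stick around with ttl and max_entries, e.g. (sketch only, numbers picked arbitrarily; note this alone probably won’t fix your case, since even a single cached call here is very large):

import streamlit as st

# Sketch: evict cached results after an hour and keep at most one entry,
# so old results are dropped instead of accumulating in memory.
@st.experimental_memo(ttl=3600, max_entries=1, show_spinner=False)
def pdb_files_loader(pdb_ids):
    ...  # same body as in your app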

FWIW, even though st.experimental_memo didn’t seem to resolve this issue, it is currently the recommended way to cache data (though it will likely be moved out of experimental, and may get renamed to something else in future versions of Streamlit), as it generally causes fewer issues than st.cache.
