Expiring the experimental_singleton cache at regular intervals

I’m looking at using experimental_singleton, and we want the cache to expire every few hours. But it doesn’t seem like there’s native support for this.

(Unlike when using experimental_memo, where you can configure an expiration time using ttl)

FWIW, the use case here is that we have a web app that pulls data from a database that’s being updated online. We’re caching the query result and sharing it across users with experimental_singleton.

Every few hours, we want the front-end to be able to pull the updated data from the database. In other words, we need to expire the singleton cache every few hours.
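For reference, the current setup looks roughly like this (the sqlite file and table name below are just placeholders standing in for our actual query):

import pandas as pd
import sqlite3
import streamlit as st

@st.experimental_singleton
def get_data() -> pd.DataFrame:
    # Runs the (expensive) query once; the result object is shared across all user sessions.
    with sqlite3.connect("app.db") as conn:
        return pd.read_sql_query("SELECT * FROM my_table", conn)

st.dataframe(get_data())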

Any guidance on how to do this would be very much appreciated.


Why isn’t experimental_memo applicable in this case?

@asehmi The OP wants to share the same data across all users. To my understanding, experimental_memo caches a copy for each user, so with many concurrent users memory for the app as a whole will still grow considerably, whereas with experimental_singleton it would not.
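To make that difference concrete, a minimal sketch (the DataFrame here just stands in for a large query result):

import pandas as pd
import streamlit as st

@st.experimental_memo
def load_data_memo() -> pd.DataFrame:
    # Each session gets its own unpickled copy, so memory grows with concurrent users.
    return pd.DataFrame({"a": range(1_000_000)})

@st.experimental_singleton
def load_data_singleton() -> pd.DataFrame:
    # One shared object is handed to every session, so there is only a single copy in memory.
    return pd.DataFrame({"a": range(1_000_000)})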

@bpiv400 One way you could go about this is to create a singleton function, called like any of the others, that caches a datetime.datetime.now() value. Whenever a user runs the app, compare that cached timestamp against the current time, and if the delta exceeds a certain amount, clear the caches and redo all the operations. Pseudo-code below:

import datetime

@st.experimental_singleton
def last_cache_datetime():
    # Records when the singleton caches were last (re)built; shared across all sessions.
    return datetime.datetime.now()

Then the main script can do something like:

cache_time = last_cache_datetime()
current_time = datetime.datetime.now()

# If the cached timestamp is older than the allowed age, clear every singleton
# cache and re-prime the timestamp so the next check starts from now.
if current_time - cache_time > datetime.timedelta(hours=24):
    last_cache_datetime.clear()
    [other_singleton_function].clear()  # placeholder: clear your own singleton function(s) here
    cache_time = last_cache_datetime()
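Putting it together into one runnable sketch (the sqlite query is a placeholder, and the 3-hour window is just an example to match the "every few hours" requirement):

import datetime
import sqlite3

import pandas as pd
import streamlit as st

CACHE_MAX_AGE = datetime.timedelta(hours=3)  # expire the shared caches every few hours

@st.experimental_singleton
def last_cache_datetime():
    # Shared timestamp recording when the caches were last (re)built.
    return datetime.datetime.now()

@st.experimental_singleton
def get_data() -> pd.DataFrame:
    # Placeholder query -- substitute the real database call here.
    with sqlite3.connect("app.db") as conn:
        return pd.read_sql_query("SELECT * FROM my_table", conn)

# On every rerun, check whether the shared caches have exceeded the allowed age.
if datetime.datetime.now() - last_cache_datetime() > CACHE_MAX_AGE:
    get_data.clear()
    last_cache_datetime.clear()
    last_cache_datetime()  # re-prime the timestamp so the next check starts fresh

st.dataframe(get_data())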

Hi - what you say isn’t clear from the docs. I think experimental_memo works across all user sessions, but I could be wrong; it should be easy to test. Otherwise, use st.cache, which also has a ttl parameter. Your solution should also work nicely.
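For example, a ttl-based version with st.cache would look something like this (the return value is a dummy stand-in for a real query):

import datetime

import streamlit as st

@st.cache(ttl=3 * 60 * 60)  # ttl is in seconds; entries older than 3 hours are recomputed
def get_data():
    # Dummy stand-in for the real database query.
    return {"loaded_at": datetime.datetime.now().isoformat()}

st.write(get_data())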


I asked a similar question in “Is @st.experimental_memo globally stored?”, and looking back at the docs in st.experimental_memo - Streamlit Docs: “Memoized data is stored in ‘pickled’ form, which means that the return value of a memoized function must be pickleable.”

@asehmi I think you are correct. Streamlit is likely pickling the data and then pulling from that pickle on subsequent function calls. The main difference between singleton and memo is that singleton returns the same object (not a copy) whereas memo returns a copy. So both are shared across users, but singleton lets you create objects that perhaps (a) can’t be pickled easily, or (b) are large (or for other reasons) and should be shared across users rather than copied for each user when called.

I believe session_state is the only one that is user-specific. Perhaps I am wrong, but reading more of the docs, that seems likely to be the case:

“Each caller of a memoized function gets its own copy of the cached data.”
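A quick way to see that in a running app (a minimal sketch; the lists stand in for real cached data):

import streamlit as st

@st.experimental_memo
def memo_list():
    return [1, 2, 3]

@st.experimental_singleton
def singleton_list():
    return [1, 2, 3]

# memo should hand back a fresh unpickled copy on each call,
# while singleton should hand back the very same object.
st.write(memo_list() is memo_list())            # expected: False
st.write(singleton_list() is singleton_list())  # expected: True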

Yes, experimental_memo gives each session its own copy of the pickled data. Not ideal for large data sets, and there’s no ttl param.

@asehmi Are you saying experimental_memo doesn’t have a ttl param? If so, the docs at st.experimental_memo - Streamlit Docs specify that it does.

You’re right, I’ve deleted that (st.experimental_singleton is the one without ttl).
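So for completeness, an expiring memo cache looks like this (the 3-hour value is just an example):

import datetime

import streamlit as st

@st.experimental_memo(ttl=3 * 60 * 60)  # ttl in seconds; cached entries expire after 3 hours
def load_data():
    # Dummy stand-in for the real query.
    return datetime.datetime.now().isoformat()

st.write(load_data())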
