Before I knew st.cache was being deprecated, I had the following code working well. Now that st.cache is deprecated, I tried to simply replace st.cache with st.cache_data; however, I get the following error.
The hash_funcs parameter is a thing of the past and part of the deprecated st.cache API. You can omit hash_funcs={dict: lambda _: None} in your updated code.
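In other words, the migration is just a decorator swap. A minimal sketch with a placeholder function body (not your actual code):

```python
import streamlit as st

# Before (deprecated API):
# @st.cache(hash_funcs={dict: lambda _: None})
# def plot():
#     ...

# After (no hash_funcs needed):
@st.cache_data
def plot():
    ...
```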
I noticed that if I use the st.cache_data decorator above my function, as in this example, the plot doesn’t get updated when I change the x- or y-axis values from the dropdown. Also, whenever I change some variables, the plots are not recalculated; it just renders the original plot every time I reload. Without the caching decorator, it works fine. The interesting thing is that it was working fine before I changed @st.cache(hash_funcs={dict: lambda _: None}) to @st.cache_data.
Any reason why it is behaving like this?
Without looking at the rest of your app, it’s non-trivial to tell. But I can guess with high certainty why your plots used to update but no longer do:
Using @st.cache(hash_funcs={dict: lambda _: None}) effectively meant that the cached_dict object was never cached in the first place! When you changed any values from the dropdown, plot was recomputed every time. That is, using @st.cache(hash_funcs={dict: lambda _: None}) was equivalent to not using any caching decorator.
In our caching docs, we explain in great detail the behavior of st.cache_data. When you decorate plot with @st.cache_data, it tells Streamlit to check two things whenever the function is called:
The values of the input parameters (in this case, none!).
The code inside the function.
Since neither the input parameters (as none exist) nor the code inside the function change, there is never a “cache miss” after the first run. As such, plot is called only once during the first run.
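A minimal sketch of that behavior (the selectbox and the function body here are made up, not your actual code):

```python
import time
import streamlit as st

x_axis = st.selectbox("x-axis", ["col_a", "col_b"])  # changing this...

@st.cache_data
def plot():
    # ...has no effect here: plot() takes no parameters and its code
    # never changes, so after the first run Streamlit always returns
    # the cached result instead of executing this body again.
    time.sleep(2)  # stand-in for an expensive figure build
    return f"built at {time.strftime('%X')}"

st.write(plot())  # shows the same timestamp no matter which option you pick
```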
To fix the issue, pass a suitable input parameter to plot that changes when you update values from the dropdown you’re talking about. If fig is what changes with dropdown options, that may be a good candidate for an input parameter.
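As a rough sketch (with made-up names, and Plotly’s example iris dataset standing in for your data):

```python
import streamlit as st
import plotly.express as px

df = px.data.iris()

x_axis = st.selectbox("x-axis", df.columns)
y_axis = st.selectbox("y-axis", df.columns)

@st.cache_data
def plot(data, x_col, y_col):
    # data, x_col and y_col are part of the cache key, so changing either
    # dropdown triggers a cache miss and rebuilds the figure; repeated
    # selections come straight back from the cache.
    return px.scatter(data, x=x_col, y=y_col)

st.plotly_chart(plot(df, x_axis, y_axis))
```

Each distinct combination of selections gets its own cache entry, so re-selecting a previously chosen pair is served from the cache instead of being rebuilt.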
@snehankekre - Thanks for this clarification. Apparently, my initial @st.cache(hash_funcs={dict: lambda _: None}) was not doing anything. The reason I was able to zoom the plot faster (with very large datasets) is that I save the fig object in a dictionary, and when displaying the plot I simply retrieve the fig object from that dictionary.
This clarifies a lot. Thanks!