Updating Line Graph Faster with st.slider

Is there a way to update images corresponding to a slider in real-time? Currently, I have the following code that is experiencing a lot of lag:

slider8 = record(st.select_slider, "Normal Trendline (After)")
chart_to_show_normal_trend_after = slider8("Normal Trendline Chart (After)", [i for i in range(1,11) for _ in range(5)])
chart = "ny_trendlines/visualization" + str(int(chart_to_show_normal_trend_after) - 1) + ".png"
st.image(chart)
i+=1

I have a bunch of images of Altair charts that I bind to the slider, but when I move the slider it takes a long time for the new image to update. I also tried generating the Altair chart natively inside Streamlit, but the slider is still really slow. Is there a way to get past this lag?

I think I understand your question, but it’s not really clear to me what’s going on here. Can you post a longer snippet?

Hi Randy! Thank you for the response. This snippet has everything relevant to the chart, so I am not sure what to add. Let me try and explain it:

I have a local folder called “visualizations” which has a lot of Altair images (labeled 0-9) saved as PNG files. Essentially, I bind the numbers on the slider to the corresponding image. So as I slide the slider, the image updates. The problem is that it updates really slowly, and I am not sure how to make it go faster.

I tried another approach with a different graph where I generated the Altair graph within Streamlit, but that also updated really slowly.

I was more confused about what the record and slider8 functions were doing, but maybe it doesn’t matter.

If you have images to load, I suspect it goes as fast as it can, since it’s reading from disk. If you wrap the image loading code into a function, you can use st.cache() to cache the results. At least in that case, the load will only be slower the first time, then read from RAM in the future (presumably, a lot faster).

On generating the Altair graphs being slow, how much data are you using to generate the graphs?

I will try using st.cache for that. Can you just wrap it around a function call?

It is a fair bit of data, but nothing too crazy. It essentially reads data in from a dataframe that contains a numerical “mobility” value for all 50 states over a period of about 97 days. My slider picks a value in that range, and then I use the slider value to filter the dataframe so that it only has values for the given day. It then generates a choropleth map from the data. So, in short, I am trying to make an interactive choropleth map of the US, and it works, but it updates very slowly. Here is the relevant code snippet:

	day_slider = st.sidebar.slider(label="Days Since February 28th",
								   min_value=0, max_value=97, step=1)
	mobility_daily_selection = mobility_daily_with_ids[
		mobility_daily_with_ids['days_since_Feb_28th'] == day_slider]

	mobility_chart = create_mobility_chart(mobility_daily_selection)

	return mobility_chart


def create_mobility_chart(mobility_df):
	state_geomap_data = alt.topo_feature(data.us_10m.url, 'states')
	mobility_chart = alt.Chart(state_geomap_data).mark_geoshape().encode(
		alt.Color('mobility:Q', scale=alt.Scale(domain=(0.0, 235.0)))
	).transform_lookup(
		lookup='id',
		from_=alt.LookupData(mobility_df, 'id', ['mobility'])
	).properties(
		width=950,
		height=750
	).project(
		type='albersUsa'
	).resolve_scale(
		color='independent'
	)

	return mobility_chart

Yes, make a function that returns each image, then use the @st.cache() decorator.
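A minimal sketch of that pattern, in case it helps. In the app you would put Streamlit's `@st.cache` (or, in newer versions, `@st.cache_data`) on the loader and pass the result to `st.image`; here `functools.lru_cache` stands in for it only so the sketch runs outside a Streamlit session. The file name is a throwaway stand-in for the real `ny_trendlines/visualizationN.png` files:

```python
import functools
import os
import tempfile

# In the Streamlit app this decorator would be @st.cache (or
# @st.cache_data on recent versions); lru_cache is used here only
# so the example runs standalone.
@functools.lru_cache(maxsize=None)
def load_image_bytes(path):
    # The disk read happens once per path; repeat calls for the same
    # path are served from the in-memory cache.
    with open(path, "rb") as f:
        return f.read()

# Throwaway file standing in for one of the saved chart PNGs.
tmp = os.path.join(tempfile.mkdtemp(), "visualization0.png")
with open(tmp, "wb") as f:
    f.write(b"fake-png-bytes")

first = load_image_bytes(tmp)    # reads from disk
second = load_image_bytes(tmp)   # served from cache
print(first == second)
```

In the app itself the last lines would instead be something like `st.image(load_image_bytes(chart_path))`, so only the first visit to each slider position pays the disk-read cost.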

If you have ~5000 data points and your charts are slow, I’d look in your code and see where improvements can be made. From a quick scan, I suspect the transform_lookup line to be slower than you’d like since it’s a join. Perhaps you can do your join outside of the Altair chart, one time for all of the data, and cache it to get the speed you want.
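The "join once, outside the chart" idea might look like the sketch below. The frames are hypothetical stand-ins for the app's data (state ids and column names are made up for illustration); the point is that the id join runs a single time for all days, so each slider move only pays for a cheap boolean filter, and in the app the merged frame would be produced inside an `@st.cache`-wrapped function:

```python
import pandas as pd

# Hypothetical stand-ins for the app's data: one mobility row per
# state per day, plus a separate state -> topojson id lookup.
mobility = pd.DataFrame({
    "state": ["NY", "NY", "CA", "CA"],
    "days_since_Feb_28th": [0, 1, 0, 1],
    "mobility": [100.0, 92.5, 110.0, 97.3],
})
state_ids = pd.DataFrame({"state": ["NY", "CA"], "id": [36, 6]})

# Do the join ONCE for all days (and cache this result with @st.cache
# in the app) instead of letting transform_lookup re-join the full
# dataset on every slider move.
mobility_with_ids = mobility.merge(state_ids, on="state")

# On each slider move, only this filter runs before chart creation:
day = 1
selection = mobility_with_ids[
    mobility_with_ids["days_since_Feb_28th"] == day]
print(selection[["id", "mobility"]].to_dict("records"))
```

The `transform_lookup` in the chart then only has to match the ~50 pre-joined rows for the selected day, rather than joining against the full 50-states-by-97-days frame each time.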

I have tried this, but then I run into another issue. I was able to generate this interactive map inside a Jupyter notebook by doing the join before the graph generation, and the slider updated quickly.

However, to deal with the data, I have to use geopandas. When I add the code into Streamlit, it gives me a “dtype geometry not understood” error.

Is there a way around this?
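One common workaround (not confirmed in this thread, so treat it as a sketch): the geometry dtype error tends to come from passing a GeoDataFrame straight into code that can only serialize plain pandas dtypes. Since the `transform_lookup` above only needs the `id` and `mobility` columns, selecting just those columns before handing the frame to `alt.LookupData` keeps the geometry column out of the serialization path entirely. The frame below is a plain-pandas stand-in for the GeoDataFrame (the `object()` entries play the role of shapely geometries):

```python
import pandas as pd

# Stand-in for the GeoDataFrame loaded with geopandas; in the real
# app the 'geometry' column has dtype "geometry", which the chart
# serialization does not understand.
gdf_like = pd.DataFrame({
    "id": [36, 6, 48],
    "mobility": [92.5, 97.3, 88.1],
    "geometry": [object(), object(), object()],  # placeholder shapes
})

# Keep only the columns the lookup actually needs; the geometry
# column never reaches Altair/Streamlit, so the dtype error is avoided.
lookup_df = gdf_like[["id", "mobility"]].copy()

# lookup_df can now be passed to alt.LookupData(lookup_df, 'id', ['mobility'])
print(list(lookup_df.columns))
```

If the geometries themselves are needed (rather than the built-in `us_10m` topojson), a different route is serializing the GeoDataFrame to GeoJSON first, but for the lookup-only case the column selection is usually enough.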