Working DB persistence into the workflow

Hi!

I want to develop a Streamlit app that supports the following workflow -

  1. Load data from a DB
  2. Tweak ML parameters and re-run the model to see changes interactively
  3. Repeat 2 as necessary
  4. Save the parameters and data back out to the DB

I have a Streamlit app that can do steps 2 and 3, but I'm not sure how to handle steps 1 and 4, since the app re-executes from top to bottom on every interaction. I imagine the cache system could help here, but it seems more suited to optimizing the tweak-and-re-run cycle than to reading from and writing to an external storage site.

Thank you in advance!

You can use session state to solve the problem of loading the data once and protecting it from re-execution.

Thank you for the answer! I have been looking for documentation on session state and can't find it anywhere, would you mind pointing me in the right direction? (I have yet to find a comprehensive API reference for the framework.)

Last time I checked, SessionState wasn't in the official release. But it's pretty simple to use. Just add this file to your repository: https://gist.github.com/tvst/036da038ab3e999a64497f42de966a92

Here's a simple example of how to use it:

import streamlit as st
import pandas as pd
import SessionState  # the SessionState.py file from the gist above

# Values registered here persist across the script's top-to-bottom reruns
session_state = SessionState.get(data=None, var1=0, var2=0)

if st.button('Load data'):
    # Loaded only when the button is clicked; kept in session state afterwards
    session_state.data = pd.read_csv("path")
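To tie this back to steps 1 and 4 of the original workflow, here is a rough sketch of what the full load/tweak/save loop could look like, building on the SessionState pattern above. It assumes a local SQLite database; the file name my_app.db, the table names experiments and params, and the parameters var1/var2 are just placeholders for illustration, so adapt them to your own schema and driver.

import sqlite3

import pandas as pd
import streamlit as st

import SessionState  # the SessionState.py file from the gist above

DB_PATH = "my_app.db"  # placeholder; point this at your own database

session_state = SessionState.get(data=None)

# Step 1: load from the DB once and keep the result across reruns
if st.button('Load data'):
    conn = sqlite3.connect(DB_PATH)
    session_state.data = pd.read_sql_query("SELECT * FROM experiments", conn)
    conn.close()

if session_state.data is not None:
    # Steps 2 and 3: tweak parameters and re-run the model interactively
    var1 = st.slider('var1', 0.0, 1.0, 0.5)
    var2 = st.slider('var2', 0.0, 1.0, 0.5)
    # ... run the model on session_state.data with var1/var2 and show results ...

    # Step 4: write the parameters and data back out to the DB
    if st.button('Save to DB'):
        conn = sqlite3.connect(DB_PATH)
        session_state.data.to_sql("experiments", conn, if_exists="replace", index=False)
        conn.execute("CREATE TABLE IF NOT EXISTS params (var1 REAL, var2 REAL)")
        conn.execute("INSERT INTO params (var1, var2) VALUES (?, ?)", (var1, var2))
        conn.commit()
        conn.close()
        st.success("Saved parameters and data")

The key point is that session_state.data survives the rerun triggered by every widget interaction, so the database is only touched when you explicitly click one of the buttons.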