Use the same dataframe across different pages

I am building an app to manage the matches of my group’s boardgames. The app serves two goals: entering data and exploring it.

I am trying to update it to make use of the wonderful multipage feature. It will have a structure like this:
└─── pages/

The script downloads the data and, if needed, updates it. I need that data, stored in pandas dataframes, to persist when opening other pages that build on the same dataframes for different analyses.
At the moment I get this error: `NameError: name 'data_matches' is not defined`.

How can I save the dataframes so Streamlit knows I will need them on the other pages?
I hope I have explained myself :slight_smile:


I would also like some direction on this: how to share variables across pages, especially large dataframes.

Hi @CaiusX , have a look at session state in the Streamlit docs.
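For example, a minimal sketch of that pattern (the `data_matches` key, the column names, and the `get_shared_df` helper are all made up for illustration):

```python
import pandas as pd

def get_shared_df(state):
    # `state` is st.session_state inside a running Streamlit app;
    # a plain dict behaves the same way for illustration.
    if "data_matches" not in state:
        # Placeholder loader -- replace with your real download logic.
        state["data_matches"] = pd.DataFrame(columns=["game", "winner"])
    return state["data_matches"]
```

On every page you would then write `data_matches = get_shared_df(st.session_state)`, so the data is loaded once and shared across pages of the same session.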



Hi @FGrattoni and welcome to the community!

The general pattern in Streamlit for reusing immutable downloaded data is to wrap the download in a function decorated with `@st.experimental_memo`, as follows:

    @st.experimental_memo
    def download_my_dataframe():
        df = ...  # fetch or build the dataframe here
        return df

Then you can get your dataframe anywhere by calling your `download_my_dataframe()` function; the body only runs once and the cached result is returned afterwards.
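To illustrate the caching behaviour outside a running app, here is a toy stand-in using `functools.lru_cache`, which caches on function arguments much like the Streamlit decorator does (the function name and the sample data are invented):

```python
from functools import lru_cache

import pandas as pd

@lru_cache(maxsize=None)  # stand-in for @st.experimental_memo outside a Streamlit app
def download_my_dataframe():
    # Pretend download: in a real app this would fetch the data from its source.
    return pd.DataFrame({"game": ["Catan"], "winner": ["Alice"]})
```

Calling `download_my_dataframe()` repeatedly returns the same cached object instead of re-downloading.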

Generally, st.session_state should be used only if you expect to mutate your dataframe over time.
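A sketch of that mutable case, appending one entry to a dataframe kept in session state (the `add_match` helper, key, and columns are hypothetical):

```python
import pandas as pd

def add_match(state, game, winner):
    # `state` is st.session_state in a real app; a dict behaves the same here.
    if "data_matches" not in state:
        state["data_matches"] = pd.DataFrame(columns=["game", "winner"])
    # Append one row and store the updated dataframe back in session state.
    new_row = pd.DataFrame([{"game": game, "winner": winner}])
    state["data_matches"] = pd.concat([state["data_matches"], new_row], ignore_index=True)
```

Because the updated dataframe is written back to session state, every page sees the new entry on its next rerun.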

Happy app creating! :balloon:

Hello and thank you very much for the responses.

I tried using the @st.experimental_memo decorator on the function I use to download the data, as @Adrien_Treuille suggested. However, it didn’t work for me, and I am not sure why. Even if it had worked, I am not sure it would have been ideal in my case, since the script also modifies the dataset: it inserts one entry at a time into the database, so it runs multiple times and the dataset itself needs to be updated each run.

On the other hand, @Shawn_Pereira’s suggestion of using st.session_state worked for me. I had only ever seen it used with small variables, so I was not sure it would work well with big datasets.

Thank you very much again :smiley: :balloon: