Using Streamlit cache with Polars

Polars is a newer Python dataframe library that often executes much faster (sometimes 10x) than Pandas. You can convert Pandas dataframes to Polars dataframes and vice versa with Polars' from_pandas and to_pandas functions.

I am converting my Pandas functions to Polars, function by function.

I have noticed that the Streamlit cache does not seem to support Polars dataframes. It gives me an error if I try to pass a Polars dataframe into a “cached” function.

My current workaround is to convert every Polars dataframe back to Pandas, so each function returns a Pandas dataframe.

I was wondering if there are plans to support Polars dataframes in st.cache.

Thanks

Fabio

Hey @Fabio,

Check out this related GitHub Issue and please upvote the Issue if you’d like our team to prioritize it. Thanks!

Hi @Fabio :wave:

@st.cache was deprecated in Streamlit 1.18.0, so st.cache will never support caching Polars dataframes. We recommend using one of the new caching decorators, @st.cache_data, as a replacement for caching data.

Here’s an example demonstrating caching of a Polars dataframe:

import polars as pl
import streamlit as st

@st.cache_data
def load_data():
    return pl.DataFrame(
        {
            "A": [1, 2, 3, 4, 5],
            "B": [5, 4, 3, 2, 1],
            "fruits": ["apple", "banana", "pear", "apple", "banana"],
        }
    )


df = load_data()

st.write(df)

Hi,

I am actually using the new cache function, and that (a cached read_csv) worked for me too.

What does not work is when I feed my Polars DF into a cached function.

I can try to repeat the error if you wish.

Thanks!

Fabio

Yes, please share a minimal reproducible example :smile:

@Fabio I’m guessing you’re running into the UnhashableParamError when passing a Polars dataframe as an argument to a cache-decorated function. To tell Streamlit to stop hashing the argument, add a leading underscore to the argument’s name in the function signature:

import polars as pl
import streamlit as st


@st.cache_data
def load_data():
    return pl.DataFrame(
        {
            "A": [1, 2, 3, 4, 5],
            "B": [5, 4, 3, 2, 1],
            "fruits": ["apple", "banana", "pear", "apple", "banana"],
        }
    )


df = load_data()

st.write(df)


@st.cache_data
def show_columns(_polars_df):
    return _polars_df.columns

columns = show_columns(df)
st.write(columns)

Although the excluded parameter won’t be hashed, Streamlit still caches the output.
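
To illustrate why the underscore matters, here is a rough pure-Python sketch (not Streamlit's actual implementation, just the idea): the cache key is built only from the arguments whose names do not start with an underscore, so an `_`-prefixed argument never invalidates the cache on its own:

```python
import functools
import inspect

def cache_data_sketch(func):
    """Toy cache: keys on all parameters except those starting with '_'."""
    cache = {}
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        # Build the key from the non-underscore-prefixed arguments only.
        key = tuple(
            (name, value)
            for name, value in bound.arguments.items()
            if not name.startswith("_")
        )
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper

@cache_data_sketch
def show_columns(_df, version):
    return list(_df)

first = show_columns({"A": [1], "B": [2]}, version=1)
# Same `version` -> the cached result is reused even though _df changed.
second = show_columns({"A": [1]}, version=1)   # stale: still ['A', 'B']
third = show_columns({"A": [1]}, version=2)    # new key: recomputed, ['A']
```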

On taking another look, I realize you’re correct: the function will not rerun when the excluded parameter changes, if it is the only parameter to the function.

What you can do in this case is pass another input parameter to the cached function that changes whenever the Polars dataframe changes. One such option is to use polars.DataFrame.hash_rows in conjunction with polars.Series.view. The first method hashes and combines the rows of the Polars DataFrame. As the result is an unhashable polars.Series object, we convert it to a NumPy array containing the UInt64 hashes:

import polars as pl
import streamlit as st

@st.cache_data
def load_data():
    print("loading data")
    return pl.DataFrame(
        {
            "A": [1, 2, 3, 4, 5],
            "B": [5, 4, 3, 2, 1],
            "fruits": ["apple", "banana", "pear", "apple", "banana"],
        }
    )


st.button("Rerun")

df = load_data()

st.write(df)

@st.cache_data
def show_columns(_polars_df, row_hashes):
    print("showing columns")
    return _polars_df.columns


if st.checkbox("Edit data"):
    df = df.drop("fruits")
    st.write(df)

columns = show_columns(df, df.hash_rows(seed=42).view())
st.write(columns)

This method ensures that whenever the underlying unhashable Polars dataframe changes, the function is re-run because the array of hashes changes.

Yes, this works!! And it probably takes fewer compute resources than converting back and forth. Thanks a million!

@snehankekre,

apparently, with the upcoming Pandas 2.0, converting from Pandas to Polars and vice versa will become a “free” operation (both have an underlying Arrow structure), which I think solves the issue.

@st.cache_resource appears to treat Polars dataframes better. I think the serialization/pickling aspect of @st.cache_data causes inflation of the Polars df, as well as inconsistencies when a hash is calculated?

I am new to Streamlit, and I agree with @matth. I was using the @st.cache_data decorator on the function that loads my Polars dataframe via .read_csv() (~3.5 GB), and it slowed down loading and visualizations considerably compared to just reading the data directly (i.e., without the decorator or a function).
